2026-03-10T07:14:26.653 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T07:14:26.657 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T07:14:26.681 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/945
branch: squid
description: orch/cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests}
email: null
first_in_suite: false
flavor: default
job_id: '945'
ktype: distro
last_in_suite: false
machine_type: vps
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 4
    size: 10
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      client:
        debug ms: 1
      global:
        mon election default strategy: 1
        ms type: async
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon warn on pool no app: false
      osd:
        debug ms: 1
        debug osd: 20
        osd class default list: '*'
        osd class load list: '*'
        osd mclock iops capacity threshold hdd: 49000
        osd shutdown pgref assert: true
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - reached quota
    - but it is still running
    - overall HEALTH_
    - \(POOL_FULL\)
    - \(SMALLER_PGP_NUM\)
    - \(CACHE_POOL_NO_HIT_SET\)
    - \(CACHE_POOL_NEAR_FULL\)
    - \(POOL_APP_NOT_ENABLED\)
    - \(PG_AVAILABILITY\)
    - \(PG_DEGRADED\)
    - CEPHADM_STRAY_DAEMON
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  cephadm:
    cephadm_mode: cephadm-package
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_packages:
    - cephadm
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mon.c
  - mgr.y
  - osd.0
  - osd.1
  - osd.2
  - osd.3
  - client.0
  - ceph.rgw.foo.a
  - node-exporter.a
  - alertmanager.a
- - mon.b
  - mgr.x
  - osd.4
  - osd.5
  - osd.6
  - osd.7
  - client.1
  - prometheus.a
  - grafana.a
  - node-exporter.b
  - ceph.iscsi.iscsi.a
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm00.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGTqPxzxkCb5vkn6FufbiUwwyb3AEtoA9rV1X9m+zjyWOhQM2S5uddpPUGPlry89s5+b+79rjfTZicLh0T139rQ=
  vm03.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO7bpIahDeiVrJIFCX98LPxGtuPqsaoH53N4m+rwEmWl/6EYMK2DeeTXnPmsoAG9Uy7Edqa+3s0BrPw7TNUzZ9U=
tasks:
- install: null
- cephadm:
    conf:
      mgr:
        debug mgr: 20
        debug ms: 1
- workunit:
    clients:
      client.0:
      - rados/test.sh
      - rados/test_pool_quota.sh
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-10_01:00:38
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-10T07:14:26.681 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa; will attempt to use it
2026-03-10T07:14:26.681 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks
2026-03-10T07:14:26.681 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-10T07:14:26.682 INFO:teuthology.task.internal:Checking packages...
2026-03-10T07:14:26.682 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-10T07:14:26.682 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-10T07:14:26.682 INFO:teuthology.packaging:ref: None
2026-03-10T07:14:26.682 INFO:teuthology.packaging:tag: None
2026-03-10T07:14:26.682 INFO:teuthology.packaging:branch: squid
2026-03-10T07:14:26.682 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T07:14:26.682 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=squid
2026-03-10T07:14:27.274 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678-ge911bdeb-1jammy
2026-03-10T07:14:27.275 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-10T07:14:27.276 INFO:teuthology.task.internal:no buildpackages task found
2026-03-10T07:14:27.276 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-10T07:14:27.277 INFO:teuthology.task.internal:Saving configuration
2026-03-10T07:14:27.281 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-10T07:14:27.282 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-10T07:14:27.289 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm00.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/945', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 07:13:11.581819', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:00', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGTqPxzxkCb5vkn6FufbiUwwyb3AEtoA9rV1X9m+zjyWOhQM2S5uddpPUGPlry89s5+b+79rjfTZicLh0T139rQ='}
2026-03-10T07:14:27.296 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm03.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/945', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 07:13:11.582338', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:03', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO7bpIahDeiVrJIFCX98LPxGtuPqsaoH53N4m+rwEmWl/6EYMK2DeeTXnPmsoAG9Uy7Edqa+3s0BrPw7TNUzZ9U='}
2026-03-10T07:14:27.296 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-10T07:14:27.297 INFO:teuthology.task.internal:roles: ubuntu@vm00.local - ['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0', 'ceph.rgw.foo.a', 'node-exporter.a', 'alertmanager.a']
2026-03-10T07:14:27.297 INFO:teuthology.task.internal:roles: ubuntu@vm03.local - ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1', 'prometheus.a', 'grafana.a', 'node-exporter.b', 'ceph.iscsi.iscsi.a']
2026-03-10T07:14:27.297 INFO:teuthology.run_tasks:Running task console_log...
2026-03-10T07:14:27.305 DEBUG:teuthology.task.console_log:vm00 does not support IPMI; excluding
2026-03-10T07:14:27.313 DEBUG:teuthology.task.console_log:vm03 does not support IPMI; excluding
2026-03-10T07:14:27.313 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7fdc4b822170>, signals=[15])
2026-03-10T07:14:27.313 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-10T07:14:27.314 INFO:teuthology.task.internal:Opening connections...
2026-03-10T07:14:27.314 DEBUG:teuthology.task.internal:connecting to ubuntu@vm00.local
2026-03-10T07:14:27.315 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T07:14:27.376 DEBUG:teuthology.task.internal:connecting to ubuntu@vm03.local
2026-03-10T07:14:27.376 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T07:14:27.432 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-10T07:14:27.434 DEBUG:teuthology.orchestra.run.vm00:> uname -m
2026-03-10T07:14:27.446 INFO:teuthology.orchestra.run.vm00.stdout:x86_64
2026-03-10T07:14:27.446 DEBUG:teuthology.orchestra.run.vm00:> cat /etc/os-release
2026-03-10T07:14:27.492 INFO:teuthology.orchestra.run.vm00.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-10T07:14:27.492 INFO:teuthology.orchestra.run.vm00.stdout:NAME="Ubuntu"
2026-03-10T07:14:27.492 INFO:teuthology.orchestra.run.vm00.stdout:VERSION_ID="22.04"
2026-03-10T07:14:27.492 INFO:teuthology.orchestra.run.vm00.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-10T07:14:27.492 INFO:teuthology.orchestra.run.vm00.stdout:VERSION_CODENAME=jammy
2026-03-10T07:14:27.492 INFO:teuthology.orchestra.run.vm00.stdout:ID=ubuntu
2026-03-10T07:14:27.492 INFO:teuthology.orchestra.run.vm00.stdout:ID_LIKE=debian
2026-03-10T07:14:27.492 INFO:teuthology.orchestra.run.vm00.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-10T07:14:27.492 INFO:teuthology.orchestra.run.vm00.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-10T07:14:27.492 INFO:teuthology.orchestra.run.vm00.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-10T07:14:27.492 INFO:teuthology.orchestra.run.vm00.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-10T07:14:27.492 INFO:teuthology.orchestra.run.vm00.stdout:UBUNTU_CODENAME=jammy
2026-03-10T07:14:27.492 INFO:teuthology.lock.ops:Updating vm00.local on lock server
2026-03-10T07:14:27.497 DEBUG:teuthology.orchestra.run.vm03:> uname -m
2026-03-10T07:14:27.503 INFO:teuthology.orchestra.run.vm03.stdout:x86_64
2026-03-10T07:14:27.503 DEBUG:teuthology.orchestra.run.vm03:> cat /etc/os-release
2026-03-10T07:14:27.549 INFO:teuthology.orchestra.run.vm03.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-10T07:14:27.549 INFO:teuthology.orchestra.run.vm03.stdout:NAME="Ubuntu"
2026-03-10T07:14:27.549 INFO:teuthology.orchestra.run.vm03.stdout:VERSION_ID="22.04"
2026-03-10T07:14:27.549 INFO:teuthology.orchestra.run.vm03.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-10T07:14:27.549 INFO:teuthology.orchestra.run.vm03.stdout:VERSION_CODENAME=jammy
2026-03-10T07:14:27.549 INFO:teuthology.orchestra.run.vm03.stdout:ID=ubuntu
2026-03-10T07:14:27.549 INFO:teuthology.orchestra.run.vm03.stdout:ID_LIKE=debian
2026-03-10T07:14:27.549 INFO:teuthology.orchestra.run.vm03.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-10T07:14:27.549 INFO:teuthology.orchestra.run.vm03.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-10T07:14:27.549 INFO:teuthology.orchestra.run.vm03.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-10T07:14:27.549 INFO:teuthology.orchestra.run.vm03.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-10T07:14:27.549 INFO:teuthology.orchestra.run.vm03.stdout:UBUNTU_CODENAME=jammy
2026-03-10T07:14:27.549 INFO:teuthology.lock.ops:Updating vm03.local on lock server
2026-03-10T07:14:27.555 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-10T07:14:27.557 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-10T07:14:27.557 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-10T07:14:27.558 DEBUG:teuthology.orchestra.run.vm00:> test '!' -e /home/ubuntu/cephtest
2026-03-10T07:14:27.559 DEBUG:teuthology.orchestra.run.vm03:> test '!' -e /home/ubuntu/cephtest
2026-03-10T07:14:27.593 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-10T07:14:27.594 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-10T07:14:27.594 DEBUG:teuthology.orchestra.run.vm00:> test -z $(ls -A /var/lib/ceph)
2026-03-10T07:14:27.602 DEBUG:teuthology.orchestra.run.vm03:> test -z $(ls -A /var/lib/ceph)
2026-03-10T07:14:27.605 INFO:teuthology.orchestra.run.vm00.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T07:14:27.637 INFO:teuthology.orchestra.run.vm03.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T07:14:27.638 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-10T07:14:27.647 DEBUG:teuthology.orchestra.run.vm00:> test -e /ceph-qa-ready
2026-03-10T07:14:27.652 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T07:14:27.930 DEBUG:teuthology.orchestra.run.vm03:> test -e /ceph-qa-ready
2026-03-10T07:14:27.933 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T07:14:28.273 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-10T07:14:28.275 INFO:teuthology.task.internal:Creating test directory...
2026-03-10T07:14:28.275 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T07:14:28.276 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T07:14:28.279 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-10T07:14:28.281 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-10T07:14:28.282 INFO:teuthology.task.internal:Creating archive directory...
2026-03-10T07:14:28.282 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T07:14:28.322 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T07:14:28.327 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-10T07:14:28.329 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-10T07:14:28.329 DEBUG:teuthology.orchestra.run.vm00:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T07:14:28.371 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T07:14:28.372 DEBUG:teuthology.orchestra.run.vm03:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T07:14:28.374 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T07:14:28.374 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T07:14:28.413 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T07:14:28.421 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T07:14:28.426 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T07:14:28.427 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T07:14:28.431 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T07:14:28.432 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-10T07:14:28.434 INFO:teuthology.task.internal:Configuring sudo...
2026-03-10T07:14:28.434 DEBUG:teuthology.orchestra.run.vm00:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T07:14:28.470 DEBUG:teuthology.orchestra.run.vm03:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T07:14:28.481 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-10T07:14:28.484 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-10T07:14:28.484 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T07:14:28.522 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T07:14:28.525 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T07:14:28.568 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T07:14:28.612 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T07:14:28.612 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T07:14:28.660 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T07:14:28.664 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T07:14:28.709 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T07:14:28.709 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T07:14:28.758 DEBUG:teuthology.orchestra.run.vm00:> sudo service rsyslog restart
2026-03-10T07:14:28.759 DEBUG:teuthology.orchestra.run.vm03:> sudo service rsyslog restart
2026-03-10T07:14:28.818 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-10T07:14:28.820 INFO:teuthology.task.internal:Starting timer...
2026-03-10T07:14:28.820 INFO:teuthology.run_tasks:Running task pcp...
2026-03-10T07:14:28.823 INFO:teuthology.run_tasks:Running task selinux...
2026-03-10T07:14:28.826 INFO:teuthology.task.selinux:Excluding vm00: VMs are not yet supported
2026-03-10T07:14:28.826 INFO:teuthology.task.selinux:Excluding vm03: VMs are not yet supported
2026-03-10T07:14:28.827 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-10T07:14:28.827 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-10T07:14:28.827 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-10T07:14:28.827 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-10T07:14:28.828 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-10T07:14:28.829 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-10T07:14:28.830 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-10T07:14:29.426 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-10T07:14:29.432 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-10T07:14:29.433 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventory6jktqvna --limit vm00.local,vm03.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-10T07:17:12.624 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm00.local'), Remote(name='ubuntu@vm03.local')]
2026-03-10T07:17:12.624 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm00.local'
2026-03-10T07:17:12.625 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T07:17:12.687 DEBUG:teuthology.orchestra.run.vm00:> true
2026-03-10T07:17:12.924 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm00.local'
2026-03-10T07:17:12.925 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm03.local'
2026-03-10T07:17:12.925 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T07:17:12.985 DEBUG:teuthology.orchestra.run.vm03:> true
2026-03-10T07:17:13.212 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm03.local'
2026-03-10T07:17:13.213 INFO:teuthology.run_tasks:Running task clock...
2026-03-10T07:17:13.215 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-10T07:17:13.216 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T07:17:13.216 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T07:17:13.217 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T07:17:13.217 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T07:17:13.233 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:13 ntpd[16087]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-10T07:17:13.233 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:13 ntpd[16087]: Command line: ntpd -gq
2026-03-10T07:17:13.233 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:13 ntpd[16087]: ----------------------------------------------------
2026-03-10T07:17:13.233 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:13 ntpd[16087]: ntp-4 is maintained by Network Time Foundation,
2026-03-10T07:17:13.233 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:13 ntpd[16087]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-10T07:17:13.233 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:13 ntpd[16087]: corporation. Support and training for ntp-4 are
2026-03-10T07:17:13.233 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:13 ntpd[16087]: available at https://www.nwtime.org/support
2026-03-10T07:17:13.233 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:13 ntpd[16087]: ----------------------------------------------------
2026-03-10T07:17:13.233 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:13 ntpd[16087]: proto: precision = 0.029 usec (-25)
2026-03-10T07:17:13.233 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:13 ntpd[16087]: basedate set to 2022-02-04
2026-03-10T07:17:13.233 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:13 ntpd[16087]: gps base set to 2022-02-06 (week 2196)
2026-03-10T07:17:13.233 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:13 ntpd[16087]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-10T07:17:13.233 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:13 ntpd[16087]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-10T07:17:13.233 INFO:teuthology.orchestra.run.vm00.stderr:10 Mar 07:17:13 ntpd[16087]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 73 days ago
2026-03-10T07:17:13.234 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:13 ntpd[16087]: Listen and drop on 0 v6wildcard [::]:123
2026-03-10T07:17:13.234 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:13 ntpd[16087]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-10T07:17:13.234 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:13 ntpd[16087]: Listen normally on 2 lo 127.0.0.1:123
2026-03-10T07:17:13.234 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:13 ntpd[16087]: Listen normally on 3 ens3 192.168.123.100:123
2026-03-10T07:17:13.234 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:13 ntpd[16087]: Listen normally on 4 lo [::1]:123
2026-03-10T07:17:13.235 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:13 ntpd[16087]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:0%2]:123
2026-03-10T07:17:13.235 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:13 ntpd[16087]: Listening on routing socket on fd #22 for interface updates
2026-03-10T07:17:13.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:13 ntpd[16104]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-10T07:17:13.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:13 ntpd[16104]: Command line: ntpd -gq
2026-03-10T07:17:13.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:13 ntpd[16104]: ----------------------------------------------------
2026-03-10T07:17:13.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:13 ntpd[16104]: ntp-4 is maintained by Network Time Foundation,
2026-03-10T07:17:13.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:13 ntpd[16104]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-10T07:17:13.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:13 ntpd[16104]: corporation. Support and training for ntp-4 are
2026-03-10T07:17:13.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:13 ntpd[16104]: available at https://www.nwtime.org/support
2026-03-10T07:17:13.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:13 ntpd[16104]: ----------------------------------------------------
2026-03-10T07:17:13.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:13 ntpd[16104]: proto: precision = 0.029 usec (-25)
2026-03-10T07:17:13.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:13 ntpd[16104]: basedate set to 2022-02-04
2026-03-10T07:17:13.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:13 ntpd[16104]: gps base set to 2022-02-06 (week 2196)
2026-03-10T07:17:13.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:13 ntpd[16104]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-10T07:17:13.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:13 ntpd[16104]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-10T07:17:13.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:13 ntpd[16104]: Listen and drop on 0 v6wildcard [::]:123
2026-03-10T07:17:13.272 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:13 ntpd[16104]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-10T07:17:13.272 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:13 ntpd[16104]: Listen normally on 2 lo 127.0.0.1:123
2026-03-10T07:17:13.272 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:13 ntpd[16104]: Listen normally on 3 ens3 192.168.123.103:123
2026-03-10T07:17:13.272 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:13 ntpd[16104]: Listen normally on 4 lo [::1]:123
2026-03-10T07:17:13.272 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:13 ntpd[16104]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:3%2]:123
2026-03-10T07:17:13.272 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:13 ntpd[16104]: Listening on routing socket on fd #22 for interface updates
2026-03-10T07:17:13.272 INFO:teuthology.orchestra.run.vm03.stderr:10 Mar 07:17:13 ntpd[16104]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 73 days ago
2026-03-10T07:17:14.234 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:14 ntpd[16087]: Soliciting pool server 172.104.138.148
2026-03-10T07:17:14.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:14 ntpd[16104]: Soliciting pool server 172.104.138.148
2026-03-10T07:17:15.233 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:15 ntpd[16087]: Soliciting pool server 176.9.157.155
2026-03-10T07:17:15.234 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:15 ntpd[16087]: Soliciting pool server 37.221.195.24
2026-03-10T07:17:15.270 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:15 ntpd[16104]: Soliciting pool server 176.9.157.155
2026-03-10T07:17:15.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:15 ntpd[16104]: Soliciting pool server 37.221.195.24
2026-03-10T07:17:16.234 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:16 ntpd[16087]: Soliciting pool server 162.159.200.1
2026-03-10T07:17:16.234 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:16 ntpd[16087]: Soliciting pool server 139.144.71.56
2026-03-10T07:17:16.234 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:16 ntpd[16087]: Soliciting pool server 46.224.156.215
2026-03-10T07:17:16.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:16 ntpd[16104]: Soliciting pool server 162.159.200.1
2026-03-10T07:17:16.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:16 ntpd[16104]: Soliciting pool server 139.144.71.56
2026-03-10T07:17:16.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:16 ntpd[16104]: Soliciting pool server 46.224.156.215
2026-03-10T07:17:17.234 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:17 ntpd[16087]: Soliciting pool server 116.203.244.102
2026-03-10T07:17:17.234 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:17 ntpd[16087]: Soliciting pool server 178.215.228.24
2026-03-10T07:17:17.234 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:17 ntpd[16087]: Soliciting pool server 185.252.140.125
2026-03-10T07:17:17.234 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:17 ntpd[16087]: Soliciting pool server 159.195.55.239
2026-03-10T07:17:17.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:17 ntpd[16104]: Soliciting pool server 116.203.244.102
2026-03-10T07:17:17.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:17 ntpd[16104]: Soliciting pool server 178.215.228.24
2026-03-10T07:17:17.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:17 ntpd[16104]: Soliciting pool server 185.252.140.125
2026-03-10T07:17:17.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:17 ntpd[16104]: Soliciting pool server 159.195.55.239
2026-03-10T07:17:18.234 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:18 ntpd[16087]: Soliciting pool server 130.61.89.107
2026-03-10T07:17:18.234 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:18 ntpd[16087]: Soliciting pool server 85.220.190.246
2026-03-10T07:17:18.234 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:18 ntpd[16087]: Soliciting pool server 213.172.105.106
2026-03-10T07:17:18.234 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:18 ntpd[16087]: Soliciting pool server 185.125.190.56
2026-03-10T07:17:18.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:18 ntpd[16104]: Soliciting pool server 130.61.89.107
2026-03-10T07:17:18.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:18 ntpd[16104]: Soliciting pool server 85.220.190.246
2026-03-10T07:17:18.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:18 ntpd[16104]: Soliciting pool server 213.172.105.106
2026-03-10T07:17:18.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:18 ntpd[16104]: Soliciting pool server 185.125.190.56
2026-03-10T07:17:19.233 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:19 ntpd[16087]: Soliciting pool server 91.189.91.157
2026-03-10T07:17:19.234 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:19 ntpd[16087]: Soliciting pool server 77.37.65.181
2026-03-10T07:17:19.234 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:19 ntpd[16087]: Soliciting pool server 2a01:239:25e:bd00::
2026-03-10T07:17:19.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:19 ntpd[16104]: Soliciting pool server 91.189.91.157
2026-03-10T07:17:19.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:19 ntpd[16104]: Soliciting pool server 77.37.65.181
2026-03-10T07:17:19.271 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:19 ntpd[16104]: Soliciting pool server 2a01:239:25e:bd00::
2026-03-10T07:17:22.262 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 07:17:22 ntpd[16087]: ntpd: time slew -0.002864 s
2026-03-10T07:17:22.262 INFO:teuthology.orchestra.run.vm00.stdout:ntpd: time slew -0.002864s
2026-03-10T07:17:22.282 INFO:teuthology.orchestra.run.vm00.stdout: remote refid st t when poll reach delay offset jitter
2026-03-10T07:17:22.282 INFO:teuthology.orchestra.run.vm00.stdout:==============================================================================
2026-03-10T07:17:22.282 INFO:teuthology.orchestra.run.vm00.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T07:17:22.282 INFO:teuthology.orchestra.run.vm00.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T07:17:22.282 INFO:teuthology.orchestra.run.vm00.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T07:17:22.282 INFO:teuthology.orchestra.run.vm00.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T07:17:22.282 INFO:teuthology.orchestra.run.vm00.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T07:17:22.299 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 07:17:22 ntpd[16104]: ntpd: time slew -0.000906 s
2026-03-10T07:17:22.299 INFO:teuthology.orchestra.run.vm03.stdout:ntpd: time slew -0.000906s
2026-03-10T07:17:22.319 INFO:teuthology.orchestra.run.vm03.stdout: remote refid st t when poll reach delay offset jitter
2026-03-10T07:17:22.319 INFO:teuthology.orchestra.run.vm03.stdout:==============================================================================
2026-03-10T07:17:22.320 INFO:teuthology.orchestra.run.vm03.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T07:17:22.320 INFO:teuthology.orchestra.run.vm03.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T07:17:22.320 INFO:teuthology.orchestra.run.vm03.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T07:17:22.320 INFO:teuthology.orchestra.run.vm03.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T07:17:22.320 INFO:teuthology.orchestra.run.vm03.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T07:17:22.320 INFO:teuthology.run_tasks:Running task install...
2026-03-10T07:17:22.322 DEBUG:teuthology.task.install:project ceph
2026-03-10T07:17:22.322 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_packages': ['cephadm'], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T07:17:22.322 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T07:17:22.322 INFO:teuthology.task.install:Using flavor: default
2026-03-10T07:17:22.324 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-10T07:17:22.324 INFO:teuthology.task.install:extra packages: []
2026-03-10T07:17:22.324 DEBUG:teuthology.orchestra.run.vm00:> sudo apt-key list | grep Ceph
2026-03-10T07:17:22.324 DEBUG:teuthology.orchestra.run.vm03:> sudo apt-key list | grep Ceph
2026-03-10T07:17:22.363 INFO:teuthology.orchestra.run.vm00.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-10T07:17:22.382 INFO:teuthology.orchestra.run.vm00.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-10T07:17:22.382 INFO:teuthology.orchestra.run.vm00.stdout:uid [ unknown] Ceph.com (release key)
2026-03-10T07:17:22.383 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-10T07:17:22.383 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-10T07:17:22.383 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T07:17:22.470 INFO:teuthology.orchestra.run.vm03.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-10T07:17:22.471 INFO:teuthology.orchestra.run.vm03.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-10T07:17:22.471 INFO:teuthology.orchestra.run.vm03.stdout:uid [ unknown] Ceph.com (release key)
2026-03-10T07:17:22.471 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-10T07:17:22.471 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-10T07:17:22.471 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T07:17:23.053 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/
2026-03-10T07:17:23.053 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy
2026-03-10T07:17:23.114 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/
2026-03-10T07:17:23.114 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy
2026-03-10T07:17:23.575 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T07:17:23.575 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/apt/sources.list.d/ceph.list
2026-03-10T07:17:23.583 DEBUG:teuthology.orchestra.run.vm03:> sudo apt-get update
2026-03-10T07:17:23.595 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T07:17:23.595 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/apt/sources.list.d/ceph.list
2026-03-10T07:17:23.603 DEBUG:teuthology.orchestra.run.vm00:> sudo apt-get update
2026-03-10T07:17:23.782 INFO:teuthology.orchestra.run.vm03.stdout:Hit:1 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-10T07:17:23.790 INFO:teuthology.orchestra.run.vm03.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-10T07:17:23.801 INFO:teuthology.orchestra.run.vm03.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-10T07:17:23.811 INFO:teuthology.orchestra.run.vm03.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-10T07:17:23.954 INFO:teuthology.orchestra.run.vm00.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-10T07:17:23.994 INFO:teuthology.orchestra.run.vm00.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-10T07:17:24.040 INFO:teuthology.orchestra.run.vm00.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-10T07:17:24.059 INFO:teuthology.orchestra.run.vm00.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-10T07:17:24.261 INFO:teuthology.orchestra.run.vm03.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease
2026-03-10T07:17:24.267 INFO:teuthology.orchestra.run.vm00.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease
2026-03-10T07:17:24.378 INFO:teuthology.orchestra.run.vm03.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B]
2026-03-10T07:17:24.385 INFO:teuthology.orchestra.run.vm00.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B]
2026-03-10T07:17:24.495 INFO:teuthology.orchestra.run.vm03.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg
2026-03-10T07:17:24.506 INFO:teuthology.orchestra.run.vm00.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg
2026-03-10T07:17:24.614 INFO:teuthology.orchestra.run.vm03.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB]
2026-03-10T07:17:24.624 INFO:teuthology.orchestra.run.vm00.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB]
2026-03-10T07:17:24.689 INFO:teuthology.orchestra.run.vm03.stdout:Fetched 25.8 kB in 1s (27.0 kB/s)
2026-03-10T07:17:24.702 INFO:teuthology.orchestra.run.vm00.stdout:Fetched 25.8 kB in 1s (27.5 kB/s)
2026-03-10T07:17:25.416 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T07:17:25.417 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T07:17:25.429 DEBUG:teuthology.orchestra.run.vm00:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy
2026-03-10T07:17:25.430 DEBUG:teuthology.orchestra.run.vm03:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy
2026-03-10T07:17:25.462 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T07:17:25.464 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T07:17:25.665 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T07:17:25.665 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T07:17:25.669 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T07:17:25.669 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T07:17:25.830 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T07:17:25.830 INFO:teuthology.orchestra.run.vm00.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T07:17:25.830 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-10T07:17:25.830 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T07:17:25.830 INFO:teuthology.orchestra.run.vm00.stdout:The following additional packages will be installed:
2026-03-10T07:17:25.830 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local
2026-03-10T07:17:25.830 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq
2026-03-10T07:17:25.831 INFO:teuthology.orchestra.run.vm00.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T07:17:25.831 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5
2026-03-10T07:17:25.831 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph
2026-03-10T07:17:25.831 INFO:teuthology.orchestra.run.vm00.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T07:17:25.831 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T07:17:25.831 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T07:17:25.831 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-10T07:17:25.831 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T07:17:25.831 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T07:17:25.831 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T07:17:25.831 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-10T07:17:25.831 INFO:teuthology.orchestra.run.vm00.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-10T07:17:25.831 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-pytest python3-repoze.lru
2026-03-10T07:17:25.831 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T07:17:25.831 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T07:17:25.831 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T07:17:25.831 INFO:teuthology.orchestra.run.vm00.stdout: python3-toml python3-waitress python3-wcwidth python3-webob
2026-03-10T07:17:25.831 INFO:teuthology.orchestra.run.vm00.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-10T07:17:25.831 INFO:teuthology.orchestra.run.vm00.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-10T07:17:25.832 INFO:teuthology.orchestra.run.vm00.stdout:Suggested packages:
2026-03-10T07:17:25.832 INFO:teuthology.orchestra.run.vm00.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc
2026-03-10T07:17:25.832 INFO:teuthology.orchestra.run.vm00.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi
2026-03-10T07:17:25.832 INFO:teuthology.orchestra.run.vm00.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion
2026-03-10T07:17:25.832 INFO:teuthology.orchestra.run.vm00.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap
2026-03-10T07:17:25.832 INFO:teuthology.orchestra.run.vm00.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc
2026-03-10T07:17:25.832 INFO:teuthology.orchestra.run.vm00.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol
2026-03-10T07:17:25.832 INFO:teuthology.orchestra.run.vm00.stdout: smart-notifier mailx | mailutils
2026-03-10T07:17:25.832 INFO:teuthology.orchestra.run.vm00.stdout:Recommended packages:
2026-03-10T07:17:25.832 INFO:teuthology.orchestra.run.vm00.stdout: btrfs-tools
2026-03-10T07:17:25.849 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T07:17:25.849 INFO:teuthology.orchestra.run.vm03.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T07:17:25.849 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-10T07:17:25.850 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T07:17:25.850 INFO:teuthology.orchestra.run.vm03.stdout:The following additional packages will be installed:
2026-03-10T07:17:25.850 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local
2026-03-10T07:17:25.850 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq
2026-03-10T07:17:25.850 INFO:teuthology.orchestra.run.vm03.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T07:17:25.850 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5
2026-03-10T07:17:25.850 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph
2026-03-10T07:17:25.851 INFO:teuthology.orchestra.run.vm03.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T07:17:25.851 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T07:17:25.851 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T07:17:25.851 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-10T07:17:25.851 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T07:17:25.851 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T07:17:25.851 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T07:17:25.851 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-10T07:17:25.851 INFO:teuthology.orchestra.run.vm03.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-10T07:17:25.851 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-pytest python3-repoze.lru
2026-03-10T07:17:25.851 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T07:17:25.851 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T07:17:25.851 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T07:17:25.851 INFO:teuthology.orchestra.run.vm03.stdout: python3-toml python3-waitress python3-wcwidth python3-webob
2026-03-10T07:17:25.851 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-10T07:17:25.851 INFO:teuthology.orchestra.run.vm03.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-10T07:17:25.852 INFO:teuthology.orchestra.run.vm03.stdout:Suggested packages:
2026-03-10T07:17:25.852 INFO:teuthology.orchestra.run.vm03.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc
2026-03-10T07:17:25.852 INFO:teuthology.orchestra.run.vm03.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi
2026-03-10T07:17:25.852 INFO:teuthology.orchestra.run.vm03.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion
2026-03-10T07:17:25.852 INFO:teuthology.orchestra.run.vm03.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap
2026-03-10T07:17:25.852 INFO:teuthology.orchestra.run.vm03.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc
2026-03-10T07:17:25.852 INFO:teuthology.orchestra.run.vm03.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol
2026-03-10T07:17:25.852 INFO:teuthology.orchestra.run.vm03.stdout: smart-notifier mailx | mailutils
2026-03-10T07:17:25.852 INFO:teuthology.orchestra.run.vm03.stdout:Recommended packages:
2026-03-10T07:17:25.852 INFO:teuthology.orchestra.run.vm03.stdout: btrfs-tools
2026-03-10T07:17:25.869 INFO:teuthology.orchestra.run.vm00.stdout:The following NEW packages will be installed:
2026-03-10T07:17:25.870 INFO:teuthology.orchestra.run.vm00.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm
2026-03-10T07:17:25.870 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
2026-03-10T07:17:25.870 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq
2026-03-10T07:17:25.870 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1
2026-03-10T07:17:25.870 INFO:teuthology.orchestra.run.vm00.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a
2026-03-10T07:17:25.870 INFO:teuthology.orchestra.run.vm00.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev
2026-03-10T07:17:25.870 INFO:teuthology.orchestra.run.vm00.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-10T07:17:25.870 INFO:teuthology.orchestra.run.vm00.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-10T07:17:25.870 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T07:17:25.870 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot
2026-03-10T07:17:25.870 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-10T07:17:25.870 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T07:17:25.870 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T07:17:25.870 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T07:17:25.871 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-10T07:17:25.871 INFO:teuthology.orchestra.run.vm00.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-10T07:17:25.871 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd
2026-03-10T07:17:25.871 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes
2026-03-10T07:17:25.871 INFO:teuthology.orchestra.run.vm00.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-10T07:17:25.871 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-10T07:17:25.871 INFO:teuthology.orchestra.run.vm00.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth
2026-03-10T07:17:25.871 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T07:17:25.871 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools
2026-03-10T07:17:25.871 INFO:teuthology.orchestra.run.vm00.stdout: socat unzip xmlstarlet zip
2026-03-10T07:17:25.871 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be upgraded:
2026-03-10T07:17:25.871 INFO:teuthology.orchestra.run.vm00.stdout: librados2 librbd1
2026-03-10T07:17:25.891 INFO:teuthology.orchestra.run.vm03.stdout:The following NEW packages will be installed:
2026-03-10T07:17:25.891 INFO:teuthology.orchestra.run.vm03.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm
2026-03-10T07:17:25.891 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
2026-03-10T07:17:25.891 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq
2026-03-10T07:17:25.891 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1
2026-03-10T07:17:25.891 INFO:teuthology.orchestra.run.vm03.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a
2026-03-10T07:17:25.891 INFO:teuthology.orchestra.run.vm03.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev
2026-03-10T07:17:25.891 INFO:teuthology.orchestra.run.vm03.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-10T07:17:25.892 INFO:teuthology.orchestra.run.vm03.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-10T07:17:25.892 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T07:17:25.892 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot
2026-03-10T07:17:25.892 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-10T07:17:25.892 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T07:17:25.892 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T07:17:25.892 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T07:17:25.892 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-10T07:17:25.892 INFO:teuthology.orchestra.run.vm03.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-10T07:17:25.892 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd
2026-03-10T07:17:25.892 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes
2026-03-10T07:17:25.892 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-10T07:17:25.892 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-10T07:17:25.892 INFO:teuthology.orchestra.run.vm03.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth
2026-03-10T07:17:25.892 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T07:17:25.892 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools
2026-03-10T07:17:25.892 INFO:teuthology.orchestra.run.vm03.stdout: socat unzip xmlstarlet zip
2026-03-10T07:17:25.892 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be upgraded:
2026-03-10T07:17:25.892 INFO:teuthology.orchestra.run.vm03.stdout: librados2 librbd1
2026-03-10T07:17:25.975 INFO:teuthology.orchestra.run.vm00.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T07:17:25.975 INFO:teuthology.orchestra.run.vm00.stdout:Need to get 178 MB of archives.
2026-03-10T07:17:25.975 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 782 MB of additional disk space will be used.
2026-03-10T07:17:25.975 INFO:teuthology.orchestra.run.vm00.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB]
2026-03-10T07:17:26.028 INFO:teuthology.orchestra.run.vm00.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB]
2026-03-10T07:17:26.030 INFO:teuthology.orchestra.run.vm00.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB]
2026-03-10T07:17:26.042 INFO:teuthology.orchestra.run.vm00.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB]
2026-03-10T07:17:26.081 INFO:teuthology.orchestra.run.vm00.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB]
2026-03-10T07:17:26.083 INFO:teuthology.orchestra.run.vm00.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB]
2026-03-10T07:17:26.089 INFO:teuthology.orchestra.run.vm00.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB]
2026-03-10T07:17:26.091 INFO:teuthology.orchestra.run.vm00.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB]
2026-03-10T07:17:26.092 INFO:teuthology.orchestra.run.vm00.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB]
2026-03-10T07:17:26.092 INFO:teuthology.orchestra.run.vm00.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB]
2026-03-10T07:17:26.092 INFO:teuthology.orchestra.run.vm00.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB]
2026-03-10T07:17:26.112 INFO:teuthology.orchestra.run.vm00.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB]
2026-03-10T07:17:26.113 INFO:teuthology.orchestra.run.vm00.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB]
2026-03-10T07:17:26.114 INFO:teuthology.orchestra.run.vm00.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB]
2026-03-10T07:17:26.114 INFO:teuthology.orchestra.run.vm00.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B]
2026-03-10T07:17:26.115 INFO:teuthology.orchestra.run.vm00.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64
3.0-12build2 [176 kB] 2026-03-10T07:17:26.115 INFO:teuthology.orchestra.run.vm00.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB] 2026-03-10T07:17:26.116 INFO:teuthology.orchestra.run.vm00.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB] 2026-03-10T07:17:26.117 INFO:teuthology.orchestra.run.vm00.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-10T07:17:26.118 INFO:teuthology.orchestra.run.vm00.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-10T07:17:26.120 INFO:teuthology.orchestra.run.vm00.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB] 2026-03-10T07:17:26.128 INFO:teuthology.orchestra.run.vm00.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-10T07:17:26.128 INFO:teuthology.orchestra.run.vm00.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-10T07:17:26.129 INFO:teuthology.orchestra.run.vm00.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-10T07:17:26.129 INFO:teuthology.orchestra.run.vm00.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-10T07:17:26.129 INFO:teuthology.orchestra.run.vm00.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-10T07:17:26.130 INFO:teuthology.orchestra.run.vm00.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-10T07:17:26.130 INFO:teuthology.orchestra.run.vm00.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-10T07:17:26.152 INFO:teuthology.orchestra.run.vm00.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-10T07:17:26.152 INFO:teuthology.orchestra.run.vm00.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB] 2026-03-10T07:17:26.153 INFO:teuthology.orchestra.run.vm00.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB] 2026-03-10T07:17:26.153 INFO:teuthology.orchestra.run.vm00.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB] 2026-03-10T07:17:26.153 INFO:teuthology.orchestra.run.vm00.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-10T07:17:26.154 INFO:teuthology.orchestra.run.vm00.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-10T07:17:26.154 INFO:teuthology.orchestra.run.vm00.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-10T07:17:26.155 INFO:teuthology.orchestra.run.vm00.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB] 2026-03-10T07:17:26.155 INFO:teuthology.orchestra.run.vm00.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste 
all 3.5.0+dfsg1-1 [456 kB] 2026-03-10T07:17:26.161 INFO:teuthology.orchestra.run.vm00.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-10T07:17:26.161 INFO:teuthology.orchestra.run.vm00.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-10T07:17:26.169 INFO:teuthology.orchestra.run.vm00.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-10T07:17:26.170 INFO:teuthology.orchestra.run.vm00.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-10T07:17:26.170 INFO:teuthology.orchestra.run.vm00.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-10T07:17:26.172 INFO:teuthology.orchestra.run.vm00.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-10T07:17:26.173 INFO:teuthology.orchestra.run.vm00.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-10T07:17:26.175 INFO:teuthology.orchestra.run.vm00.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-10T07:17:26.175 INFO:teuthology.orchestra.run.vm00.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-10T07:17:26.180 INFO:teuthology.orchestra.run.vm00.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-10T07:17:26.212 INFO:teuthology.orchestra.run.vm00.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-10T07:17:26.228 INFO:teuthology.orchestra.run.vm00.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-10T07:17:26.228 INFO:teuthology.orchestra.run.vm00.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-10T07:17:26.244 INFO:teuthology.orchestra.run.vm00.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-10T07:17:26.245 INFO:teuthology.orchestra.run.vm00.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-10T07:17:26.245 INFO:teuthology.orchestra.run.vm00.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-10T07:17:26.245 INFO:teuthology.orchestra.run.vm00.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-10T07:17:26.246 INFO:teuthology.orchestra.run.vm00.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-10T07:17:26.246 INFO:teuthology.orchestra.run.vm00.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-10T07:17:26.248 INFO:teuthology.orchestra.run.vm00.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-10T07:17:26.249 INFO:teuthology.orchestra.run.vm00.stdout:Get:58 https://archive.ubuntu.com/ubuntu 
jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-10T07:17:26.249 INFO:teuthology.orchestra.run.vm00.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-10T07:17:26.253 INFO:teuthology.orchestra.run.vm00.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-10T07:17:26.256 INFO:teuthology.orchestra.run.vm00.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-10T07:17:26.260 INFO:teuthology.orchestra.run.vm00.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-10T07:17:26.261 INFO:teuthology.orchestra.run.vm00.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-10T07:17:26.262 INFO:teuthology.orchestra.run.vm00.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-10T07:17:26.267 INFO:teuthology.orchestra.run.vm00.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-10T07:17:26.268 INFO:teuthology.orchestra.run.vm00.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-10T07:17:26.282 INFO:teuthology.orchestra.run.vm00.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-10T07:17:26.283 INFO:teuthology.orchestra.run.vm00.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-10T07:17:26.283 INFO:teuthology.orchestra.run.vm00.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-10T07:17:26.284 INFO:teuthology.orchestra.run.vm00.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-10T07:17:26.284 INFO:teuthology.orchestra.run.vm00.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-10T07:17:26.285 INFO:teuthology.orchestra.run.vm00.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-10T07:17:26.294 INFO:teuthology.orchestra.run.vm00.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-10T07:17:26.294 INFO:teuthology.orchestra.run.vm00.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-10T07:17:26.295 INFO:teuthology.orchestra.run.vm00.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-10T07:17:26.296 INFO:teuthology.orchestra.run.vm00.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-10T07:17:26.296 INFO:teuthology.orchestra.run.vm00.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-10T07:17:26.324 INFO:teuthology.orchestra.run.vm00.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-10T07:17:26.364 INFO:teuthology.orchestra.run.vm03.stdout:2 upgraded, 107 newly 
installed, 0 to remove and 10 not upgraded. 2026-03-10T07:17:26.364 INFO:teuthology.orchestra.run.vm03.stdout:Need to get 178 MB of archives. 2026-03-10T07:17:26.364 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 782 MB of additional disk space will be used. 2026-03-10T07:17:26.364 INFO:teuthology.orchestra.run.vm03.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB] 2026-03-10T07:17:26.459 INFO:teuthology.orchestra.run.vm00.stdout:Get:79 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB] 2026-03-10T07:17:26.776 INFO:teuthology.orchestra.run.vm03.stdout:Get:2 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB] 2026-03-10T07:17:26.835 INFO:teuthology.orchestra.run.vm03.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB] 2026-03-10T07:17:26.850 INFO:teuthology.orchestra.run.vm03.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB] 2026-03-10T07:17:26.946 INFO:teuthology.orchestra.run.vm03.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB] 2026-03-10T07:17:27.224 INFO:teuthology.orchestra.run.vm03.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB] 2026-03-10T07:17:27.240 INFO:teuthology.orchestra.run.vm03.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB] 2026-03-10T07:17:27.257 INFO:teuthology.orchestra.run.vm00.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-10T07:17:27.277 INFO:teuthology.orchestra.run.vm03.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB] 2026-03-10T07:17:27.288 INFO:teuthology.orchestra.run.vm03.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB] 2026-03-10T07:17:27.291 INFO:teuthology.orchestra.run.vm03.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB] 2026-03-10T07:17:27.292 INFO:teuthology.orchestra.run.vm03.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB] 2026-03-10T07:17:27.293 INFO:teuthology.orchestra.run.vm03.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB] 2026-03-10T07:17:27.315 INFO:teuthology.orchestra.run.vm03.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB] 2026-03-10T07:17:27.320 INFO:teuthology.orchestra.run.vm03.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB] 2026-03-10T07:17:27.326 INFO:teuthology.orchestra.run.vm03.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB] 2026-03-10T07:17:27.377 INFO:teuthology.orchestra.run.vm00.stdout:Get:81 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-10T07:17:27.390 INFO:teuthology.orchestra.run.vm00.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-10T07:17:27.396 INFO:teuthology.orchestra.run.vm00.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-10T07:17:27.397 INFO:teuthology.orchestra.run.vm00.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-10T07:17:27.399 INFO:teuthology.orchestra.run.vm00.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-10T07:17:27.400 INFO:teuthology.orchestra.run.vm00.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-10T07:17:27.406 INFO:teuthology.orchestra.run.vm00.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-10T07:17:27.420 INFO:teuthology.orchestra.run.vm03.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B] 2026-03-10T07:17:27.420 INFO:teuthology.orchestra.run.vm03.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB] 2026-03-10T07:17:27.423 INFO:teuthology.orchestra.run.vm03.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB] 2026-03-10T07:17:27.426 INFO:teuthology.orchestra.run.vm03.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB] 2026-03-10T07:17:27.428 INFO:teuthology.orchestra.run.vm03.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-10T07:17:27.428 INFO:teuthology.orchestra.run.vm03.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-10T07:17:27.429 INFO:teuthology.orchestra.run.vm03.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB] 2026-03-10T07:17:27.430 INFO:teuthology.orchestra.run.vm03.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-10T07:17:27.525 INFO:teuthology.orchestra.run.vm03.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-10T07:17:27.526 INFO:teuthology.orchestra.run.vm03.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-10T07:17:27.526 INFO:teuthology.orchestra.run.vm03.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 
[14.8 kB] 2026-03-10T07:17:27.527 INFO:teuthology.orchestra.run.vm03.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-10T07:17:27.625 INFO:teuthology.orchestra.run.vm03.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-10T07:17:27.625 INFO:teuthology.orchestra.run.vm03.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-10T07:17:27.631 INFO:teuthology.orchestra.run.vm03.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-10T07:17:27.631 INFO:teuthology.orchestra.run.vm03.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB] 2026-03-10T07:17:27.632 INFO:teuthology.orchestra.run.vm03.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB] 2026-03-10T07:17:27.632 INFO:teuthology.orchestra.run.vm03.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB] 2026-03-10T07:17:27.719 INFO:teuthology.orchestra.run.vm00.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-10T07:17:27.720 INFO:teuthology.orchestra.run.vm00.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-10T07:17:27.724 INFO:teuthology.orchestra.run.vm00.stdout:Get:90 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-10T07:17:27.727 INFO:teuthology.orchestra.run.vm03.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-10T07:17:27.727 INFO:teuthology.orchestra.run.vm03.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-10T07:17:27.728 INFO:teuthology.orchestra.run.vm03.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-10T07:17:27.729 INFO:teuthology.orchestra.run.vm03.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB] 2026-03-10T07:17:27.827 INFO:teuthology.orchestra.run.vm03.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB] 2026-03-10T07:17:27.834 INFO:teuthology.orchestra.run.vm03.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-10T07:17:27.834 INFO:teuthology.orchestra.run.vm03.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-10T07:17:27.835 INFO:teuthology.orchestra.run.vm03.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-10T07:17:27.835 INFO:teuthology.orchestra.run.vm03.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-10T07:17:27.837 INFO:teuthology.orchestra.run.vm03.stdout:Get:43 
https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-10T07:17:27.926 INFO:teuthology.orchestra.run.vm03.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-10T07:17:27.927 INFO:teuthology.orchestra.run.vm03.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-10T07:17:27.930 INFO:teuthology.orchestra.run.vm03.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-10T07:17:27.930 INFO:teuthology.orchestra.run.vm03.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-10T07:17:28.026 INFO:teuthology.orchestra.run.vm03.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-10T07:17:28.057 INFO:teuthology.orchestra.run.vm03.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-10T07:17:28.061 INFO:teuthology.orchestra.run.vm03.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-10T07:17:28.061 INFO:teuthology.orchestra.run.vm03.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-10T07:17:28.143 INFO:teuthology.orchestra.run.vm03.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-10T07:17:28.144 INFO:teuthology.orchestra.run.vm03.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-10T07:17:28.144 INFO:teuthology.orchestra.run.vm03.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-10T07:17:28.144 INFO:teuthology.orchestra.run.vm03.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-10T07:17:28.145 INFO:teuthology.orchestra.run.vm03.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-10T07:17:28.145 INFO:teuthology.orchestra.run.vm03.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-10T07:17:28.224 INFO:teuthology.orchestra.run.vm03.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-10T07:17:28.226 INFO:teuthology.orchestra.run.vm03.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-10T07:17:28.228 INFO:teuthology.orchestra.run.vm03.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-10T07:17:28.335 INFO:teuthology.orchestra.run.vm03.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-10T07:17:28.339 INFO:teuthology.orchestra.run.vm03.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-10T07:17:28.344 INFO:teuthology.orchestra.run.vm03.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-10T07:17:28.345 
INFO:teuthology.orchestra.run.vm03.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-10T07:17:28.346 INFO:teuthology.orchestra.run.vm03.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-10T07:17:28.351 INFO:teuthology.orchestra.run.vm03.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-10T07:17:28.351 INFO:teuthology.orchestra.run.vm03.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-10T07:17:28.666 INFO:teuthology.orchestra.run.vm00.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB] 2026-03-10T07:17:28.850 INFO:teuthology.orchestra.run.vm03.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-10T07:17:28.850 INFO:teuthology.orchestra.run.vm03.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-10T07:17:28.851 INFO:teuthology.orchestra.run.vm03.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-10T07:17:28.851 INFO:teuthology.orchestra.run.vm03.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-10T07:17:28.853 INFO:teuthology.orchestra.run.vm03.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-10T07:17:28.853 INFO:teuthology.orchestra.run.vm03.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-10T07:17:28.863 INFO:teuthology.orchestra.run.vm03.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-10T07:17:28.863 INFO:teuthology.orchestra.run.vm03.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-10T07:17:28.864 INFO:teuthology.orchestra.run.vm03.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-10T07:17:28.865 INFO:teuthology.orchestra.run.vm03.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-10T07:17:28.883 INFO:teuthology.orchestra.run.vm00.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB] 2026-03-10T07:17:28.884 INFO:teuthology.orchestra.run.vm00.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB] 2026-03-10T07:17:28.885 INFO:teuthology.orchestra.run.vm00.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB] 2026-03-10T07:17:28.944 INFO:teuthology.orchestra.run.vm00.stdout:Get:95 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB] 2026-03-10T07:17:28.958 INFO:teuthology.orchestra.run.vm03.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-10T07:17:29.065 INFO:teuthology.orchestra.run.vm03.stdout:Get:79 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-10T07:17:29.172 INFO:teuthology.orchestra.run.vm00.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB] 2026-03-10T07:17:30.007 INFO:teuthology.orchestra.run.vm00.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB] 2026-03-10T07:17:30.008 INFO:teuthology.orchestra.run.vm00.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB] 2026-03-10T07:17:30.039 INFO:teuthology.orchestra.run.vm00.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB] 2026-03-10T07:17:30.128 INFO:teuthology.orchestra.run.vm00.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB] 2026-03-10T07:17:30.172 INFO:teuthology.orchestra.run.vm00.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB] 2026-03-10T07:17:30.173 INFO:teuthology.orchestra.run.vm00.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB] 2026-03-10T07:17:30.245 INFO:teuthology.orchestra.run.vm00.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB] 2026-03-10T07:17:30.579 INFO:teuthology.orchestra.run.vm00.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB] 2026-03-10T07:17:30.579 INFO:teuthology.orchestra.run.vm00.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB] 2026-03-10T07:17:32.599 INFO:teuthology.orchestra.run.vm00.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB] 2026-03-10T07:17:32.599 INFO:teuthology.orchestra.run.vm00.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 
libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB] 2026-03-10T07:17:32.599 INFO:teuthology.orchestra.run.vm00.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB] 2026-03-10T07:17:33.080 INFO:teuthology.orchestra.run.vm00.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB] 2026-03-10T07:17:33.439 INFO:teuthology.orchestra.run.vm00.stdout:Fetched 178 MB in 7s (24.7 MB/s) 2026-03-10T07:17:33.469 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package liblttng-ust1:amd64. 2026-03-10T07:17:33.513 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 111717 files and directories currently installed.) 2026-03-10T07:17:33.515 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ... 2026-03-10T07:17:33.517 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-10T07:17:33.537 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libdouble-conversion3:amd64. 2026-03-10T07:17:33.543 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ... 2026-03-10T07:17:33.544 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-10T07:17:33.560 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libpcre2-16-0:amd64. 2026-03-10T07:17:33.565 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ... 2026-03-10T07:17:33.566 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-10T07:17:33.588 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libqt5core5a:amd64. 2026-03-10T07:17:33.594 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-10T07:17:33.598 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T07:17:33.655 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libqt5dbus5:amd64. 2026-03-10T07:17:33.661 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-10T07:17:33.662 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T07:17:33.684 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libqt5network5:amd64. 
2026-03-10T07:17:33.690 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-10T07:17:33.691 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T07:17:33.720 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libthrift-0.16.0:amd64. 2026-03-10T07:17:33.726 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ... 2026-03-10T07:17:33.726 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-10T07:17:33.753 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:17:33.755 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-10T07:17:33.843 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:17:33.845 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-10T07:17:33.913 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libnbd0. 2026-03-10T07:17:33.919 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ... 2026-03-10T07:17:33.919 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libnbd0 (1.10.5-1) ... 2026-03-10T07:17:33.937 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libcephfs2. 2026-03-10T07:17:33.943 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:17:33.943 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:17:33.974 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-rados. 2026-03-10T07:17:33.976 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:17:33.977 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:17:33.998 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-ceph-argparse. 2026-03-10T07:17:34.002 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T07:17:34.003 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:17:34.017 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-cephfs. 2026-03-10T07:17:34.022 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:17:34.023 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:17:34.040 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-ceph-common. 2026-03-10T07:17:34.045 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ... 
2026-03-10T07:17:34.046 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:17:34.065 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-wcwidth. 2026-03-10T07:17:34.070 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ... 2026-03-10T07:17:34.071 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-10T07:17:34.089 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-prettytable. 2026-03-10T07:17:34.096 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ... 2026-03-10T07:17:34.097 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-prettytable (2.5.0-2) ... 2026-03-10T07:17:34.112 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-rbd. 2026-03-10T07:17:34.117 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:17:34.118 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:17:34.139 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package librdkafka1:amd64. 2026-03-10T07:17:34.144 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ... 2026-03-10T07:17:34.145 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-10T07:17:34.165 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libreadline-dev:amd64. 2026-03-10T07:17:34.170 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ... 2026-03-10T07:17:34.171 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ... 2026-03-10T07:17:34.188 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package liblua5.3-dev:amd64. 2026-03-10T07:17:34.192 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ... 2026-03-10T07:17:34.193 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-10T07:17:34.212 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package lua5.1. 2026-03-10T07:17:34.216 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ... 2026-03-10T07:17:34.217 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ... 2026-03-10T07:17:34.233 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package lua-any. 2026-03-10T07:17:34.236 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ... 2026-03-10T07:17:34.237 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking lua-any (27ubuntu1) ... 2026-03-10T07:17:34.248 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package zip. 2026-03-10T07:17:34.252 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ... 2026-03-10T07:17:34.252 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking zip (3.0-12build2) ... 2026-03-10T07:17:34.269 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package unzip. 
2026-03-10T07:17:34.272 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ... 2026-03-10T07:17:34.274 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking unzip (6.0-26ubuntu3.2) ... 2026-03-10T07:17:34.291 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package luarocks. 2026-03-10T07:17:34.294 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ... 2026-03-10T07:17:34.295 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ... 2026-03-10T07:17:34.344 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package librgw2. 2026-03-10T07:17:34.350 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:17:34.350 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:17:34.542 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-rgw. 2026-03-10T07:17:34.542 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:17:34.542 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:17:34.558 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package liboath0:amd64. 2026-03-10T07:17:34.562 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ... 2026-03-10T07:17:34.563 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-10T07:17:34.576 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libradosstriper1. 2026-03-10T07:17:34.581 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:17:34.582 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:17:34.608 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-common. 2026-03-10T07:17:34.614 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:17:34.618 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:17:35.001 INFO:teuthology.orchestra.run.vm03.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-10T07:17:35.195 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-base. 2026-03-10T07:17:35.202 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:17:35.207 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:17:35.413 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jaraco.functools. 2026-03-10T07:17:35.419 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ... 2026-03-10T07:17:35.421 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ... 
2026-03-10T07:17:35.438 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-cheroot. 2026-03-10T07:17:35.443 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ... 2026-03-10T07:17:35.445 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-10T07:17:35.466 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jaraco.classes. 2026-03-10T07:17:35.472 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ... 2026-03-10T07:17:35.473 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ... 2026-03-10T07:17:35.491 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jaraco.text. 2026-03-10T07:17:35.498 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ... 2026-03-10T07:17:35.499 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jaraco.text (3.6.0-2) ... 2026-03-10T07:17:35.516 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jaraco.collections. 2026-03-10T07:17:35.523 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ... 2026-03-10T07:17:35.524 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ... 2026-03-10T07:17:35.542 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-tempora. 2026-03-10T07:17:35.548 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ... 2026-03-10T07:17:35.549 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-tempora (4.1.2-1) ... 2026-03-10T07:17:35.567 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-portend. 2026-03-10T07:17:35.573 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ... 2026-03-10T07:17:35.574 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-portend (3.0.0-1) ... 2026-03-10T07:17:35.593 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-zc.lockfile. 2026-03-10T07:17:35.600 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ... 2026-03-10T07:17:35.601 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-zc.lockfile (2.0-1) ... 2026-03-10T07:17:35.622 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-cherrypy3. 2026-03-10T07:17:35.628 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ... 2026-03-10T07:17:35.629 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ... 2026-03-10T07:17:35.660 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-natsort. 2026-03-10T07:17:35.666 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ... 2026-03-10T07:17:35.668 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-natsort (8.0.2-1) ... 2026-03-10T07:17:35.686 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-logutils. 
2026-03-10T07:17:35.692 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ... 2026-03-10T07:17:35.693 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-logutils (0.3.3-8) ... 2026-03-10T07:17:35.712 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-mako. 2026-03-10T07:17:35.719 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ... 2026-03-10T07:17:35.720 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-10T07:17:35.743 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-simplegeneric. 2026-03-10T07:17:35.749 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ... 2026-03-10T07:17:35.751 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-simplegeneric (0.8.1-3) ... 2026-03-10T07:17:35.769 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-singledispatch. 2026-03-10T07:17:35.775 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ... 2026-03-10T07:17:35.776 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ... 2026-03-10T07:17:35.794 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-webob. 2026-03-10T07:17:35.801 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ... 2026-03-10T07:17:35.802 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-10T07:17:35.826 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-waitress. 2026-03-10T07:17:35.832 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ... 2026-03-10T07:17:35.835 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T07:17:35.854 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-tempita. 2026-03-10T07:17:35.860 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ... 2026-03-10T07:17:35.861 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T07:17:35.881 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-paste. 2026-03-10T07:17:35.887 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ... 2026-03-10T07:17:35.888 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ... 2026-03-10T07:17:35.926 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python-pastedeploy-tpl. 2026-03-10T07:17:35.934 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ... 2026-03-10T07:17:35.936 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T07:17:35.953 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pastedeploy. 2026-03-10T07:17:35.960 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ... 
2026-03-10T07:17:35.961 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pastedeploy (2.1.1-1) ...
2026-03-10T07:17:35.981 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-webtest.
2026-03-10T07:17:35.990 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ...
2026-03-10T07:17:35.991 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-webtest (2.0.35-1) ...
2026-03-10T07:17:36.011 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pecan.
2026-03-10T07:17:36.018 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ...
2026-03-10T07:17:36.019 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ...
2026-03-10T07:17:36.059 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-werkzeug.
2026-03-10T07:17:36.065 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ...
2026-03-10T07:17:36.066 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-10T07:17:36.092 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-modules-core.
2026-03-10T07:17:36.098 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T07:17:36.099 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:36.147 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libsqlite3-mod-ceph.
2026-03-10T07:17:36.154 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T07:17:36.155 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:36.176 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr.
2026-03-10T07:17:36.183 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T07:17:36.184 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:36.223 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mon.
2026-03-10T07:17:36.229 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T07:17:36.230 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:36.350 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libfuse2:amd64.
2026-03-10T07:17:36.354 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ...
2026-03-10T07:17:36.356 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-10T07:17:36.378 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-osd.
2026-03-10T07:17:36.381 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T07:17:36.382 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:36.796 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph.
2026-03-10T07:17:36.803 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T07:17:36.804 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:36.821 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-fuse.
2026-03-10T07:17:36.827 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T07:17:36.828 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:36.868 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mds.
2026-03-10T07:17:36.875 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T07:17:36.876 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:36.951 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package cephadm.
2026-03-10T07:17:36.958 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T07:17:36.959 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:36.980 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-asyncssh.
2026-03-10T07:17:36.986 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ...
2026-03-10T07:17:36.986 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-10T07:17:37.022 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-cephadm.
2026-03-10T07:17:37.028 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T07:17:37.028 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:37.055 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-repoze.lru.
2026-03-10T07:17:37.062 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ...
2026-03-10T07:17:37.063 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-repoze.lru (0.7-2) ...
2026-03-10T07:17:37.082 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-routes.
2026-03-10T07:17:37.088 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ...
2026-03-10T07:17:37.089 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ...
2026-03-10T07:17:37.318 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-dashboard.
2026-03-10T07:17:37.324 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T07:17:37.325 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:37.918 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-sklearn-lib:amd64.
2026-03-10T07:17:37.924 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ...
2026-03-10T07:17:37.924 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-10T07:17:38.004 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-joblib.
2026-03-10T07:17:38.010 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ...
2026-03-10T07:17:38.011 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ...
2026-03-10T07:17:38.050 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-threadpoolctl.
2026-03-10T07:17:38.057 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ...
2026-03-10T07:17:38.058 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ...
2026-03-10T07:17:38.078 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-sklearn.
2026-03-10T07:17:38.085 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ...
2026-03-10T07:17:38.086 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-10T07:17:38.240 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local.
2026-03-10T07:17:38.245 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T07:17:38.246 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:38.256 INFO:teuthology.orchestra.run.vm03.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB]
2026-03-10T07:17:38.602 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-cachetools.
2026-03-10T07:17:38.606 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ...
2026-03-10T07:17:38.607 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-cachetools (5.0.0-1) ...
2026-03-10T07:17:38.622 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-rsa.
2026-03-10T07:17:38.628 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ...
2026-03-10T07:17:38.629 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-rsa (4.8-1) ...
2026-03-10T07:17:38.651 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-google-auth.
2026-03-10T07:17:38.656 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ...
2026-03-10T07:17:38.657 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-google-auth (1.5.1-3) ...
2026-03-10T07:17:38.678 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-requests-oauthlib.
2026-03-10T07:17:38.684 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ...
2026-03-10T07:17:38.685 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-10T07:17:38.702 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-websocket.
2026-03-10T07:17:38.707 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ...
2026-03-10T07:17:38.708 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-websocket (1.2.3-1) ...
2026-03-10T07:17:38.732 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-kubernetes.
2026-03-10T07:17:38.738 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ...
2026-03-10T07:17:38.751 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-10T07:17:38.962 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-k8sevents.
2026-03-10T07:17:38.968 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T07:17:38.969 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:38.984 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libonig5:amd64.
2026-03-10T07:17:38.990 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ...
2026-03-10T07:17:38.991 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-10T07:17:39.009 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libjq1:amd64.
2026-03-10T07:17:39.015 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-10T07:17:39.016 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-10T07:17:39.033 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package jq.
2026-03-10T07:17:39.040 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-10T07:17:39.041 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ...
2026-03-10T07:17:39.057 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package socat.
2026-03-10T07:17:39.062 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ...
2026-03-10T07:17:39.063 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ...
2026-03-10T07:17:39.091 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package xmlstarlet.
2026-03-10T07:17:39.097 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ...
2026-03-10T07:17:39.098 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking xmlstarlet (1.6.1-2.1) ...
2026-03-10T07:17:39.148 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-test.
2026-03-10T07:17:39.153 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T07:17:39.154 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:39.180 INFO:teuthology.orchestra.run.vm03.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB]
2026-03-10T07:17:39.527 INFO:teuthology.orchestra.run.vm03.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB]
2026-03-10T07:17:39.641 INFO:teuthology.orchestra.run.vm03.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB]
2026-03-10T07:17:39.757 INFO:teuthology.orchestra.run.vm03.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB]
2026-03-10T07:17:39.872 INFO:teuthology.orchestra.run.vm03.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB]
2026-03-10T07:17:40.105 INFO:teuthology.orchestra.run.vm03.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB]
2026-03-10T07:17:40.391 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-volume.
2026-03-10T07:17:40.399 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T07:17:40.400 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:40.428 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libcephfs-dev.
2026-03-10T07:17:40.433 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T07:17:40.434 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:40.448 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package lua-socket:amd64.
2026-03-10T07:17:40.453 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ...
2026-03-10T07:17:40.453 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-10T07:17:40.480 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package lua-sec:amd64.
2026-03-10T07:17:40.487 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ...
2026-03-10T07:17:40.488 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ...
2026-03-10T07:17:40.509 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package nvme-cli.
2026-03-10T07:17:40.516 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ...
2026-03-10T07:17:40.517 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ...
2026-03-10T07:17:40.567 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package pkg-config.
2026-03-10T07:17:40.573 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ...
2026-03-10T07:17:40.574 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ...
2026-03-10T07:17:40.592 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python-asyncssh-doc.
2026-03-10T07:17:40.597 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ...
2026-03-10T07:17:40.599 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-10T07:17:40.676 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-iniconfig.
2026-03-10T07:17:40.681 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ...
2026-03-10T07:17:40.683 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-iniconfig (1.1.1-2) ...
2026-03-10T07:17:40.698 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pastescript.
2026-03-10T07:17:40.703 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ...
2026-03-10T07:17:40.704 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pastescript (2.0.2-4) ...
2026-03-10T07:17:40.725 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pluggy.
2026-03-10T07:17:40.731 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ...
2026-03-10T07:17:40.732 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pluggy (0.13.0-7.1) ...
2026-03-10T07:17:40.749 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-psutil.
2026-03-10T07:17:40.757 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ...
2026-03-10T07:17:40.758 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-psutil (5.9.0-1build1) ...
2026-03-10T07:17:40.784 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-py.
2026-03-10T07:17:40.790 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ...
2026-03-10T07:17:40.791 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-py (1.10.0-1) ...
2026-03-10T07:17:40.815 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pygments.
2026-03-10T07:17:40.823 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ...
2026-03-10T07:17:40.824 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-10T07:17:40.905 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pyinotify.
2026-03-10T07:17:40.911 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ...
2026-03-10T07:17:40.912 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ...
2026-03-10T07:17:40.930 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-toml.
2026-03-10T07:17:40.936 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ...
2026-03-10T07:17:40.937 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-toml (0.10.2-1) ...
2026-03-10T07:17:40.957 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pytest.
2026-03-10T07:17:40.963 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ...
2026-03-10T07:17:40.964 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ...
2026-03-10T07:17:40.995 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-simplejson.
2026-03-10T07:17:41.001 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ...
2026-03-10T07:17:41.002 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-simplejson (3.17.6-1build1) ...
2026-03-10T07:17:41.030 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package qttranslations5-l10n.
2026-03-10T07:17:41.033 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ...
2026-03-10T07:17:41.033 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ...
2026-03-10T07:17:41.178 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package radosgw.
2026-03-10T07:17:41.184 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T07:17:41.184 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:41.500 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package rbd-fuse.
2026-03-10T07:17:41.506 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T07:17:41.506 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:41.523 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package smartmontools.
2026-03-10T07:17:41.529 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ...
2026-03-10T07:17:41.536 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ...
2026-03-10T07:17:41.580 INFO:teuthology.orchestra.run.vm00.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ...
2026-03-10T07:17:41.836 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service.
2026-03-10T07:17:41.836 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service.
2026-03-10T07:17:42.170 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-iniconfig (1.1.1-2) ...
2026-03-10T07:17:42.243 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-10T07:17:42.245 INFO:teuthology.orchestra.run.vm00.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ...
2026-03-10T07:17:42.310 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service.
2026-03-10T07:17:42.547 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service.
2026-03-10T07:17:42.939 INFO:teuthology.orchestra.run.vm00.stdout:nvmf-connect.target is a disabled or a static unit, not starting it.
2026-03-10T07:17:42.945 INFO:teuthology.orchestra.run.vm00.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142.
2026-03-10T07:17:42.997 INFO:teuthology.orchestra.run.vm00.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:43.051 INFO:teuthology.orchestra.run.vm00.stdout:Adding system user cephadm....done
2026-03-10T07:17:43.059 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-10T07:17:43.141 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jaraco.classes (3.2.1-3) ...
2026-03-10T07:17:43.206 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-10T07:17:43.209 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jaraco.functools (3.4.0-2) ...
2026-03-10T07:17:43.273 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-repoze.lru (0.7-2) ...
2026-03-10T07:17:43.342 INFO:teuthology.orchestra.run.vm00.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-10T07:17:43.345 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-py (1.10.0-1) ...
2026-03-10T07:17:43.438 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ...
2026-03-10T07:17:43.568 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-cachetools (5.0.0-1) ...
2026-03-10T07:17:43.638 INFO:teuthology.orchestra.run.vm00.stdout:Setting up unzip (6.0-26ubuntu3.2) ...
2026-03-10T07:17:43.648 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pyinotify (0.9.6-1.3) ...
2026-03-10T07:17:43.718 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-threadpoolctl (3.1.0-1) ...
2026-03-10T07:17:43.793 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:43.870 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-10T07:17:43.873 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libnbd0 (1.10.5-1) ...
2026-03-10T07:17:43.876 INFO:teuthology.orchestra.run.vm00.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-10T07:17:43.878 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ...
2026-03-10T07:17:43.880 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-10T07:17:43.883 INFO:teuthology.orchestra.run.vm00.stdout:Setting up lua5.1 (5.1.5-8.1build4) ...
2026-03-10T07:17:43.888 INFO:teuthology.orchestra.run.vm00.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode
2026-03-10T07:17:43.890 INFO:teuthology.orchestra.run.vm00.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode
2026-03-10T07:17:43.892 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-10T07:17:43.895 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-psutil (5.9.0-1build1) ...
2026-03-10T07:17:44.024 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-natsort (8.0.2-1) ...
2026-03-10T07:17:44.114 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ...
2026-03-10T07:17:44.188 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-simplejson (3.17.6-1build1) ...
2026-03-10T07:17:44.272 INFO:teuthology.orchestra.run.vm00.stdout:Setting up zip (3.0-12build2) ...
2026-03-10T07:17:44.275 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-10T07:17:44.563 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ...
2026-03-10T07:17:44.636 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ...
2026-03-10T07:17:44.638 INFO:teuthology.orchestra.run.vm00.stdout:Setting up qttranslations5-l10n (5.15.3-1) ...
2026-03-10T07:17:44.641 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-10T07:17:44.745 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-10T07:17:44.905 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ...
2026-03-10T07:17:45.044 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-10T07:17:45.134 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-10T07:17:45.194 INFO:teuthology.orchestra.run.vm03.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB]
2026-03-10T07:17:45.221 INFO:teuthology.orchestra.run.vm03.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB]
2026-03-10T07:17:45.263 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-10T07:17:45.332 INFO:teuthology.orchestra.run.vm00.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-10T07:17:45.334 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:45.424 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-10T07:17:45.441 INFO:teuthology.orchestra.run.vm03.stdout:Get:90 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB]
2026-03-10T07:17:46.018 INFO:teuthology.orchestra.run.vm00.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ...
2026-03-10T07:17:46.040 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T07:17:46.045 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-10T07:17:46.118 INFO:teuthology.orchestra.run.vm00.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-10T07:17:46.120 INFO:teuthology.orchestra.run.vm00.stdout:Setting up xmlstarlet (1.6.1-2.1) ...
2026-03-10T07:17:46.122 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pluggy (0.13.0-7.1) ...
2026-03-10T07:17:46.190 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-zc.lockfile (2.0-1) ...
2026-03-10T07:17:46.256 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T07:17:46.258 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-rsa (4.8-1) ...
2026-03-10T07:17:46.330 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-singledispatch (3.4.0.3-3) ...
2026-03-10T07:17:46.402 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-logutils (0.3.3-8) ...
2026-03-10T07:17:46.474 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-tempora (4.1.2-1) ...
2026-03-10T07:17:46.546 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-simplegeneric (0.8.1-3) ...
2026-03-10T07:17:46.613 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-prettytable (2.5.0-2) ...
2026-03-10T07:17:46.693 INFO:teuthology.orchestra.run.vm00.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-10T07:17:46.696 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-websocket (1.2.3-1) ...
2026-03-10T07:17:46.777 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-10T07:17:46.780 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-10T07:17:46.851 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-10T07:17:46.963 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-10T07:17:47.057 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jaraco.collections (3.4.0-2) ...
2026-03-10T07:17:47.129 INFO:teuthology.orchestra.run.vm00.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-10T07:17:47.131 INFO:teuthology.orchestra.run.vm00.stdout:Setting up lua-sec:amd64 (1.0.2-1) ...
2026-03-10T07:17:47.134 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-10T07:17:47.136 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ...
2026-03-10T07:17:47.278 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pastedeploy (2.1.1-1) ...
2026-03-10T07:17:47.350 INFO:teuthology.orchestra.run.vm00.stdout:Setting up lua-any (27ubuntu1) ...
2026-03-10T07:17:47.352 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-portend (3.0.0-1) ...
2026-03-10T07:17:47.420 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T07:17:47.423 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-google-auth (1.5.1-3) ...
2026-03-10T07:17:47.503 INFO:teuthology.orchestra.run.vm00.stdout:Setting up jq (1.6-2.1ubuntu3.1) ...
2026-03-10T07:17:47.505 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-webtest (2.0.35-1) ...
2026-03-10T07:17:47.594 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-cherrypy3 (18.6.1-4) ...
2026-03-10T07:17:47.738 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pastescript (2.0.2-4) ...
2026-03-10T07:17:47.944 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ...
2026-03-10T07:17:48.273 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-10T07:17:48.597 INFO:teuthology.orchestra.run.vm00.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:48.603 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:48.605 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-10T07:17:49.306 INFO:teuthology.orchestra.run.vm00.stdout:Setting up luarocks (3.8.0+dfsg1-1) ...
2026-03-10T07:17:49.313 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:49.315 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:49.317 INFO:teuthology.orchestra.run.vm00.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:49.320 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:49.322 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:49.383 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-10T07:17:49.383 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-10T07:17:49.728 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:49.730 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:49.733 INFO:teuthology.orchestra.run.vm00.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:49.736 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:49.738 INFO:teuthology.orchestra.run.vm00.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:49.741 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:49.744 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:49.746 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:49.780 INFO:teuthology.orchestra.run.vm00.stdout:Adding group ceph....done
2026-03-10T07:17:49.820 INFO:teuthology.orchestra.run.vm00.stdout:Adding system user ceph....done
2026-03-10T07:17:49.828 INFO:teuthology.orchestra.run.vm00.stdout:Setting system user ceph properties....done
2026-03-10T07:17:49.832 INFO:teuthology.orchestra.run.vm00.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory
2026-03-10T07:17:49.898 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target.
2026-03-10T07:17:50.141 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service.
2026-03-10T07:17:50.480 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:50.482 INFO:teuthology.orchestra.run.vm00.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:50.735 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-10T07:17:50.735 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-10T07:17:50.969 INFO:teuthology.orchestra.run.vm03.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB]
2026-03-10T07:17:51.132 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:51.219 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service.
2026-03-10T07:17:51.452 INFO:teuthology.orchestra.run.vm03.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB]
2026-03-10T07:17:51.544 INFO:teuthology.orchestra.run.vm03.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB]
2026-03-10T07:17:51.547 INFO:teuthology.orchestra.run.vm03.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB]
2026-03-10T07:17:51.584 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:51.646 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-10T07:17:51.646 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-10T07:17:51.660 INFO:teuthology.orchestra.run.vm03.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB]
2026-03-10T07:17:52.030 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:52.090 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-10T07:17:52.090 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-10T07:17:52.197 INFO:teuthology.orchestra.run.vm03.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB]
2026-03-10T07:17:52.435 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:52.511 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-10T07:17:52.511 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-10T07:17:52.875 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:52.878 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:52.890 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:52.949 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-10T07:17:52.949 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-10T07:17:53.363 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:53.377 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:53.379 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:53.393 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:53.518 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-10T07:17:53.528 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T07:17:53.544 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T07:17:53.622 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for install-info (6.8-4build1) ...
2026-03-10T07:17:53.864 INFO:teuthology.orchestra.run.vm03.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB]
2026-03-10T07:17:53.864 INFO:teuthology.orchestra.run.vm03.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB]
2026-03-10T07:17:53.928 INFO:teuthology.orchestra.run.vm03.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB]
2026-03-10T07:17:54.092 INFO:teuthology.orchestra.run.vm03.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB]
2026-03-10T07:17:54.101 INFO:teuthology.orchestra.run.vm03.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB]
2026-03-10T07:17:54.108 INFO:teuthology.orchestra.run.vm03.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB]
2026-03-10T07:17:54.111 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T07:17:54.111 INFO:teuthology.orchestra.run.vm00.stdout:Running kernel seems to be up-to-date.
2026-03-10T07:17:54.111 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T07:17:54.111 INFO:teuthology.orchestra.run.vm00.stdout:Services to be restarted:
2026-03-10T07:17:54.119 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart packagekit.service
2026-03-10T07:17:54.123 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T07:17:54.123 INFO:teuthology.orchestra.run.vm00.stdout:Service restarts being deferred:
2026-03-10T07:17:54.123 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart networkd-dispatcher.service
2026-03-10T07:17:54.123 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart unattended-upgrades.service
2026-03-10T07:17:54.123 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T07:17:54.123 INFO:teuthology.orchestra.run.vm00.stdout:No containers need to be restarted.
2026-03-10T07:17:54.123 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T07:17:54.123 INFO:teuthology.orchestra.run.vm00.stdout:No user sessions are running outdated binaries.
2026-03-10T07:17:54.123 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T07:17:54.124 INFO:teuthology.orchestra.run.vm00.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-10T07:17:54.230 INFO:teuthology.orchestra.run.vm03.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB]
2026-03-10T07:17:54.740 INFO:teuthology.orchestra.run.vm03.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB]
2026-03-10T07:17:54.741 INFO:teuthology.orchestra.run.vm03.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB]
2026-03-10T07:17:55.114 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T07:17:55.117 DEBUG:teuthology.orchestra.run.vm00:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-xmltodict python3-jmespath
2026-03-10T07:17:55.195 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T07:17:55.398 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T07:17:55.398 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T07:17:55.563 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T07:17:55.563 INFO:teuthology.orchestra.run.vm00.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T07:17:55.564 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-10T07:17:55.564 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T07:17:55.580 INFO:teuthology.orchestra.run.vm00.stdout:The following NEW packages will be installed:
2026-03-10T07:17:55.580 INFO:teuthology.orchestra.run.vm00.stdout: python3-jmespath python3-xmltodict
2026-03-10T07:17:55.835 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 2 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T07:17:55.836 INFO:teuthology.orchestra.run.vm00.stdout:Need to get 34.3 kB of archives.
2026-03-10T07:17:55.836 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 146 kB of additional disk space will be used.
2026-03-10T07:17:55.836 INFO:teuthology.orchestra.run.vm00.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB]
2026-03-10T07:17:55.942 INFO:teuthology.orchestra.run.vm00.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB]
2026-03-10T07:17:56.147 INFO:teuthology.orchestra.run.vm00.stdout:Fetched 34.3 kB in 0s (94.5 kB/s)
2026-03-10T07:17:56.162 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jmespath.
2026-03-10T07:17:56.193 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 118577 files and directories currently installed.)
2026-03-10T07:17:56.195 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ...
2026-03-10T07:17:56.196 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jmespath (0.10.0-1) ...
2026-03-10T07:17:56.213 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-xmltodict.
2026-03-10T07:17:56.219 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ...
2026-03-10T07:17:56.220 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-xmltodict (0.12.0-2) ...
2026-03-10T07:17:56.248 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-xmltodict (0.12.0-2) ...
2026-03-10T07:17:56.315 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jmespath (0.10.0-1) ...
2026-03-10T07:17:56.660 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T07:17:56.660 INFO:teuthology.orchestra.run.vm00.stdout:Running kernel seems to be up-to-date.
2026-03-10T07:17:56.660 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T07:17:56.660 INFO:teuthology.orchestra.run.vm00.stdout:Services to be restarted:
2026-03-10T07:17:56.667 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart packagekit.service
2026-03-10T07:17:56.671 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T07:17:56.671 INFO:teuthology.orchestra.run.vm00.stdout:Service restarts being deferred:
2026-03-10T07:17:56.671 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart networkd-dispatcher.service
2026-03-10T07:17:56.671 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart unattended-upgrades.service
2026-03-10T07:17:56.671 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T07:17:56.671 INFO:teuthology.orchestra.run.vm00.stdout:No containers need to be restarted.
2026-03-10T07:17:56.671 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T07:17:56.671 INFO:teuthology.orchestra.run.vm00.stdout:No user sessions are running outdated binaries.
2026-03-10T07:17:56.671 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T07:17:56.672 INFO:teuthology.orchestra.run.vm00.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-10T07:17:57.001 INFO:teuthology.orchestra.run.vm03.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB]
2026-03-10T07:17:57.002 INFO:teuthology.orchestra.run.vm03.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB]
2026-03-10T07:17:57.002 INFO:teuthology.orchestra.run.vm03.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB]
2026-03-10T07:17:57.571 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T07:17:57.575 DEBUG:teuthology.parallel:result is None
2026-03-10T07:17:57.930 INFO:teuthology.orchestra.run.vm03.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB]
2026-03-10T07:17:58.245 INFO:teuthology.orchestra.run.vm03.stdout:Fetched 178 MB in 32s (5560 kB/s)
2026-03-10T07:17:58.662 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package liblttng-ust1:amd64.
2026-03-10T07:17:58.699 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 111717 files and directories currently installed.)
2026-03-10T07:17:58.701 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ...
2026-03-10T07:17:58.704 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-10T07:17:58.727 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libdouble-conversion3:amd64.
2026-03-10T07:17:58.733 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ...
2026-03-10T07:17:58.734 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-10T07:17:58.749 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libpcre2-16-0:amd64.
2026-03-10T07:17:58.754 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ...
2026-03-10T07:17:58.755 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-10T07:17:58.779 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libqt5core5a:amd64.
2026-03-10T07:17:58.785 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-10T07:17:58.789 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T07:17:58.845 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libqt5dbus5:amd64.
2026-03-10T07:17:58.850 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-10T07:17:58.851 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T07:17:58.870 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libqt5network5:amd64.
2026-03-10T07:17:58.875 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-10T07:17:58.876 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T07:17:58.906 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libthrift-0.16.0:amd64.
2026-03-10T07:17:58.911 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ...
2026-03-10T07:17:58.912 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-10T07:17:58.939 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T07:17:58.941 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-10T07:17:59.039 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T07:17:59.042 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-10T07:17:59.113 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libnbd0.
2026-03-10T07:17:59.119 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ...
2026-03-10T07:17:59.121 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libnbd0 (1.10.5-1) ...
2026-03-10T07:17:59.136 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libcephfs2.
2026-03-10T07:17:59.142 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T07:17:59.142 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:59.170 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rados.
2026-03-10T07:17:59.175 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T07:17:59.176 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T07:17:59.197 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-ceph-argparse.
2026-03-10T07:17:59.203 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T07:17:59.204 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:17:59.219 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cephfs. 2026-03-10T07:17:59.225 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:17:59.226 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:17:59.244 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-ceph-common. 2026-03-10T07:17:59.250 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T07:17:59.251 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:17:59.271 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-wcwidth. 2026-03-10T07:17:59.279 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ... 2026-03-10T07:17:59.280 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-10T07:17:59.300 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-prettytable. 2026-03-10T07:17:59.308 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ... 2026-03-10T07:17:59.309 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-prettytable (2.5.0-2) ... 2026-03-10T07:17:59.327 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rbd. 2026-03-10T07:17:59.333 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:17:59.334 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:17:59.355 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package librdkafka1:amd64. 2026-03-10T07:17:59.361 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ... 2026-03-10T07:17:59.362 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-10T07:17:59.384 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libreadline-dev:amd64. 2026-03-10T07:17:59.390 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ... 2026-03-10T07:17:59.391 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ... 2026-03-10T07:17:59.411 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package liblua5.3-dev:amd64. 2026-03-10T07:17:59.417 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ... 2026-03-10T07:17:59.418 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-10T07:17:59.440 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua5.1. 2026-03-10T07:17:59.446 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ... 2026-03-10T07:17:59.447 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ... 
2026-03-10T07:17:59.469 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua-any. 2026-03-10T07:17:59.475 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ... 2026-03-10T07:17:59.477 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua-any (27ubuntu1) ... 2026-03-10T07:17:59.494 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package zip. 2026-03-10T07:17:59.502 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ... 2026-03-10T07:17:59.503 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking zip (3.0-12build2) ... 2026-03-10T07:17:59.525 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package unzip. 2026-03-10T07:17:59.531 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ... 2026-03-10T07:17:59.532 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking unzip (6.0-26ubuntu3.2) ... 2026-03-10T07:17:59.553 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package luarocks. 2026-03-10T07:17:59.559 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ... 2026-03-10T07:17:59.560 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ... 2026-03-10T07:17:59.611 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package librgw2. 2026-03-10T07:17:59.617 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:17:59.618 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:17:59.785 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rgw. 2026-03-10T07:17:59.791 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:17:59.792 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:17:59.809 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package liboath0:amd64. 2026-03-10T07:17:59.816 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ... 2026-03-10T07:17:59.817 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-10T07:17:59.833 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libradosstriper1. 2026-03-10T07:17:59.840 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:17:59.841 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:17:59.866 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-common. 2026-03-10T07:17:59.872 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:17:59.873 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:00.619 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-base. 
2026-03-10T07:18:00.625 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:18:00.630 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:00.882 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.functools. 2026-03-10T07:18:00.888 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ... 2026-03-10T07:18:00.889 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ... 2026-03-10T07:18:00.904 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cheroot. 2026-03-10T07:18:00.911 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ... 2026-03-10T07:18:00.911 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-10T07:18:00.931 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.classes. 2026-03-10T07:18:00.937 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ... 2026-03-10T07:18:00.938 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ... 2026-03-10T07:18:00.952 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.text. 2026-03-10T07:18:00.959 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ... 2026-03-10T07:18:00.959 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.text (3.6.0-2) ... 2026-03-10T07:18:00.974 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.collections. 2026-03-10T07:18:00.980 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ... 2026-03-10T07:18:00.981 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ... 2026-03-10T07:18:00.994 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-tempora. 2026-03-10T07:18:00.999 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ... 2026-03-10T07:18:01.000 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-tempora (4.1.2-1) ... 2026-03-10T07:18:01.015 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-portend. 2026-03-10T07:18:01.021 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ... 2026-03-10T07:18:01.021 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-portend (3.0.0-1) ... 2026-03-10T07:18:01.036 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-zc.lockfile. 2026-03-10T07:18:01.042 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ... 2026-03-10T07:18:01.042 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-zc.lockfile (2.0-1) ... 2026-03-10T07:18:01.057 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cherrypy3. 2026-03-10T07:18:01.063 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ... 
2026-03-10T07:18:01.063 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ... 2026-03-10T07:18:01.096 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-natsort. 2026-03-10T07:18:01.102 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ... 2026-03-10T07:18:01.102 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-natsort (8.0.2-1) ... 2026-03-10T07:18:01.120 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-logutils. 2026-03-10T07:18:01.126 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ... 2026-03-10T07:18:01.127 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-logutils (0.3.3-8) ... 2026-03-10T07:18:01.143 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-mako. 2026-03-10T07:18:01.148 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ... 2026-03-10T07:18:01.149 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-10T07:18:01.169 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-simplegeneric. 2026-03-10T07:18:01.174 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ... 2026-03-10T07:18:01.175 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-simplegeneric (0.8.1-3) ... 2026-03-10T07:18:01.191 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-singledispatch. 2026-03-10T07:18:01.197 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ... 2026-03-10T07:18:01.197 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ... 2026-03-10T07:18:01.213 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-webob. 2026-03-10T07:18:01.219 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ... 2026-03-10T07:18:01.220 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-10T07:18:01.242 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-waitress. 2026-03-10T07:18:01.247 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ... 2026-03-10T07:18:01.250 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T07:18:01.270 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-tempita. 2026-03-10T07:18:01.275 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ... 2026-03-10T07:18:01.276 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T07:18:01.293 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-paste. 2026-03-10T07:18:01.298 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ... 2026-03-10T07:18:01.299 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ... 
2026-03-10T07:18:01.340 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python-pastedeploy-tpl. 2026-03-10T07:18:01.345 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ... 2026-03-10T07:18:01.346 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T07:18:01.363 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pastedeploy. 2026-03-10T07:18:01.369 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ... 2026-03-10T07:18:01.370 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pastedeploy (2.1.1-1) ... 2026-03-10T07:18:01.389 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-webtest. 2026-03-10T07:18:01.395 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ... 2026-03-10T07:18:01.396 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-webtest (2.0.35-1) ... 2026-03-10T07:18:01.415 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pecan. 2026-03-10T07:18:01.423 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ... 2026-03-10T07:18:01.424 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ... 2026-03-10T07:18:01.456 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-werkzeug. 2026-03-10T07:18:01.462 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ... 2026-03-10T07:18:01.462 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-10T07:18:01.487 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-modules-core. 2026-03-10T07:18:01.493 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T07:18:01.494 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:01.537 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libsqlite3-mod-ceph. 2026-03-10T07:18:01.543 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:18:01.544 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:01.562 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr. 2026-03-10T07:18:01.568 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:18:01.569 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:01.608 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mon. 2026-03-10T07:18:01.615 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:18:01.615 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T07:18:01.885 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-10T07:18:01.891 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 2026-03-10T07:18:01.892 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-10T07:18:01.915 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-osd. 2026-03-10T07:18:01.923 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:18:01.924 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:02.408 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph. 2026-03-10T07:18:02.414 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:18:02.415 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:02.429 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-fuse. 2026-03-10T07:18:02.435 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:18:02.436 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:02.470 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mds. 2026-03-10T07:18:02.476 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:18:02.476 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:02.537 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package cephadm. 2026-03-10T07:18:02.543 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:18:02.543 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:02.563 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-asyncssh. 2026-03-10T07:18:02.570 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ... 2026-03-10T07:18:02.571 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-10T07:18:02.600 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-cephadm. 2026-03-10T07:18:02.606 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T07:18:02.607 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:02.633 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-repoze.lru. 2026-03-10T07:18:02.639 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ... 2026-03-10T07:18:02.639 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-repoze.lru (0.7-2) ... 2026-03-10T07:18:02.657 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-routes. 
2026-03-10T07:18:02.663 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ... 2026-03-10T07:18:02.664 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ... 2026-03-10T07:18:02.690 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-dashboard. 2026-03-10T07:18:02.696 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T07:18:02.697 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:03.474 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-sklearn-lib:amd64. 2026-03-10T07:18:03.477 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ... 2026-03-10T07:18:03.478 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-10T07:18:03.556 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-joblib. 2026-03-10T07:18:03.562 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ... 2026-03-10T07:18:03.563 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ... 2026-03-10T07:18:03.599 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-threadpoolctl. 2026-03-10T07:18:03.607 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ... 2026-03-10T07:18:03.608 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ... 2026-03-10T07:18:03.624 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-sklearn. 2026-03-10T07:18:03.630 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ... 2026-03-10T07:18:03.631 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-10T07:18:03.812 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local. 2026-03-10T07:18:03.818 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T07:18:03.819 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:04.206 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cachetools. 2026-03-10T07:18:04.208 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ... 2026-03-10T07:18:04.209 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cachetools (5.0.0-1) ... 2026-03-10T07:18:04.228 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rsa. 2026-03-10T07:18:04.234 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ... 2026-03-10T07:18:04.235 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rsa (4.8-1) ... 2026-03-10T07:18:04.257 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-google-auth. 
2026-03-10T07:18:04.263 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ... 2026-03-10T07:18:04.264 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-google-auth (1.5.1-3) ... 2026-03-10T07:18:04.285 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-requests-oauthlib. 2026-03-10T07:18:04.291 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ... 2026-03-10T07:18:04.292 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-10T07:18:04.309 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-websocket. 2026-03-10T07:18:04.315 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ... 2026-03-10T07:18:04.315 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-websocket (1.2.3-1) ... 2026-03-10T07:18:04.336 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-kubernetes. 2026-03-10T07:18:04.343 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ... 2026-03-10T07:18:04.356 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-10T07:18:04.540 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 2026-03-10T07:18:04.547 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T07:18:04.548 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:04.567 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libonig5:amd64. 2026-03-10T07:18:04.572 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ... 2026-03-10T07:18:04.573 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-10T07:18:04.592 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libjq1:amd64. 2026-03-10T07:18:04.598 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-10T07:18:04.598 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-10T07:18:04.616 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package jq. 2026-03-10T07:18:04.622 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-10T07:18:04.624 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ... 2026-03-10T07:18:04.640 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package socat. 2026-03-10T07:18:04.646 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ... 2026-03-10T07:18:04.647 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ... 2026-03-10T07:18:04.671 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package xmlstarlet. 2026-03-10T07:18:04.677 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ... 
2026-03-10T07:18:04.678 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking xmlstarlet (1.6.1-2.1) ... 2026-03-10T07:18:04.729 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-test. 2026-03-10T07:18:04.735 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:18:04.736 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:05.913 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-volume. 2026-03-10T07:18:05.920 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T07:18:05.921 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:05.950 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libcephfs-dev. 2026-03-10T07:18:05.955 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:18:05.956 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:05.974 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua-socket:amd64. 2026-03-10T07:18:05.982 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ... 2026-03-10T07:18:05.983 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-10T07:18:06.069 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua-sec:amd64. 2026-03-10T07:18:06.076 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ... 2026-03-10T07:18:06.077 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ... 2026-03-10T07:18:06.098 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package nvme-cli. 2026-03-10T07:18:06.104 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ... 2026-03-10T07:18:06.105 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ... 2026-03-10T07:18:06.149 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package pkg-config. 2026-03-10T07:18:06.155 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ... 2026-03-10T07:18:06.155 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ... 2026-03-10T07:18:06.175 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python-asyncssh-doc. 2026-03-10T07:18:06.182 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ... 2026-03-10T07:18:06.183 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-10T07:18:06.232 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-iniconfig. 2026-03-10T07:18:06.238 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ... 2026-03-10T07:18:06.240 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-iniconfig (1.1.1-2) ... 
2026-03-10T07:18:06.256 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pastescript. 2026-03-10T07:18:06.262 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ... 2026-03-10T07:18:06.263 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pastescript (2.0.2-4) ... 2026-03-10T07:18:06.286 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pluggy. 2026-03-10T07:18:06.292 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ... 2026-03-10T07:18:06.293 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pluggy (0.13.0-7.1) ... 2026-03-10T07:18:06.313 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-psutil. 2026-03-10T07:18:06.320 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ... 2026-03-10T07:18:06.320 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-psutil (5.9.0-1build1) ... 2026-03-10T07:18:06.346 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-py. 2026-03-10T07:18:06.352 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ... 2026-03-10T07:18:06.353 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-py (1.10.0-1) ... 2026-03-10T07:18:06.380 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pygments. 2026-03-10T07:18:06.386 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ... 2026-03-10T07:18:06.387 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-10T07:18:06.467 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pyinotify. 2026-03-10T07:18:06.473 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ... 2026-03-10T07:18:06.474 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ... 2026-03-10T07:18:06.490 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-toml. 2026-03-10T07:18:06.495 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ... 2026-03-10T07:18:06.496 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-toml (0.10.2-1) ... 2026-03-10T07:18:06.514 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pytest. 2026-03-10T07:18:06.519 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ... 2026-03-10T07:18:06.520 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ... 2026-03-10T07:18:06.553 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-simplejson. 2026-03-10T07:18:06.559 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ... 2026-03-10T07:18:06.560 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-simplejson (3.17.6-1build1) ... 2026-03-10T07:18:06.580 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package qttranslations5-l10n. 
2026-03-10T07:18:06.586 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ... 2026-03-10T07:18:06.587 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ... 2026-03-10T07:18:06.794 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package radosgw. 2026-03-10T07:18:06.800 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:18:06.801 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:07.041 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package rbd-fuse. 2026-03-10T07:18:07.047 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T07:18:07.048 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:07.066 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package smartmontools. 2026-03-10T07:18:07.072 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ... 2026-03-10T07:18:07.080 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ... 2026-03-10T07:18:07.128 INFO:teuthology.orchestra.run.vm03.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ... 2026-03-10T07:18:07.361 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service. 2026-03-10T07:18:07.361 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service. 2026-03-10T07:18:07.726 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-10T07:18:07.794 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-10T07:18:07.796 INFO:teuthology.orchestra.run.vm03.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 2026-03-10T07:18:07.866 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service. 2026-03-10T07:18:08.072 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service. 2026-03-10T07:18:08.470 INFO:teuthology.orchestra.run.vm03.stdout:nvmf-connect.target is a disabled or a static unit, not starting it. 2026-03-10T07:18:08.476 INFO:teuthology.orchestra.run.vm03.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142. 2026-03-10T07:18:08.478 INFO:teuthology.orchestra.run.vm03.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:08.520 INFO:teuthology.orchestra.run.vm03.stdout:Adding system user cephadm....done 2026-03-10T07:18:08.529 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T07:18:08.607 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.classes (3.2.1-3) ... 2026-03-10T07:18:08.678 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-10T07:18:08.681 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.functools (3.4.0-2) ... 
2026-03-10T07:18:08.744 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-repoze.lru (0.7-2) ... 2026-03-10T07:18:08.812 INFO:teuthology.orchestra.run.vm03.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-10T07:18:08.814 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-py (1.10.0-1) ... 2026-03-10T07:18:08.905 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ... 2026-03-10T07:18:09.029 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cachetools (5.0.0-1) ... 2026-03-10T07:18:09.096 INFO:teuthology.orchestra.run.vm03.stdout:Setting up unzip (6.0-26ubuntu3.2) ... 2026-03-10T07:18:09.104 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pyinotify (0.9.6-1.3) ... 2026-03-10T07:18:09.174 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-threadpoolctl (3.1.0-1) ... 2026-03-10T07:18:09.242 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:09.309 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-10T07:18:09.311 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libnbd0 (1.10.5-1) ... 2026-03-10T07:18:09.313 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-10T07:18:09.316 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ... 2026-03-10T07:18:09.318 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-10T07:18:09.320 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua5.1 (5.1.5-8.1build4) ... 2026-03-10T07:18:09.325 INFO:teuthology.orchestra.run.vm03.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode 2026-03-10T07:18:09.327 INFO:teuthology.orchestra.run.vm03.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode 2026-03-10T07:18:09.329 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-10T07:18:09.331 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-psutil (5.9.0-1build1) ... 2026-03-10T07:18:09.456 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-natsort (8.0.2-1) ... 2026-03-10T07:18:09.526 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ... 2026-03-10T07:18:09.595 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-simplejson (3.17.6-1build1) ... 2026-03-10T07:18:09.676 INFO:teuthology.orchestra.run.vm03.stdout:Setting up zip (3.0-12build2) ... 2026-03-10T07:18:09.678 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-10T07:18:09.965 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T07:18:10.039 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T07:18:10.041 INFO:teuthology.orchestra.run.vm03.stdout:Setting up qttranslations5-l10n (5.15.3-1) ... 2026-03-10T07:18:10.043 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-10T07:18:10.148 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-10T07:18:10.280 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ... 
2026-03-10T07:18:10.412 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-10T07:18:10.505 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-10T07:18:10.645 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.text (3.6.0-2) ... 2026-03-10T07:18:10.759 INFO:teuthology.orchestra.run.vm03.stdout:Setting up socat (1.7.4.1-3ubuntu4) ... 2026-03-10T07:18:10.761 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:10.855 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-10T07:18:11.440 INFO:teuthology.orchestra.run.vm03.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ... 2026-03-10T07:18:11.462 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T07:18:11.466 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-toml (0.10.2-1) ... 2026-03-10T07:18:11.533 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-10T07:18:11.535 INFO:teuthology.orchestra.run.vm03.stdout:Setting up xmlstarlet (1.6.1-2.1) ... 2026-03-10T07:18:11.537 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pluggy (0.13.0-7.1) ... 2026-03-10T07:18:11.608 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-zc.lockfile (2.0-1) ... 2026-03-10T07:18:11.675 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T07:18:11.677 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rsa (4.8-1) ... 2026-03-10T07:18:11.745 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-singledispatch (3.4.0.3-3) ... 2026-03-10T07:18:11.815 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-logutils (0.3.3-8) ... 2026-03-10T07:18:11.882 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-tempora (4.1.2-1) ... 2026-03-10T07:18:11.953 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-simplegeneric (0.8.1-3) ... 2026-03-10T07:18:12.020 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-prettytable (2.5.0-2) ... 2026-03-10T07:18:12.090 INFO:teuthology.orchestra.run.vm03.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-10T07:18:12.092 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-websocket (1.2.3-1) ... 2026-03-10T07:18:12.166 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-10T07:18:12.169 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-10T07:18:12.237 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-10T07:18:12.330 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-10T07:18:12.420 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.collections (3.4.0-2) ... 2026-03-10T07:18:12.492 INFO:teuthology.orchestra.run.vm03.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-10T07:18:12.494 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua-sec:amd64 (1.0.2-1) ... 2026-03-10T07:18:12.496 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-10T07:18:12.498 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ... 
2026-03-10T07:18:12.654 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pastedeploy (2.1.1-1) ... 2026-03-10T07:18:12.739 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua-any (27ubuntu1) ... 2026-03-10T07:18:12.741 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-portend (3.0.0-1) ... 2026-03-10T07:18:12.811 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T07:18:12.813 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-google-auth (1.5.1-3) ... 2026-03-10T07:18:12.893 INFO:teuthology.orchestra.run.vm03.stdout:Setting up jq (1.6-2.1ubuntu3.1) ... 2026-03-10T07:18:12.895 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-webtest (2.0.35-1) ... 2026-03-10T07:18:12.973 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cherrypy3 (18.6.1-4) ... 2026-03-10T07:18:13.202 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pastescript (2.0.2-4) ... 2026-03-10T07:18:13.447 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ... 2026-03-10T07:18:13.561 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-10T07:18:13.563 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:13.565 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:13.568 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-10T07:18:14.212 INFO:teuthology.orchestra.run.vm03.stdout:Setting up luarocks (3.8.0+dfsg1-1) ... 2026-03-10T07:18:14.219 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:14.221 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:14.223 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:14.226 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:14.228 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:14.290 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-10T07:18:14.290 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-10T07:18:14.699 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:14.701 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:14.703 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:14.706 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:14.708 INFO:teuthology.orchestra.run.vm03.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:14.710 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:14.713 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T07:18:14.715 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:14.749 INFO:teuthology.orchestra.run.vm03.stdout:Adding group ceph....done 2026-03-10T07:18:14.784 INFO:teuthology.orchestra.run.vm03.stdout:Adding system user ceph....done 2026-03-10T07:18:14.792 INFO:teuthology.orchestra.run.vm03.stdout:Setting system user ceph properties....done 2026-03-10T07:18:14.796 INFO:teuthology.orchestra.run.vm03.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory 2026-03-10T07:18:14.860 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target. 2026-03-10T07:18:15.097 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 2026-03-10T07:18:15.490 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:15.492 INFO:teuthology.orchestra.run.vm03.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:15.744 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-10T07:18:15.744 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-10T07:18:16.146 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:16.233 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 2026-03-10T07:18:16.632 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:16.702 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-10T07:18:16.702 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-10T07:18:17.055 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:17.117 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-10T07:18:17.117 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-10T07:18:17.511 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:17.596 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-10T07:18:17.596 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-10T07:18:17.988 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:17.991 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T07:18:18.004 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:18.065 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-10T07:18:18.065 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-10T07:18:18.464 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:18.477 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:18.480 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:18.493 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T07:18:18.626 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-10T07:18:18.635 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T07:18:18.650 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T07:18:18.729 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for install-info (6.8-4build1) ... 2026-03-10T07:18:19.106 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T07:18:19.106 INFO:teuthology.orchestra.run.vm03.stdout:Running kernel seems to be up-to-date. 2026-03-10T07:18:19.106 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T07:18:19.106 INFO:teuthology.orchestra.run.vm03.stdout:Services to be restarted: 2026-03-10T07:18:19.113 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart packagekit.service 2026-03-10T07:18:19.116 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T07:18:19.116 INFO:teuthology.orchestra.run.vm03.stdout:Service restarts being deferred: 2026-03-10T07:18:19.116 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart networkd-dispatcher.service 2026-03-10T07:18:19.116 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart unattended-upgrades.service 2026-03-10T07:18:19.116 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T07:18:19.116 INFO:teuthology.orchestra.run.vm03.stdout:No containers need to be restarted. 2026-03-10T07:18:19.116 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T07:18:19.116 INFO:teuthology.orchestra.run.vm03.stdout:No user sessions are running outdated binaries. 2026-03-10T07:18:19.116 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T07:18:19.116 INFO:teuthology.orchestra.run.vm03.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-10T07:18:20.038 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T07:18:20.040 DEBUG:teuthology.orchestra.run.vm03:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-xmltodict python3-jmespath 2026-03-10T07:18:20.119 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T07:18:20.307 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T07:18:20.307 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 
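The repeated stderr warning, 'W: --force-yes is deprecated, use one of the options starting with --allow instead', comes from the install task passing apt's legacy override flag, visible in the logged command ('apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" ...'). Modern apt replaces --force-yes with the --allow-* family. Below is a minimal sketch of the equivalent invocation; the apt_install wrapper is hypothetical for illustration, not teuthology's implementation:

    # Hypothetical sketch: the non-interactive install from the log, with the
    # deprecated --force-yes replaced by the --allow-* options apt suggests.
    import subprocess

    def apt_install(packages):
        cmd = [
            "sudo", "DEBIAN_FRONTEND=noninteractive", "apt-get", "-y",
            # Together these cover what --force-yes used to imply:
            "--allow-downgrades",
            "--allow-remove-essential",
            "--allow-change-held-packages",
            "--allow-unauthenticated",
            # Keep existing config files on upgrade, as in the logged command:
            "-o", "Dpkg::Options::=--force-confdef",
            "-o", "Dpkg::Options::=--force-confold",
            "install", *packages,
        ]
        subprocess.run(cmd, check=True)

    apt_install(["python3-xmltodict", "python3-jmespath"])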
2026-03-10T07:18:20.490 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T07:18:20.490 INFO:teuthology.orchestra.run.vm03.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T07:18:20.490 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-10T07:18:20.490 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T07:18:20.505 INFO:teuthology.orchestra.run.vm03.stdout:The following NEW packages will be installed: 2026-03-10T07:18:20.505 INFO:teuthology.orchestra.run.vm03.stdout: python3-jmespath python3-xmltodict 2026-03-10T07:18:20.705 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 2 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T07:18:20.706 INFO:teuthology.orchestra.run.vm03.stdout:Need to get 34.3 kB of archives. 2026-03-10T07:18:20.706 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 146 kB of additional disk space will be used. 2026-03-10T07:18:20.706 INFO:teuthology.orchestra.run.vm03.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB] 2026-03-10T07:18:20.784 INFO:teuthology.orchestra.run.vm03.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB] 2026-03-10T07:18:20.985 INFO:teuthology.orchestra.run.vm03.stdout:Fetched 34.3 kB in 0s (124 kB/s) 2026-03-10T07:18:21.000 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jmespath. 2026-03-10T07:18:21.025 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118577 files and directories currently installed.) 2026-03-10T07:18:21.026 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ... 2026-03-10T07:18:21.027 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jmespath (0.10.0-1) ... 2026-03-10T07:18:21.045 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-xmltodict. 2026-03-10T07:18:21.049 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ... 2026-03-10T07:18:21.050 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-xmltodict (0.12.0-2) ... 2026-03-10T07:18:21.076 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-xmltodict (0.12.0-2) ... 2026-03-10T07:18:21.144 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jmespath (0.10.0-1) ... 2026-03-10T07:18:21.498 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T07:18:21.498 INFO:teuthology.orchestra.run.vm03.stdout:Running kernel seems to be up-to-date.
2026-03-10T07:18:21.498 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T07:18:21.498 INFO:teuthology.orchestra.run.vm03.stdout:Services to be restarted: 2026-03-10T07:18:21.505 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart packagekit.service 2026-03-10T07:18:21.508 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T07:18:21.508 INFO:teuthology.orchestra.run.vm03.stdout:Service restarts being deferred: 2026-03-10T07:18:21.508 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart networkd-dispatcher.service 2026-03-10T07:18:21.508 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart unattended-upgrades.service 2026-03-10T07:18:21.508 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T07:18:21.508 INFO:teuthology.orchestra.run.vm03.stdout:No containers need to be restarted. 2026-03-10T07:18:21.508 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T07:18:21.508 INFO:teuthology.orchestra.run.vm03.stdout:No user sessions are running outdated binaries. 2026-03-10T07:18:21.508 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T07:18:21.508 INFO:teuthology.orchestra.run.vm03.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-10T07:18:22.356 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T07:18:22.360 DEBUG:teuthology.parallel:result is None 2026-03-10T07:18:22.360 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T07:18:22.957 DEBUG:teuthology.orchestra.run.vm00:> dpkg-query -W -f '${Version}' ceph 2026-03-10T07:18:22.968 INFO:teuthology.orchestra.run.vm00.stdout:19.2.3-678-ge911bdeb-1jammy 2026-03-10T07:18:22.968 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy 2026-03-10T07:18:22.968 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed. 2026-03-10T07:18:22.970 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T07:18:23.611 DEBUG:teuthology.orchestra.run.vm03:> dpkg-query -W -f '${Version}' ceph 2026-03-10T07:18:23.619 INFO:teuthology.orchestra.run.vm03.stdout:19.2.3-678-ge911bdeb-1jammy 2026-03-10T07:18:23.619 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy 2026-03-10T07:18:23.619 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed. 2026-03-10T07:18:23.620 INFO:teuthology.task.install.util:Shipping valgrind.supp... 2026-03-10T07:18:23.620 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T07:18:23.621 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-10T07:18:23.628 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T07:18:23.628 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-10T07:18:23.671 INFO:teuthology.task.install.util:Shipping 'daemon-helper'... 
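The version check closing the install task is two independent lookups: shaman is asked which builds are ready for this sha1/distro/flavor, and dpkg-query reports what actually landed on each host. A rough sketch of both halves (the short-sha matching rule is an assumption; the real comparison lives in teuthology.packaging):

    import json
    import subprocess
    import urllib.parse
    import urllib.request

    SHA1 = "e911bdebe5c8faa3800735d1568fcdca65db60df"

    def shaman_builds(sha1, distros="ubuntu/22.04/x86_64", flavor="default"):
        # The same query string the log shows teuthology.packaging issuing.
        qs = urllib.parse.urlencode({"status": "ready", "project": "ceph",
                                     "flavor": flavor, "distros": distros,
                                     "sha1": sha1})
        with urllib.request.urlopen("https://shaman.ceph.com/api/search?" + qs) as r:
            return json.load(r)

    def installed_ceph_version():
        # Prints e.g. 19.2.3-678-ge911bdeb-1jammy on the hosts above.
        return subprocess.check_output(
            ["dpkg-query", "-W", "-f", "${Version}", "ceph"], text=True)

    builds = shaman_builds(SHA1)   # non-empty once packages are ready
    version = installed_ceph_version()
    assert f"g{SHA1[:8]}" in version, f"wrong ceph version installed: {version}"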
2026-03-10T07:18:23.671 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T07:18:23.672 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/daemon-helper 2026-03-10T07:18:23.681 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-10T07:18:23.729 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T07:18:23.729 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/daemon-helper 2026-03-10T07:18:23.737 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-10T07:18:23.785 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'... 2026-03-10T07:18:23.785 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T07:18:23.785 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-10T07:18:23.793 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-10T07:18:23.842 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T07:18:23.842 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-10T07:18:23.849 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-10T07:18:23.898 INFO:teuthology.task.install.util:Shipping 'stdin-killer'... 2026-03-10T07:18:23.898 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T07:18:23.898 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/stdin-killer 2026-03-10T07:18:23.906 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-10T07:18:23.958 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T07:18:23.958 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/stdin-killer 2026-03-10T07:18:23.966 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-10T07:18:24.013 INFO:teuthology.run_tasks:Running task cephadm... 2026-03-10T07:18:24.071 INFO:tasks.cephadm:Config: {'conf': {'mgr': {'debug mgr': 20, 'debug ms': 1}, 'client': {'debug ms': 1}, 'global': {'mon election default strategy': 1, 'ms type': 'async'}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20, 'mon warn on pool no app': False}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd class default list': '*', 'osd class load list': '*', 'osd mclock iops capacity threshold hdd': 49000, 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'reached quota', 'but it is still running', 'overall HEALTH_', '\\(POOL_FULL\\)', '\\(SMALLER_PGP_NUM\\)', '\\(CACHE_POOL_NO_HIT_SET\\)', '\\(CACHE_POOL_NEAR_FULL\\)', '\\(POOL_APP_NOT_ENABLED\\)', '\\(PG_AVAILABILITY\\)', '\\(PG_DEGRADED\\)', 'CEPHADM_STRAY_DAEMON'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'cephadm_mode': 'cephadm-package'} 2026-03-10T07:18:24.071 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T07:18:24.072 INFO:tasks.cephadm:Cluster fsid is 534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:18:24.072 INFO:tasks.cephadm:Choosing monitor IPs and ports... 2026-03-10T07:18:24.072 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.100', 'mon.c': '[v2:192.168.123.100:3301,v1:192.168.123.100:6790]', 'mon.b': '192.168.123.103'} 2026-03-10T07:18:24.072 INFO:tasks.cephadm:First mon is mon.a on vm00 2026-03-10T07:18:24.072 INFO:tasks.cephadm:First mgr is y 2026-03-10T07:18:24.072 INFO:tasks.cephadm:Normalizing hostnames... 
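Each "Shipping ..." block is the same two-command idiom: the file body is piped into `sudo dd of=<path>` on the remote host, so no temporary file is needed there, and chmod then makes it world-readable/executable. Run locally, the pattern is roughly (ship_file and the stub script body are invented for the sketch):

    import subprocess

    def ship_file(path, data, mode="a=rx"):
        # dd copies its stdin to `path`; `--` stops chmod from treating an
        # unusual path as an option, matching the commands in the log.
        subprocess.run(["sudo", "dd", f"of={path}"], input=data, check=True)
        subprocess.run(["sudo", "chmod", mode, "--", path], check=True)

    ship_file("/usr/bin/daemon-helper", b"#!/usr/bin/env bash\nexec \"$@\"\n")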
2026-03-10T07:18:24.072 DEBUG:teuthology.orchestra.run.vm00:> sudo hostname $(hostname -s) 2026-03-10T07:18:24.080 DEBUG:teuthology.orchestra.run.vm03:> sudo hostname $(hostname -s) 2026-03-10T07:18:24.091 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts... 2026-03-10T07:18:24.091 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-10T07:18:24.123 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-10T07:18:24.213 INFO:teuthology.orchestra.run.vm00.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T07:18:24.222 INFO:teuthology.orchestra.run.vm03.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T07:19:09.186 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-10T07:19:09.186 INFO:teuthology.orchestra.run.vm00.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-10T07:19:09.186 INFO:teuthology.orchestra.run.vm00.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-10T07:19:09.186 INFO:teuthology.orchestra.run.vm00.stdout: "repo_digests": [ 2026-03-10T07:19:09.186 INFO:teuthology.orchestra.run.vm00.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-10T07:19:09.187 INFO:teuthology.orchestra.run.vm00.stdout: ] 2026-03-10T07:19:09.187 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-10T07:19:25.683 INFO:teuthology.orchestra.run.vm03.stdout:{ 2026-03-10T07:19:25.683 INFO:teuthology.orchestra.run.vm03.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-10T07:19:25.683 INFO:teuthology.orchestra.run.vm03.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-10T07:19:25.683 INFO:teuthology.orchestra.run.vm03.stdout: "repo_digests": [ 2026-03-10T07:19:25.683 INFO:teuthology.orchestra.run.vm03.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-10T07:19:25.683 INFO:teuthology.orchestra.run.vm03.stdout: ] 2026-03-10T07:19:25.683 INFO:teuthology.orchestra.run.vm03.stdout:} 2026-03-10T07:19:25.706 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /etc/ceph 2026-03-10T07:19:25.716 DEBUG:teuthology.orchestra.run.vm03:> sudo mkdir -p /etc/ceph 2026-03-10T07:19:25.724 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 777 /etc/ceph 2026-03-10T07:19:25.764 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 777 /etc/ceph 2026-03-10T07:19:25.773 INFO:tasks.cephadm:Writing seed config... 
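`cephadm ... pull` emits a JSON summary (printed above once per host), which makes it easy to confirm every node resolved the tag to the same image before bootstrap. A sketch of pulling and reading that summary, assuming only the three fields visible in the output:

    import json
    import subprocess

    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"

    def pull_image_info():
        out = subprocess.check_output(
            ["sudo", "cephadm", "--image", IMAGE, "pull"], text=True)
        return json.loads(out)

    info = pull_image_info()
    # vm00 and vm03 both reported image_id 654f31e6... and the same
    # sha256:8fda260a... repo digest, so comparing these values across
    # hosts is a cheap consistency check.
    print(info["ceph_version"], info["image_id"], info["repo_digests"])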
2026-03-10T07:19:25.774 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-10T07:19:25.774 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-10T07:19:25.774 INFO:tasks.cephadm: override: [client] debug ms = 1
2026-03-10T07:19:25.774 INFO:tasks.cephadm: override: [global] mon election default strategy = 1
2026-03-10T07:19:25.774 INFO:tasks.cephadm: override: [global] ms type = async
2026-03-10T07:19:25.774 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-10T07:19:25.774 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-10T07:19:25.774 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-10T07:19:25.774 INFO:tasks.cephadm: override: [mon] mon warn on pool no app = False
2026-03-10T07:19:25.774 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-10T07:19:25.774 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-10T07:19:25.774 INFO:tasks.cephadm: override: [osd] osd class default list = *
2026-03-10T07:19:25.774 INFO:tasks.cephadm: override: [osd] osd class load list = *
2026-03-10T07:19:25.774 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-10T07:19:25.774 INFO:tasks.cephadm: override: [osd] osd shutdown pgref assert = True
2026-03-10T07:19:25.774 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T07:19:25.774 DEBUG:teuthology.orchestra.run.vm00:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-10T07:19:25.808 DEBUG:tasks.cephadm:Final config:
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = 534d9c8a-1c51-11f1-ac87-d1fb9a119953
mon election default strategy = 1
ms type = async
[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = True
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd class default list = *
osd class load list = *
osd mclock iops capacity threshold hdd = 49000
[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20
mon warn on pool no app = False
[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
[client]
debug ms = 1
2026-03-10T07:19:25.808 DEBUG:teuthology.orchestra.run.vm00:mon.a> sudo journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mon.a.service
2026-03-10T07:19:25.851 DEBUG:teuthology.orchestra.run.vm00:mgr.y> sudo journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mgr.y.service
2026-03-10T07:19:25.894 INFO:tasks.cephadm:Bootstrapping...
2026-03-10T07:19:25.894 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.100 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-10T07:19:26.031 INFO:teuthology.orchestra.run.vm00.stdout:--------------------------------------------------------------------------------
2026-03-10T07:19:26.032 INFO:teuthology.orchestra.run.vm00.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', '534d9c8a-1c51-11f1-ac87-d1fb9a119953', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'y', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.100', '--skip-admin-label']
2026-03-10T07:19:26.032 INFO:teuthology.orchestra.run.vm00.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.
2026-03-10T07:19:26.032 INFO:teuthology.orchestra.run.vm00.stdout:Verifying podman|docker is present...
2026-03-10T07:19:26.032 INFO:teuthology.orchestra.run.vm00.stdout:Verifying lvm2 is present...
2026-03-10T07:19:26.032 INFO:teuthology.orchestra.run.vm00.stdout:Verifying time synchronization is in place...
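The bootstrap invocation is long but entirely mechanical: every flag comes from values the task already chose (image, fsid, first mon/mgr ids, mon IP, seed config). A sketch of assembling that argv (bootstrap_cmd is a made-up helper; the flag set is copied from the command echoed above):

    def bootstrap_cmd(image, fsid, mon_id, mgr_id, mon_ip,
                      seed_conf="/home/ubuntu/cephtest/seed.ceph.conf"):
        # Mirrors the argv cephadm echoes back at the start of bootstrap.
        return [
            "sudo", "cephadm", "--image", image, "-v", "bootstrap",
            "--fsid", fsid,
            "--config", seed_conf,
            "--output-config", "/etc/ceph/ceph.conf",
            "--output-keyring", "/etc/ceph/ceph.client.admin.keyring",
            "--output-pub-ssh-key", "/home/ubuntu/cephtest/ceph.pub",
            "--mon-id", mon_id, "--mgr-id", mgr_id,
            "--orphan-initial-daemons", "--skip-monitoring-stack",
            "--mon-ip", mon_ip, "--skip-admin-label",
        ]

    print(" ".join(bootstrap_cmd(
        "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
        "534d9c8a-1c51-11f1-ac87-d1fb9a119953", "a", "y", "192.168.123.100")))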
2026-03-10T07:19:26.035 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-10T07:19:26.035 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-10T07:19:26.037 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-10T07:19:26.037 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T07:19:26.039 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service 2026-03-10T07:19:26.039 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory 2026-03-10T07:19:26.042 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service 2026-03-10T07:19:26.042 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T07:19:26.044 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service 2026-03-10T07:19:26.044 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout masked 2026-03-10T07:19:26.046 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service 2026-03-10T07:19:26.046 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T07:19:26.049 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service 2026-03-10T07:19:26.049 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory 2026-03-10T07:19:26.051 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service 2026-03-10T07:19:26.051 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T07:19:26.055 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout enabled 2026-03-10T07:19:26.057 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout active 2026-03-10T07:19:26.057 INFO:teuthology.orchestra.run.vm00.stdout:Unit ntp.service is enabled and running 2026-03-10T07:19:26.057 INFO:teuthology.orchestra.run.vm00.stdout:Repeating the final host check... 
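The run of "Non-zero exit code" lines above is not a failure: cephadm probes a list of candidate time-sync units and moves on whenever `systemctl is-enabled`/`is-active` returns non-zero (1 for unknown or masked units, 3 for inactive), stopping at the first unit that passes both checks, here ntp.service. Condensed to its logic (the candidate list is inferred from the probes in the log):

    import subprocess

    CANDIDATES = ["chrony.service", "chronyd.service",
                  "systemd-timesyncd.service", "ntpd.service", "ntp.service"]

    def find_timesync_unit():
        for unit in CANDIDATES:
            enabled = subprocess.run(["systemctl", "is-enabled", unit],
                                     capture_output=True).returncode == 0
            active = subprocess.run(["systemctl", "is-active", unit],
                                    capture_output=True).returncode == 0
            if enabled and active:
                return unit
        raise RuntimeError("no time synchronization service found")

    print(f"Unit {find_timesync_unit()} is enabled and running")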
2026-03-10T07:19:26.057 INFO:teuthology.orchestra.run.vm00.stdout:docker (/usr/bin/docker) is present 2026-03-10T07:19:26.057 INFO:teuthology.orchestra.run.vm00.stdout:systemctl is present 2026-03-10T07:19:26.057 INFO:teuthology.orchestra.run.vm00.stdout:lvcreate is present 2026-03-10T07:19:26.059 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-10T07:19:26.059 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-10T07:19:26.061 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-10T07:19:26.061 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T07:19:26.063 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service 2026-03-10T07:19:26.063 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory 2026-03-10T07:19:26.065 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service 2026-03-10T07:19:26.065 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T07:19:26.068 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service 2026-03-10T07:19:26.068 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout masked 2026-03-10T07:19:26.070 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service 2026-03-10T07:19:26.070 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T07:19:26.072 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service 2026-03-10T07:19:26.072 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory 2026-03-10T07:19:26.074 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service 2026-03-10T07:19:26.074 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T07:19:26.077 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout enabled 2026-03-10T07:19:26.079 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout active 2026-03-10T07:19:26.079 INFO:teuthology.orchestra.run.vm00.stdout:Unit ntp.service is enabled and running 2026-03-10T07:19:26.079 INFO:teuthology.orchestra.run.vm00.stdout:Host looks OK 2026-03-10T07:19:26.079 INFO:teuthology.orchestra.run.vm00.stdout:Cluster fsid: 534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:19:26.079 INFO:teuthology.orchestra.run.vm00.stdout:Acquiring lock 140513093634608 on /run/cephadm/534d9c8a-1c51-11f1-ac87-d1fb9a119953.lock 2026-03-10T07:19:26.079 INFO:teuthology.orchestra.run.vm00.stdout:Lock 140513093634608 acquired on /run/cephadm/534d9c8a-1c51-11f1-ac87-d1fb9a119953.lock 2026-03-10T07:19:26.079 INFO:teuthology.orchestra.run.vm00.stdout:Verifying IP 192.168.123.100 port 3300 ... 2026-03-10T07:19:26.079 INFO:teuthology.orchestra.run.vm00.stdout:Verifying IP 192.168.123.100 port 6789 ... 
2026-03-10T07:19:26.080 INFO:teuthology.orchestra.run.vm00.stdout:Base mon IP(s) is [192.168.123.100:3300, 192.168.123.100:6789], mon addrv is [v2:192.168.123.100:3300,v1:192.168.123.100:6789] 2026-03-10T07:19:26.081 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.100 metric 100 2026-03-10T07:19:26.081 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 2026-03-10T07:19:26.081 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.100 metric 100 2026-03-10T07:19:26.081 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.100 metric 100 2026-03-10T07:19:26.082 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium 2026-03-10T07:19:26.082 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout fe80::/64 dev ens3 proto kernel metric 256 pref medium 2026-03-10T07:19:26.083 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000 2026-03-10T07:19:26.083 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout inet6 ::1/128 scope host 2026-03-10T07:19:26.083 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-10T07:19:26.083 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 2: ens3: mtu 1500 state UP qlen 1000 2026-03-10T07:19:26.083 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout inet6 fe80::5055:ff:fe00:0/64 scope link 2026-03-10T07:19:26.083 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-10T07:19:26.084 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.0/24` 2026-03-10T07:19:26.084 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.0/24` 2026-03-10T07:19:26.084 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.1/32` 2026-03-10T07:19:26.084 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.1/32` 2026-03-10T07:19:26.084 INFO:teuthology.orchestra.run.vm00.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24', '192.168.123.1/32', '192.168.123.1/32'] 2026-03-10T07:19:26.084 INFO:teuthology.orchestra.run.vm00.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network 2026-03-10T07:19:26.084 INFO:teuthology.orchestra.run.vm00.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 
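The duplicated "Mon IP ... is in CIDR network ..." lines come from testing the mon IP against every destination in the routing table, including the /32 host route for the gateway, and appending one match per route, which is why the inferred list repeats the /24 and /32 entries. Note that plain subnet containment would reject 192.168.123.1/32, so cephadm is evidently also matching on the route's `src 192.168.123.100` hint; the sketch below shows only the simple containment half:

    import ipaddress

    MON_IP = ipaddress.ip_address("192.168.123.100")
    # Destination prefixes from the `ip route` output above.
    ROUTES = ["192.168.123.0/24", "192.168.123.1/32", "172.17.0.0/16"]

    for cidr in ROUTES:
        net = ipaddress.ip_network(cidr)
        if MON_IP in net:
            print(f"Mon IP `{MON_IP}` is in CIDR network `{net}`")
    # Prints only the /24 line: containment alone cannot reproduce the
    # /32 matches seen in the log.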
2026-03-10T07:19:27.179 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/docker: stdout e911bdebe5c8faa3800735d1568fcdca65db60df: Pulling from ceph-ci/ceph 2026-03-10T07:19:27.179 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/docker: stdout Digest: sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T07:19:27.179 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/docker: stdout Status: Image is up to date for quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T07:19:27.179 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/docker: stdout quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T07:19:27.358 INFO:teuthology.orchestra.run.vm00.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-10T07:19:27.358 INFO:teuthology.orchestra.run.vm00.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-10T07:19:27.358 INFO:teuthology.orchestra.run.vm00.stdout:Extracting ceph user uid/gid from container image... 2026-03-10T07:19:27.500 INFO:teuthology.orchestra.run.vm00.stdout:stat: stdout 167 167 2026-03-10T07:19:27.500 INFO:teuthology.orchestra.run.vm00.stdout:Creating initial keys... 2026-03-10T07:19:27.689 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-authtool: stdout AQD/xa9poq3uJhAApCPRL3ydrpacmZeywtqRrQ== 2026-03-10T07:19:27.802 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-authtool: stdout AQD/xa9pU5vDLRAAMO8/zkwd7Fdhf2EfQUAbSQ== 2026-03-10T07:19:27.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-authtool: stdout AQD/xa9pXlIMNBAAel6XYub2Cuf8+mqF5oJqag== 2026-03-10T07:19:27.929 INFO:teuthology.orchestra.run.vm00.stdout:Creating initial monmap... 2026-03-10T07:19:28.032 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-10T07:19:28.032 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy 2026-03-10T07:19:28.032 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to 534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:19:28.032 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-10T07:19:28.032 INFO:teuthology.orchestra.run.vm00.stdout:monmaptool for a [v2:192.168.123.100:3300,v1:192.168.123.100:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-10T07:19:28.032 INFO:teuthology.orchestra.run.vm00.stdout:setting min_mon_release = quincy 2026-03-10T07:19:28.032 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: set fsid to 534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:19:28.032 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-10T07:19:28.032 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T07:19:28.032 INFO:teuthology.orchestra.run.vm00.stdout:Creating mon... 
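The initial keys and monmap are produced with the stock ceph tools run inside the container. Outside teuthology the same artifacts can be created directly; a sketch with scratch paths (these are not the paths cephadm uses internally, and both tools must be on $PATH):

    import subprocess

    FSID = "534d9c8a-1c51-11f1-ac87-d1fb9a119953"
    ADDRV = "[v2:192.168.123.100:3300,v1:192.168.123.100:6789]"

    # A mon. keyring with a freshly generated secret; ceph-authtool emits
    # base64 keys shaped like the AQD/... strings in the log.
    subprocess.run(["ceph-authtool", "--create-keyring", "/tmp/mon.keyring",
                    "--gen-key", "-n", "mon.", "--cap", "mon", "allow *"],
                   check=True)

    # An epoch-0 monmap holding the single mon 'a', matching the log's
    # "writing epoch 0 to /tmp/monmap (1 monitors)".
    subprocess.run(["monmaptool", "--create", "--clobber", "--fsid", FSID,
                    "--addv", "a", ADDRV, "/tmp/monmap"], check=True)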
2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 1 imported monmap: 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr epoch 0 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr last_changed 2026-03-10T07:19:27.999189+0000 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr created 2026-03-10T07:19:27.999189+0000 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr min_mon_release 17 (quincy) 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr election_strategy: 1 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 0 /usr/bin/ceph-mon: set fsid to 534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: RocksDB version: 7.9.2 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Git sha 0 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: DB SUMMARY 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: DB Session ID: E7EJCV3Q3HH7W7O6LPMN 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 0, files: 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.error_if_exists: 0 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: 
stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.create_if_missing: 1 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.paranoid_checks: 1 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.env: 0x55b462be9dc0 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.info_log: 0x55b49326eda0 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.statistics: (nil) 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.use_fsync: 0 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.max_log_file_size: 0 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.allow_fallocate: 1 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: 
Options.use_direct_reads: 0 2026-03-10T07:19:28.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.db_log_dir: 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.wal_dir: 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.write_buffer_manager: 0x55b4932655e0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: 
Options.wal_recovery_mode: 2 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.unordered_write: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.row_cache: None 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.wal_filter: None 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.two_write_queues: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.wal_compression: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.atomic_flush: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.log_readahead_size: 0 2026-03-10T07:19:28.189 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.max_background_jobs: 2 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.max_background_compactions: -1 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.max_subcompactions: 1 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: 
Options.max_open_files: -1 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Options.max_background_flushes: -1 2026-03-10T07:19:28.189 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Compression algorithms supported: 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: kZSTD supported: 0 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: kXpressCompression supported: 0 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: kBZip2Compression supported: 0 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: kLZ4Compression supported: 1 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: kZlibCompression supported: 1 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: kSnappyCompression supported: 1 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.125+0000 7f5e2f7ead80 4 rocksdb: [db/db_impl/db_impl_open.cc:317] Creating manifest 1 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 2026-03-10T07:19:28.190 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.merge_operator: 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compaction_filter: None 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b493261520) 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks: 1 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr pin_top_level_index_and_filter: 1 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr index_type: 0 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr data_block_index_type: 0 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr index_shortening: 1 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr data_block_hash_table_util_ratio: 0.750000 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr checksum: 4 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr no_block_cache: 0 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_cache: 0x55b493287350 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_cache_name: BinnedLRUCache 2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_cache_options: 2026-03-10T07:19:28.190 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr capacity : 536870912
2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr num_shard_bits : 4
2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr strict_capacity_limit : 0
2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr high_pri_pool_ratio: 0.000
2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_cache_compressed: (nil)
2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr persistent_cache: (nil)
2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_size: 4096
2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_size_deviation: 10
2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_restart_interval: 16
2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr index_block_restart_interval: 1
2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr metadata_block_size: 4096
2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr partition_filters: 0
2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr use_delta_encoding: 1
2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr filter_policy: bloomfilter
2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr whole_key_filtering: 1
2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr verify_compression: 0
2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr read_amp_bytes_per_bit: 0
2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr format_version: 5
2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr enable_index_compression: 1
2026-03-10T07:19:28.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_align: 0
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr max_auto_readahead_size: 262144
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr prepopulate_block_cache: 0
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr initial_auto_readahead_size: 8192
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr num_file_reads_for_auto_readahead: 2
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compression: NoCompression
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.num_levels: 7
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compression_opts.level: 32767
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compression_opts.strategy: 0
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compression_opts.enabled: false
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.target_file_size_base: 67108864
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.target_file_size_multiplier: 1
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.arena_block_size: 1048576
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.disable_auto_compactions: 0
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-10T07:19:28.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.inplace_update_support: 0
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.inplace_update_num_locks: 10000
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.memtable_huge_page_size: 0
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.bloom_locality: 0
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.max_successive_merges: 0
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.optimize_filters_for_hits: 0
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.paranoid_file_checks: 0
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.force_consistency_checks: 1
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.report_bg_io_stats: 0
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.ttl: 2592000
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.periodic_compaction_seconds: 0
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.preclude_last_level_data_seconds: 0
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.preserve_internal_time_seconds: 0
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.enable_blob_files: false
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.min_blob_size: 0
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.blob_file_size: 268435456
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.blob_compression_type: NoCompression
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.enable_blob_garbage_collection: false
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.blob_compaction_readahead_size: 0
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.blob_file_starting_level: 0
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 6036f867-0119-4270-a0f7-8ef658da81e7
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.129+0000 7f5e2f7ead80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 5
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.133+0000 7f5e2f7ead80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b493288e00
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.133+0000 7f5e2f7ead80 4 rocksdb: DB pointer 0x55b49336c000
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.133+0000 7f5e26f74640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.133+0000 7f5e26f74640 4 rocksdb: [db/db_impl/db_impl.cc:1111]
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ** DB Stats **
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] **
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] **
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T07:19:28.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Flush(GB): cumulative 0.000, interval 0.000
2026-03-10T07:19:28.193 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr AddFile(GB): cumulative 0.000, interval 0.000
2026-03-10T07:19:28.193 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr AddFile(Total Files): cumulative 0, interval 0
2026-03-10T07:19:28.193 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr AddFile(L0 Files): cumulative 0, interval 0
2026-03-10T07:19:28.193 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr AddFile(Keys): cumulative 0, interval 0
2026-03-10T07:19:28.193 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T07:19:28.193 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T07:19:28.193 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-10T07:19:28.193 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Block cache BinnedLRUCache@0x55b493287350#7 capacity: 512.00 MB usage: 0.00 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.8e-05 secs_since: 0
2026-03-10T07:19:28.193 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%)
2026-03-10T07:19:28.193 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:19:28.193 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ** File Read Latency Histogram By Level [default] **
2026-03-10T07:19:28.193 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:19:28.193 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.133+0000 7f5e2f7ead80 4 rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
2026-03-10T07:19:28.193 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.137+0000 7f5e2f7ead80 4 rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
2026-03-10T07:19:28.193 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:19:28.137+0000 7f5e2f7ead80 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-a for mon.a
2026-03-10T07:19:28.193 INFO:teuthology.orchestra.run.vm00.stdout:create mon.a on
2026-03-10T07:19:28.349 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Removed /etc/systemd/system/multi-user.target.wants/ceph.target.
2026-03-10T07:19:28.521 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target.
2026-03-10T07:19:28.694 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953.target → /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953.target.
2026-03-10T07:19:28.694 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953.target → /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953.target.
2026-03-10T07:19:28.893 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:28 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:19:28.897 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mon.a
2026-03-10T07:19:28.897 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to reset failed state of unit ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mon.a.service: Unit ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mon.a.service not loaded.
2026-03-10T07:19:29.085 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953.target.wants/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mon.a.service → /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service.
2026-03-10T07:19:29.179 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present
2026-03-10T07:19:29.179 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to enable service . firewalld.service is not available
2026-03-10T07:19:29.179 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mon to start...
2026-03-10T07:19:29.179 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mon...
2026-03-10T07:19:29.360 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:28 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:19:29.360 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:29 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:19:29.360 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:29 vm00 systemd[1]: Started Ceph mon.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953.
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout cluster:
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout id: 534d9c8a-1c51-11f1-ac87-d1fb9a119953
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout health: HEALTH_OK
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout services:
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.0848932s)
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr: no daemons active
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout data:
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout pgs:
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.505+0000 7f6795111640 1 Processor -- start
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.505+0000 7f6795111640 1 -- start start
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.509+0000 7f6795111640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6790108860 0x7f6790108c60 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.509+0000 7f6795111640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f6790109230 con 0x7f6790108860
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.509+0000 7f678ed76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6790108860 0x7f6790108c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.509+0000 7f678ed76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6790108860 0x7f6790108c60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:43470/0 (socket says 192.168.123.100:43470)
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.509+0000 7f678ed76640 1 -- 192.168.123.100:0/3121492497 learned_addr learned my addr 192.168.123.100:0/3121492497 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.509+0000 7f678ed76640 1 -- 192.168.123.100:0/3121492497 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6790109a60 con 0x7f6790108860
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.509+0000 7f678ed76640 1 --2- 192.168.123.100:0/3121492497 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6790108860 0x7f6790108c60 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f6778009b80 tx=0x7f677802f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=bb69e0805a7f7413 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.509+0000 7f678dd74640 1 -- 192.168.123.100:0/3121492497 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f677803c070 con 0x7f6790108860
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.509+0000 7f678dd74640 1 -- 192.168.123.100:0/3121492497 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f677802fb40 con 0x7f6790108860
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.509+0000 7f678dd74640 1 -- 192.168.123.100:0/3121492497 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f677802fe40 con 0x7f6790108860
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.509+0000 7f6795111640 1 -- 192.168.123.100:0/3121492497 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6790108860 msgr2=0x7f6790108c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.509+0000 7f6795111640 1 --2- 192.168.123.100:0/3121492497 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6790108860 0x7f6790108c60 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f6778009b80 tx=0x7f677802f190 comp rx=0 tx=0).stop
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.509+0000 7f6795111640 1 -- 192.168.123.100:0/3121492497 shutdown_connections
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.509+0000 7f6795111640 1 --2- 192.168.123.100:0/3121492497 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6790108860 0x7f6790108c60 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.509+0000 7f6795111640 1 -- 192.168.123.100:0/3121492497 >> 192.168.123.100:0/3121492497 conn(0x7f679007bda0 msgr2=0x7f679007c1b0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.509+0000 7f6795111640 1 -- 192.168.123.100:0/3121492497 shutdown_connections
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.509+0000 7f6795111640 1 -- 192.168.123.100:0/3121492497 wait complete.
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.513+0000 7f6795111640 1 Processor -- start
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.513+0000 7f6795111640 1 -- start start
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.513+0000 7f6795111640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6790108860 0x7f679019deb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.513+0000 7f6795111640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f6790109f90 con 0x7f6790108860
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.513+0000 7f678ed76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6790108860 0x7f679019deb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.513+0000 7f678ed76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6790108860 0x7f679019deb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:43474/0 (socket says 192.168.123.100:43474)
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.513+0000 7f678ed76640 1 -- 192.168.123.100:0/4037744312 learned_addr learned my addr 192.168.123.100:0/4037744312 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.513+0000 7f678ed76640 1 -- 192.168.123.100:0/4037744312 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f679019e3f0 con 0x7f6790108860
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.513+0000 7f678ed76640 1 --2- 192.168.123.100:0/4037744312 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6790108860 0x7f679019deb0 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7f6778002410 tx=0x7f67780047c0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.513+0000 7f676ffff640 1 -- 192.168.123.100:0/4037744312 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f677803c050 con 0x7f6790108860
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.513+0000 7f676ffff640 1 -- 192.168.123.100:0/4037744312 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f677803d040 con 0x7f6790108860
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.513+0000 7f676ffff640 1 -- 192.168.123.100:0/4037744312 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f67780375a0 con 0x7f6790108860
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.513+0000 7f6795111640 1 -- 192.168.123.100:0/4037744312 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f679019e680 con 0x7f6790108860
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.513+0000 7f6795111640 1 -- 192.168.123.100:0/4037744312 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f679019eaa0 con 0x7f6790108860
2026-03-10T07:19:29.603 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.513+0000 7f676ffff640 1 -- 192.168.123.100:0/4037744312 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7f6778037740 con 0x7f6790108860
2026-03-10T07:19:29.604 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.513+0000 7f676ffff640 1 -- 192.168.123.100:0/4037744312 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f6778041800 con 0x7f6790108860
2026-03-10T07:19:29.604 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.513+0000 7f6795111640 1 -- 192.168.123.100:0/4037744312 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f675c005180 con 0x7f6790108860
2026-03-10T07:19:29.604 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.517+0000 7f676ffff640 1 -- 192.168.123.100:0/4037744312 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7f677802fa50 con 0x7f6790108860
2026-03-10T07:19:29.604 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.545+0000 7f6795111640 1 -- 192.168.123.100:0/4037744312 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "status"} v 0) -- 0x7f675c005740 con 0x7f6790108860
2026-03-10T07:19:29.604 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.549+0000 7f676ffff640 1 -- 192.168.123.100:0/4037744312 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "status"}]=0 v0) ==== 54+0+318 (secure 0 0 0) 0x7f6778037b50 con 0x7f6790108860
2026-03-10T07:19:29.604 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.549+0000 7f6795111640 1 -- 192.168.123.100:0/4037744312 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6790108860 msgr2=0x7f679019deb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:19:29.604 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.549+0000 7f6795111640 1 --2- 192.168.123.100:0/4037744312 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6790108860 0x7f679019deb0 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7f6778002410 tx=0x7f67780047c0 comp rx=0 tx=0).stop
2026-03-10T07:19:29.604 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.549+0000 7f6795111640 1 -- 192.168.123.100:0/4037744312 shutdown_connections
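[editor's note] The systemd complaints above come from the unit file cephadm installs for the mon (ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service, line 23 of which sets KillMode=none per the journal message), and the non-zero exit from systemctl reset-failed is harmless here: the unit had never been loaded before this first start. A minimal sketch of how one could silence the deprecation warning on such a host; the drop-in path and the KillMode=mixed choice are illustrative assumptions, not something this job ran:

    # hypothetical drop-in override; not executed in this run
    mkdir -p /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service.d
    printf '[Service]\nKillMode=mixed\n' \
      > /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service.d/override.conf
    systemctl daemon-reload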
2026-03-10T07:19:29.604 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.549+0000 7f6795111640 1 --2- 192.168.123.100:0/4037744312 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6790108860 0x7f679019deb0 unknown :-1 s=CLOSED pgs=2 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:29.604 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.549+0000 7f6795111640 1 -- 192.168.123.100:0/4037744312 >> 192.168.123.100:0/4037744312 conn(0x7f679007bda0 msgr2=0x7f67901921d0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:19:29.604 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.549+0000 7f6795111640 1 -- 192.168.123.100:0/4037744312 shutdown_connections
2026-03-10T07:19:29.604 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:29.549+0000 7f6795111640 1 -- 192.168.123.100:0/4037744312 wait complete.
2026-03-10T07:19:29.604 INFO:teuthology.orchestra.run.vm00.stdout:mon is available
2026-03-10T07:19:29.604 INFO:teuthology.orchestra.run.vm00.stdout:Assimilating anything we can from ceph.conf...
2026-03-10T07:19:29.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:29 vm00 bash[20219]: cluster 2026-03-10T07:19:29.470311+0000 mon.a (mon.0) 0 : cluster [INF] mkfs 534d9c8a-1c51-11f1-ac87-d1fb9a119953
2026-03-10T07:19:29.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:29 vm00 bash[20219]: cluster 2026-03-10T07:19:29.470311+0000 mon.a (mon.0) 0 : cluster [INF] mkfs 534d9c8a-1c51-11f1-ac87-d1fb9a119953
2026-03-10T07:19:29.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:29 vm00 bash[20219]: cluster 2026-03-10T07:19:29.464238+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T07:19:29.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:29 vm00 bash[20219]: cluster 2026-03-10T07:19:29.464238+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T07:19:30.210 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T07:19:30.210 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [global]
2026-03-10T07:19:30.210 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout fsid = 534d9c8a-1c51-11f1-ac87-d1fb9a119953
2026-03-10T07:19:30.210 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-10T07:19:30.210 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.100:3300,v1:192.168.123.100:6789]
2026-03-10T07:19:30.210 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-10T07:19:30.210 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-10T07:19:30.210 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-10T07:19:30.210 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-10T07:19:30.210 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T07:19:30.210 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [osd]
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.097+0000 7fc1fa8fa640 1 Processor -- start
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.097+0000 7fc1fa8fa640 1 -- start start
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.097+0000 7fc1fa8fa640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc1f4108860 0x7fc1f4108c60 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1fa8fa640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fc1f4109230 con 0x7fc1f4108860
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1f3fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc1f4108860 0x7fc1f4108c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1f3fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc1f4108860 0x7fc1f4108c60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:43484/0 (socket says 192.168.123.100:43484)
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1f3fff640 1 -- 192.168.123.100:0/1097256218 learned_addr learned my addr 192.168.123.100:0/1097256218 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1f3fff640 1 -- 192.168.123.100:0/1097256218 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc1f4109a60 con 0x7fc1f4108860
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1f3fff640 1 --2- 192.168.123.100:0/1097256218 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc1f4108860 0x7fc1f4108c60 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7fc1e4009920 tx=0x7fc1e402ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=409b8c78c5d98d33 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1f2ffd640 1 -- 192.168.123.100:0/1097256218 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc1e403c070 con 0x7fc1f4108860
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1f2ffd640 1 -- 192.168.123.100:0/1097256218 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fc1e402fae0 con 0x7fc1f4108860
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1fa8fa640 1 -- 192.168.123.100:0/1097256218 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc1f4108860 msgr2=0x7fc1f4108c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1fa8fa640 1 --2- 192.168.123.100:0/1097256218 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc1f4108860 0x7fc1f4108c60 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7fc1e4009920 tx=0x7fc1e402ef20 comp rx=0 tx=0).stop
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1fa8fa640 1 -- 192.168.123.100:0/1097256218 shutdown_connections
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1fa8fa640 1 --2- 192.168.123.100:0/1097256218 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc1f4108860 0x7fc1f4108c60 unknown :-1 s=CLOSED pgs=3 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1fa8fa640 1 -- 192.168.123.100:0/1097256218 >> 192.168.123.100:0/1097256218 conn(0x7fc1f407bda0 msgr2=0x7fc1f407c1b0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1fa8fa640 1 -- 192.168.123.100:0/1097256218 shutdown_connections
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1fa8fa640 1 -- 192.168.123.100:0/1097256218 wait complete.
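[editor's note] The "Assimilating anything we can from ceph.conf..." step above appears to be cephadm driving the real `ceph config assimilate-conf` command: the bootstrap ceph.conf is fed into the mon's central config database, and the command prints back a conf containing whatever it keeps file-local (the [global]/[mgr]/[osd] dump above, fsid and mon_host among them). Run by hand it would look roughly like this; the input/output paths are assumptions for illustration only:

    # sketch of the equivalent manual invocation; paths are hypothetical
    ceph config assimilate-conf -i /etc/ceph/ceph.conf -o /etc/ceph/ceph.conf.leftover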
2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1fa8fa640 1 Processor -- start 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1fa8fa640 1 -- start start 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1fa8fa640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc1f4108860 0x7fc1f4080470 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1fa8fa640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fc1f4109f90 con 0x7fc1f4108860 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1f3fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc1f4108860 0x7fc1f4080470 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1f3fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc1f4108860 0x7fc1f4080470 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:43498/0 (socket says 192.168.123.100:43498) 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1f3fff640 1 -- 192.168.123.100:0/3433314575 learned_addr learned my addr 192.168.123.100:0/3433314575 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1f3fff640 1 -- 192.168.123.100:0/3433314575 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc1f40809b0 con 0x7fc1f4108860 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1f3fff640 1 --2- 192.168.123.100:0/3433314575 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc1f4108860 0x7fc1f4080470 secure :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0x7fc1e402f450 tx=0x7fc1e40047c0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1f17fa640 1 -- 192.168.123.100:0/3433314575 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc1e4047020 con 0x7fc1f4108860 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1f17fa640 1 -- 192.168.123.100:0/3433314575 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fc1e4042660 con 0x7fc1f4108860 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1f17fa640 1 -- 192.168.123.100:0/3433314575 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc1e403c040 con 0x7fc1f4108860 2026-03-10T07:19:30.211 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1fa8fa640 1 -- 192.168.123.100:0/3433314575 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fc1f4080c40 con 0x7fc1f4108860 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.101+0000 7fc1fa8fa640 1 -- 192.168.123.100:0/3433314575 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fc1f407cfb0 con 0x7fc1f4108860 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.105+0000 7fc1f17fa640 1 -- 192.168.123.100:0/3433314575 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7fc1e4054050 con 0x7fc1f4108860 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.105+0000 7fc1f17fa640 1 -- 192.168.123.100:0/3433314575 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7fc1e4043920 con 0x7fc1f4108860 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.105+0000 7fc1fa8fa640 1 -- 192.168.123.100:0/3433314575 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fc1b8005180 con 0x7fc1f4108860 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.105+0000 7fc1f17fa640 1 -- 192.168.123.100:0/3433314575 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7fc1e4042de0 con 0x7fc1f4108860 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.137+0000 7fc1fa8fa640 1 -- 192.168.123.100:0/3433314575 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "config assimilate-conf"} v 0) -- 0x7fc1b8003c00 con 0x7fc1f4108860 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.141+0000 7fc1f17fa640 1 -- 192.168.123.100:0/3433314575 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "config assimilate-conf"}]=0 v2) ==== 70+0+380 (secure 0 0 0) 0x7fc1e4043420 con 0x7fc1f4108860 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.141+0000 7fc1f17fa640 1 -- 192.168.123.100:0/3433314575 <== mon.0 v2:192.168.123.100:3300/0 8 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fc1e402fa10 con 0x7fc1f4108860 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.145+0000 7fc1fa8fa640 1 -- 192.168.123.100:0/3433314575 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc1f4108860 msgr2=0x7fc1f4080470 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.145+0000 7fc1fa8fa640 1 --2- 192.168.123.100:0/3433314575 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc1f4108860 0x7fc1f4080470 secure :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0x7fc1e402f450 tx=0x7fc1e40047c0 comp rx=0 tx=0).stop 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.145+0000 7fc1fa8fa640 1 -- 
192.168.123.100:0/3433314575 shutdown_connections 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.145+0000 7fc1fa8fa640 1 --2- 192.168.123.100:0/3433314575 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc1f4108860 0x7fc1f4080470 unknown :-1 s=CLOSED pgs=4 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.145+0000 7fc1fa8fa640 1 -- 192.168.123.100:0/3433314575 >> 192.168.123.100:0/3433314575 conn(0x7fc1f407bda0 msgr2=0x7fc1f4194a60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.145+0000 7fc1fa8fa640 1 -- 192.168.123.100:0/3433314575 shutdown_connections 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.145+0000 7fc1fa8fa640 1 -- 192.168.123.100:0/3433314575 wait complete. 2026-03-10T07:19:30.211 INFO:teuthology.orchestra.run.vm00.stdout:Generating new minimal ceph.conf... 2026-03-10T07:19:30.418 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.321+0000 7fd77c847640 1 Processor -- start 2026-03-10T07:19:30.418 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.321+0000 7fd77c847640 1 -- start start 2026-03-10T07:19:30.418 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.321+0000 7fd77c847640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd778104df0 0x7fd778107200 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:30.418 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.321+0000 7fd77c847640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fd77807a6e0 con 0x7fd778104df0 2026-03-10T07:19:30.418 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.321+0000 7fd776575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd778104df0 0x7fd778107200 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:30.418 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.321+0000 7fd776575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd778104df0 0x7fd778107200 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:43512/0 (socket says 192.168.123.100:43512) 2026-03-10T07:19:30.418 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.321+0000 7fd776575640 1 -- 192.168.123.100:0/4081146239 learned_addr learned my addr 192.168.123.100:0/4081146239 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:30.418 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.321+0000 7fd776575640 1 -- 192.168.123.100:0/4081146239 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd778107740 con 0x7fd778104df0 2026-03-10T07:19:30.418 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.321+0000 7fd776575640 1 --2- 192.168.123.100:0/4081146239 >> 
[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd778104df0 0x7fd778107200 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7fd76c009920 tx=0x7fd76c02ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=248c4c8e5bbb73d5 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:30.418 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.321+0000 7fd775573640 1 -- 192.168.123.100:0/4081146239 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd76c03c070 con 0x7fd778104df0 2026-03-10T07:19:30.418 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.321+0000 7fd775573640 1 -- 192.168.123.100:0/4081146239 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fd76c037440 con 0x7fd778104df0 2026-03-10T07:19:30.418 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd77c847640 1 -- 192.168.123.100:0/4081146239 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd778104df0 msgr2=0x7fd778107200 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:30.418 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd77c847640 1 --2- 192.168.123.100:0/4081146239 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd778104df0 0x7fd778107200 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7fd76c009920 tx=0x7fd76c02ef20 comp rx=0 tx=0).stop 2026-03-10T07:19:30.418 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd77c847640 1 -- 192.168.123.100:0/4081146239 shutdown_connections 2026-03-10T07:19:30.418 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd77c847640 1 --2- 192.168.123.100:0/4081146239 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd778104df0 0x7fd778107200 unknown :-1 s=CLOSED pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:30.418 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd77c847640 1 -- 192.168.123.100:0/4081146239 >> 192.168.123.100:0/4081146239 conn(0x7fd778100c60 msgr2=0x7fd778103080 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:30.418 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd77c847640 1 -- 192.168.123.100:0/4081146239 shutdown_connections 2026-03-10T07:19:30.418 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd77c847640 1 -- 192.168.123.100:0/4081146239 wait complete. 
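Note: the two mon_command round-trips in this stretch of the log ({"prefix": "config assimilate-conf"} above, {"prefix": "config generate-minimal-conf"} just below) are how cephadm seeds the cluster configuration database from the bootstrap ceph.conf and then writes back a minimal client conf. A rough Python sketch of the same pair of CLI calls follows; the helper name and file paths are illustrative, not taken from this run:

    import subprocess

    def assimilate_and_minimize(seed_conf="/etc/ceph/ceph.conf",
                                out_conf="/etc/ceph/ceph.minimal.conf"):
        # Push an ini-style conf into the mon config store; this is the
        # mon_command({"prefix": "config assimilate-conf"}) seen above.
        subprocess.run(["ceph", "config", "assimilate-conf", "-i", seed_conf],
                       check=True)
        # Ask the mon for just enough config (fsid + mon_host) for a client
        # to reach the monitors; this is the
        # mon_command({"prefix": "config generate-minimal-conf"}) below.
        minimal = subprocess.run(["ceph", "config", "generate-minimal-conf"],
                                 check=True, capture_output=True,
                                 text=True).stdout
        with open(out_conf, "w") as fh:
            fh.write(minimal)
        return minimal

Each CLI invocation spins up its own messenger client (note the changing nonce in the peer address, e.g. .../4081146239 vs .../3768743452), which is why the connect/banner/hello/ready/mark_down cycle repeats with `debug ms: 1` set.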
2026-03-10T07:19:30.418 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd77c847640 1 Processor -- start 2026-03-10T07:19:30.418 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd77c847640 1 -- start start 2026-03-10T07:19:30.418 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd77c847640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd778104df0 0x7fd7781a2bb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd77c847640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fd7781085c0 con 0x7fd778104df0 2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd776575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd778104df0 0x7fd7781a2bb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd776575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd778104df0 0x7fd7781a2bb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:43522/0 (socket says 192.168.123.100:43522) 2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd776575640 1 -- 192.168.123.100:0/3768743452 learned_addr learned my addr 192.168.123.100:0/3768743452 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd776575640 1 -- 192.168.123.100:0/3768743452 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd7781a30f0 con 0x7fd778104df0 2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd776575640 1 --2- 192.168.123.100:0/3768743452 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd778104df0 0x7fd7781a2bb0 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7fd76c037b40 tx=0x7fd76c037b70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd7637fe640 1 -- 192.168.123.100:0/3768743452 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd76c03c070 con 0x7fd778104df0 2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd7637fe640 1 -- 192.168.123.100:0/3768743452 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fd76c045070 con 0x7fd778104df0 2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd77c847640 1 -- 192.168.123.100:0/3768743452 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fd7781a3380 con 0x7fd778104df0 2026-03-10T07:19:30.419 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd7637fe640 1 -- 192.168.123.100:0/3768743452 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd76c040ab0 con 0x7fd778104df0 2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd77c847640 1 -- 192.168.123.100:0/3768743452 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fd7781a6070 con 0x7fd778104df0 2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.325+0000 7fd7637fe640 1 -- 192.168.123.100:0/3768743452 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7fd76c04a460 con 0x7fd778104df0 2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.329+0000 7fd7637fe640 1 -- 192.168.123.100:0/3768743452 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7fd76c037d30 con 0x7fd778104df0 2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.329+0000 7fd77c847640 1 -- 192.168.123.100:0/3768743452 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd7781051f0 con 0x7fd778104df0 2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.329+0000 7fd7637fe640 1 -- 192.168.123.100:0/3768743452 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7fd76c040d60 con 0x7fd778104df0 2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.361+0000 7fd77c847640 1 -- 192.168.123.100:0/3768743452 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "config generate-minimal-conf"} v 0) -- 0x7fd77819bf10 con 0x7fd778104df0 2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.365+0000 7fd7637fe640 1 -- 192.168.123.100:0/3768743452 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "config generate-minimal-conf"}]=0 v2) ==== 76+0+181 (secure 0 0 0) 0x7fd76c035990 con 0x7fd778104df0 2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.365+0000 7fd77c847640 1 -- 192.168.123.100:0/3768743452 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd778104df0 msgr2=0x7fd7781a2bb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.365+0000 7fd77c847640 1 --2- 192.168.123.100:0/3768743452 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd778104df0 0x7fd7781a2bb0 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7fd76c037b40 tx=0x7fd76c037b70 comp rx=0 tx=0).stop 2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.365+0000 7fd77c847640 1 -- 192.168.123.100:0/3768743452 shutdown_connections 2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.365+0000 7fd77c847640 1 --2- 192.168.123.100:0/3768743452 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd778104df0 
0x7fd7781a2bb0 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.365+0000 7fd77c847640 1 -- 192.168.123.100:0/3768743452 >> 192.168.123.100:0/3768743452 conn(0x7fd778100c60 msgr2=0x7fd77810ad00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.365+0000 7fd77c847640 1 -- 192.168.123.100:0/3768743452 shutdown_connections
2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.365+0000 7fd77c847640 1 -- 192.168.123.100:0/3768743452 wait complete.
2026-03-10T07:19:30.419 INFO:teuthology.orchestra.run.vm00.stdout:Restarting the monitor...
2026-03-10T07:19:30.599 INFO:teuthology.orchestra.run.vm00.stdout:Setting public_network to 192.168.123.1/32,192.168.123.0/24 in mon config section
2026-03-10T07:19:30.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 systemd[1]: Stopping Ceph mon.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953...
2026-03-10T07:19:30.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20219]: debug 2026-03-10T07:19:30.449+0000 7fb42fe81640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-10T07:19:30.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20219]: debug 2026-03-10T07:19:30.449+0000 7fb42fe81640 -1 mon.a@0(leader) e1 *** Got Signal Terminated ***
2026-03-10T07:19:30.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20617]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953-mon-a
2026-03-10T07:19:30.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 systemd[1]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mon.a.service: Deactivated successfully.
2026-03-10T07:19:30.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 systemd[1]: Stopped Ceph mon.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953.
2026-03-10T07:19:30.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 systemd[1]: Started Ceph mon.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953.
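Note: the restart just logged has two visible steps: public_network is stored in the mon section of the config database, and the per-fsid systemd unit (ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mon.a.service) is bounced, producing the Stopping/Stopped/Started lines. A minimal sketch of the same two steps, assuming a host with the ceph CLI and the cephadm-managed unit available; the helper name is illustrative:

    import subprocess

    FSID = "534d9c8a-1c51-11f1-ac87-d1fb9a119953"  # fsid from this run

    def set_public_network_and_restart_mon(
            nets="192.168.123.1/32,192.168.123.0/24", mon_id="a"):
        # Store public_network under the mon section of the config database,
        # matching "Setting public_network ... in mon config section" above.
        subprocess.run(["ceph", "config", "set", "mon", "public_network", nets],
                       check=True)
        # cephadm runs daemons as per-fsid systemd template units; restarting
        # the unit yields the systemd Stopping/Stopped/Started entries above.
        subprocess.run(["systemctl", "restart", f"ceph-{FSID}@mon.{mon_id}"],
                       check=True)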
2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.721+0000 7fad01e88d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.721+0000 7fad01e88d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.721+0000 7fad01e88d80 0 pidfile_write: ignore empty --pid-file 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 0 load: jerasure load: lrc 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Git sha 0 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: DB SUMMARY 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: DB Session ID: 88N1FZEPJAQWBSAX8ZHY 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: CURRENT file: CURRENT 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 77697 ; 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.error_if_exists: 0 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.create_if_missing: 0 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.env: 0x55898dc30dc0 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.info_log: 0x558995abc700 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.statistics: (nil) 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.use_fsync: 0 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.use_direct_reads: 0 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T07:19:31.012 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-10T07:19:31.012 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.db_log_dir: 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.wal_dir: 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.write_buffer_manager: 0x558995ac1900 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 
rocksdb: Options.enable_pipelined_write: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.unordered_write: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.row_cache: None 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.wal_filter: None 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.two_write_queues: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.wal_compression: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.atomic_flush: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: 
debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_open_files: -1 2026-03-10T07:19:31.013 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: 
Options.bytes_per_sync: 0 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Compression algorithms supported: 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: kZSTD supported: 0 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: kXpressCompression supported: 0 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: kZlibCompression supported: 1 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 
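Note: everything tagged `rocksdb:` from here through the recovery messages below is RocksDB 7.9.2 (per the startup banner) echoing its effective options while the mon opens /var/lib/ceph/mon/ceph-a/store.db. When two runs behave differently, it can help to reduce this dump to key/value pairs for diffing; a small illustrative parser over a saved journal capture (the file name is hypothetical):

    import re

    # Matches the "rocksdb: Options.<name>: <value>" fragments in entries
    # like the ones above, tolerating the journalctl/bash prefixes.
    OPT_RE = re.compile(r"rocksdb: (Options\.[\w\[\]\.]+)\s*:?\s*(.*)$")

    def rocksdb_options(log_path="mon.a.journal.txt"):
        opts = {}
        with open(log_path) as fh:
            for line in fh:
                m = OPT_RE.search(line)
                if m:
                    opts[m.group(1)] = m.group(2).strip()
        return opts

    # e.g. rocksdb_options()["Options.max_open_files"] -> "-1"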
2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.merge_operator: 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compaction_filter: None 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558995abc640) 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cache_index_and_filter_blocks: 1 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: pin_top_level_index_and_filter: 1 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: index_type: 0 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: data_block_index_type: 0 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: index_shortening: 1 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: data_block_hash_table_util_ratio: 0.750000 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: checksum: 4 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: no_block_cache: 0 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: block_cache: 0x558995ae3350 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: block_cache_name: BinnedLRUCache 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: block_cache_options: 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: capacity : 536870912 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: num_shard_bits : 4 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: strict_capacity_limit : 0 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: high_pri_pool_ratio: 0.000 2026-03-10T07:19:31.014 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: block_cache_compressed: (nil) 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: persistent_cache: (nil) 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: block_size: 4096 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: block_size_deviation: 10 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: block_restart_interval: 16 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: index_block_restart_interval: 1 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: metadata_block_size: 4096 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: partition_filters: 0 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: use_delta_encoding: 1 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: filter_policy: bloomfilter 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: whole_key_filtering: 1 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: verify_compression: 0 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: read_amp_bytes_per_bit: 0 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: format_version: 5 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: enable_index_compression: 1 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: block_align: 0 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: max_auto_readahead_size: 262144 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: prepopulate_block_cache: 0 2026-03-10T07:19:31.014 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: initial_auto_readahead_size: 8192 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: num_file_reads_for_auto_readahead: 2 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compression: NoCompression 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.num_levels: 7 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T07:19:31.015 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T07:19:31.015 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 
7fad01e88d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 
rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.bloom_locality: 0 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.ttl: 2592000 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T07:19:31.016 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.enable_blob_files: false 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.min_blob_size: 0 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.725+0000 7fad01e88d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.733+0000 7fad01e88d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.733+0000 7fad01e88d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.733+0000 7fad01e88d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 6036f867-0119-4270-a0f7-8ef658da81e7 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.733+0000 7fad01e88d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773127170741280, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.733+0000 7fad01e88d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-10T07:19:31.016 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.737+0000 7fad01e88d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773127170742989, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 74603, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 235, "table_properties": {"data_size": 72803, "index_size": 189, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 581, "raw_key_size": 10270, "raw_average_key_size": 49, "raw_value_size": 67021, "raw_average_value_size": 325, "num_data_blocks": 8, "num_entries": 206, "num_filter_entries": 206, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773127170, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6036f867-0119-4270-a0f7-8ef658da81e7", "db_session_id": "88N1FZEPJAQWBSAX8ZHY", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.737+0000 7fad01e88d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773127170743039, "job": 1, "event": "recovery_finished"} 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.737+0000 7fad01e88d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.741+0000 7fad01e88d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.741+0000 7fad01e88d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558995ae4e00 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.741+0000 7fad01e88d80 4 rocksdb: DB pointer 0x558995bfa000 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.741+0000 7facf7c52640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.741+0000 7facf7c52640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: ** DB Stats ** 2026-03-10T07:19:31.016 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: 
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: ** Compaction Stats [default] ** 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: L0 2/0 74.71 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 47.1 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: Sum 2/0 74.71 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 47.1 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 47.1 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: ** Compaction Stats [default] ** 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 47.1 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: Flush(GB): cumulative 0.000, interval 
0.000 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: AddFile(Total Files): cumulative 0, interval 0 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: AddFile(Keys): cumulative 0, interval 0 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: Cumulative compaction: 0.00 GB write, 4.46 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: Interval compaction: 0.00 GB write, 4.46 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: Block cache BinnedLRUCache@0x558995ae3350#7 capacity: 512.00 MB usage: 26.75 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 6e-06 secs_since: 0 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: Block cache entry stats(count,size,portion): DataBlock(3,25.61 KB,0.0048846%) FilterBlock(2,0.77 KB,0.000146031%) IndexBlock(2,0.38 KB,7.15256e-05%) Misc(1,0.00 KB,0%) 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: ** File Read Latency Histogram By Level [default] ** 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.745+0000 7fad01e88d80 0 starting mon.a rank 0 at public addrs [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] at bind addrs [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.745+0000 7fad01e88d80 1 mon.a@-1(???) 
e1 preinit fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.745+0000 7fad01e88d80 0 mon.a@-1(???).mds e1 new map 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.745+0000 7fad01e88d80 0 mon.a@-1(???).mds e1 print_map 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: e1 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: btime 2026-03-10T07:19:29:469789+0000 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: legacy client fscid: -1 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: No filesystems configured 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.745+0000 7fad01e88d80 0 mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.745+0000 7fad01e88d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.745+0000 7fad01e88d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.745+0000 7fad01e88d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.745+0000 7fad01e88d80 1 mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.745+0000 7fad01e88d80 4 mon.a@-1(???).mgr e0 loading version 1 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.745+0000 7fad01e88d80 4 mon.a@-1(???).mgr e1 active server: (0) 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: debug 2026-03-10T07:19:30.745+0000 7fad01e88d80 4 mon.a@-1(???).mgr e1 mkfs or daemon transitioned to available, loading commands 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756185+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T07:19:31.017 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756185+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756210+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756210+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756213+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756213+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756215+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-10T07:19:27.999189+0000 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756215+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-10T07:19:27.999189+0000 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756222+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-10T07:19:27.999189+0000 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756222+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-10T07:19:27.999189+0000 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756225+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756225+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756227+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756227+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-10T07:19:31.017 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756230+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T07:19:31.018 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756230+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T07:19:31.018 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756454+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-10T07:19:31.018 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756454+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-10T07:19:31.018 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756465+0000 
mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-10T07:19:31.018 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756465+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-10T07:19:31.018 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756929+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-10T07:19:31.018 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:30 vm00 bash[20701]: cluster 2026-03-10T07:19:30.756929+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.749+0000 7ffb0ca03640 1 Processor -- start 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.749+0000 7ffb0ca03640 1 -- start start 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.749+0000 7ffb0ca03640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ffb0810e380 0x7ffb0810e780 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.749+0000 7ffb0ca03640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7ffb0810ecc0 con 0x7ffb0810e380 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.749+0000 7ffb06575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ffb0810e380 0x7ffb0810e780 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.749+0000 7ffb06575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ffb0810e380 0x7ffb0810e780 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:43526/0 (socket says 192.168.123.100:43526) 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.749+0000 7ffb06575640 1 -- 192.168.123.100:0/1646001924 learned_addr learned my addr 192.168.123.100:0/1646001924 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.749+0000 7ffb06575640 1 -- 192.168.123.100:0/1646001924 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ffb0810e380 msgr2=0x7ffb0810e780 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=0).read_bulk peer close file descriptor 12 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.749+0000 7ffb06575640 1 -- 192.168.123.100:0/1646001924 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ffb0810e380 msgr2=0x7ffb0810e780 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=0).read_until read failed 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.749+0000 7ffb06575640 1 --2- 192.168.123.100:0/1646001924 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ffb0810e380 0x7ffb0810e780 unknown :-1 
s=AUTH_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_read_frame_preamble_main read frame preamble failed r=-1 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.749+0000 7ffb06575640 1 --2- 192.168.123.100:0/1646001924 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ffb0810e380 0x7ffb0810e780 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb06575640 1 --2- 192.168.123.100:0/1646001924 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ffb0810e380 0x7ffb0810e780 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb06575640 1 -- 192.168.123.100:0/1646001924 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ffb0810ee40 con 0x7ffb0810e380 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb06575640 1 --2- 192.168.123.100:0/1646001924 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ffb0810e380 0x7ffb0810e780 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7ffb00009ad0 tx=0x7ffb0002f540 comp rx=0 tx=0).ready entity=mon.0 client_cookie=749769f0aefcb281 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb05573640 1 -- 192.168.123.100:0/1646001924 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ffb0003d070 con 0x7ffb0810e380 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb05573640 1 -- 192.168.123.100:0/1646001924 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7ffb000359d0 con 0x7ffb0810e380 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb05573640 1 -- 192.168.123.100:0/1646001924 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ffb000388c0 con 0x7ffb0810e380 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb0ca03640 1 -- 192.168.123.100:0/1646001924 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ffb0810e380 msgr2=0x7ffb0810e780 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb0ca03640 1 --2- 192.168.123.100:0/1646001924 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ffb0810e380 0x7ffb0810e780 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7ffb00009ad0 tx=0x7ffb0002f540 comp rx=0 tx=0).stop 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb0ca03640 1 -- 192.168.123.100:0/1646001924 shutdown_connections 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb0ca03640 1 --2- 192.168.123.100:0/1646001924 >> 
[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ffb0810e380 0x7ffb0810e780 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb0ca03640 1 -- 192.168.123.100:0/1646001924 >> 192.168.123.100:0/1646001924 conn(0x7ffb0806fc30 msgr2=0x7ffb08072050 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb0ca03640 1 -- 192.168.123.100:0/1646001924 shutdown_connections 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb0ca03640 1 -- 192.168.123.100:0/1646001924 wait complete. 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb0ca03640 1 Processor -- start 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb0ca03640 1 -- start start 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb0ca03640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ffb0810e380 0x7ffb081ab400 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb0ca03640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7ffb08074c50 con 0x7ffb0810e380 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb06575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ffb0810e380 0x7ffb081ab400 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb06575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ffb0810e380 0x7ffb081ab400 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:43546/0 (socket says 192.168.123.100:43546) 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb06575640 1 -- 192.168.123.100:0/1827183523 learned_addr learned my addr 192.168.123.100:0/1827183523 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb06575640 1 -- 192.168.123.100:0/1827183523 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ffb081ab940 con 0x7ffb0810e380 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb06575640 1 --2- 192.168.123.100:0/1827183523 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ffb0810e380 0x7ffb081ab400 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7ffb00037910 tx=0x7ffb00037940 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 
2026-03-10T07:19:30.953+0000 7ffaf77fe640 1 -- 192.168.123.100:0/1827183523 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ffb00047020 con 0x7ffb0810e380 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffaf77fe640 1 -- 192.168.123.100:0/1827183523 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7ffb00035df0 con 0x7ffb0810e380 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb0ca03640 1 -- 192.168.123.100:0/1827183523 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7ffb081abbd0 con 0x7ffb0810e380 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb0ca03640 1 -- 192.168.123.100:0/1827183523 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7ffb081ae8c0 con 0x7ffb0810e380 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffaf77fe640 1 -- 192.168.123.100:0/1827183523 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ffb0003d070 con 0x7ffb0810e380 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.953+0000 7ffb0ca03640 1 -- 192.168.123.100:0/1827183523 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ffacc005180 con 0x7ffb0810e380 2026-03-10T07:19:31.050 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.957+0000 7ffaf77fe640 1 -- 192.168.123.100:0/1827183523 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7ffb000435d0 con 0x7ffb0810e380 2026-03-10T07:19:31.051 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.957+0000 7ffaf77fe640 1 -- 192.168.123.100:0/1827183523 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7ffb00035070 con 0x7ffb0810e380 2026-03-10T07:19:31.051 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.957+0000 7ffaf77fe640 1 -- 192.168.123.100:0/1827183523 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7ffb0004c5c0 con 0x7ffb0810e380 2026-03-10T07:19:31.051 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.989+0000 7ffb0ca03640 1 -- 192.168.123.100:0/1827183523 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command([{prefix=config set, name=public_network}] v 0) -- 0x7ffacc005470 con 0x7ffb0810e380 2026-03-10T07:19:31.051 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.993+0000 7ffaf77fe640 1 -- 192.168.123.100:0/1827183523 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{prefix=config set, name=public_network}]=0 v3) ==== 144+0+0 (secure 0 0 0) 0x7ffb000438f0 con 0x7ffb0810e380 2026-03-10T07:19:31.051 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.993+0000 7ffb0ca03640 1 -- 192.168.123.100:0/1827183523 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ffb0810e380 msgr2=0x7ffb081ab400 secure :-1 s=STATE_CONNECTION_ESTABLISHED
l=1).mark_down 2026-03-10T07:19:31.051 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.993+0000 7ffb0ca03640 1 --2- 192.168.123.100:0/1827183523 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ffb0810e380 0x7ffb081ab400 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7ffb00037910 tx=0x7ffb00037940 comp rx=0 tx=0).stop 2026-03-10T07:19:31.051 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.993+0000 7ffb0ca03640 1 -- 192.168.123.100:0/1827183523 shutdown_connections 2026-03-10T07:19:31.051 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.993+0000 7ffb0ca03640 1 --2- 192.168.123.100:0/1827183523 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ffb0810e380 0x7ffb081ab400 unknown :-1 s=CLOSED pgs=2 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:31.051 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.993+0000 7ffb0ca03640 1 -- 192.168.123.100:0/1827183523 >> 192.168.123.100:0/1827183523 conn(0x7ffb0806fc30 msgr2=0x7ffb080706d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:31.051 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.993+0000 7ffb0ca03640 1 -- 192.168.123.100:0/1827183523 shutdown_connections 2026-03-10T07:19:31.051 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:30.993+0000 7ffb0ca03640 1 -- 192.168.123.100:0/1827183523 wait complete. 2026-03-10T07:19:31.051 INFO:teuthology.orchestra.run.vm00.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-10T07:19:31.051 INFO:teuthology.orchestra.run.vm00.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-10T07:19:31.051 INFO:teuthology.orchestra.run.vm00.stdout:Creating mgr... 2026-03-10T07:19:31.051 INFO:teuthology.orchestra.run.vm00.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-10T07:19:31.051 INFO:teuthology.orchestra.run.vm00.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-10T07:19:31.238 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mgr.y 2026-03-10T07:19:31.238 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to reset failed state of unit ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mgr.y.service: Unit ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mgr.y.service not loaded. 2026-03-10T07:19:31.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:31 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:19:31.436 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953.target.wants/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mgr.y.service → /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service. 2026-03-10T07:19:31.443 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present 2026-03-10T07:19:31.443 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to enable service . 
firewalld.service is not available 2026-03-10T07:19:31.443 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present 2026-03-10T07:19:31.443 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available 2026-03-10T07:19:31.443 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr to start... 2026-03-10T07:19:31.443 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr... 2026-03-10T07:19:31.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:31 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:19:31.712 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T07:19:31.712 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T07:19:31.712 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "534d9c8a-1c51-11f1-ac87-d1fb9a119953", 2026-03-10T07:19:31.712 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T07:19:31.712 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T07:19:31.712 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T07:19:31.712 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T07:19:31.712 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:19:31.712 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T07:19:31.712 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T07:19:31.712 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0 2026-03-10T07:19:31.712 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T07:19:31.712 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T07:19:31.712 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T07:19:31.712 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T07:19:31.712 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-10T07:19:31.712 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T07:19:31.712 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 
2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T07:19:29:469789+0000", 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T07:19:29.470592+0000", 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T07:19:31.713 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f7cfc640 1 Processor -- start 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f7cfc640 1 -- start start 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f7cfc640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe9f0074730 0x7fe9f0074b30 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f7cfc640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fe9f0075100 con 0x7fe9f0074730 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f5a71640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe9f0074730 0x7fe9f0074b30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f5a71640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe9f0074730 0x7fe9f0074b30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:43572/0 (socket says 192.168.123.100:43572) 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f5a71640 1 -- 192.168.123.100:0/1327900057 learned_addr learned my addr 192.168.123.100:0/1327900057 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f5a71640 1 -- 192.168.123.100:0/1327900057 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe9f010e1c0 con 0x7fe9f0074730 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f5a71640 1 --2- 192.168.123.100:0/1327900057 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe9f0074730 0x7fe9f0074b30 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7fe9e0009b80 tx=0x7fe9e002f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=b4b156f3fac4c511 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f4a6f640 1 -- 192.168.123.100:0/1327900057 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fe9e003c070 con 0x7fe9f0074730 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f4a6f640 1 -- 192.168.123.100:0/1327900057 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fe9e0037440 con 0x7fe9f0074730 2026-03-10T07:19:31.713 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f7cfc640 1 -- 192.168.123.100:0/1327900057 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe9f0074730 msgr2=0x7fe9f0074b30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f7cfc640 1 --2- 192.168.123.100:0/1327900057 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe9f0074730 0x7fe9f0074b30 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7fe9e0009b80 tx=0x7fe9e002f190 comp rx=0 tx=0).stop 2026-03-10T07:19:31.713 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f7cfc640 1 -- 192.168.123.100:0/1327900057 shutdown_connections 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f7cfc640 1 --2- 192.168.123.100:0/1327900057 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe9f0074730 0x7fe9f0074b30 unknown :-1 s=CLOSED pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f7cfc640 1 -- 192.168.123.100:0/1327900057 >> 192.168.123.100:0/1327900057 conn(0x7fe9f006fa60 msgr2=0x7fe9f0071ea0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f7cfc640 1 -- 192.168.123.100:0/1327900057 shutdown_connections 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f7cfc640 1 -- 192.168.123.100:0/1327900057 wait complete. 
2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f7cfc640 1 Processor -- start 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f7cfc640 1 -- start start 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f7cfc640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe9f0074730 0x7fe9f01a2a60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.593+0000 7fe9f7cfc640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fe9f010eec0 con 0x7fe9f0074730 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.597+0000 7fe9f5a71640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe9f0074730 0x7fe9f01a2a60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.597+0000 7fe9f5a71640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe9f0074730 0x7fe9f01a2a60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:43574/0 (socket says 192.168.123.100:43574) 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.597+0000 7fe9f5a71640 1 -- 192.168.123.100:0/3986233816 learned_addr learned my addr 192.168.123.100:0/3986233816 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.597+0000 7fe9f5a71640 1 -- 192.168.123.100:0/3986233816 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe9f01a2fa0 con 0x7fe9f0074730 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.597+0000 7fe9f5a71640 1 --2- 192.168.123.100:0/3986233816 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe9f0074730 0x7fe9f01a2a60 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7fe9e002f6c0 tx=0x7fe9e00039b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.597+0000 7fe9deffd640 1 -- 192.168.123.100:0/3986233816 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fe9e0047070 con 0x7fe9f0074730 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.597+0000 7fe9f7cfc640 1 -- 192.168.123.100:0/3986233816 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fe9f01a3230 con 0x7fe9f0074730 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.597+0000 7fe9deffd640 1 -- 192.168.123.100:0/3986233816 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fe9e0037440 con 0x7fe9f0074730 2026-03-10T07:19:31.714 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.597+0000 7fe9deffd640 1 -- 192.168.123.100:0/3986233816 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fe9e003c040 con 0x7fe9f0074730 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.597+0000 7fe9f7cfc640 1 -- 192.168.123.100:0/3986233816 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fe9f01a5f20 con 0x7fe9f0074730 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.597+0000 7fe9deffd640 1 -- 192.168.123.100:0/3986233816 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7fe9e0003bd0 con 0x7fe9f0074730 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.597+0000 7fe9deffd640 1 -- 192.168.123.100:0/3986233816 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7fe9e004c430 con 0x7fe9f0074730 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.597+0000 7fe9f7cfc640 1 -- 192.168.123.100:0/3986233816 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fe9c0005180 con 0x7fe9f0074730 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.597+0000 7fe9deffd640 1 -- 192.168.123.100:0/3986233816 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7fe9e0035690 con 0x7fe9f0074730 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.629+0000 7fe9f7cfc640 1 -- 192.168.123.100:0/3986233816 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) -- 0x7fe9c0005740 con 0x7fe9f0074730 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.629+0000 7fe9deffd640 1 -- 192.168.123.100:0/3986233816 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) ==== 79+0+1291 (secure 0 0 0) 0x7fe9e004c680 con 0x7fe9f0074730 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.633+0000 7fe9dcff9640 1 -- 192.168.123.100:0/3986233816 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe9f0074730 msgr2=0x7fe9f01a2a60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.633+0000 7fe9dcff9640 1 --2- 192.168.123.100:0/3986233816 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe9f0074730 0x7fe9f01a2a60 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7fe9e002f6c0 tx=0x7fe9e00039b0 comp rx=0 tx=0).stop 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.633+0000 7fe9dcff9640 1 -- 192.168.123.100:0/3986233816 shutdown_connections 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.633+0000 7fe9dcff9640 1 --2- 192.168.123.100:0/3986233816 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] 
conn(0x7fe9f0074730 0x7fe9f01a2a60 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.633+0000 7fe9dcff9640 1 -- 192.168.123.100:0/3986233816 >> 192.168.123.100:0/3986233816 conn(0x7fe9f006fa60 msgr2=0x7fe9f0071ea0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.633+0000 7fe9dcff9640 1 -- 192.168.123.100:0/3986233816 shutdown_connections 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:31.633+0000 7fe9dcff9640 1 -- 192.168.123.100:0/3986233816 wait complete. 2026-03-10T07:19:31.714 INFO:teuthology.orchestra.run.vm00.stdout:mgr not available, waiting (1/15)... 2026-03-10T07:19:32.004 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:31 vm00 bash[20971]: debug 2026-03-10T07:19:31.797+0000 7fb98fdfa140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T07:19:32.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:31 vm00 bash[20701]: audit 2026-03-10T07:19:30.997564+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.100:0/1827183523' entity='client.admin' 2026-03-10T07:19:32.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:31 vm00 bash[20701]: audit 2026-03-10T07:19:30.997564+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.100:0/1827183523' entity='client.admin' 2026-03-10T07:19:32.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:31 vm00 bash[20701]: audit 2026-03-10T07:19:31.636954+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.100:0/3986233816' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T07:19:32.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:31 vm00 bash[20701]: audit 2026-03-10T07:19:31.636954+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.100:0/3986233816' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T07:19:32.391 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:32 vm00 bash[20971]: debug 2026-03-10T07:19:32.101+0000 7fb98fdfa140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T07:19:32.778 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:32 vm00 bash[20971]: debug 2026-03-10T07:19:32.553+0000 7fb98fdfa140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T07:19:32.779 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:32 vm00 bash[20971]: debug 2026-03-10T07:19:32.641+0000 7fb98fdfa140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T07:19:33.055 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:32 vm00 bash[20971]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T07:19:33.055 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:32 vm00 bash[20971]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T07:19:33.055 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:32 vm00 bash[20971]: from numpy import show_config as show_numpy_config
2026-03-10T07:19:33.055 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:32 vm00 bash[20971]: debug 2026-03-10T07:19:32.773+0000 7fb98fdfa140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T07:19:33.055 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:32 vm00 bash[20971]: debug 2026-03-10T07:19:32.917+0000 7fb98fdfa140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T07:19:33.055 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:32 vm00 bash[20971]: debug 2026-03-10T07:19:32.957+0000 7fb98fdfa140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T07:19:33.055 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:33 vm00 bash[20971]: debug 2026-03-10T07:19:32.997+0000 7fb98fdfa140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T07:19:33.391 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:33 vm00 bash[20971]: debug 2026-03-10T07:19:33.041+0000 7fb98fdfa140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T07:19:33.391 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:33 vm00 bash[20971]: debug 2026-03-10T07:19:33.097+0000 7fb98fdfa140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T07:19:33.800 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:33 vm00 bash[20971]: debug 2026-03-10T07:19:33.533+0000 7fb98fdfa140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T07:19:33.800 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:33 vm00 bash[20971]: debug 2026-03-10T07:19:33.573+0000 7fb98fdfa140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T07:19:33.800 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:33 vm00 bash[20971]: debug 2026-03-10T07:19:33.617+0000 7fb98fdfa140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T07:19:33.957 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T07:19:33.957 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout {
2026-03-10T07:19:33.957 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "534d9c8a-1c51-11f1-ac87-d1fb9a119953",
2026-03-10T07:19:33.957 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": {
2026-03-10T07:19:33.957 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-10T07:19:33.957 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-10T07:19:33.957 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-10T07:19:33.957 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:19:33.957 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-10T07:19:33.957 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-10T07:19:33.957 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0
2026-03-10T07:19:33.957 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T07:19:33.957 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-10T07:19:33.957 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "a"
2026-03-10T07:19:33.957 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T07:19:33.957 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_age": 3,
2026-03-10T07:19:33.957 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T07:19:29.469789+0000",
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": false,
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful"
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T07:19:29.470592+0000",
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c21bd640 1 Processor -- start
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c21bd640 1 -- start start
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c21bd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19bc074120 0x7f19bc074520 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c21bd640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f19bc074a60 con 0x7f19bc074120
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c11bb640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19bc074120 0x7f19bc074520 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c11bb640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19bc074120 0x7f19bc074520 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40008/0 (socket says 192.168.123.100:40008)
2026-03-10T07:19:33.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c11bb640 1 -- 192.168.123.100:0/49617670 learned_addr learned my addr 192.168.123.100:0/49617670 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c11bb640 1 -- 192.168.123.100:0/49617670 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] --
mon_subscribe({config=0+,monmap=0+}) -- 0x7f19bc074be0 con 0x7f19bc074120 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c11bb640 1 --2- 192.168.123.100:0/49617670 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19bc074120 0x7f19bc074520 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f19b0009920 tx=0x7f19b002ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=eb8cb46496ff564d server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19affff640 1 -- 192.168.123.100:0/49617670 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f19b003c070 con 0x7f19bc074120 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19affff640 1 -- 192.168.123.100:0/49617670 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f19b0037440 con 0x7f19bc074120 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c21bd640 1 -- 192.168.123.100:0/49617670 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19bc074120 msgr2=0x7f19bc074520 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c21bd640 1 --2- 192.168.123.100:0/49617670 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19bc074120 0x7f19bc074520 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f19b0009920 tx=0x7f19b002ef20 comp rx=0 tx=0).stop 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c21bd640 1 -- 192.168.123.100:0/49617670 shutdown_connections 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c21bd640 1 --2- 192.168.123.100:0/49617670 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19bc074120 0x7f19bc074520 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c21bd640 1 -- 192.168.123.100:0/49617670 >> 192.168.123.100:0/49617670 conn(0x7f19bc06fa30 msgr2=0x7f19bc071e70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c21bd640 1 -- 192.168.123.100:0/49617670 shutdown_connections 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c21bd640 1 -- 192.168.123.100:0/49617670 wait complete. 
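Each of these CLI invocations pays for a full msgr2 session: connect to the mon, mon_subscribe({config=0+,monmap=0+}), fetch get_command_descriptions, run the one command, then mark_down and shutdown_connections, exactly as traced above. When issuing many commands, the python-rados binding can hold a single connection and send mon commands directly; a sketch, assuming python3-rados and a readable ceph.conf plus admin keyring:

    import json
    import rados

    # One long-lived cluster handle instead of a msgr2 handshake per command.
    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        cmd = json.dumps({"prefix": "status", "format": "json"})
        ret, outbuf, errs = cluster.mon_command(cmd, b"")
        if ret != 0:
            raise RuntimeError(errs)
        status = json.loads(outbuf)
        print(status["health"]["status"], status["mgrmap"]["available"])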
2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c21bd640 1 Processor -- start 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c21bd640 1 -- start start 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c21bd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19bc0869a0 0x7f19bc086dc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c21bd640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f19bc07bae0 con 0x7f19bc0869a0 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c11bb640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19bc0869a0 0x7f19bc086dc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c11bb640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19bc0869a0 0x7f19bc086dc0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40022/0 (socket says 192.168.123.100:40022) 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c11bb640 1 -- 192.168.123.100:0/2084913738 learned_addr learned my addr 192.168.123.100:0/2084913738 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c11bb640 1 -- 192.168.123.100:0/2084913738 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f19bc089ea0 con 0x7f19bc0869a0 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c11bb640 1 --2- 192.168.123.100:0/2084913738 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19bc0869a0 0x7f19bc086dc0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f19b002f450 tx=0x7f19b0035b50 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19ae7fc640 1 -- 192.168.123.100:0/2084913738 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f19b0045070 con 0x7f19bc0869a0 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c21bd640 1 -- 192.168.123.100:0/2084913738 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f19bc08a130 con 0x7f19bc0869a0 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19c21bd640 1 -- 192.168.123.100:0/2084913738 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f19bc087550 con 0x7f19bc0869a0 2026-03-10T07:19:33.959 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19ae7fc640 1 -- 192.168.123.100:0/2084913738 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f19b002fdd0 con 0x7f19bc0869a0 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.853+0000 7f19ae7fc640 1 -- 192.168.123.100:0/2084913738 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f19b003c050 con 0x7f19bc0869a0 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.857+0000 7f19ae7fc640 1 -- 192.168.123.100:0/2084913738 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7f19b0040a70 con 0x7f19bc0869a0 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.857+0000 7f19ae7fc640 1 -- 192.168.123.100:0/2084913738 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f19b003f3d0 con 0x7f19bc0869a0 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.869+0000 7f19c21bd640 1 -- 192.168.123.100:0/2084913738 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f198c005180 con 0x7f19bc0869a0 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.869+0000 7f19ae7fc640 1 -- 192.168.123.100:0/2084913738 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7f19b0035ce0 con 0x7f19bc0869a0 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.901+0000 7f19c21bd640 1 -- 192.168.123.100:0/2084913738 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) -- 0x7f198c005740 con 0x7f19bc0869a0 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.901+0000 7f19ae7fc640 1 -- 192.168.123.100:0/2084913738 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) ==== 79+0+1291 (secure 0 0 0) 0x7f19b002f9b0 con 0x7f19bc0869a0 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.905+0000 7f1987fff640 1 -- 192.168.123.100:0/2084913738 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19bc0869a0 msgr2=0x7f19bc086dc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.905+0000 7f1987fff640 1 --2- 192.168.123.100:0/2084913738 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19bc0869a0 0x7f19bc086dc0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f19b002f450 tx=0x7f19b0035b50 comp rx=0 tx=0).stop 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.905+0000 7f1987fff640 1 -- 192.168.123.100:0/2084913738 shutdown_connections 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.905+0000 7f1987fff640 1 --2- 192.168.123.100:0/2084913738 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f19bc0869a0 
0x7f19bc086dc0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.905+0000 7f1987fff640 1 -- 192.168.123.100:0/2084913738 >> 192.168.123.100:0/2084913738 conn(0x7f19bc06fa30 msgr2=0x7f19bc070360 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.905+0000 7f1987fff640 1 -- 192.168.123.100:0/2084913738 shutdown_connections 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:33.905+0000 7f1987fff640 1 -- 192.168.123.100:0/2084913738 wait complete. 2026-03-10T07:19:33.959 INFO:teuthology.orchestra.run.vm00.stdout:mgr not available, waiting (2/15)... 2026-03-10T07:19:34.141 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:33 vm00 bash[20971]: debug 2026-03-10T07:19:33.789+0000 7fb98fdfa140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T07:19:34.141 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:33 vm00 bash[20971]: debug 2026-03-10T07:19:33.833+0000 7fb98fdfa140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T07:19:34.141 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:33 vm00 bash[20971]: debug 2026-03-10T07:19:33.881+0000 7fb98fdfa140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T07:19:34.141 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:34 vm00 bash[20971]: debug 2026-03-10T07:19:34.021+0000 7fb98fdfa140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T07:19:34.141 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:33 vm00 bash[20701]: audit 2026-03-10T07:19:33.908523+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.100:0/2084913738' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T07:19:34.141 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:33 vm00 bash[20701]: audit 2026-03-10T07:19:33.908523+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 
192.168.123.100:0/2084913738' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T07:19:34.452 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:34 vm00 bash[20971]: debug 2026-03-10T07:19:34.181+0000 7fb98fdfa140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T07:19:34.452 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:34 vm00 bash[20971]: debug 2026-03-10T07:19:34.357+0000 7fb98fdfa140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T07:19:34.452 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:34 vm00 bash[20971]: debug 2026-03-10T07:19:34.393+0000 7fb98fdfa140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T07:19:34.726 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:34 vm00 bash[20971]: debug 2026-03-10T07:19:34.441+0000 7fb98fdfa140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T07:19:34.727 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:34 vm00 bash[20971]: debug 2026-03-10T07:19:34.657+0000 7fb98fdfa140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T07:19:35.391 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:35 vm00 bash[20971]: debug 2026-03-10T07:19:35.017+0000 7fb98fdfa140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T07:19:35.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:35 vm00 bash[20701]: cluster 2026-03-10T07:19:35.024145+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y 2026-03-10T07:19:35.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:35 vm00 bash[20701]: cluster 2026-03-10T07:19:35.024145+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y 2026-03-10T07:19:35.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:35 vm00 bash[20701]: cluster 2026-03-10T07:19:35.030139+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00611743s) 2026-03-10T07:19:35.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:35 vm00 bash[20701]: cluster 2026-03-10T07:19:35.030139+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00611743s) 2026-03-10T07:19:35.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:35 vm00 bash[20701]: audit 2026-03-10T07:19:35.036618+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T07:19:35.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:35 vm00 bash[20701]: audit 2026-03-10T07:19:35.036618+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T07:19:35.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:35 vm00 bash[20701]: audit 2026-03-10T07:19:35.036995+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T07:19:35.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:35 vm00 bash[20701]: audit 2026-03-10T07:19:35.036995+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T07:19:35.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:35 vm00 bash[20701]: audit 2026-03-10T07:19:35.037983+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' 
cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T07:19:35.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:35 vm00 bash[20701]: audit 2026-03-10T07:19:35.037983+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T07:19:35.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:35 vm00 bash[20701]: audit 2026-03-10T07:19:35.038678+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T07:19:35.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:35 vm00 bash[20701]: audit 2026-03-10T07:19:35.038678+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T07:19:35.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:35 vm00 bash[20701]: audit 2026-03-10T07:19:35.039190+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T07:19:35.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:35 vm00 bash[20701]: audit 2026-03-10T07:19:35.039190+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T07:19:35.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:35 vm00 bash[20701]: cluster 2026-03-10T07:19:35.047116+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available 2026-03-10T07:19:35.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:35 vm00 bash[20701]: cluster 2026-03-10T07:19:35.047116+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available 2026-03-10T07:19:36.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:36 vm00 bash[20701]: audit 2026-03-10T07:19:35.088146+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T07:19:36.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:36 vm00 bash[20701]: audit 2026-03-10T07:19:35.088146+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T07:19:36.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:36 vm00 bash[20701]: audit 2026-03-10T07:19:35.091384+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' 2026-03-10T07:19:36.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:36 vm00 bash[20701]: audit 2026-03-10T07:19:35.091384+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' 2026-03-10T07:19:36.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:36 vm00 bash[20701]: audit 2026-03-10T07:19:35.091738+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T07:19:36.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:36 vm00 bash[20701]: audit 2026-03-10T07:19:35.091738+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: 
dispatch
2026-03-10T07:19:36.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:36 vm00 bash[20701]: audit 2026-03-10T07:19:35.094694+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y'
2026-03-10T07:19:36.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:36 vm00 bash[20701]: audit 2026-03-10T07:19:35.094694+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y'
2026-03-10T07:19:36.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:36 vm00 bash[20701]: audit 2026-03-10T07:19:35.099324+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y'
2026-03-10T07:19:36.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:36 vm00 bash[20701]: audit 2026-03-10T07:19:35.099324+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y'
2026-03-10T07:19:36.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:36 vm00 bash[20701]: cluster 2026-03-10T07:19:36.061876+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.03785s)
2026-03-10T07:19:36.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:36 vm00 bash[20701]: cluster 2026-03-10T07:19:36.061876+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.03785s)
2026-03-10T07:19:36.436 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T07:19:36.436 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout {
2026-03-10T07:19:36.436 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "534d9c8a-1c51-11f1-ac87-d1fb9a119953",
2026-03-10T07:19:36.436 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": {
2026-03-10T07:19:36.437 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-10T07:19:36.437 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-10T07:19:36.437 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-10T07:19:36.437 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:19:36.437 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-10T07:19:36.437 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-10T07:19:36.437 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0
2026-03-10T07:19:36.437 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T07:19:36.437 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-10T07:19:36.437 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "a"
2026-03-10T07:19:36.437 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T07:19:36.437 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_age": 5,
2026-03-10T07:19:36.437 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-10T07:19:36.437 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T07:19:36.438 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-10T07:19:36.438 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-10T07:19:36.438 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:19:36.438 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-10T07:19:36.438 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T07:19:36.438 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-10T07:19:36.438 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-10T07:19:36.438 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-10T07:19:36.438 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-10T07:19:36.438 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-10T07:19:36.438 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-10T07:19:36.438 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:19:36.438 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-10T07:19:36.438 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-10T07:19:36.438 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-10T07:19:36.438 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-10T07:19:36.438 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-10T07:19:36.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-10T07:19:36.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-10T07:19:36.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-10T07:19:36.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-10T07:19:36.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:19:36.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-10T07:19:36.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T07:19:36.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T07:19:29.469789+0000",
2026-03-10T07:19:36.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-10T07:19:36.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-10T07:19:36.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:19:36.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-10T07:19:36.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-10T07:19:36.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-10T07:19:36.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-10T07:19:36.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-10T07:19:36.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-10T07:19:36.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful"
2026-03-10T07:19:36.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T07:19:36.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T07:19:36.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T07:19:29.470592+0000",
2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }
2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.177+0000 7f4488ece640 1 Processor -- start
2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.177+0000 7f4488ece640 1 -- start start
2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.177+0000 7f4488ece640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4484108860 0x7f4484108c60 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.177+0000 7f4488ece640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f4484109230 con 0x7f4484108860
2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.177+0000 7f4482575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4484108860 0x7f4484108c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.177+0000 7f4482575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4484108860 0x7f4484108c60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40108/0 (socket says 192.168.123.100:40108)
2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.177+0000 7f4482575640 1 -- 192.168.123.100:0/818245568 learned_addr learned my addr 192.168.123.100:0/818245568 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.177+0000 7f4482575640 1 -- 192.168.123.100:0/818245568 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4484109a60 con 0x7f4484108860
2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f4482575640 1 --2- 192.168.123.100:0/818245568 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4484108860 0x7f4484108c60 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f4470009920 tx=0x7f447002ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=e12e129d2adb717f server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f4481573640 1 -- 192.168.123.100:0/818245568 <== mon.0
v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f447003c070 con 0x7f4484108860 2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f4481573640 1 -- 192.168.123.100:0/818245568 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f4470037440 con 0x7f4484108860 2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f4488ece640 1 -- 192.168.123.100:0/818245568 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4484108860 msgr2=0x7f4484108c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f4488ece640 1 --2- 192.168.123.100:0/818245568 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4484108860 0x7f4484108c60 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f4470009920 tx=0x7f447002ef20 comp rx=0 tx=0).stop 2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f4488ece640 1 -- 192.168.123.100:0/818245568 shutdown_connections 2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f4488ece640 1 --2- 192.168.123.100:0/818245568 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4484108860 0x7f4484108c60 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:36.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f4488ece640 1 -- 192.168.123.100:0/818245568 >> 192.168.123.100:0/818245568 conn(0x7f448407bda0 msgr2=0x7f448407c1b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:36.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f4488ece640 1 -- 192.168.123.100:0/818245568 shutdown_connections 2026-03-10T07:19:36.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f4488ece640 1 -- 192.168.123.100:0/818245568 wait complete. 
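The recurring "mgr[py] Module <name> has missing NOTIFY_TYPES member" entries in the mgr journal above are benign: the mgr warns about every Python module that does not declare which cluster notifications it consumes, then loads it anyway. A module silences the warning by declaring them up front; a hypothetical minimal module, assuming the NotifyType enum from Ceph's mgr_module.py (the exact members vary by release):

    # Hypothetical mgr module skeleton; MgrModule/NotifyType come from
    # Ceph's mgr_module.py and are only importable inside ceph-mgr.
    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Declaring NOTIFY_TYPES avoids the "missing NOTIFY_TYPES member"
        # warning and limits notify() calls to the listed types.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            self.log.debug("got %s notification", notify_type)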
2026-03-10T07:19:36.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f4488ece640 1 Processor -- start 2026-03-10T07:19:36.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f4488ece640 1 -- start start 2026-03-10T07:19:36.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f4488ece640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4484108860 0x7f4484080470 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:36.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f4488ece640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f448410a760 con 0x7f4484108860 2026-03-10T07:19:36.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f4482575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4484108860 0x7f4484080470 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:36.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f4482575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4484108860 0x7f4484080470 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40124/0 (socket says 192.168.123.100:40124) 2026-03-10T07:19:36.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f4482575640 1 -- 192.168.123.100:0/3766816708 learned_addr learned my addr 192.168.123.100:0/3766816708 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:36.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f4482575640 1 -- 192.168.123.100:0/3766816708 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f44840809b0 con 0x7f4484108860 2026-03-10T07:19:36.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f4482575640 1 --2- 192.168.123.100:0/3766816708 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4484108860 0x7f4484080470 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f4470037b80 tx=0x7f4470037bb0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:36.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f446f7fe640 1 -- 192.168.123.100:0/3766816708 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f447003c070 con 0x7f4484108860 2026-03-10T07:19:36.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f446f7fe640 1 -- 192.168.123.100:0/3766816708 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f4470045070 con 0x7f4484108860 2026-03-10T07:19:36.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f4488ece640 1 -- 192.168.123.100:0/3766816708 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f4484080c40 con 0x7f4484108860 2026-03-10T07:19:36.441 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.181+0000 7f4488ece640 1 -- 192.168.123.100:0/3766816708 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f448407cfb0 con 0x7f4484108860 2026-03-10T07:19:36.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.185+0000 7f446f7fe640 1 -- 192.168.123.100:0/3766816708 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f4470040a10 con 0x7f4484108860 2026-03-10T07:19:36.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.185+0000 7f4488ece640 1 -- 192.168.123.100:0/3766816708 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f4448005180 con 0x7f4484108860 2026-03-10T07:19:36.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.189+0000 7f446f7fe640 1 -- 192.168.123.100:0/3766816708 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 3) ==== 50130+0+0 (secure 0 0 0) 0x7f4470040c70 con 0x7f4484108860 2026-03-10T07:19:36.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.189+0000 7f446f7fe640 1 --2- 192.168.123.100:0/3766816708 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f445803d760 0x7f445803fc20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:36.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.189+0000 7f4481d74640 1 --2- 192.168.123.100:0/3766816708 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f445803d760 0x7f445803fc20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:36.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.189+0000 7f446f7fe640 1 -- 192.168.123.100:0/3766816708 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f44700770a0 con 0x7f4484108860 2026-03-10T07:19:36.442 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.189+0000 7f4481d74640 1 --2- 192.168.123.100:0/3766816708 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f445803d760 0x7f445803fc20 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f44780099c0 tx=0x7f4478006eb0 comp rx=0 tx=0).ready entity=mgr.14100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:36.442 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.189+0000 7f446f7fe640 1 -- 192.168.123.100:0/3766816708 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f4470035860 con 0x7f4484108860 2026-03-10T07:19:36.442 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.377+0000 7f4488ece640 1 -- 192.168.123.100:0/3766816708 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) -- 0x7f4448005470 con 0x7f4484108860 2026-03-10T07:19:36.442 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.381+0000 7f446f7fe640 1 -- 192.168.123.100:0/3766816708 <== mon.0 v2:192.168.123.100:3300/0 7 ==== 
mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) ==== 79+0+1290 (secure 0 0 0) 0x7f4470036e20 con 0x7f4484108860 2026-03-10T07:19:36.442 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.381+0000 7f4488ece640 1 -- 192.168.123.100:0/3766816708 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f445803d760 msgr2=0x7f445803fc20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:36.442 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.381+0000 7f4488ece640 1 --2- 192.168.123.100:0/3766816708 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f445803d760 0x7f445803fc20 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f44780099c0 tx=0x7f4478006eb0 comp rx=0 tx=0).stop 2026-03-10T07:19:36.442 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.381+0000 7f4488ece640 1 -- 192.168.123.100:0/3766816708 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4484108860 msgr2=0x7f4484080470 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:36.442 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.381+0000 7f4488ece640 1 --2- 192.168.123.100:0/3766816708 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4484108860 0x7f4484080470 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f4470037b80 tx=0x7f4470037bb0 comp rx=0 tx=0).stop 2026-03-10T07:19:36.442 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.385+0000 7f4488ece640 1 -- 192.168.123.100:0/3766816708 shutdown_connections 2026-03-10T07:19:36.442 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.385+0000 7f4488ece640 1 --2- 192.168.123.100:0/3766816708 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f445803d760 0x7f445803fc20 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:36.442 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.385+0000 7f4488ece640 1 --2- 192.168.123.100:0/3766816708 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4484108860 0x7f4484080470 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:36.442 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.385+0000 7f4488ece640 1 -- 192.168.123.100:0/3766816708 >> 192.168.123.100:0/3766816708 conn(0x7f448407bda0 msgr2=0x7f4484106050 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:36.442 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.385+0000 7f4488ece640 1 -- 192.168.123.100:0/3766816708 shutdown_connections 2026-03-10T07:19:36.442 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.385+0000 7f4488ece640 1 -- 192.168.123.100:0/3766816708 wait complete. 
2026-03-10T07:19:36.442 INFO:teuthology.orchestra.run.vm00.stdout:mgr is available
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [global]
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout fsid = 534d9c8a-1c51-11f1-ac87-d1fb9a119953
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.100:3300,v1:192.168.123.100:6789]
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [osd]
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.557+0000 7f2478d61640 1 Processor -- start
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.557+0000 7f2478d61640 1 -- start start
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.557+0000 7f2478d61640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2474108860 0x7f2474108c60 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.557+0000 7f2478d61640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f2474109230 con 0x7f2474108860
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.557+0000 7f2472575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2474108860 0x7f2474108c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.557+0000 7f2472575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2474108860 0x7f2474108c60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40130/0 (socket says 192.168.123.100:40130) 2026-03-10T07:19:36.714
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.557+0000 7f2472575640 1 -- 192.168.123.100:0/2306608243 learned_addr learned my addr 192.168.123.100:0/2306608243 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:36.714 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.557+0000 7f2472575640 1 -- 192.168.123.100:0/2306608243 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2474109a60 con 0x7f2474108860 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.557+0000 7f2472575640 1 --2- 192.168.123.100:0/2306608243 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2474108860 0x7f2474108c60 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7f2468009920 tx=0x7f246802ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=9e2edf69b5e24071 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.557+0000 7f2471573640 1 -- 192.168.123.100:0/2306608243 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f246803c070 con 0x7f2474108860 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.557+0000 7f2471573640 1 -- 192.168.123.100:0/2306608243 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f2468037440 con 0x7f2474108860 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.557+0000 7f2478d61640 1 -- 192.168.123.100:0/2306608243 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2474108860 msgr2=0x7f2474108c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.557+0000 7f2478d61640 1 --2- 192.168.123.100:0/2306608243 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2474108860 0x7f2474108c60 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7f2468009920 tx=0x7f246802ef20 comp rx=0 tx=0).stop 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.557+0000 7f2478d61640 1 -- 192.168.123.100:0/2306608243 shutdown_connections 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f2478d61640 1 --2- 192.168.123.100:0/2306608243 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2474108860 0x7f2474108c60 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f2478d61640 1 -- 192.168.123.100:0/2306608243 >> 192.168.123.100:0/2306608243 conn(0x7f247407bda0 msgr2=0x7f247407c1b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f2478d61640 1 -- 192.168.123.100:0/2306608243 shutdown_connections 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f2478d61640 1 -- 192.168.123.100:0/2306608243 wait complete. 
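The [global]/[mgr]/[osd] dump above is the minimal ceph.conf fragment the task hands to the cluster, and the mon_command visible just below ("config assimilate-conf") is how it gets merged into the mon's configuration database. Done by hand it is one command against a file; a sketch, with the fragment abbreviated to two options copied from the dump above:

    import subprocess
    import tempfile
    import textwrap

    # Abbreviated copy of the [global]/[mgr] fragment dumped above.
    CONF = textwrap.dedent("""\
        [global]
        mon_osd_allow_primary_affinity = true

        [mgr]
        mgr/telemetry/nag = false
    """)

    with tempfile.NamedTemporaryFile("w", suffix=".conf") as f:
        f.write(CONF)
        f.flush()
        # Imports every option from the file into the cluster config database.
        subprocess.run(["ceph", "config", "assimilate-conf", "-i", f.name],
                       check=True)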
2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f2478d61640 1 Processor -- start 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f2478d61640 1 -- start start 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f2478d61640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2474108860 0x7f247419e510 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f2478d61640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f247410a760 con 0x7f2474108860 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f2472575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2474108860 0x7f247419e510 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f2472575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2474108860 0x7f247419e510 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40146/0 (socket says 192.168.123.100:40146) 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f2472575640 1 -- 192.168.123.100:0/2353862342 learned_addr learned my addr 192.168.123.100:0/2353862342 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f2472575640 1 -- 192.168.123.100:0/2353862342 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f247419ea50 con 0x7f2474108860 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f2472575640 1 --2- 192.168.123.100:0/2353862342 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2474108860 0x7f247419e510 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f2468006fd0 tx=0x7f2468035d50 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f245b7fe640 1 -- 192.168.123.100:0/2353862342 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f2468045070 con 0x7f2474108860 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f245b7fe640 1 -- 192.168.123.100:0/2353862342 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f2468040430 con 0x7f2474108860 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f245b7fe640 1 -- 192.168.123.100:0/2353862342 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f246803c050 con 0x7f2474108860 2026-03-10T07:19:36.715 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f2478d61640 1 -- 192.168.123.100:0/2353862342 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f247419ece0 con 0x7f2474108860 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f2478d61640 1 -- 192.168.123.100:0/2353862342 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f24741a19d0 con 0x7f2474108860 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f2478d61640 1 -- 192.168.123.100:0/2353862342 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f247410cc90 con 0x7f2474108860 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f245b7fe640 1 -- 192.168.123.100:0/2353862342 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 3) ==== 50130+0+0 (secure 0 0 0) 0x7f24680405e0 con 0x7f2474108860 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f245b7fe640 1 --2- 192.168.123.100:0/2353862342 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f245003d6c0 0x7f245003fb80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.561+0000 7f245b7fe640 1 -- 192.168.123.100:0/2353862342 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f246803f8f0 con 0x7f2474108860 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.565+0000 7f2471d74640 1 --2- 192.168.123.100:0/2353862342 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f245003d6c0 0x7f245003fb80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.565+0000 7f245b7fe640 1 -- 192.168.123.100:0/2353862342 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f2468072580 con 0x7f2474108860 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.565+0000 7f2471d74640 1 --2- 192.168.123.100:0/2353862342 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f245003d6c0 0x7f245003fb80 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f245c009a10 tx=0x7f245c006eb0 comp rx=0 tx=0).ready entity=mgr.14100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.661+0000 7f2478d61640 1 -- 192.168.123.100:0/2353862342 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "config assimilate-conf"} v 0) -- 0x7f2474108c60 con 0x7f2474108860 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.665+0000 7f245b7fe640 1 -- 192.168.123.100:0/2353862342 <== mon.0 v2:192.168.123.100:3300/0 7 ==== 
mon_command_ack([{"prefix": "config assimilate-conf"}]=0 v3) ==== 70+0+380 (secure 0 0 0) 0x7f2468048e40 con 0x7f2474108860 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.665+0000 7f2478d61640 1 -- 192.168.123.100:0/2353862342 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f245003d6c0 msgr2=0x7f245003fb80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.665+0000 7f2478d61640 1 --2- 192.168.123.100:0/2353862342 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f245003d6c0 0x7f245003fb80 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f245c009a10 tx=0x7f245c006eb0 comp rx=0 tx=0).stop 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.665+0000 7f2478d61640 1 -- 192.168.123.100:0/2353862342 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2474108860 msgr2=0x7f247419e510 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.665+0000 7f2478d61640 1 --2- 192.168.123.100:0/2353862342 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2474108860 0x7f247419e510 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f2468006fd0 tx=0x7f2468035d50 comp rx=0 tx=0).stop 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.665+0000 7f2478d61640 1 -- 192.168.123.100:0/2353862342 shutdown_connections 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.665+0000 7f2478d61640 1 --2- 192.168.123.100:0/2353862342 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f245003d6c0 0x7f245003fb80 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.665+0000 7f2478d61640 1 --2- 192.168.123.100:0/2353862342 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2474108860 0x7f247419e510 unknown :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.665+0000 7f2478d61640 1 -- 192.168.123.100:0/2353862342 >> 192.168.123.100:0/2353862342 conn(0x7f247407bda0 msgr2=0x7f2474105dc0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.665+0000 7f2478d61640 1 -- 192.168.123.100:0/2353862342 shutdown_connections 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.665+0000 7f2478d61640 1 -- 192.168.123.100:0/2353862342 wait complete. 2026-03-10T07:19:36.715 INFO:teuthology.orchestra.run.vm00.stdout:Enabling cephadm module... 
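The "Enabling cephadm module..." line marks the next bootstrap step: the minimal [global]/[mgr]/[osd] snippet dumped above has just been folded into the monitors' central config store (the "config assimilate-conf" mon_command acknowledged with =0 above), and the cephadm mgr module is turned on next. A hedged sketch of the equivalent manual sequence, with an illustrative conf path:

    # fold a local conf into the mon config database, then enable the orchestrator
    ceph config assimilate-conf -i /etc/ceph/ceph.conf
    ceph mgr module enable cephadm
    ceph orch set backend cephadm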
2026-03-10T07:19:37.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.833+0000 7fae14993640 1 Processor -- start 2026-03-10T07:19:37.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.833+0000 7fae14993640 1 -- start start 2026-03-10T07:19:37.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.833+0000 7fae14993640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae10108860 0x7fae10108c60 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:37.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.833+0000 7fae14993640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fae10109230 con 0x7fae10108860 2026-03-10T07:19:37.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.833+0000 7fae0e575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae10108860 0x7fae10108c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:37.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.833+0000 7fae0e575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae10108860 0x7fae10108c60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40156/0 (socket says 192.168.123.100:40156) 2026-03-10T07:19:37.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.833+0000 7fae0e575640 1 -- 192.168.123.100:0/1734110024 learned_addr learned my addr 192.168.123.100:0/1734110024 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:37.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.833+0000 7fae0e575640 1 -- 192.168.123.100:0/1734110024 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fae10109a60 con 0x7fae10108860 2026-03-10T07:19:37.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.833+0000 7fae0e575640 1 --2- 192.168.123.100:0/1734110024 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae10108860 0x7fae10108c60 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7fadf8009b80 tx=0x7fadf802f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=eda7e67c1d7dd9c1 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:37.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.833+0000 7fae0d573640 1 -- 192.168.123.100:0/1734110024 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fadf803c070 con 0x7fae10108860 2026-03-10T07:19:37.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.833+0000 7fae0d573640 1 -- 192.168.123.100:0/1734110024 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fadf8037440 con 0x7fae10108860 2026-03-10T07:19:37.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.833+0000 7fae14993640 1 -- 192.168.123.100:0/1734110024 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae10108860 msgr2=0x7fae10108c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-10T07:19:37.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.833+0000 7fae14993640 1 --2- 192.168.123.100:0/1734110024 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae10108860 0x7fae10108c60 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7fadf8009b80 tx=0x7fadf802f190 comp rx=0 tx=0).stop 2026-03-10T07:19:37.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.833+0000 7fae14993640 1 -- 192.168.123.100:0/1734110024 shutdown_connections 2026-03-10T07:19:37.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.833+0000 7fae14993640 1 --2- 192.168.123.100:0/1734110024 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae10108860 0x7fae10108c60 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:37.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.833+0000 7fae14993640 1 -- 192.168.123.100:0/1734110024 >> 192.168.123.100:0/1734110024 conn(0x7fae1007bda0 msgr2=0x7fae1007c1b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:37.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.833+0000 7fae14993640 1 -- 192.168.123.100:0/1734110024 shutdown_connections 2026-03-10T07:19:37.394 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.833+0000 7fae14993640 1 -- 192.168.123.100:0/1734110024 wait complete. 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.837+0000 7fae14993640 1 Processor -- start 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.837+0000 7fae14993640 1 -- start start 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.837+0000 7fae14993640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae10108860 0x7fae1019e6f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.837+0000 7fae14993640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fae1010a760 con 0x7fae10108860 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.837+0000 7fae0e575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae10108860 0x7fae1019e6f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.837+0000 7fae0e575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae10108860 0x7fae1019e6f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40170/0 (socket says 192.168.123.100:40170) 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.837+0000 7fae0e575640 1 -- 192.168.123.100:0/4069365900 learned_addr learned my addr 192.168.123.100:0/4069365900 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: 
stderr 2026-03-10T07:19:36.837+0000 7fae0e575640 1 -- 192.168.123.100:0/4069365900 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fae1019ec30 con 0x7fae10108860 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.837+0000 7fae0e575640 1 --2- 192.168.123.100:0/4069365900 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae10108860 0x7fae1019e6f0 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7fadf802f6c0 tx=0x7fadf80039b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.837+0000 7fadff7fe640 1 -- 192.168.123.100:0/4069365900 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fadf8047070 con 0x7fae10108860 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.837+0000 7fadff7fe640 1 -- 192.168.123.100:0/4069365900 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fadf8037440 con 0x7fae10108860 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.837+0000 7fadff7fe640 1 -- 192.168.123.100:0/4069365900 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fadf803c040 con 0x7fae10108860 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.837+0000 7fae14993640 1 -- 192.168.123.100:0/4069365900 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fae1019eec0 con 0x7fae10108860 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.837+0000 7fae14993640 1 -- 192.168.123.100:0/4069365900 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fae1019f2e0 con 0x7fae10108860 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.837+0000 7fadff7fe640 1 -- 192.168.123.100:0/4069365900 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 3) ==== 50130+0+0 (secure 0 0 0) 0x7fadf8035690 con 0x7fae10108860 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.837+0000 7fadff7fe640 1 --2- 192.168.123.100:0/4069365900 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7fade403dad0 0x7fade403ff90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.837+0000 7fadff7fe640 1 -- 192.168.123.100:0/4069365900 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7fadf8077330 con 0x7fae10108860 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.837+0000 7fae0dd74640 1 --2- 192.168.123.100:0/4069365900 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7fade403dad0 0x7fade403ff90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.837+0000 
7fae0dd74640 1 --2- 192.168.123.100:0/4069365900 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7fade403dad0 0x7fade403ff90 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fae040099c0 tx=0x7fae04006eb0 comp rx=0 tx=0).ready entity=mgr.14100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.837+0000 7fae14993640 1 -- 192.168.123.100:0/4069365900 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fadd4005180 con 0x7fae10108860 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.841+0000 7fadff7fe640 1 -- 192.168.123.100:0/4069365900 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fadf803f380 con 0x7fae10108860 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:36.957+0000 7fae14993640 1 -- 192.168.123.100:0/4069365900 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) -- 0x7fadd4005470 con 0x7fae10108860 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.317+0000 7fadff7fe640 1 -- 192.168.123.100:0/4069365900 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "mgr module enable", "module": "cephadm"}]=0 v4) ==== 86+0+0 (secure 0 0 0) 0x7fadf804ad90 con 0x7fae10108860 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.325+0000 7fadff7fe640 1 -- 192.168.123.100:0/4069365900 <== mon.0 v2:192.168.123.100:3300/0 8 ==== mgrmap(e 4) ==== 50247+0+0 (secure 0 0 0) 0x7fadf8048d00 con 0x7fae10108860 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.325+0000 7fae14993640 1 -- 192.168.123.100:0/4069365900 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7fade403dad0 msgr2=0x7fade403ff90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.325+0000 7fae14993640 1 --2- 192.168.123.100:0/4069365900 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7fade403dad0 0x7fade403ff90 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fae040099c0 tx=0x7fae04006eb0 comp rx=0 tx=0).stop 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.325+0000 7fae14993640 1 -- 192.168.123.100:0/4069365900 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae10108860 msgr2=0x7fae1019e6f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.325+0000 7fae14993640 1 --2- 192.168.123.100:0/4069365900 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae10108860 0x7fae1019e6f0 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7fadf802f6c0 tx=0x7fadf80039b0 comp rx=0 tx=0).stop 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.325+0000 7fae14993640 1 -- 192.168.123.100:0/4069365900 shutdown_connections 
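The mon_command_ack with "=0 v4" above confirms the enable landed, and the mgrmap(e 4) push that follows shows the map epoch advancing; the active mgr then respawns to load the newly enabled module, which is what the journal lines below record. One way to check the epoch and module state by hand (the jq filter is an assumption about available tooling):

    ceph mgr module ls
    ceph mgr dump -f json | jq '{epoch, active_name, available}'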
2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.325+0000 7fae14993640 1 --2- 192.168.123.100:0/4069365900 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7fade403dad0 0x7fade403ff90 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.325+0000 7fae14993640 1 --2- 192.168.123.100:0/4069365900 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae10108860 0x7fae1019e6f0 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.325+0000 7fae14993640 1 -- 192.168.123.100:0/4069365900 >> 192.168.123.100:0/4069365900 conn(0x7fae1007bda0 msgr2=0x7fae10106050 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.325+0000 7fae14993640 1 -- 192.168.123.100:0/4069365900 shutdown_connections 2026-03-10T07:19:37.395 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.325+0000 7fae14993640 1 -- 192.168.123.100:0/4069365900 wait complete. 2026-03-10T07:19:37.639 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:37 vm00 bash[20971]: ignoring --setuser ceph since I am not root 2026-03-10T07:19:37.639 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:37 vm00 bash[20971]: ignoring --setgroup ceph since I am not root 2026-03-10T07:19:37.639 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:37 vm00 bash[20971]: debug 2026-03-10T07:19:37.445+0000 7fe30412a140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T07:19:37.639 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:37 vm00 bash[20971]: debug 2026-03-10T07:19:37.493+0000 7fe30412a140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T07:19:37.639 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:37 vm00 bash[20701]: audit 2026-03-10T07:19:36.385665+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.100:0/3766816708' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T07:19:37.640 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:37 vm00 bash[20701]: audit 2026-03-10T07:19:36.669382+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.100:0/2353862342' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T07:19:37.640 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:37 vm00 bash[20701]: audit 2026-03-10T07:19:36.963997+0000 mon.a (mon.0) 31 : audit [INF] from='client.? 192.168.123.100:0/4069365900' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
2026-03-10T07:19:37.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T07:19:37.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 4, 2026-03-10T07:19:37.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T07:19:37.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "active_name": "y", 2026-03-10T07:19:37.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T07:19:37.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T07:19:37.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.533+0000 7f6b42f5f640 1 Processor -- start 2026-03-10T07:19:37.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.533+0000 7f6b42f5f640 1 -- start start 2026-03-10T07:19:37.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.533+0000 7f6b42f5f640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b3c074730 0x7f6b3c074b30 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:37.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.533+0000 7f6b42f5f640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f6b3c075100 con 0x7f6b3c074730 2026-03-10T07:19:37.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.533+0000 7f6b41f5d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b3c074730 0x7f6b3c074b30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:37.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.533+0000 7f6b41f5d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b3c074730 0x7f6b3c074b30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40194/0 (socket says 192.168.123.100:40194) 2026-03-10T07:19:37.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.533+0000 7f6b41f5d640 1 -- 192.168.123.100:0/1959083111 learned_addr learned my addr 192.168.123.100:0/1959083111 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:37.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.533+0000 7f6b41f5d640 1 -- 192.168.123.100:0/1959083111 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6b3c10e1c0 con 0x7f6b3c074730 2026-03-10T07:19:37.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.533+0000 7f6b41f5d640 1 --2- 192.168.123.100:0/1959083111 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b3c074730 0x7f6b3c074b30 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto
rx=0x7f6b380089a0 tx=0x7f6b38031440 comp rx=0 tx=0).ready entity=mon.0 client_cookie=6039776eb8bbbc63 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:37.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.533+0000 7f6b40f5b640 1 -- 192.168.123.100:0/1959083111 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6b3803c480 con 0x7f6b3c074730 2026-03-10T07:19:37.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b40f5b640 1 -- 192.168.123.100:0/1959083111 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f6b3803ca40 con 0x7f6b3c074730 2026-03-10T07:19:37.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b42f5f640 1 -- 192.168.123.100:0/1959083111 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b3c074730 msgr2=0x7f6b3c074b30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:37.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b42f5f640 1 --2- 192.168.123.100:0/1959083111 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b3c074730 0x7f6b3c074b30 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f6b380089a0 tx=0x7f6b38031440 comp rx=0 tx=0).stop 2026-03-10T07:19:37.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b42f5f640 1 -- 192.168.123.100:0/1959083111 shutdown_connections 2026-03-10T07:19:37.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b42f5f640 1 --2- 192.168.123.100:0/1959083111 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b3c074730 0x7f6b3c074b30 unknown :-1 s=CLOSED pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b42f5f640 1 -- 192.168.123.100:0/1959083111 >> 192.168.123.100:0/1959083111 conn(0x7f6b3c06fa60 msgr2=0x7f6b3c071ea0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b42f5f640 1 -- 192.168.123.100:0/1959083111 shutdown_connections 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b42f5f640 1 -- 192.168.123.100:0/1959083111 wait complete. 
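The four-field JSON printed just above (epoch 4, available, active_name "y", no standbys) has the shape "ceph mgr stat" returns, consistent with the "mgr stat" mon_command dispatched in this same invocation. The equivalent manual check:

    ceph mgr stat --format json-pretty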
2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b42f5f640 1 Processor -- start 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b42f5f640 1 -- start start 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b42f5f640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b3c1a2820 0x7f6b3c1a2c40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b42f5f640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f6b3c10eb60 con 0x7f6b3c1a2820 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b41f5d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b3c1a2820 0x7f6b3c1a2c40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b41f5d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b3c1a2820 0x7f6b3c1a2c40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40202/0 (socket says 192.168.123.100:40202) 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b41f5d640 1 -- 192.168.123.100:0/1536585838 learned_addr learned my addr 192.168.123.100:0/1536585838 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b41f5d640 1 -- 192.168.123.100:0/1536585838 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6b3c1a3180 con 0x7f6b3c1a2820 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b41f5d640 1 --2- 192.168.123.100:0/1536585838 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b3c1a2820 0x7f6b3c1a2c40 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f6b38031970 tx=0x7f6b38009c80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b2affd640 1 -- 192.168.123.100:0/1536585838 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6b38009e40 con 0x7f6b3c1a2820 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b42f5f640 1 -- 192.168.123.100:0/1536585838 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f6b3c1a3410 con 0x7f6b3c1a2820 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b42f5f640 1 -- 192.168.123.100:0/1536585838 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f6b3c1a3f60 con 0x7f6b3c1a2820 2026-03-10T07:19:37.708 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b2affd640 1 -- 192.168.123.100:0/1536585838 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f6b38031e50 con 0x7f6b3c1a2820 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.537+0000 7f6b2affd640 1 -- 192.168.123.100:0/1536585838 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6b3800b2d0 con 0x7f6b3c1a2820 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.541+0000 7f6b2affd640 1 -- 192.168.123.100:0/1536585838 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 4) ==== 50247+0+0 (secure 0 0 0) 0x7f6b3800bce0 con 0x7f6b3c1a2820 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.541+0000 7f6b2affd640 1 --2- 192.168.123.100:0/1536585838 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f6b1003dce0 0x7f6b100401a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.541+0000 7f6b4175c640 1 -- 192.168.123.100:0/1536585838 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f6b1003dce0 msgr2=0x7f6b100401a0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/2344477988 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.541+0000 7f6b4175c640 1 --2- 192.168.123.100:0/1536585838 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f6b1003dce0 0x7f6b100401a0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.541+0000 7f6b42f5f640 1 -- 192.168.123.100:0/1536585838 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6b3c074730 con 0x7f6b3c1a2820 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.541+0000 7f6b2affd640 1 -- 192.168.123.100:0/1536585838 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f6b38077060 con 0x7f6b3c1a2820 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.541+0000 7f6b2affd640 1 -- 192.168.123.100:0/1536585838 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f6b3804ad50 con 0x7f6b3c1a2820 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.657+0000 7f6b42f5f640 1 -- 192.168.123.100:0/1536585838 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "mgr stat"} v 0) -- 0x7f6b3c1a45a0 con 0x7f6b3c1a2820 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.657+0000 7f6b2affd640 1 -- 192.168.123.100:0/1536585838 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "mgr stat"}]=0 v4) ==== 56+0+88 (secure 0 0 0) 0x7f6b38054070 con 0x7f6b3c1a2820 2026-03-10T07:19:37.708 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.657+0000 7f6b28ff9640 1 -- 192.168.123.100:0/1536585838 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f6b1003dce0 msgr2=0x7f6b100401a0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.657+0000 7f6b28ff9640 1 --2- 192.168.123.100:0/1536585838 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f6b1003dce0 0x7f6b100401a0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:37.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.657+0000 7f6b28ff9640 1 -- 192.168.123.100:0/1536585838 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b3c1a2820 msgr2=0x7f6b3c1a2c40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:37.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.657+0000 7f6b28ff9640 1 --2- 192.168.123.100:0/1536585838 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b3c1a2820 0x7f6b3c1a2c40 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f6b38031970 tx=0x7f6b38009c80 comp rx=0 tx=0).stop 2026-03-10T07:19:37.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.657+0000 7f6b28ff9640 1 -- 192.168.123.100:0/1536585838 shutdown_connections 2026-03-10T07:19:37.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.657+0000 7f6b28ff9640 1 --2- 192.168.123.100:0/1536585838 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f6b1003dce0 0x7f6b100401a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:37.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.657+0000 7f6b28ff9640 1 --2- 192.168.123.100:0/1536585838 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b3c1a2820 0x7f6b3c1a2c40 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:37.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.657+0000 7f6b28ff9640 1 -- 192.168.123.100:0/1536585838 >> 192.168.123.100:0/1536585838 conn(0x7f6b3c06fa60 msgr2=0x7f6b3c070350 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:37.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.657+0000 7f6b28ff9640 1 -- 192.168.123.100:0/1536585838 shutdown_connections 2026-03-10T07:19:37.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.657+0000 7f6b28ff9640 1 -- 192.168.123.100:0/1536585838 wait complete. 2026-03-10T07:19:37.709 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for the mgr to restart... 2026-03-10T07:19:37.709 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr epoch 4... 
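"Waiting for the mgr to restart... Waiting for mgr epoch 4..." is a poll against the mgrmap: re-read the map until its epoch reaches the target and the daemon reports in again (this run later prints mgrmap_epoch 6). A hedged sketch of such a loop; the target epoch and sleep interval are illustrative, and jq is assumed to be available:

    want=4
    until [ "$(ceph mgr dump -f json | jq -r .epoch)" -ge "$want" ]; do
        sleep 2
    done
    ceph mgr stat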
2026-03-10T07:19:37.891 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:37 vm00 bash[20971]: debug 2026-03-10T07:19:37.629+0000 7fe30412a140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T07:19:38.391 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:37 vm00 bash[20971]: debug 2026-03-10T07:19:37.985+0000 7fe30412a140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T07:19:38.795 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:38 vm00 bash[20701]: audit 2026-03-10T07:19:37.325136+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.100:0/4069365900' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T07:19:38.795 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:38 vm00 bash[20701]: cluster 2026-03-10T07:19:37.329840+0000 mon.a (mon.0) 33 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-10T07:19:38.795 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:38 vm00 bash[20701]: audit 2026-03-10T07:19:37.662312+0000 mon.a (mon.0) 34 : audit [DBG] from='client.? 192.168.123.100:0/1536585838' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T07:19:38.795 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:38 vm00 bash[20971]: debug 2026-03-10T07:19:38.437+0000 7fe30412a140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T07:19:38.795 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:38 vm00 bash[20971]: debug 2026-03-10T07:19:38.525+0000 7fe30412a140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T07:19:38.795 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:38 vm00 bash[20971]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T07:19:38.795 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:38 vm00 bash[20971]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T07:19:38.795 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:38 vm00 bash[20971]: from numpy import show_config as show_numpy_config 2026-03-10T07:19:38.795 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:38 vm00 bash[20971]: debug 2026-03-10T07:19:38.653+0000 7fe30412a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T07:19:38.795 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:38 vm00 bash[20971]: debug 2026-03-10T07:19:38.785+0000 7fe30412a140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T07:19:39.141 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:38 vm00 bash[20971]: debug 2026-03-10T07:19:38.821+0000 7fe30412a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T07:19:39.141 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:38 vm00 bash[20971]: debug 2026-03-10T07:19:38.857+0000 7fe30412a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T07:19:39.141 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:38 vm00 bash[20971]: debug 2026-03-10T07:19:38.901+0000 7fe30412a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T07:19:39.141 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:38 vm00 bash[20971]: debug 2026-03-10T07:19:38.953+0000 7fe30412a140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T07:19:39.693 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:39 vm00 bash[20971]: debug 2026-03-10T07:19:39.413+0000 7fe30412a140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T07:19:39.693 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:39 vm00 bash[20971]: debug 2026-03-10T07:19:39.453+0000 7fe30412a140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T07:19:39.693 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:39 vm00 bash[20971]: debug 2026-03-10T07:19:39.489+0000 7fe30412a140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T07:19:39.693 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:39 vm00 bash[20971]: debug 2026-03-10T07:19:39.641+0000 7fe30412a140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T07:19:39.693 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:39 vm00 bash[20971]: debug 2026-03-10T07:19:39.681+0000 7fe30412a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T07:19:39.998 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:39 vm00 bash[20971]: debug 2026-03-10T07:19:39.721+0000 7fe30412a140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T07:19:39.998 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:39 vm00 bash[20971]: debug 2026-03-10T07:19:39.833+0000 7fe30412a140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T07:19:39.998 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:39 vm00 bash[20971]: debug 2026-03-10T07:19:39.985+0000 7fe30412a140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T07:19:40.251 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:40 vm00 bash[20971]: debug 2026-03-10T07:19:40.161+0000 7fe30412a140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T07:19:40.251 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:40 vm00 bash[20971]: debug 2026-03-10T07:19:40.197+0000 7fe30412a140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T07:19:40.641 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:40 vm00 bash[20971]: debug 
2026-03-10T07:19:40.241+0000 7fe30412a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T07:19:40.641 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:40 vm00 bash[20971]: debug 2026-03-10T07:19:40.389+0000 7fe30412a140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T07:19:41.141 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:40 vm00 bash[20701]: cluster 2026-03-10T07:19:40.638032+0000 mon.a (mon.0) 35 : cluster [INF] Active manager daemon y restarted 2026-03-10T07:19:41.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:40 vm00 bash[20701]: cluster 2026-03-10T07:19:40.638383+0000 mon.a (mon.0) 36 : cluster [INF] Activating manager daemon y 2026-03-10T07:19:41.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:40 vm00 bash[20701]: cluster 2026-03-10T07:19:40.642870+0000 mon.a (mon.0) 37 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-10T07:19:41.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:40 vm00 bash[20701]: cluster 2026-03-10T07:19:40.642943+0000 mon.a (mon.0) 38 : cluster [DBG] mgrmap e5: y(active, starting, since 0.00472576s) 2026-03-10T07:19:41.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:40 vm00 bash[20701]: audit 2026-03-10T07:19:40.645814+0000 mon.a (mon.0) 39 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T07:19:41.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:40 vm00 bash[20701]: audit 2026-03-10T07:19:40.646210+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T07:19:41.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:40 vm00 bash[20701]: audit 2026-03-10T07:19:40.646793+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T07:19:41.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:40 vm00 bash[20701]: audit 2026-03-10T07:19:40.646931+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T07:19:41.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:40 vm00 bash[20701]: audit 2026-03-10T07:19:40.647095+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T07:19:41.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:40 vm00 bash[20701]: cluster 2026-03-10T07:19:40.653456+0000 mon.a (mon.0) 44 : cluster [INF] Manager daemon y is now available 2026-03-10T07:19:41.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:40 vm00 bash[20701]: audit 2026-03-10T07:19:40.662316+0000 mon.a (mon.0) 45 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:19:41.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:40 vm00 bash[20701]: audit 2026-03-10T07:19:40.665578+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:19:41.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:40 vm00 bash[20701]: audit 2026-03-10T07:19:40.677739+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T07:19:41.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:40 vm00 bash[20701]: audit 2026-03-10T07:19:40.679085+0000 mon.a (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:19:41.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:40 vm00 bash[20701]: audit 2026-03-10T07:19:40.681079+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:19:41.142 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:40 vm00 bash[20971]: debug 2026-03-10T07:19:40.629+0000 7fe30412a140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T07:19:41.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T07:19:41.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 6, 2026-03-10T07:19:41.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T07:19:41.707 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.849+0000 7f6fde471640 1 Processor -- start 2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.849+0000 7f6fde471640 1 -- start start 2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.849+0000 7f6fde471640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6fd8074730 0x7f6fd8074b30 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.849+0000 7f6fde471640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f6fd8075100 con 0x7f6fd8074730 2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.849+0000 7f6fd7fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6fd8074730 0x7f6fd8074b30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.849+0000 7f6fd7fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6fd8074730 0x7f6fd8074b30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40216/0 (socket says 192.168.123.100:40216) 2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.849+0000 7f6fd7fff640 1 -- 192.168.123.100:0/1121781260 learned_addr learned my addr 192.168.123.100:0/1121781260 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.849+0000 7f6fd7fff640 1 -- 192.168.123.100:0/1121781260 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) --
0x7f6fd810e1c0 con 0x7f6fd8074730 2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.849+0000 7f6fd7fff640 1 --2- 192.168.123.100:0/1121781260 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6fd8074730 0x7f6fd8074b30 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f6fc8009530 tx=0x7f6fc80301d0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=3c12805a55082a94 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.849+0000 7f6fd6ffd640 1 -- 192.168.123.100:0/1121781260 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6fc803d070 con 0x7f6fd8074730 2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.849+0000 7f6fd6ffd640 1 -- 192.168.123.100:0/1121781260 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f6fc8030d90 con 0x7f6fd8074730 2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.849+0000 7f6fde471640 1 -- 192.168.123.100:0/1121781260 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6fd8074730 msgr2=0x7f6fd8074b30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.849+0000 7f6fde471640 1 --2- 192.168.123.100:0/1121781260 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6fd8074730 0x7f6fd8074b30 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f6fc8009530 tx=0x7f6fc80301d0 comp rx=0 tx=0).stop 2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.849+0000 7f6fde471640 1 -- 192.168.123.100:0/1121781260 shutdown_connections 2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.849+0000 7f6fde471640 1 --2- 192.168.123.100:0/1121781260 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6fd8074730 0x7f6fd8074b30 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.849+0000 7f6fde471640 1 -- 192.168.123.100:0/1121781260 >> 192.168.123.100:0/1121781260 conn(0x7f6fd806fa60 msgr2=0x7f6fd8071ea0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.849+0000 7f6fde471640 1 -- 192.168.123.100:0/1121781260 shutdown_connections 2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.849+0000 7f6fde471640 1 -- 192.168.123.100:0/1121781260 wait complete. 
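[editor's note] The block above is the harness polling the restarted mgr until it reports itself initialized (the `"mgrmap_epoch"`/`"initialized"` stdout). A minimal sketch of that style of wait loop, assuming the `available` field of `ceph mgr dump` JSON as the readiness signal; the task here evidently reads a different status blob, so treat the field choice as illustrative:

    import json
    import subprocess
    import time

    def wait_for_active_mgr(timeout: float = 120.0) -> dict:
        """Poll the cluster until an active mgr reports available=true."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            out = subprocess.run(
                ["ceph", "mgr", "dump", "--format=json"],
                check=True, capture_output=True, text=True,
            ).stdout
            mgrmap = json.loads(out)
            if mgrmap.get("available"):  # an active mgr finished starting up
                return mgrmap
            time.sleep(2)
        raise TimeoutError("no active mgr became available")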
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.853+0000 7f6fde471640 1 Processor -- start
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.853+0000 7f6fde471640 1 -- start start
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.853+0000 7f6fde471640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6fd81a3e20 0x7f6fd81a2590 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.853+0000 7f6fde471640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f6fd810eb60 con 0x7f6fd81a3e20
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.853+0000 7f6fd7fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6fd81a3e20 0x7f6fd81a2590 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.853+0000 7f6fd7fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6fd81a3e20 0x7f6fd81a2590 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40218/0 (socket says 192.168.123.100:40218)
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.853+0000 7f6fd7fff640 1 -- 192.168.123.100:0/4044655961 learned_addr learned my addr 192.168.123.100:0/4044655961 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.853+0000 7f6fd7fff640 1 -- 192.168.123.100:0/4044655961 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6fd81a4240 con 0x7f6fd81a3e20
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.853+0000 7f6fd7fff640 1 --2- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6fd81a3e20 0x7f6fd81a2590 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f6fc8030780 tx=0x7f6fc8037640 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.853+0000 7f6fd57fa640 1 -- 192.168.123.100:0/4044655961 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6fc803d070 con 0x7f6fd81a3e20
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.853+0000 7f6fd57fa640 1 -- 192.168.123.100:0/4044655961 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f6fc8038440 con 0x7f6fd81a3e20
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.853+0000 7f6fde471640 1 -- 192.168.123.100:0/4044655961 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f6fd81a2ad0 con 0x7f6fd81a3e20
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.853+0000 7f6fde471640 1 -- 192.168.123.100:0/4044655961 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f6fd81a2f50 con 0x7f6fd81a3e20
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.853+0000 7f6fd57fa640 1 -- 192.168.123.100:0/4044655961 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6fc8041400 con 0x7f6fd81a3e20
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.853+0000 7f6fd57fa640 1 -- 192.168.123.100:0/4044655961 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 4) ==== 50247+0+0 (secure 0 0 0) 0x7f6fc80416b0 con 0x7f6fd81a3e20
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.853+0000 7f6fd57fa640 1 --2- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f6fc403dcc0 0x7f6fc4040180 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.853+0000 7f6fd57fa640 1 -- 192.168.123.100:0/4044655961 --> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7f6fc4040890 con 0x7f6fc403dcc0
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.853+0000 7f6fd77fe640 1 -- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f6fc403dcc0 msgr2=0x7f6fc4040180 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/2344477988
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.853+0000 7f6fd57fa640 1 -- 192.168.123.100:0/4044655961 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f6fc80775f0 con 0x7f6fd81a3e20
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:37.853+0000 7f6fd77fe640 1 --2- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f6fc403dcc0 0x7f6fc4040180 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:38.053+0000 7f6fd77fe640 1 -- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f6fc403dcc0 msgr2=0x7f6fc4040180 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/2344477988
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:38.053+0000 7f6fd77fe640 1 --2- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f6fc403dcc0 0x7f6fc4040180 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.400000
2026-03-10T07:19:41.708 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:38.457+0000 7f6fd77fe640 1 -- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f6fc403dcc0 msgr2=0x7f6fc4040180 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/2344477988
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:38.457+0000 7f6fd77fe640 1 --2- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f6fc403dcc0 0x7f6fc4040180 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.800000
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:39.257+0000 7f6fd77fe640 1 -- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f6fc403dcc0 msgr2=0x7f6fc4040180 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/2344477988
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:39.257+0000 7f6fd77fe640 1 --2- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f6fc403dcc0 0x7f6fc4040180 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 1.600000
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:40.637+0000 7f6fd57fa640 1 -- 192.168.123.100:0/4044655961 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mgrmap(e 5) ==== 50014+0+0 (secure 0 0 0) 0x7f6fc80763f0 con 0x7f6fd81a3e20
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:40.637+0000 7f6fd57fa640 1 -- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f6fc403dcc0 msgr2=0x7f6fc4040180 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:40.637+0000 7f6fd57fa640 1 --2- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f6fc403dcc0 0x7f6fc4040180 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.645+0000 7f6fd57fa640 1 -- 192.168.123.100:0/4044655961 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mgrmap(e 6) ==== 50141+0+0 (secure 0 0 0) 0x7f6fc8076e70 con 0x7f6fd81a3e20
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.645+0000 7f6fd57fa640 1 --2- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f6fc4041610 0x7f6fc4043a00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.645+0000 7f6fd57fa640 1 -- 192.168.123.100:0/4044655961 --> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7f6fc4040890 con 0x7f6fc4041610
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.645+0000 7f6fd77fe640 1 --2- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f6fc4041610 0x7f6fc4043a00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.645+0000 7f6fd77fe640 1 --2- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f6fc4041610 0x7f6fc4043a00 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f6fd0003e00 tx=0x7f6fd0007410 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.645+0000 7f6fd57fa640 1 -- 192.168.123.100:0/4044655961 <== mgr.14118 v2:192.168.123.100:6800/1944661180 1 ==== command_reply(tid 0: 0 ) ==== 8+0+8901 (secure 0 0 0) 0x7f6fc4040890 con 0x7f6fc4041610
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.649+0000 7f6fde471640 1 -- 192.168.123.100:0/4044655961 --> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] -- command(tid 1: {"prefix": "mgr_status"}) -- 0x7f6fd8074730 con 0x7f6fc4041610
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.653+0000 7f6fd57fa640 1 -- 192.168.123.100:0/4044655961 <== mgr.14118 v2:192.168.123.100:6800/1944661180 2 ==== command_reply(tid 1: 0 ) ==== 8+0+51 (secure 0 0 0) 0x7f6fd8074730 con 0x7f6fc4041610
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.653+0000 7f6fde471640 1 -- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f6fc4041610 msgr2=0x7f6fc4043a00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.653+0000 7f6fde471640 1 --2- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f6fc4041610 0x7f6fc4043a00 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f6fd0003e00 tx=0x7f6fd0007410 comp rx=0 tx=0).stop
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.653+0000 7f6fde471640 1 -- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6fd81a3e20 msgr2=0x7f6fd81a2590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.653+0000 7f6fde471640 1 --2- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6fd81a3e20 0x7f6fd81a2590 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f6fc8030780 tx=0x7f6fc8037640 comp rx=0 tx=0).stop
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.653+0000 7f6fde471640 1 -- 192.168.123.100:0/4044655961 shutdown_connections
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.653+0000 7f6fde471640 1 --2- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f6fc4041610 0x7f6fc4043a00 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.653+0000 7f6fde471640 1 --2- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:6800/2344477988,v1:192.168.123.100:6801/2344477988] conn(0x7f6fc403dcc0 0x7f6fc4040180 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.653+0000 7f6fde471640 1 --2- 192.168.123.100:0/4044655961 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6fd81a3e20 0x7f6fd81a2590 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.653+0000 7f6fde471640 1 -- 192.168.123.100:0/4044655961 >> 192.168.123.100:0/4044655961 conn(0x7f6fd806fa60 msgr2=0x7f6fd8070390 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.653+0000 7f6fde471640 1 -- 192.168.123.100:0/4044655961 shutdown_connections
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.653+0000 7f6fde471640 1 -- 192.168.123.100:0/4044655961 wait complete.
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:mgr epoch 4 is available
2026-03-10T07:19:41.709 INFO:teuthology.orchestra.run.vm00.stdout:Setting orchestrator backend to cephadm...
2026-03-10T07:19:42.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:41 vm00 bash[20701]: cephadm 2026-03-10T07:19:40.660275+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration.
2026-03-10T07:19:42.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:41 vm00 bash[20701]: audit 2026-03-10T07:19:40.694601+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T07:19:42.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:41 vm00 bash[20701]: audit 2026-03-10T07:19:41.118591+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y'
2026-03-10T07:19:42.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:41 vm00 bash[20701]: audit 2026-03-10T07:19:41.121524+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y'
2026-03-10T07:19:42.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:41 vm00 bash[20701]: cluster 2026-03-10T07:19:41.652321+0000 mon.a (mon.0) 53 : cluster [DBG] mgrmap e6: y(active, since 1.0141s)
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.845+0000 7f61a1741640 1 Processor -- start
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.845+0000 7f61a1741640 1 -- start start
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.845+0000 7f61a1741640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f619c07d170 0x7f619c07d570 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.845+0000 7f61a1741640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f619c07db40 con 0x7f619c07d170
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f619affd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f619c07d170 0x7f619c07d570 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f619affd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f619c07d170 0x7f619c07d570 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40292/0 (socket says 192.168.123.100:40292)
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f619affd640 1 -- 192.168.123.100:0/3425927920 learned_addr learned my addr 192.168.123.100:0/3425927920 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f619affd640 1 -- 192.168.123.100:0/3425927920 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f619c07e3c0 con 0x7f619c07d170
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f619affd640 1 --2- 192.168.123.100:0/3425927920 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f619c07d170 0x7f619c07d570 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7f6184009920 tx=0x7f618402ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=27a43ccd00972638 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f6199ffb640 1 -- 192.168.123.100:0/3425927920 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f618403c070 con 0x7f619c07d170
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f6199ffb640 1 -- 192.168.123.100:0/3425927920 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f6184037440 con 0x7f619c07d170
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f61a1741640 1 -- 192.168.123.100:0/3425927920 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f619c07d170 msgr2=0x7f619c07d570 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f61a1741640 1 --2- 192.168.123.100:0/3425927920 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f619c07d170 0x7f619c07d570 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7f6184009920 tx=0x7f618402ef20 comp rx=0 tx=0).stop
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f61a1741640 1 -- 192.168.123.100:0/3425927920 shutdown_connections
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f61a1741640 1 --2- 192.168.123.100:0/3425927920 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f619c07d170 0x7f619c07d570 unknown :-1 s=CLOSED pgs=35 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f61a1741640 1 -- 192.168.123.100:0/3425927920 >> 192.168.123.100:0/3425927920 conn(0x7f619c07bd50 msgr2=0x7f619c07c1a0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f61a1741640 1 -- 192.168.123.100:0/3425927920 shutdown_connections
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f61a1741640 1 -- 192.168.123.100:0/3425927920 wait complete.
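[editor's note] Each `/usr/bin/ceph: stderr` burst above is one short-lived CLI messenger with the same lifecycle: Processor start, connect, BANNER_CONNECTING, HELLO_CONNECTING, READY, then mark_down/stop and "wait complete.". When skimming these runs, a small filter that keeps only the connection-state transitions helps; a sketch matched against the line shapes above (the log path is hypothetical):

    import re

    STATE = re.compile(r"\bs=([A-Z_]+)\b")

    def msgr_states(lines):
        """Yield msgr2 connection states (s=...) from ceph debug-ms lines."""
        for line in lines:
            if "--2-" in line and (m := STATE.search(line)):
                yield m.group(1)

    with open("teuthology.log") as f:  # hypothetical path to this log
        print(sorted(set(msgr_states(f))))
    # expected states seen above include BANNER_CONNECTING, CLOSED,
    # HELLO_CONNECTING, NONE, READY, START_CONNECT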
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f61a1741640 1 Processor -- start
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f61a1741640 1 -- start start
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f61a1741640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f619c197c40 0x7f619c198060 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f61a1741640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f619c07f0c0 con 0x7f619c197c40
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f619affd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f619c197c40 0x7f619c198060 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f619affd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f619c197c40 0x7f619c198060 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40302/0 (socket says 192.168.123.100:40302)
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f619affd640 1 -- 192.168.123.100:0/1067087805 learned_addr learned my addr 192.168.123.100:0/1067087805 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f619affd640 1 -- 192.168.123.100:0/1067087805 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f619c1985a0 con 0x7f619c197c40
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f619affd640 1 --2- 192.168.123.100:0/1067087805 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f619c197c40 0x7f619c198060 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7f6184035b50 tx=0x7f6184035b80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f617bfff640 1 -- 192.168.123.100:0/1067087805 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6184045070 con 0x7f619c197c40
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f617bfff640 1 -- 192.168.123.100:0/1067087805 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f618402fdd0 con 0x7f619c197c40
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f617bfff640 1 -- 192.168.123.100:0/1067087805 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f618403c050 con 0x7f619c197c40
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f61a1741640 1 -- 192.168.123.100:0/1067087805 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f619c198830 con 0x7f619c197c40
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f61a1741640 1 -- 192.168.123.100:0/1067087805 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f619c19b390 con 0x7f619c197c40
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.853+0000 7f617bfff640 1 -- 192.168.123.100:0/1067087805 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 6) ==== 50141+0+0 (secure 0 0 0) 0x7f618404a430 con 0x7f619c197c40
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.857+0000 7f61a1741640 1 -- 192.168.123.100:0/1067087805 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6160005180 con 0x7f619c197c40
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.857+0000 7f617bfff640 1 --2- 192.168.123.100:0/1067087805 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f617003db20 0x7f617003ffe0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.857+0000 7f617bfff640 1 -- 192.168.123.100:0/1067087805 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7f6184076d30 con 0x7f619c197c40
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.857+0000 7f619a7fc640 1 --2- 192.168.123.100:0/1067087805 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f617003db20 0x7f617003ffe0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.861+0000 7f617bfff640 1 -- 192.168.123.100:0/1067087805 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f618403c1f0 con 0x7f619c197c40
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.861+0000 7f619a7fc640 1 --2- 192.168.123.100:0/1067087805 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f617003db20 0x7f617003ffe0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f618c00ad30 tx=0x7f618c0093f0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:41.973+0000 7f61a1741640 1 -- 192.168.123.100:0/1067087805 --> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] -- mgr_command(tid 0: {"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}) -- 0x7f6160002bf0 con 0x7f617003db20
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.057+0000 7f617bfff640 1 -- 192.168.123.100:0/1067087805 <== mgr.14118 v2:192.168.123.100:6800/1944661180 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+0 (secure 0 0 0) 0x7f6160002bf0 con 0x7f617003db20
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.061+0000 7f61a1741640 1 -- 192.168.123.100:0/1067087805 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f617003db20 msgr2=0x7f617003ffe0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.061+0000 7f61a1741640 1 --2- 192.168.123.100:0/1067087805 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f617003db20 0x7f617003ffe0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f618c00ad30 tx=0x7f618c0093f0 comp rx=0 tx=0).stop
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.061+0000 7f61a1741640 1 -- 192.168.123.100:0/1067087805 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f619c197c40 msgr2=0x7f619c198060 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.061+0000 7f61a1741640 1 --2- 192.168.123.100:0/1067087805 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f619c197c40 0x7f619c198060 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7f6184035b50 tx=0x7f6184035b80 comp rx=0 tx=0).stop
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.061+0000 7f61a1741640 1 -- 192.168.123.100:0/1067087805 shutdown_connections
2026-03-10T07:19:42.182 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.061+0000 7f61a1741640 1 --2- 192.168.123.100:0/1067087805 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f617003db20 0x7f617003ffe0 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:42.183 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.061+0000 7f61a1741640 1 --2- 192.168.123.100:0/1067087805 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f619c197c40 0x7f619c198060 unknown :-1 s=CLOSED pgs=36 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:42.183 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.061+0000 7f61a1741640 1 -- 192.168.123.100:0/1067087805 >> 192.168.123.100:0/1067087805 conn(0x7f619c07bd50 msgr2=0x7f619c106eb0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:19:42.183 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.061+0000 7f61a1741640 1 -- 192.168.123.100:0/1067087805 shutdown_connections
2026-03-10T07:19:42.183 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.061+0000 7f61a1741640 1 -- 192.168.123.100:0/1067087805 wait complete.
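[editor's note] The `mgr_command(tid 0: {"prefix": "orch set backend", "module_name": "cephadm", ...})` above is the harness running the equivalent of `ceph orch set backend cephadm`, the step announced by "Setting orchestrator backend to cephadm...". A minimal reproduction of that step from any node with the admin keyring; enabling the mgr module first is the usual prerequisite, shown here as an assumption rather than something this log performs at this point:

    import subprocess

    def ceph(*args: str) -> str:
        """Run a ceph CLI command with the admin keyring and return stdout."""
        return subprocess.run(
            ["ceph", *args], check=True, capture_output=True, text=True,
        ).stdout

    ceph("mgr", "module", "enable", "cephadm")  # load the orchestrator module
    ceph("orch", "set", "backend", "cephadm")   # the command dispatched above
    print(ceph("orch", "status"))               # should report the cephadm backend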
2026-03-10T07:19:42.650 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-10T07:19:42.650 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.385+0000 7f017d163640 1 Processor -- start 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.385+0000 7f017d163640 1 -- start start 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.385+0000 7f017d163640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f017807c750 0x7f017807abb0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.385+0000 7f017d163640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f017807b0f0 con 0x7f017807c750 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.385+0000 7f0176d76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f017807c750 0x7f017807abb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.385+0000 7f0176d76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f017807c750 0x7f017807abb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40312/0 (socket says 192.168.123.100:40312) 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.385+0000 7f0176d76640 1 -- 192.168.123.100:0/1511451298 learned_addr learned my addr 192.168.123.100:0/1511451298 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.385+0000 7f0176d76640 1 -- 192.168.123.100:0/1511451298 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f017807b270 con 0x7f017807c750 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f0176d76640 1 --2- 192.168.123.100:0/1511451298 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f017807c750 0x7f017807abb0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f0160009920 tx=0x7f016002ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=d6d3fa5b12665c27 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f0175d74640 1 -- 192.168.123.100:0/1511451298 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f016003c070 con 0x7f017807c750 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f0175d74640 1 -- 192.168.123.100:0/1511451298 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f0160037440 con 0x7f017807c750 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f017d163640 1 -- 192.168.123.100:0/1511451298 >> 
[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f017807c750 msgr2=0x7f017807abb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f017d163640 1 --2- 192.168.123.100:0/1511451298 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f017807c750 0x7f017807abb0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f0160009920 tx=0x7f016002ef20 comp rx=0 tx=0).stop 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f017d163640 1 -- 192.168.123.100:0/1511451298 shutdown_connections 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f017d163640 1 --2- 192.168.123.100:0/1511451298 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f017807c750 0x7f017807abb0 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f017d163640 1 -- 192.168.123.100:0/1511451298 >> 192.168.123.100:0/1511451298 conn(0x7f0178101d30 msgr2=0x7f0178104170 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f017d163640 1 -- 192.168.123.100:0/1511451298 shutdown_connections 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f017d163640 1 -- 192.168.123.100:0/1511451298 wait complete. 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f017d163640 1 Processor -- start 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f017d163640 1 -- start start 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f017d163640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f017807c750 0x7f01781a2ba0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f017d163640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f01781084a0 con 0x7f017807c750 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f0176d76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f017807c750 0x7f01781a2ba0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f0176d76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f017807c750 0x7f01781a2ba0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40322/0 (socket says 192.168.123.100:40322) 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f0176d76640 1 -- 192.168.123.100:0/969240899 learned_addr learned my addr 
192.168.123.100:0/969240899 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f0176d76640 1 -- 192.168.123.100:0/969240899 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f01781a30e0 con 0x7f017807c750 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f0176d76640 1 --2- 192.168.123.100:0/969240899 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f017807c750 0x7f01781a2ba0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f0160037b40 tx=0x7f0160037b70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f0157fff640 1 -- 192.168.123.100:0/969240899 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f016003c070 con 0x7f017807c750 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f017d163640 1 -- 192.168.123.100:0/969240899 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f01781a3370 con 0x7f017807c750 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f017d163640 1 -- 192.168.123.100:0/969240899 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f01781a6060 con 0x7f017807c750 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f0157fff640 1 -- 192.168.123.100:0/969240899 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f0160045070 con 0x7f017807c750 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.389+0000 7f0157fff640 1 -- 192.168.123.100:0/969240899 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0160040a60 con 0x7f017807c750 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.393+0000 7f0157fff640 1 -- 192.168.123.100:0/969240899 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 6) ==== 50141+0+0 (secure 0 0 0) 0x7f016004a460 con 0x7f017807c750 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.393+0000 7f0157fff640 1 --2- 192.168.123.100:0/969240899 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f014803db70 0x7f0148040030 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.393+0000 7f0157fff640 1 -- 192.168.123.100:0/969240899 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7f0160077410 con 0x7f017807c750 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.393+0000 7f0176575640 1 --2- 192.168.123.100:0/969240899 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f014803db70 0x7f0148040030 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload 
supported=3 required=0 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.393+0000 7f0176575640 1 --2- 192.168.123.100:0/969240899 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f014803db70 0x7f0148040030 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f016c0099c0 tx=0x7f016c006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.393+0000 7f0155ffb640 1 -- 192.168.123.100:0/969240899 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0134005180 con 0x7f017807c750 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.397+0000 7f0157fff640 1 -- 192.168.123.100:0/969240899 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f0160036e20 con 0x7f017807c750 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.497+0000 7f0155ffb640 1 -- 192.168.123.100:0/969240899 --> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] -- mgr_command(tid 0: {"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}) -- 0x7f0134002bf0 con 0x7f014803db70 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.497+0000 7f0157fff640 1 -- 192.168.123.100:0/969240899 <== mgr.14118 v2:192.168.123.100:6800/1944661180 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+16 (secure 0 0 0) 0x7f0134002bf0 con 0x7f014803db70 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.501+0000 7f0155ffb640 1 -- 192.168.123.100:0/969240899 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f014803db70 msgr2=0x7f0148040030 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.501+0000 7f0155ffb640 1 --2- 192.168.123.100:0/969240899 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f014803db70 0x7f0148040030 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f016c0099c0 tx=0x7f016c006eb0 comp rx=0 tx=0).stop 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.501+0000 7f0155ffb640 1 -- 192.168.123.100:0/969240899 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f017807c750 msgr2=0x7f01781a2ba0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.501+0000 7f0155ffb640 1 --2- 192.168.123.100:0/969240899 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f017807c750 0x7f01781a2ba0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f0160037b40 tx=0x7f0160037b70 comp rx=0 tx=0).stop 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.501+0000 7f0155ffb640 1 -- 192.168.123.100:0/969240899 shutdown_connections 2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.501+0000 7f0155ffb640 1 --2- 192.168.123.100:0/969240899 >> 
[v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f014803db70 0x7f0148040030 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.501+0000 7f0155ffb640 1 --2- 192.168.123.100:0/969240899 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f017807c750 0x7f01781a2ba0 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.501+0000 7f0155ffb640 1 -- 192.168.123.100:0/969240899 >> 192.168.123.100:0/969240899 conn(0x7f0178101d30 msgr2=0x7f0178107810 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.501+0000 7f0155ffb640 1 -- 192.168.123.100:0/969240899 shutdown_connections
2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:42.501+0000 7f0155ffb640 1 -- 192.168.123.100:0/969240899 wait complete.
2026-03-10T07:19:42.651 INFO:teuthology.orchestra.run.vm00.stdout:Generating ssh key...
2026-03-10T07:19:42.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:42 vm00 bash[20701]: cephadm 2026-03-10T07:19:41.641762+0000 mgr.y (mgr.14118) 2 : cephadm [INF] [10/Mar/2026:07:19:41] ENGINE Bus STARTING
2026-03-10T07:19:42.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:42 vm00 bash[20701]: audit 2026-03-10T07:19:41.653098+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T07:19:42.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:42 vm00 bash[20701]: audit 2026-03-10T07:19:41.657577+0000 mgr.y (mgr.14118) 4 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T07:19:42.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:42 vm00 bash[20701]: cephadm 2026-03-10T07:19:41.743167+0000 mgr.y (mgr.14118) 5 : cephadm [INF] [10/Mar/2026:07:19:41] ENGINE Serving on http://192.168.123.100:8765
2026-03-10T07:19:42.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:42 vm00 bash[20701]: cephadm 2026-03-10T07:19:41.852692+0000 mgr.y (mgr.14118) 6 : cephadm [INF] [10/Mar/2026:07:19:41] ENGINE Serving on https://192.168.123.100:7150
2026-03-10T07:19:42.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:42 vm00 bash[20701]: cephadm 2026-03-10T07:19:41.852777+0000 mgr.y (mgr.14118) 7 : cephadm [INF] [10/Mar/2026:07:19:41] ENGINE Bus STARTED
2026-03-10T07:19:42.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:42 vm00 bash[20701]: audit 2026-03-10T07:19:41.855366+0000 mon.a (mon.0) 54 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:19:42.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:42 vm00 bash[20701]: cephadm 2026-03-10T07:19:41.855980+0000 mgr.y (mgr.14118) 8 : cephadm [INF] [10/Mar/2026:07:19:41] ENGINE Client ('192.168.123.100', 56074) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T07:19:42.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:42 vm00 bash[20701]: audit 2026-03-10T07:19:41.978812+0000 mgr.y (mgr.14118) 9 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:19:42.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:42 vm00 bash[20701]: audit 2026-03-10T07:19:42.055510+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y'
2026-03-10T07:19:42.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:42 vm00 bash[20701]: audit 2026-03-10T07:19:42.065236+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
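The journal entries above capture the cephadm mgr module coming up (its CherryPy endpoints on http :8765 and https :7150) and the client issuing 'orch set backend' to point the orchestrator CLI at it. A minimal sketch of the equivalent manual sequence, assuming an admin keyring in the default location; the ceph() helper is illustrative, not teuthology code:

    import json
    import subprocess

    def ceph(*args):
        """Run a ceph CLI command and return its stdout."""
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    ceph("mgr", "module", "enable", "cephadm")   # module starts, serving on 8765/7150
    ceph("orch", "set", "backend", "cephadm")    # the command in the audit entry above
    print(json.loads(ceph("orch", "status", "--format", "json")))  # sanity check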
2026-03-10T07:19:43.330 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.149+0000 7f42b94ae640 1 Processor -- start 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.149+0000 7f42b94ae640 1 -- start start 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.149+0000 7f42b94ae640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f42b4108860 0x7f42b4108c60 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.149+0000 7f42b94ae640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f42b4109230 con 0x7f42b4108860 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.149+0000 7f42b2ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f42b4108860 0x7f42b4108c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.149+0000 7f42b2ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f42b4108860 0x7f42b4108c60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40332/0 (socket says 192.168.123.100:40332) 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.149+0000 7f42b2ffd640 1 -- 192.168.123.100:0/3447997215 learned_addr learned my addr 192.168.123.100:0/3447997215 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.153+0000 7f42b2ffd640 1 -- 192.168.123.100:0/3447997215 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f42b4109a60 con 0x7f42b4108860 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.153+0000 7f42b2ffd640 1 --2- 192.168.123.100:0/3447997215 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f42b4108860 0x7f42b4108c60 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f42a8009b80 tx=0x7f42a802f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=90c9e7c9abfb7cce server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.153+0000 7f42b1ffb640 1 -- 192.168.123.100:0/3447997215 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f42a803c070 con 0x7f42b4108860 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.153+0000 7f42b1ffb640 1 -- 192.168.123.100:0/3447997215 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f42a8037440 con 0x7f42b4108860 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.153+0000 7f42b94ae640 1 -- 192.168.123.100:0/3447997215 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f42b4108860 msgr2=0x7f42b4108c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.153+0000 7f42b94ae640 1 --2- 192.168.123.100:0/3447997215 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f42b4108860 0x7f42b4108c60 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f42a8009b80 tx=0x7f42a802f190 comp rx=0 tx=0).stop 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.153+0000 7f42b94ae640 1 -- 192.168.123.100:0/3447997215 shutdown_connections 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.153+0000 7f42b94ae640 1 --2- 192.168.123.100:0/3447997215 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f42b4108860 0x7f42b4108c60 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.153+0000 7f42b94ae640 1 -- 192.168.123.100:0/3447997215 >> 192.168.123.100:0/3447997215 conn(0x7f42b407bda0 msgr2=0x7f42b407c1b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.153+0000 7f42b94ae640 1 -- 192.168.123.100:0/3447997215 shutdown_connections 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.153+0000 7f42b94ae640 1 -- 192.168.123.100:0/3447997215 wait complete. 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.153+0000 7f42b94ae640 1 Processor -- start 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.153+0000 7f42b94ae640 1 -- start start 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.153+0000 7f42b94ae640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f42b4108860 0x7f42b419e540 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.153+0000 7f42b94ae640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f42b410a760 con 0x7f42b4108860 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.153+0000 7f42b2ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f42b4108860 0x7f42b419e540 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.153+0000 7f42b2ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f42b4108860 0x7f42b419e540 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40348/0 (socket says 192.168.123.100:40348) 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.153+0000 7f42b2ffd640 1 -- 192.168.123.100:0/2649076609 learned_addr learned my addr 192.168.123.100:0/2649076609 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: 
stderr 2026-03-10T07:19:43.153+0000 7f42b2ffd640 1 -- 192.168.123.100:0/2649076609 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f42b419ea80 con 0x7f42b4108860 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.153+0000 7f42b2ffd640 1 --2- 192.168.123.100:0/2649076609 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f42b4108860 0x7f42b419e540 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7f42a803a040 tx=0x7f42a80043e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.153+0000 7f4293fff640 1 -- 192.168.123.100:0/2649076609 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f42a8045070 con 0x7f42b4108860 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.153+0000 7f4293fff640 1 -- 192.168.123.100:0/2649076609 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f42a8003b30 con 0x7f42b4108860 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.157+0000 7f4293fff640 1 -- 192.168.123.100:0/2649076609 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f42a803c040 con 0x7f42b4108860 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.157+0000 7f42b94ae640 1 -- 192.168.123.100:0/2649076609 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f42b419ed10 con 0x7f42b4108860 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.157+0000 7f42b94ae640 1 -- 192.168.123.100:0/2649076609 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f42b41a1a00 con 0x7f42b4108860 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.157+0000 7f42b94ae640 1 -- 192.168.123.100:0/2649076609 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f4278005180 con 0x7f42b4108860 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.157+0000 7f4293fff640 1 -- 192.168.123.100:0/2649076609 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 7) ==== 50247+0+0 (secure 0 0 0) 0x7f42a8003710 con 0x7f42b4108860 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.157+0000 7f4293fff640 1 --2- 192.168.123.100:0/2649076609 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f428803d8d0 0x7f428803fd90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.157+0000 7f4293fff640 1 -- 192.168.123.100:0/2649076609 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7f42a8076100 con 0x7f42b4108860 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.161+0000 7f42b27fc640 1 --2- 192.168.123.100:0/2649076609 >> 
[v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f428803d8d0 0x7f428803fd90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.161+0000 7f42b27fc640 1 --2- 192.168.123.100:0/2649076609 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f428803d8d0 0x7f428803fd90 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f429c0099c0 tx=0x7f429c006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.161+0000 7f4293fff640 1 -- 192.168.123.100:0/2649076609 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f42a803d280 con 0x7f42b4108860 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.257+0000 7f42b94ae640 1 -- 192.168.123.100:0/2649076609 --> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] -- mgr_command(tid 0: {"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}) -- 0x7f4278002bf0 con 0x7f428803d8d0 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.277+0000 7f4293fff640 1 -- 192.168.123.100:0/2649076609 <== mgr.14118 v2:192.168.123.100:6800/1944661180 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+0 (secure 0 0 0) 0x7f4278002bf0 con 0x7f428803d8d0 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.281+0000 7f42b94ae640 1 -- 192.168.123.100:0/2649076609 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f428803d8d0 msgr2=0x7f428803fd90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.281+0000 7f42b94ae640 1 --2- 192.168.123.100:0/2649076609 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f428803d8d0 0x7f428803fd90 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f429c0099c0 tx=0x7f429c006eb0 comp rx=0 tx=0).stop 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.281+0000 7f42b94ae640 1 -- 192.168.123.100:0/2649076609 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f42b4108860 msgr2=0x7f42b419e540 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.281+0000 7f42b94ae640 1 --2- 192.168.123.100:0/2649076609 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f42b4108860 0x7f42b419e540 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7f42a803a040 tx=0x7f42a80043e0 comp rx=0 tx=0).stop 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.281+0000 7f42b94ae640 1 -- 192.168.123.100:0/2649076609 shutdown_connections 2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.281+0000 7f42b94ae640 1 --2- 192.168.123.100:0/2649076609 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f428803d8d0 
0x7f428803fd90 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.281+0000 7f42b94ae640 1 --2- 192.168.123.100:0/2649076609 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f42b4108860 0x7f42b419e540 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.281+0000 7f42b94ae640 1 -- 192.168.123.100:0/2649076609 >> 192.168.123.100:0/2649076609 conn(0x7f42b407bda0 msgr2=0x7f42b4105e00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.281+0000 7f42b94ae640 1 -- 192.168.123.100:0/2649076609 shutdown_connections
2026-03-10T07:19:43.331 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.281+0000 7f42b94ae640 1 -- 192.168.123.100:0/2649076609 wait complete.
2026-03-10T07:19:43.582 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:43 vm00 bash[20971]: Generating public/private ed25519 key pair.
2026-03-10T07:19:43.582 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:43 vm00 bash[20971]: Your identification has been saved in /tmp/tmpyox2pcu4/key
2026-03-10T07:19:43.582 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:43 vm00 bash[20971]: Your public key has been saved in /tmp/tmpyox2pcu4/key.pub
2026-03-10T07:19:43.582 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:43 vm00 bash[20971]: The key fingerprint is:
2026-03-10T07:19:43.582 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:43 vm00 bash[20971]: SHA256:oFKhyCVXouVMG1r1dvOj8oxxmtq7K/riISTOejardUA ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953
2026-03-10T07:19:43.582 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:43 vm00 bash[20971]: The key's randomart image is:
2026-03-10T07:19:43.582 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:43 vm00 bash[20971]: +--[ED25519 256]--+
2026-03-10T07:19:43.582 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:43 vm00 bash[20971]: | . O+o |
2026-03-10T07:19:43.582 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:43 vm00 bash[20971]: |..%.+.. |
2026-03-10T07:19:43.582 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:43 vm00 bash[20971]: |.+E+. .o o |
2026-03-10T07:19:43.582 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:43 vm00 bash[20971]: | . . .... o |
2026-03-10T07:19:43.582 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:43 vm00 bash[20971]: |..o . S o |
2026-03-10T07:19:43.582 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:43 vm00 bash[20971]: |= o . . |
2026-03-10T07:19:43.582 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:43 vm00 bash[20971]: | +... o o |
2026-03-10T07:19:43.582 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:43 vm00 bash[20971]: |..=o... X |
2026-03-10T07:19:43.582 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:43 vm00 bash[20971]: |o+o=+ooO+o |
2026-03-10T07:19:43.582 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:43 vm00 bash[20971]: +----[SHA256]-----+
2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINYKws7A3YDjLcL26+Og+43ogqH5B/3kceilbHXDMvGQ ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953
2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.449+0000 7f9a14113640 1 Processor -- start
2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.449+0000 7f9a14113640 1 -- start start
2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.449+0000 7f9a14113640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a0c104dc0 0x7f9a0c1071d0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.449+0000 7f9a14113640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f9a0c079e30 con 0x7f9a0c104dc0
2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.449+0000 7f9a11e88640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a0c104dc0 0x7f9a0c1071d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.449+0000 7f9a11e88640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a0c104dc0 0x7f9a0c1071d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40350/0 (socket says 192.168.123.100:40350)
2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.449+0000 7f9a11e88640 1 -- 192.168.123.100:0/2040336350 learned_addr learned my addr 192.168.123.100:0/2040336350 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.449+0000 7f9a11e88640 1 -- 192.168.123.100:0/2040336350 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f9a0c107710 con 0x7f9a0c104dc0
2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.449+0000 7f9a11e88640 1 --2- 192.168.123.100:0/2040336350 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a0c104dc0 0x7f9a0c1071d0 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f9a00009920 tx=0x7f9a0002ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=5ac7d0267d4700c server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.449+0000 7f9a10e86640 1 -- 192.168.123.100:0/2040336350 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f9a0003c070 con 0x7f9a0c104dc0
2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.449+0000 7f9a10e86640 1 --
192.168.123.100:0/2040336350 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f9a00037440 con 0x7f9a0c104dc0 2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.449+0000 7f9a14113640 1 -- 192.168.123.100:0/2040336350 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a0c104dc0 msgr2=0x7f9a0c1071d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.449+0000 7f9a14113640 1 --2- 192.168.123.100:0/2040336350 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a0c104dc0 0x7f9a0c1071d0 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f9a00009920 tx=0x7f9a0002ef20 comp rx=0 tx=0).stop 2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.449+0000 7f9a14113640 1 -- 192.168.123.100:0/2040336350 shutdown_connections 2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.449+0000 7f9a14113640 1 --2- 192.168.123.100:0/2040336350 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a0c104dc0 0x7f9a0c1071d0 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.449+0000 7f9a14113640 1 -- 192.168.123.100:0/2040336350 >> 192.168.123.100:0/2040336350 conn(0x7f9a0c100bf0 msgr2=0x7f9a0c103030 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.453+0000 7f9a14113640 1 -- 192.168.123.100:0/2040336350 shutdown_connections 2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.453+0000 7f9a14113640 1 -- 192.168.123.100:0/2040336350 wait complete. 
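The stderr traces above and below repeat one msgr2 client lifecycle per /usr/bin/ceph invocation: a connection object walks s=NONE, BANNER_CONNECTING, HELLO_CONNECTING, READY, and finally mark_down/CLOSED once the command completes. A small sketch for pulling those transitions out of a log like this one; the regex is inferred from the line format here, and conn_states is a hypothetical helper, not part of teuthology:

    import re

    # Matches e.g. "conn(0x7f9a0c104dc0 0x7f9a0c1071d0 unknown :-1 s=READY pgs=41 ..."
    # capturing the first connection pointer and the s= state field.
    CONN_RE = re.compile(r"conn\((?P<ptr>0x[0-9a-f]+) .*? s=(?P<state>[A-Z_]+)")

    def conn_states(log_text):
        """Map each connection pointer to the ordered states it was logged in."""
        states = {}
        for line in log_text.splitlines():
            m = CONN_RE.search(line)
            if m:
                states.setdefault(m.group("ptr"), []).append(m.group("state"))
        return states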
2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.453+0000 7f9a14113640 1 Processor -- start 2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.453+0000 7f9a14113640 1 -- start start 2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.453+0000 7f9a14113640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a0c104dc0 0x7f9a0c1a29d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.453+0000 7f9a14113640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f9a0c108590 con 0x7f9a0c104dc0 2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.453+0000 7f9a11e88640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a0c104dc0 0x7f9a0c1a29d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:43.614 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.453+0000 7f9a11e88640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a0c104dc0 0x7f9a0c1a29d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40366/0 (socket says 192.168.123.100:40366) 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.453+0000 7f9a11e88640 1 -- 192.168.123.100:0/2899020192 learned_addr learned my addr 192.168.123.100:0/2899020192 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.453+0000 7f9a11e88640 1 -- 192.168.123.100:0/2899020192 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f9a0c1a2f10 con 0x7f9a0c104dc0 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.453+0000 7f9a11e88640 1 --2- 192.168.123.100:0/2899020192 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a0c104dc0 0x7f9a0c1a29d0 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f9a00035d50 tx=0x7f9a00035d80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.453+0000 7f99faffd640 1 -- 192.168.123.100:0/2899020192 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f9a00045070 con 0x7f9a0c104dc0 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.453+0000 7f99faffd640 1 -- 192.168.123.100:0/2899020192 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f9a00040430 con 0x7f9a0c104dc0 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.453+0000 7f99faffd640 1 -- 192.168.123.100:0/2899020192 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f9a0003c050 con 0x7f9a0c104dc0 2026-03-10T07:19:43.615 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.453+0000 7f9a14113640 1 -- 192.168.123.100:0/2899020192 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f9a0c1a31a0 con 0x7f9a0c104dc0 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.453+0000 7f9a14113640 1 -- 192.168.123.100:0/2899020192 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f9a0c1a5e90 con 0x7f9a0c104dc0 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.453+0000 7f99faffd640 1 -- 192.168.123.100:0/2899020192 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 7) ==== 50247+0+0 (secure 0 0 0) 0x7f9a0003f3d0 con 0x7f9a0c104dc0 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.453+0000 7f99faffd640 1 --2- 192.168.123.100:0/2899020192 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f99e403dc40 0x7f99e4040100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.453+0000 7f99faffd640 1 -- 192.168.123.100:0/2899020192 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7f9a00077250 con 0x7f9a0c104dc0 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.457+0000 7f9a11687640 1 --2- 192.168.123.100:0/2899020192 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f99e403dc40 0x7f99e4040100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.457+0000 7f9a14113640 1 -- 192.168.123.100:0/2899020192 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f99d4005180 con 0x7f9a0c104dc0 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.461+0000 7f9a11687640 1 --2- 192.168.123.100:0/2899020192 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f99e403dc40 0x7f99e4040100 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f99fc0099c0 tx=0x7f99fc006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.461+0000 7f99faffd640 1 -- 192.168.123.100:0/2899020192 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f9a00076ba0 con 0x7f9a0c104dc0 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.557+0000 7f9a14113640 1 -- 192.168.123.100:0/2899020192 --> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] -- mgr_command(tid 0: {"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}) -- 0x7f99d4002bf0 con 0x7f99e403dc40 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.557+0000 7f99faffd640 1 -- 192.168.123.100:0/2899020192 <== mgr.14118 
v2:192.168.123.100:6800/1944661180 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+123 (secure 0 0 0) 0x7f99d4002bf0 con 0x7f99e403dc40 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.561+0000 7f9a14113640 1 -- 192.168.123.100:0/2899020192 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f99e403dc40 msgr2=0x7f99e4040100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.561+0000 7f9a14113640 1 --2- 192.168.123.100:0/2899020192 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f99e403dc40 0x7f99e4040100 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f99fc0099c0 tx=0x7f99fc006eb0 comp rx=0 tx=0).stop 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.561+0000 7f9a14113640 1 -- 192.168.123.100:0/2899020192 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a0c104dc0 msgr2=0x7f9a0c1a29d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.561+0000 7f9a14113640 1 --2- 192.168.123.100:0/2899020192 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a0c104dc0 0x7f9a0c1a29d0 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f9a00035d50 tx=0x7f9a00035d80 comp rx=0 tx=0).stop 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.561+0000 7f9a14113640 1 -- 192.168.123.100:0/2899020192 shutdown_connections 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.561+0000 7f9a14113640 1 --2- 192.168.123.100:0/2899020192 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f99e403dc40 0x7f99e4040100 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.561+0000 7f9a14113640 1 --2- 192.168.123.100:0/2899020192 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a0c104dc0 0x7f9a0c1a29d0 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.561+0000 7f9a14113640 1 -- 192.168.123.100:0/2899020192 >> 192.168.123.100:0/2899020192 conn(0x7f9a0c100bf0 msgr2=0x7f9a0c10acd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.561+0000 7f9a14113640 1 -- 192.168.123.100:0/2899020192 shutdown_connections 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.561+0000 7f9a14113640 1 -- 192.168.123.100:0/2899020192 wait complete. 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:Adding key to root@localhost authorized_keys... 2026-03-10T07:19:43.615 INFO:teuthology.orchestra.run.vm00.stdout:Adding host vm00... 
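The stdout milestones just above ('Generating ssh key...', 'Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub', 'Adding key to root@localhost authorized_keys...', 'Adding host vm00...') are cephadm's host bootstrap: the mgr generates a cluster SSH identity, the public half is exported and authorized for root on the target, and the host is registered with the orchestrator. A rough sketch of the same sequence driven by hand, assuming an admin keyring and root access on the node; the ceph() helper is illustrative, not teuthology code:

    import pathlib
    import subprocess

    def ceph(*args):
        """Run a ceph CLI command and return its stdout."""
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    ceph("cephadm", "set-user", "root")                 # audit: "cephadm set-user"
    ceph("cephadm", "generate-key")                     # audit: "cephadm generate-key"
    pub = ceph("cephadm", "get-pub-key")                # audit: "cephadm get-pub-key"
    pathlib.Path("ceph.pub").write_text(pub)
    with open("/root/.ssh/authorized_keys", "a") as fh: # needs root, as in this run
        fh.write(pub)
    ceph("orch", "host", "add", "vm00", "192.168.123.100")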
2026-03-10T07:19:44.141 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:44 vm00 bash[20701]: audit 2026-03-10T07:19:42.504687+0000 mgr.y (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:19:44.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:44 vm00 bash[20701]: cluster 2026-03-10T07:19:43.062726+0000 mon.a (mon.0) 57 : cluster [DBG] mgrmap e7: y(active, since 2s)
2026-03-10T07:19:44.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:44 vm00 bash[20701]: audit 2026-03-10T07:19:43.263574+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:19:44.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:44 vm00 bash[20701]: cephadm 2026-03-10T07:19:43.263810+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key...
2026-03-10T07:19:44.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:44 vm00 bash[20701]: audit 2026-03-10T07:19:43.281493+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y'
2026-03-10T07:19:44.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:44 vm00 bash[20701]: audit 2026-03-10T07:19:43.283919+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y'
2026-03-10T07:19:45.141 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:45 vm00 bash[20701]: audit 2026-03-10T07:19:43.564658+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:19:45.141 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:45 vm00 bash[20701]: audit 2026-03-10T07:19:43.836016+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm00", "addr": "192.168.123.100", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Added host 'vm00' with addr '192.168.123.100'
2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.721+0000 7feaec745640 1 Processor -- start
2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.721+0000 7feaec745640 1 -- start start
2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.721+0000 7feaec745640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feae4108860 0x7feae4108c60 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.721+0000 7feaec745640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7feae4109230 con 0x7feae4108860
2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.721+0000 7feaea4ba640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feae4108860 0x7feae4108c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:45.891
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.721+0000 7feaea4ba640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feae4108860 0x7feae4108c60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40368/0 (socket says 192.168.123.100:40368) 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.721+0000 7feaea4ba640 1 -- 192.168.123.100:0/1484005422 learned_addr learned my addr 192.168.123.100:0/1484005422 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.721+0000 7feaea4ba640 1 -- 192.168.123.100:0/1484005422 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7feae4109a60 con 0x7feae4108860 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feaea4ba640 1 --2- 192.168.123.100:0/1484005422 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feae4108860 0x7feae4108c60 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fead8009b80 tx=0x7fead802f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=c044c7d84628a225 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feae94b8640 1 -- 192.168.123.100:0/1484005422 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fead803c070 con 0x7feae4108860 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feae94b8640 1 -- 192.168.123.100:0/1484005422 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fead8037440 con 0x7feae4108860 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feae94b8640 1 -- 192.168.123.100:0/1484005422 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fead8035340 con 0x7feae4108860 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feaec745640 1 -- 192.168.123.100:0/1484005422 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feae4108860 msgr2=0x7feae4108c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feaec745640 1 --2- 192.168.123.100:0/1484005422 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feae4108860 0x7feae4108c60 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fead8009b80 tx=0x7fead802f190 comp rx=0 tx=0).stop 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feaec745640 1 -- 192.168.123.100:0/1484005422 shutdown_connections 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feaec745640 1 --2- 192.168.123.100:0/1484005422 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feae4108860 0x7feae4108c60 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:45.891 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feaec745640 1 -- 192.168.123.100:0/1484005422 >> 192.168.123.100:0/1484005422 conn(0x7feae407bda0 msgr2=0x7feae407c1b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feaec745640 1 -- 192.168.123.100:0/1484005422 shutdown_connections 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feaec745640 1 -- 192.168.123.100:0/1484005422 wait complete. 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feaec745640 1 Processor -- start 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feaec745640 1 -- start start 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feaec745640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feae4108860 0x7feae4080470 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feaea4ba640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feae4108860 0x7feae4080470 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feaea4ba640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feae4108860 0x7feae4080470 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40380/0 (socket says 192.168.123.100:40380) 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feaea4ba640 1 -- 192.168.123.100:0/1016689697 learned_addr learned my addr 192.168.123.100:0/1016689697 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feaec745640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7feae410a760 con 0x7feae4108860 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feaea4ba640 1 -- 192.168.123.100:0/1016689697 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7feae40809b0 con 0x7feae4108860 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feaea4ba640 1 --2- 192.168.123.100:0/1016689697 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feae4108860 0x7feae4080470 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7fead8036970 tx=0x7fead80369a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7fead37fe640 1 -- 192.168.123.100:0/1016689697 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fead803c070 con 
0x7feae4108860 2026-03-10T07:19:45.891 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7fead37fe640 1 -- 192.168.123.100:0/1016689697 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fead8045070 con 0x7feae4108860 2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7fead37fe640 1 -- 192.168.123.100:0/1016689697 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fead8040c80 con 0x7feae4108860 2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feaec745640 1 -- 192.168.123.100:0/1016689697 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7feae4080c40 con 0x7feae4108860 2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.725+0000 7feaec745640 1 -- 192.168.123.100:0/1016689697 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7feae407cfb0 con 0x7feae4108860 2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.729+0000 7fead37fe640 1 -- 192.168.123.100:0/1016689697 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 7) ==== 50247+0+0 (secure 0 0 0) 0x7fead804a460 con 0x7feae4108860 2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.729+0000 7feaec745640 1 -- 192.168.123.100:0/1016689697 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7feaac005180 con 0x7feae4108860 2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.729+0000 7fead37fe640 1 --2- 192.168.123.100:0/1016689697 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7feac003dc40 0x7feac0040100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.729+0000 7feae9cb9640 1 --2- 192.168.123.100:0/1016689697 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7feac003dc40 0x7feac0040100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.733+0000 7fead37fe640 1 -- 192.168.123.100:0/1016689697 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7fead8077820 con 0x7feae4108860 2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.733+0000 7fead37fe640 1 -- 192.168.123.100:0/1016689697 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fead8035340 con 0x7feae4108860 2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.733+0000 7feae9cb9640 1 --2- 192.168.123.100:0/1016689697 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7feac003dc40 0x7feac0040100 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fead4009a10 tx=0x7fead4006eb0 comp rx=0 tx=0).ready 
entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:43.829+0000 7feaec745640 1 -- 192.168.123.100:0/1016689697 --> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] -- mgr_command(tid 0: {"prefix": "orch host add", "hostname": "vm00", "addr": "192.168.123.100", "target": ["mon-mgr", ""]}) -- 0x7feaac002bf0 con 0x7feac003dc40 2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:45.813+0000 7fead37fe640 1 -- 192.168.123.100:0/1016689697 <== mgr.14118 v2:192.168.123.100:6800/1944661180 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+46 (secure 0 0 0) 0x7feaac002bf0 con 0x7feac003dc40 2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:45.817+0000 7feaec745640 1 -- 192.168.123.100:0/1016689697 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7feac003dc40 msgr2=0x7feac0040100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:45.817+0000 7feaec745640 1 --2- 192.168.123.100:0/1016689697 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7feac003dc40 0x7feac0040100 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fead4009a10 tx=0x7fead4006eb0 comp rx=0 tx=0).stop 2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:45.817+0000 7feaec745640 1 -- 192.168.123.100:0/1016689697 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feae4108860 msgr2=0x7feae4080470 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:45.817+0000 7feaec745640 1 --2- 192.168.123.100:0/1016689697 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feae4108860 0x7feae4080470 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7fead8036970 tx=0x7fead80369a0 comp rx=0 tx=0).stop 2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:45.817+0000 7feaec745640 1 -- 192.168.123.100:0/1016689697 shutdown_connections 2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:45.817+0000 7feaec745640 1 --2- 192.168.123.100:0/1016689697 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7feac003dc40 0x7feac0040100 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:45.817+0000 7feaec745640 1 --2- 192.168.123.100:0/1016689697 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feae4108860 0x7feae4080470 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:45.817+0000 7feaec745640 1 -- 192.168.123.100:0/1016689697 >> 192.168.123.100:0/1016689697 conn(0x7feae407bda0 msgr2=0x7feae4105cf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:45.817+0000 7feaec745640 1 -- 192.168.123.100:0/1016689697 shutdown_connections 2026-03-10T07:19:45.892 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:45.817+0000 7feaec745640 1 -- 192.168.123.100:0/1016689697 wait complete.
2026-03-10T07:19:45.892 INFO:teuthology.orchestra.run.vm00.stdout:Deploying unmanaged mon service...
2026-03-10T07:19:46.141 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:46 vm00 bash[20701]: cephadm 2026-03-10T07:19:44.511428+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm00
2026-03-10T07:19:46.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:46 vm00 bash[20701]: audit 2026-03-10T07:19:45.819023+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y'
2026-03-10T07:19:46.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:46 vm00 bash[20701]: audit 2026-03-10T07:19:45.820354+0000 mon.a (mon.0) 61 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled mon update...
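'Deploying unmanaged mon service...' answered by 'Scheduled mon update...' is the orchestrator accepting a mon service spec whose placement is unmanaged, so the test harness rather than cephadm decides where mon daemons run. The exact spec this run submitted is not shown in the log; a plausible equivalent, expressed as a spec file applied with ceph orch apply -i:

    import subprocess
    import tempfile

    # Hypothetical spec mirroring what "unmanaged mon service" implies: cephadm
    # tracks the service but performs no automatic (re)placement of daemons.
    SPEC = """\
    service_type: mon
    unmanaged: true
    """

    with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as fh:
        fh.write(SPEC)
        spec_path = fh.name

    # On success the CLI prints "Scheduled mon update...", as seen above.
    subprocess.run(["ceph", "orch", "apply", "-i", spec_path], check=True)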
2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.033+0000 7fd2d677e640 1 Processor -- start 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.037+0000 7fd2d677e640 1 -- start start 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.037+0000 7fd2d677e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c80a4930 0x7fd2c80a4d30 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.037+0000 7fd2d677e640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fd2c80a5300 con 0x7fd2c80a4930 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.037+0000 7fd2d577c640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c80a4930 0x7fd2c80a4d30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.037+0000 7fd2d577c640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c80a4930 0x7fd2c80a4d30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:53878/0 (socket says 192.168.123.100:53878) 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.037+0000 7fd2d577c640 1 -- 192.168.123.100:0/3916706159 learned_addr learned my addr 192.168.123.100:0/3916706159 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.037+0000 7fd2d577c640 1 -- 192.168.123.100:0/3916706159 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd2c80a5b30 con 0x7fd2c80a4930 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.037+0000 7fd2d577c640 1 --2- 192.168.123.100:0/3916706159 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c80a4930 0x7fd2c80a4d30 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7fd2cc009b80 tx=0x7fd2cc02f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=2698da11dfc6a872 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.037+0000 7fd2bffff640 1 -- 192.168.123.100:0/3916706159 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd2cc03c070 con 0x7fd2c80a4930 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.037+0000 7fd2bffff640 1 -- 192.168.123.100:0/3916706159 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fd2cc037440 con 0x7fd2c80a4930 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.037+0000 7fd2d677e640 1 -- 192.168.123.100:0/3916706159 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c80a4930 msgr2=0x7fd2c80a4d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.037+0000 7fd2d677e640 1 --2- 192.168.123.100:0/3916706159 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c80a4930 0x7fd2c80a4d30 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7fd2cc009b80 tx=0x7fd2cc02f190 comp rx=0 tx=0).stop 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.037+0000 7fd2d677e640 1 -- 192.168.123.100:0/3916706159 shutdown_connections 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.037+0000 7fd2d677e640 1 --2- 192.168.123.100:0/3916706159 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c80a4930 0x7fd2c80a4d30 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.037+0000 7fd2d677e640 1 -- 192.168.123.100:0/3916706159 >> 192.168.123.100:0/3916706159 conn(0x7fd2c809fc40 msgr2=0x7fd2c80a20a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.037+0000 7fd2d677e640 1 -- 192.168.123.100:0/3916706159 shutdown_connections 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.037+0000 7fd2d677e640 1 -- 192.168.123.100:0/3916706159 wait complete. 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.037+0000 7fd2d677e640 1 Processor -- start 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.037+0000 7fd2d677e640 1 -- start start 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.041+0000 7fd2d677e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c813b1f0 0x7fd2c813b610 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.041+0000 7fd2d677e640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fd2c80a6830 con 0x7fd2c813b1f0 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.041+0000 7fd2d577c640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c813b1f0 0x7fd2c813b610 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.041+0000 7fd2d577c640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c813b1f0 0x7fd2c813b610 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:53882/0 (socket says 192.168.123.100:53882) 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.041+0000 7fd2d577c640 1 -- 192.168.123.100:0/412412977 learned_addr learned my addr 192.168.123.100:0/412412977 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: 
stderr 2026-03-10T07:19:46.041+0000 7fd2d577c640 1 -- 192.168.123.100:0/412412977 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd2c813bb50 con 0x7fd2c813b1f0 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.041+0000 7fd2d577c640 1 --2- 192.168.123.100:0/412412977 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c813b1f0 0x7fd2c813b610 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7fd2cc02f6c0 tx=0x7fd2cc003940 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.041+0000 7fd2be7fc640 1 -- 192.168.123.100:0/412412977 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd2cc03c070 con 0x7fd2c813b1f0 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.041+0000 7fd2d677e640 1 -- 192.168.123.100:0/412412977 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fd2c813bde0 con 0x7fd2c813b1f0 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.041+0000 7fd2d677e640 1 -- 192.168.123.100:0/412412977 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fd2c80a9ae0 con 0x7fd2c813b1f0 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.041+0000 7fd2be7fc640 1 -- 192.168.123.100:0/412412977 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fd2cc044070 con 0x7fd2c813b1f0 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.041+0000 7fd2be7fc640 1 -- 192.168.123.100:0/412412977 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd2cc035560 con 0x7fd2c813b1f0 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.041+0000 7fd2be7fc640 1 -- 192.168.123.100:0/412412977 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 7) ==== 50247+0+0 (secure 0 0 0) 0x7fd2cc0357c0 con 0x7fd2c813b1f0 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.041+0000 7fd2be7fc640 1 --2- 192.168.123.100:0/412412977 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7fd2b803dd10 0x7fd2b80401d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:46.237 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.041+0000 7fd2be7fc640 1 -- 192.168.123.100:0/412412977 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7fd2cc0771a0 con 0x7fd2c813b1f0 2026-03-10T07:19:46.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.041+0000 7fd2d4f7b640 1 --2- 192.168.123.100:0/412412977 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7fd2b803dd10 0x7fd2b80401d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:46.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.045+0000 7fd2d677e640 1 
-- 192.168.123.100:0/412412977 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd298005180 con 0x7fd2c813b1f0 2026-03-10T07:19:46.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.045+0000 7fd2d4f7b640 1 --2- 192.168.123.100:0/412412977 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7fd2b803dd10 0x7fd2b80401d0 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7fd2d00521f0 tx=0x7fd2d006a800 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:46.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.049+0000 7fd2be7fc640 1 -- 192.168.123.100:0/412412977 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fd2cc044220 con 0x7fd2c813b1f0 2026-03-10T07:19:46.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.173+0000 7fd2d677e640 1 -- 192.168.123.100:0/412412977 --> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}) -- 0x7fd298002bf0 con 0x7fd2b803dd10 2026-03-10T07:19:46.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.177+0000 7fd2be7fc640 1 -- 192.168.123.100:0/412412977 <== mgr.14118 v2:192.168.123.100:6800/1944661180 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (secure 0 0 0) 0x7fd298002bf0 con 0x7fd2b803dd10 2026-03-10T07:19:46.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.181+0000 7fd2d677e640 1 -- 192.168.123.100:0/412412977 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7fd2b803dd10 msgr2=0x7fd2b80401d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:46.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.181+0000 7fd2d677e640 1 --2- 192.168.123.100:0/412412977 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7fd2b803dd10 0x7fd2b80401d0 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7fd2d00521f0 tx=0x7fd2d006a800 comp rx=0 tx=0).stop 2026-03-10T07:19:46.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.181+0000 7fd2d677e640 1 -- 192.168.123.100:0/412412977 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c813b1f0 msgr2=0x7fd2c813b610 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:46.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.181+0000 7fd2d677e640 1 --2- 192.168.123.100:0/412412977 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c813b1f0 0x7fd2c813b610 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7fd2cc02f6c0 tx=0x7fd2cc003940 comp rx=0 tx=0).stop 2026-03-10T07:19:46.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.185+0000 7fd2d677e640 1 -- 192.168.123.100:0/412412977 shutdown_connections 2026-03-10T07:19:46.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.185+0000 7fd2d677e640 1 --2- 192.168.123.100:0/412412977 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7fd2b803dd10 0x7fd2b80401d0 unknown :-1 s=CLOSED 
pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:46.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.185+0000 7fd2d677e640 1 --2- 192.168.123.100:0/412412977 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd2c813b1f0 0x7fd2c813b610 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:46.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.185+0000 7fd2d677e640 1 -- 192.168.123.100:0/412412977 >> 192.168.123.100:0/412412977 conn(0x7fd2c809fc40 msgr2=0x7fd2c80a08b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:46.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.185+0000 7fd2d677e640 1 -- 192.168.123.100:0/412412977 shutdown_connections 2026-03-10T07:19:46.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.185+0000 7fd2d677e640 1 -- 192.168.123.100:0/412412977 wait complete. 2026-03-10T07:19:46.238 INFO:teuthology.orchestra.run.vm00.stdout:Deploying unmanaged mgr service... 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.341+0000 7fe50ee5a640 1 Processor -- start 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.341+0000 7fe50ee5a640 1 -- start start 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.341+0000 7fe50ee5a640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe508105460 0x7fe508105860 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.341+0000 7fe50ee5a640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fe508105e30 con 0x7fe508105460 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.341+0000 7fe50cbcf640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe508105460 0x7fe508105860 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.341+0000 7fe50cbcf640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe508105460 0x7fe508105860 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:53894/0 (socket says 192.168.123.100:53894) 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.341+0000 7fe50cbcf640 1 -- 192.168.123.100:0/2927524959 learned_addr learned my addr 192.168.123.100:0/2927524959 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.341+0000 7fe50cbcf640 1 -- 192.168.123.100:0/2927524959 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe508106660 con 0x7fe508105460 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: 
stderr 2026-03-10T07:19:46.341+0000 7fe50cbcf640 1 --2- 192.168.123.100:0/2927524959 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe508105460 0x7fe508105860 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7fe4f800bc40 tx=0x7fe4f8031760 comp rx=0 tx=0).ready entity=mon.0 client_cookie=750fe958fb060df server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe4ff7fe640 1 -- 192.168.123.100:0/2927524959 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fe4f8036640 con 0x7fe508105460 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe4ff7fe640 1 -- 192.168.123.100:0/2927524959 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fe4f8036c00 con 0x7fe508105460 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe50ee5a640 1 -- 192.168.123.100:0/2927524959 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe508105460 msgr2=0x7fe508105860 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe50ee5a640 1 --2- 192.168.123.100:0/2927524959 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe508105460 0x7fe508105860 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7fe4f800bc40 tx=0x7fe4f8031760 comp rx=0 tx=0).stop 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe50ee5a640 1 -- 192.168.123.100:0/2927524959 shutdown_connections 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe50ee5a640 1 --2- 192.168.123.100:0/2927524959 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe508105460 0x7fe508105860 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe50ee5a640 1 -- 192.168.123.100:0/2927524959 >> 192.168.123.100:0/2927524959 conn(0x7fe508100bf0 msgr2=0x7fe508103030 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe50ee5a640 1 -- 192.168.123.100:0/2927524959 shutdown_connections 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe50ee5a640 1 -- 192.168.123.100:0/2927524959 wait complete. 
2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe50ee5a640 1 Processor -- start 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe50ee5a640 1 -- start start 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe50ee5a640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe508105460 0x7fe50819e6f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe50cbcf640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe508105460 0x7fe50819e6f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe50cbcf640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe508105460 0x7fe50819e6f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:53904/0 (socket says 192.168.123.100:53904) 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe50cbcf640 1 -- 192.168.123.100:0/3414411352 learned_addr learned my addr 192.168.123.100:0/3414411352 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe50ee5a640 1 -- 192.168.123.100:0/3414411352 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fe508107420 con 0x7fe508105460 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe50cbcf640 1 -- 192.168.123.100:0/3414411352 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe50819ec30 con 0x7fe508105460 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe50cbcf640 1 --2- 192.168.123.100:0/3414411352 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe508105460 0x7fe50819e6f0 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7fe4f8006fd0 tx=0x7fe4f8036d30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe4fdffb640 1 -- 192.168.123.100:0/3414411352 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fe4f8045070 con 0x7fe508105460 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe4fdffb640 1 -- 192.168.123.100:0/3414411352 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fe4f8041990 con 0x7fe508105460 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe50ee5a640 1 -- 192.168.123.100:0/3414411352 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fe50819eec0 con 0x7fe508105460 
2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe50ee5a640 1 -- 192.168.123.100:0/3414411352 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fe5081a1bb0 con 0x7fe508105460 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.345+0000 7fe4fdffb640 1 -- 192.168.123.100:0/3414411352 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fe4f80407e0 con 0x7fe508105460 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.349+0000 7fe50ee5a640 1 -- 192.168.123.100:0/3414411352 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fe508105860 con 0x7fe508105460 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.349+0000 7fe4fdffb640 1 -- 192.168.123.100:0/3414411352 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 7) ==== 50247+0+0 (secure 0 0 0) 0x7fe4f800c040 con 0x7fe508105460 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.349+0000 7fe4fdffb640 1 --2- 192.168.123.100:0/3414411352 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7fe4e403d920 0x7fe4e403fde0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.349+0000 7fe4fffff640 1 --2- 192.168.123.100:0/3414411352 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7fe4e403d920 0x7fe4e403fde0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.349+0000 7fe4fffff640 1 --2- 192.168.123.100:0/3414411352 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7fe4e403d920 0x7fe4e403fde0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7fe4f0009a10 tx=0x7fe4f0006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.349+0000 7fe4fdffb640 1 -- 192.168.123.100:0/3414411352 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7fe4f8041b40 con 0x7fe508105460 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.353+0000 7fe4fdffb640 1 -- 192.168.123.100:0/3414411352 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fe4f803ed40 con 0x7fe508105460 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.465+0000 7fe50ee5a640 1 -- 192.168.123.100:0/3414411352 --> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}) -- 0x7fe5080630c0 con 0x7fe4e403d920 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.469+0000 7fe4fdffb640 
1 -- 192.168.123.100:0/3414411352 <== mgr.14118 v2:192.168.123.100:6800/1944661180 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (secure 0 0 0) 0x7fe5080630c0 con 0x7fe4e403d920 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.469+0000 7fe50ee5a640 1 -- 192.168.123.100:0/3414411352 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7fe4e403d920 msgr2=0x7fe4e403fde0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.469+0000 7fe50ee5a640 1 --2- 192.168.123.100:0/3414411352 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7fe4e403d920 0x7fe4e403fde0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7fe4f0009a10 tx=0x7fe4f0006eb0 comp rx=0 tx=0).stop 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.469+0000 7fe50ee5a640 1 -- 192.168.123.100:0/3414411352 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe508105460 msgr2=0x7fe50819e6f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.469+0000 7fe50ee5a640 1 --2- 192.168.123.100:0/3414411352 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe508105460 0x7fe50819e6f0 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7fe4f8006fd0 tx=0x7fe4f8036d30 comp rx=0 tx=0).stop 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.469+0000 7fe50ee5a640 1 -- 192.168.123.100:0/3414411352 shutdown_connections 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.469+0000 7fe50ee5a640 1 --2- 192.168.123.100:0/3414411352 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7fe4e403d920 0x7fe4e403fde0 unknown :-1 s=CLOSED pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.469+0000 7fe50ee5a640 1 --2- 192.168.123.100:0/3414411352 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe508105460 0x7fe50819e6f0 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:46.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.469+0000 7fe50ee5a640 1 -- 192.168.123.100:0/3414411352 >> 192.168.123.100:0/3414411352 conn(0x7fe508100bf0 msgr2=0x7fe508194cd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:46.524 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.473+0000 7fe50ee5a640 1 -- 192.168.123.100:0/3414411352 shutdown_connections 2026-03-10T07:19:46.524 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.473+0000 7fe50ee5a640 1 -- 192.168.123.100:0/3414411352 wait complete. 
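Every `/usr/bin/ceph` invocation in this stretch walks the same messenger lifecycle: Processor start, a msgr2 connect to the mon at v2:192.168.123.100:3300, banner/hello and learned_addr, mon_subscribe({config=0+,monmap=0+}) on a throwaway messenger, then a second messenger that subscribes to the mgrmap/osdmap, fetches get_command_descriptions, sends the one real command, and tears everything down (mark_down, shutdown_connections, "wait complete."). When scripting many such commands, one long-lived librados session avoids repeating that handshake per command. A sketch with the python-rados binding; the conffile path and the exact shape of the mgr_command return value are assumptions here:

    import json
    import rados

    # One persistent session instead of the per-command
    # connect/subscribe/teardown cycle visible in the log above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # path assumed
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "orch apply",
                          "service_type": "mgr", "unmanaged": True})
        # Forwarded to the active mgr, like the mgr_command(tid 0: ...)
        # frames in the log.
        ret, outbuf, outs = cluster.mgr_command(cmd, b"")
        print(ret, outs)
    finally:
        cluster.shutdown()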
2026-03-10T07:19:46.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.649+0000 7f004807d640 1 Processor -- start 2026-03-10T07:19:46.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.649+0000 7f004807d640 1 -- start start 2026-03-10T07:19:46.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.649+0000 7f004807d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f00401065b0 0x7f00401069b0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:46.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.649+0000 7f004807d640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f0040106f80 con 0x7f00401065b0 2026-03-10T07:19:46.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.649+0000 7f0045df2640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f00401065b0 0x7f00401069b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:46.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.649+0000 7f0045df2640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f00401065b0 0x7f00401069b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:53908/0 (socket says 192.168.123.100:53908) 2026-03-10T07:19:46.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.649+0000 7f0045df2640 1 -- 192.168.123.100:0/1173811684 learned_addr learned my addr 192.168.123.100:0/1173811684 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:46.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.649+0000 7f0045df2640 1 -- 192.168.123.100:0/1173811684 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f00401077b0 con 0x7f00401065b0 2026-03-10T07:19:46.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.649+0000 7f0045df2640 1 --2- 192.168.123.100:0/1173811684 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f00401065b0 0x7f00401069b0 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f0028009920 tx=0x7f002802ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=168cf4b225f66be2 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:46.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.649+0000 7f0044df0640 1 -- 192.168.123.100:0/1173811684 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f002803c070 con 0x7f00401065b0 2026-03-10T07:19:46.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.649+0000 7f0044df0640 1 -- 192.168.123.100:0/1173811684 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f0028037440 con 0x7f00401065b0 2026-03-10T07:19:46.824 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.653+0000 7f004807d640 1 -- 192.168.123.100:0/1173811684 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f00401065b0 msgr2=0x7f00401069b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.653+0000 7f004807d640 1 --2- 192.168.123.100:0/1173811684 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f00401065b0 0x7f00401069b0 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f0028009920 tx=0x7f002802ef20 comp rx=0 tx=0).stop 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.653+0000 7f004807d640 1 -- 192.168.123.100:0/1173811684 shutdown_connections 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.653+0000 7f004807d640 1 --2- 192.168.123.100:0/1173811684 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f00401065b0 0x7f00401069b0 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.653+0000 7f004807d640 1 -- 192.168.123.100:0/1173811684 >> 192.168.123.100:0/1173811684 conn(0x7f0040101d20 msgr2=0x7f0040104180 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.653+0000 7f004807d640 1 -- 192.168.123.100:0/1173811684 shutdown_connections 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.653+0000 7f004807d640 1 -- 192.168.123.100:0/1173811684 wait complete. 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.653+0000 7f004807d640 1 Processor -- start 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.653+0000 7f004807d640 1 -- start start 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.653+0000 7f004807d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f00401065b0 0x7f004007c3e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.653+0000 7f004807d640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f00401084b0 con 0x7f00401065b0 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.653+0000 7f0045df2640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f00401065b0 0x7f004007c3e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.653+0000 7f0045df2640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f00401065b0 0x7f004007c3e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:53916/0 (socket says 192.168.123.100:53916) 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.653+0000 7f0045df2640 1 -- 192.168.123.100:0/1899807121 learned_addr learned my addr 192.168.123.100:0/1899807121 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: 
stderr 2026-03-10T07:19:46.653+0000 7f0045df2640 1 -- 192.168.123.100:0/1899807121 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f004007c920 con 0x7f00401065b0 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.653+0000 7f0045df2640 1 --2- 192.168.123.100:0/1899807121 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f00401065b0 0x7f004007c3e0 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f0028037b80 tx=0x7f0028037bb0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.653+0000 7f0036ffd640 1 -- 192.168.123.100:0/1899807121 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f002803c070 con 0x7f00401065b0 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.653+0000 7f0036ffd640 1 -- 192.168.123.100:0/1899807121 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f0028045070 con 0x7f00401065b0 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.653+0000 7f0036ffd640 1 -- 192.168.123.100:0/1899807121 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f00280409c0 con 0x7f00401065b0 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.653+0000 7f004807d640 1 -- 192.168.123.100:0/1899807121 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f004007ab10 con 0x7f00401065b0 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.653+0000 7f004807d640 1 -- 192.168.123.100:0/1899807121 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f004007aff0 con 0x7f00401065b0 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.657+0000 7f0036ffd640 1 -- 192.168.123.100:0/1899807121 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 7) ==== 50247+0+0 (secure 0 0 0) 0x7f0028040c70 con 0x7f00401065b0 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.657+0000 7f0036ffd640 1 --2- 192.168.123.100:0/1899807121 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f001c03dc40 0x7f001c040100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.657+0000 7f00455f1640 1 --2- 192.168.123.100:0/1899807121 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f001c03dc40 0x7f001c040100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.657+0000 7f0036ffd640 1 -- 192.168.123.100:0/1899807121 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7f002807c580 con 0x7f00401065b0 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.657+0000 
7f00455f1640 1 --2- 192.168.123.100:0/1899807121 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f001c03dc40 0x7f001c040100 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f0030009a10 tx=0x7f0030006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.657+0000 7f004807d640 1 -- 192.168.123.100:0/1899807121 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f00401069b0 con 0x7f00401065b0 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.661+0000 7f0036ffd640 1 -- 192.168.123.100:0/1899807121 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f002803e330 con 0x7f00401065b0 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.761+0000 7f004807d640 1 -- 192.168.123.100:0/1899807121 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) -- 0x7f004007b4e0 con 0x7f00401065b0 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.765+0000 7f0036ffd640 1 -- 192.168.123.100:0/1899807121 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/cephadm/container_init}]=0 v6) ==== 142+0+0 (secure 0 0 0) 0x7f0028036920 con 0x7f00401065b0 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.769+0000 7f004807d640 1 -- 192.168.123.100:0/1899807121 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f001c03dc40 msgr2=0x7f001c040100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.769+0000 7f004807d640 1 --2- 192.168.123.100:0/1899807121 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f001c03dc40 0x7f001c040100 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f0030009a10 tx=0x7f0030006eb0 comp rx=0 tx=0).stop 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.769+0000 7f004807d640 1 -- 192.168.123.100:0/1899807121 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f00401065b0 msgr2=0x7f004007c3e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.769+0000 7f004807d640 1 --2- 192.168.123.100:0/1899807121 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f00401065b0 0x7f004007c3e0 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f0028037b80 tx=0x7f0028037bb0 comp rx=0 tx=0).stop 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.769+0000 7f004807d640 1 -- 192.168.123.100:0/1899807121 shutdown_connections 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.769+0000 7f004807d640 1 --2- 192.168.123.100:0/1899807121 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f001c03dc40 0x7f001c040100 unknown :-1 
s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.769+0000 7f004807d640 1 --2- 192.168.123.100:0/1899807121 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f00401065b0 0x7f004007c3e0 unknown :-1 s=CLOSED pgs=50 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.769+0000 7f004807d640 1 -- 192.168.123.100:0/1899807121 >> 192.168.123.100:0/1899807121 conn(0x7f0040101d20 msgr2=0x7f0040102a40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.769+0000 7f004807d640 1 -- 192.168.123.100:0/1899807121 shutdown_connections 2026-03-10T07:19:46.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.769+0000 7f004807d640 1 -- 192.168.123.100:0/1899807121 wait complete. 2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f2020e25640 1 Processor -- start 2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f2020e25640 1 -- start start 2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f2020e25640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f201c1065b0 0x7f201c1069b0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f2020e25640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f201c106f80 con 0x7f201c1065b0 2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f201a575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f201c1065b0 0x7f201c1069b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f201a575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f201c1065b0 0x7f201c1069b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:53918/0 (socket says 192.168.123.100:53918) 2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f201a575640 1 -- 192.168.123.100:0/2222933636 learned_addr learned my addr 192.168.123.100:0/2222933636 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f201a575640 1 -- 192.168.123.100:0/2222933636 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f201c1077b0 con 0x7f201c1065b0 2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f201a575640 1 --2- 192.168.123.100:0/2222933636 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f201c1065b0 0x7f201c1069b0 secure :-1 s=READY 
pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f200400fa60 tx=0x7f2004033330 comp rx=0 tx=0).ready entity=mon.0 client_cookie=ac3ccb55e5a286b9 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f2019573640 1 -- 192.168.123.100:0/2222933636 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f2004040070 con 0x7f201c1065b0 2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f2019573640 1 -- 192.168.123.100:0/2222933636 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f200403b860 con 0x7f201c1065b0 2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f2020e25640 1 -- 192.168.123.100:0/2222933636 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f201c1065b0 msgr2=0x7f201c1069b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f2020e25640 1 --2- 192.168.123.100:0/2222933636 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f201c1065b0 0x7f201c1069b0 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f200400fa60 tx=0x7f2004033330 comp rx=0 tx=0).stop 2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f2020e25640 1 -- 192.168.123.100:0/2222933636 shutdown_connections 2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f2020e25640 1 --2- 192.168.123.100:0/2222933636 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f201c1065b0 0x7f201c1069b0 unknown :-1 s=CLOSED pgs=51 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f2020e25640 1 -- 192.168.123.100:0/2222933636 >> 192.168.123.100:0/2222933636 conn(0x7f201c101d60 msgr2=0x7f201c104180 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f2020e25640 1 -- 192.168.123.100:0/2222933636 shutdown_connections 2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f2020e25640 1 -- 192.168.123.100:0/2222933636 wait complete. 
2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f2020e25640 1 Processor -- start 2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f2020e25640 1 -- start start 2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f2020e25640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f201c1065b0 0x7f201c1a2bb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:47.129 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.953+0000 7f2020e25640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f201c108570 con 0x7f201c1065b0 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.957+0000 7f201a575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f201c1065b0 0x7f201c1a2bb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.957+0000 7f201a575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f201c1065b0 0x7f201c1a2bb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:53932/0 (socket says 192.168.123.100:53932) 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.957+0000 7f201a575640 1 -- 192.168.123.100:0/2058834360 learned_addr learned my addr 192.168.123.100:0/2058834360 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.957+0000 7f201a575640 1 -- 192.168.123.100:0/2058834360 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f201c1a30f0 con 0x7f201c1065b0 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.957+0000 7f201a575640 1 --2- 192.168.123.100:0/2058834360 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f201c1065b0 0x7f201c1a2bb0 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f2004033860 tx=0x7f200403c9f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.957+0000 7f200b7fe640 1 -- 192.168.123.100:0/2058834360 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f2004040070 con 0x7f201c1065b0 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.957+0000 7f2020e25640 1 -- 192.168.123.100:0/2058834360 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f201c1a3380 con 0x7f201c1065b0 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.957+0000 7f2020e25640 1 -- 192.168.123.100:0/2058834360 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f201c1a6070 con 0x7f201c1065b0 2026-03-10T07:19:47.130 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.957+0000 7f200b7fe640 1 -- 192.168.123.100:0/2058834360 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7f200400f3a0 con 0x7f201c1065b0 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.957+0000 7f200b7fe640 1 -- 192.168.123.100:0/2058834360 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f2004045c50 con 0x7f201c1065b0 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.957+0000 7f200b7fe640 1 -- 192.168.123.100:0/2058834360 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 7) ==== 50247+0+0 (secure 0 0 0) 0x7f200404d070 con 0x7f201c1065b0 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.957+0000 7f200b7fe640 1 --2- 192.168.123.100:0/2058834360 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f1fec03dc90 0x7f1fec040150 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.957+0000 7f200b7fe640 1 -- 192.168.123.100:0/2058834360 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7f20040776d0 con 0x7f201c1065b0 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.957+0000 7f2019d74640 1 --2- 192.168.123.100:0/2058834360 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f1fec03dc90 0x7f1fec040150 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.957+0000 7f2019d74640 1 --2- 192.168.123.100:0/2058834360 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f1fec03dc90 0x7f1fec040150 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f20100099c0 tx=0x7f2010006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.957+0000 7f2020e25640 1 -- 192.168.123.100:0/2058834360 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f1fe0005180 con 0x7f201c1065b0 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:46.961+0000 7f200b7fe640 1 -- 192.168.123.100:0/2058834360 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f20040774b0 con 0x7f201c1065b0 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.065+0000 7f2020e25640 1 -- 192.168.123.100:0/2058834360 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0) -- 0x7f1fe0005470 con 0x7f201c1065b0 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.073+0000 7f200b7fe640 1 -- 192.168.123.100:0/2058834360 <== mon.0 v2:192.168.123.100:3300/0 7 ==== 
mon_command_ack([{prefix=config set, name=mgr/dashboard/ssl_server_port}]=0 v7) ==== 130+0+0 (secure 0 0 0) 0x7f200404cd70 con 0x7f201c1065b0 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.077+0000 7f2020e25640 1 -- 192.168.123.100:0/2058834360 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f1fec03dc90 msgr2=0x7f1fec040150 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.077+0000 7f2020e25640 1 --2- 192.168.123.100:0/2058834360 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f1fec03dc90 0x7f1fec040150 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f20100099c0 tx=0x7f2010006eb0 comp rx=0 tx=0).stop 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.077+0000 7f2020e25640 1 -- 192.168.123.100:0/2058834360 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f201c1065b0 msgr2=0x7f201c1a2bb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.077+0000 7f2020e25640 1 --2- 192.168.123.100:0/2058834360 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f201c1065b0 0x7f201c1a2bb0 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f2004033860 tx=0x7f200403c9f0 comp rx=0 tx=0).stop 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.077+0000 7f2020e25640 1 -- 192.168.123.100:0/2058834360 shutdown_connections 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.077+0000 7f2020e25640 1 --2- 192.168.123.100:0/2058834360 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f1fec03dc90 0x7f1fec040150 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.077+0000 7f2020e25640 1 --2- 192.168.123.100:0/2058834360 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f201c1065b0 0x7f201c1a2bb0 unknown :-1 s=CLOSED pgs=52 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.077+0000 7f2020e25640 1 -- 192.168.123.100:0/2058834360 >> 192.168.123.100:0/2058834360 conn(0x7f201c101d60 msgr2=0x7f201c102db0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.077+0000 7f2020e25640 1 -- 192.168.123.100:0/2058834360 shutdown_connections 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.077+0000 7f2020e25640 1 -- 192.168.123.100:0/2058834360 wait complete. 2026-03-10T07:19:47.130 INFO:teuthology.orchestra.run.vm00.stdout:Enabling the dashboard module... 
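
[The mon_command traffic just above is the CLI pair cephadm issues at this step: a `config set` of mgr/dashboard/ssl_server_port followed by enabling the dashboard module. A minimal sketch of driving the same pair from Python via subprocess; the port value is an assumed example, since the log line does not show the value that was set:]

    import subprocess

    def enable_dashboard(ssl_port: int = 8443) -> None:
        # Set the dashboard SSL port before the module starts serving.
        # NOTE: 8443 is an assumed example; the log does not show the value used.
        subprocess.run(
            ["ceph", "config", "set", "mgr", "mgr/dashboard/ssl_server_port", str(ssl_port)],
            check=True,
        )
        # Enable the dashboard module; this respawns the active mgr and bumps
        # the mgrmap epoch, which the bootstrap then waits for (see below).
        subprocess.run(["ceph", "mgr", "module", "enable", "dashboard"], check=True)
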
2026-03-10T07:19:47.368 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:47 vm00 bash[20701]: cephadm 2026-03-10T07:19:45.819848+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm00 2026-03-10T07:19:47.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:47 vm00 bash[20701]: cephadm 2026-03-10T07:19:45.819848+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm00 2026-03-10T07:19:47.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:47 vm00 bash[20701]: audit 2026-03-10T07:19:46.178931+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:19:47.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:47 vm00 bash[20701]: audit 2026-03-10T07:19:46.178931+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:19:47.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:47 vm00 bash[20701]: cephadm 2026-03-10T07:19:46.180070+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-10T07:19:47.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:47 vm00 bash[20701]: cephadm 2026-03-10T07:19:46.180070+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-10T07:19:47.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:47 vm00 bash[20701]: audit 2026-03-10T07:19:46.183718+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:19:47.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:47 vm00 bash[20701]: audit 2026-03-10T07:19:46.183718+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:19:47.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:47 vm00 bash[20701]: audit 2026-03-10T07:19:46.474799+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:19:47.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:47 vm00 bash[20701]: audit 2026-03-10T07:19:46.474799+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:19:47.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:47 vm00 bash[20701]: audit 2026-03-10T07:19:46.769705+0000 mon.a (mon.0) 64 : audit [INF] from='client.? 192.168.123.100:0/1899807121' entity='client.admin' 2026-03-10T07:19:47.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:47 vm00 bash[20701]: audit 2026-03-10T07:19:46.769705+0000 mon.a (mon.0) 64 : audit [INF] from='client.? 192.168.123.100:0/1899807121' entity='client.admin' 2026-03-10T07:19:47.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:47 vm00 bash[20701]: audit 2026-03-10T07:19:47.072790+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.100:0/2058834360' entity='client.admin' 2026-03-10T07:19:47.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:47 vm00 bash[20701]: audit 2026-03-10T07:19:47.072790+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 
192.168.123.100:0/2058834360' entity='client.admin' 2026-03-10T07:19:48.491 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:48 vm00 bash[20701]: audit 2026-03-10T07:19:46.471537+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:19:48.491 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:48 vm00 bash[20701]: audit 2026-03-10T07:19:46.471537+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:19:48.491 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:48 vm00 bash[20701]: cephadm 2026-03-10T07:19:46.472256+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-10T07:19:48.491 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:48 vm00 bash[20701]: cephadm 2026-03-10T07:19:46.472256+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-10T07:19:48.491 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:48 vm00 bash[20701]: audit 2026-03-10T07:19:47.422830+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:19:48.491 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:48 vm00 bash[20701]: audit 2026-03-10T07:19:47.422830+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:19:48.491 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:48 vm00 bash[20701]: audit 2026-03-10T07:19:47.468499+0000 mon.a (mon.0) 67 : audit [INF] from='client.? 192.168.123.100:0/1170239651' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T07:19:48.491 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:48 vm00 bash[20701]: audit 2026-03-10T07:19:47.468499+0000 mon.a (mon.0) 67 : audit [INF] from='client.? 
192.168.123.100:0/1170239651' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T07:19:48.491 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:48 vm00 bash[20701]: audit 2026-03-10T07:19:47.746383+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:19:48.491 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:48 vm00 bash[20701]: audit 2026-03-10T07:19:47.746383+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:19:48.564 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd66ca7640 1 Processor -- start 2026-03-10T07:19:48.564 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd66ca7640 1 -- start start 2026-03-10T07:19:48.564 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd66ca7640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd580a4930 0x7fcd580a4d30 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:48.564 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd66ca7640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fcd580a5300 con 0x7fcd580a4930 2026-03-10T07:19:48.564 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd65ca5640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd580a4930 0x7fcd580a4d30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:48.564 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd65ca5640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd580a4930 0x7fcd580a4d30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:53948/0 (socket says 192.168.123.100:53948) 2026-03-10T07:19:48.564 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd65ca5640 1 -- 192.168.123.100:0/4017143760 learned_addr learned my addr 192.168.123.100:0/4017143760 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd65ca5640 1 -- 192.168.123.100:0/4017143760 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fcd580a5b30 con 0x7fcd580a4930 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd65ca5640 1 --2- 192.168.123.100:0/4017143760 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd580a4930 0x7fcd580a4d30 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7fcd5c0089a0 tx=0x7fcd5c031440 comp rx=0 tx=0).ready entity=mon.0 client_cookie=af985fb93b0a17c8 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd64ca3640 1 -- 192.168.123.100:0/4017143760 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fcd5c03c480 con 0x7fcd580a4930 2026-03-10T07:19:48.565 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd64ca3640 1 -- 192.168.123.100:0/4017143760 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fcd5c03ca40 con 0x7fcd580a4930 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd64ca3640 1 -- 192.168.123.100:0/4017143760 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fcd5c03b910 con 0x7fcd580a4930 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd66ca7640 1 -- 192.168.123.100:0/4017143760 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd580a4930 msgr2=0x7fcd580a4d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd66ca7640 1 --2- 192.168.123.100:0/4017143760 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd580a4930 0x7fcd580a4d30 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7fcd5c0089a0 tx=0x7fcd5c031440 comp rx=0 tx=0).stop 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd66ca7640 1 -- 192.168.123.100:0/4017143760 shutdown_connections 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd66ca7640 1 --2- 192.168.123.100:0/4017143760 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd580a4930 0x7fcd580a4d30 unknown :-1 s=CLOSED pgs=53 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd66ca7640 1 -- 192.168.123.100:0/4017143760 >> 192.168.123.100:0/4017143760 conn(0x7fcd5809fc40 msgr2=0x7fcd580a20a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd66ca7640 1 -- 192.168.123.100:0/4017143760 shutdown_connections 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd66ca7640 1 -- 192.168.123.100:0/4017143760 wait complete. 
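
[Each /usr/bin/ceph invocation in this log repeats the same RADOS client lifecycle: connect to a monitor, subscribe to monmap/config, fetch command descriptions, send the command, then mark_down and shut the connections down ("wait complete."). A minimal sketch of that lifecycle through the python-rados binding, assuming the default admin keyring and conf path:]

    import json

    import rados  # python3-rados binding

    # Connecting performs the mon handshake traced above
    # (BANNER_CONNECTING -> HELLO_CONNECTING -> READY).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # Counterpart of the mon_command({"prefix": ...}) messages in the log.
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "mgr stat", "format": "json"}), b""
        )
        print(ret, outbuf.decode())
    finally:
        # Tears the mon/mgr connections down, matching the mark_down /
        # shutdown_connections / "wait complete." lines in the trace.
        cluster.shutdown()
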
2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd66ca7640 1 Processor -- start 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd66ca7640 1 -- start start 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd66ca7640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd580a4930 0x7fcd58142c40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd66ca7640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fcd580a64d0 con 0x7fcd580a4930 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd65ca5640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd580a4930 0x7fcd58142c40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd65ca5640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd580a4930 0x7fcd58142c40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:53956/0 (socket says 192.168.123.100:53956) 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd65ca5640 1 -- 192.168.123.100:0/1170239651 learned_addr learned my addr 192.168.123.100:0/1170239651 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd65ca5640 1 -- 192.168.123.100:0/1170239651 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fcd58143180 con 0x7fcd580a4930 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd65ca5640 1 --2- 192.168.123.100:0/1170239651 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd580a4930 0x7fcd58142c40 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7fcd5c031970 tx=0x7fcd5c03b930 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd4effd640 1 -- 192.168.123.100:0/1170239651 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fcd5c042070 con 0x7fcd580a4930 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd66ca7640 1 -- 192.168.123.100:0/1170239651 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fcd58143410 con 0x7fcd580a4930 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.285+0000 7fcd66ca7640 1 -- 192.168.123.100:0/1170239651 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fcd58143870 con 0x7fcd580a4930 2026-03-10T07:19:48.565 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.289+0000 7fcd4effd640 1 -- 192.168.123.100:0/1170239651 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 947+0+0 (secure 0 0 0) 0x7fcd5c008d80 con 0x7fcd580a4930 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.289+0000 7fcd4effd640 1 -- 192.168.123.100:0/1170239651 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fcd5c0475d0 con 0x7fcd580a4930 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.289+0000 7fcd66ca7640 1 -- 192.168.123.100:0/1170239651 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fcd580a4d30 con 0x7fcd580a4930 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.289+0000 7fcd4effd640 1 -- 192.168.123.100:0/1170239651 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 7) ==== 50247+0+0 (secure 0 0 0) 0x7fcd5c00bce0 con 0x7fcd580a4930 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.289+0000 7fcd4effd640 1 --2- 192.168.123.100:0/1170239651 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7fcd3003dbf0 0x7fcd300400b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.289+0000 7fcd654a4640 1 --2- 192.168.123.100:0/1170239651 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7fcd3003dbf0 0x7fcd300400b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.289+0000 7fcd654a4640 1 --2- 192.168.123.100:0/1170239651 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7fcd3003dbf0 0x7fcd300400b0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7fcd600521f0 tx=0x7fcd6006a990 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.289+0000 7fcd4effd640 1 -- 192.168.123.100:0/1170239651 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7fcd5c077750 con 0x7fcd580a4930 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.293+0000 7fcd4effd640 1 -- 192.168.123.100:0/1170239651 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fcd5c04f340 con 0x7fcd580a4930 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.421+0000 7fcd4effd640 1 -- 192.168.123.100:0/1170239651 <== mon.0 v2:192.168.123.100:3300/0 7 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fcd5c0774b0 con 0x7fcd580a4930 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:47.461+0000 7fcd66ca7640 1 -- 192.168.123.100:0/1170239651 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "mgr module 
enable", "module": "dashboard"} v 0) -- 0x7fcd58146160 con 0x7fcd580a4930 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.461+0000 7fcd4effd640 1 -- 192.168.123.100:0/1170239651 <== mon.0 v2:192.168.123.100:3300/0 8 ==== mon_command_ack([{"prefix": "mgr module enable", "module": "dashboard"}]=0 v8) ==== 88+0+0 (secure 0 0 0) 0x7fcd5c045360 con 0x7fcd580a4930 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.465+0000 7fcd4effd640 1 -- 192.168.123.100:0/1170239651 <== mon.0 v2:192.168.123.100:3300/0 9 ==== mgrmap(e 8) ==== 50260+0+0 (secure 0 0 0) 0x7fcd5c0463c0 con 0x7fcd580a4930 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.465+0000 7fcd66ca7640 1 -- 192.168.123.100:0/1170239651 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7fcd3003dbf0 msgr2=0x7fcd300400b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.465+0000 7fcd66ca7640 1 --2- 192.168.123.100:0/1170239651 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7fcd3003dbf0 0x7fcd300400b0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7fcd600521f0 tx=0x7fcd6006a990 comp rx=0 tx=0).stop 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.465+0000 7fcd66ca7640 1 -- 192.168.123.100:0/1170239651 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd580a4930 msgr2=0x7fcd58142c40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.465+0000 7fcd66ca7640 1 --2- 192.168.123.100:0/1170239651 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd580a4930 0x7fcd58142c40 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7fcd5c031970 tx=0x7fcd5c03b930 comp rx=0 tx=0).stop 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.469+0000 7fcd66ca7640 1 -- 192.168.123.100:0/1170239651 shutdown_connections 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.469+0000 7fcd66ca7640 1 --2- 192.168.123.100:0/1170239651 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7fcd3003dbf0 0x7fcd300400b0 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.469+0000 7fcd66ca7640 1 --2- 192.168.123.100:0/1170239651 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcd580a4930 0x7fcd58142c40 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.469+0000 7fcd66ca7640 1 -- 192.168.123.100:0/1170239651 >> 192.168.123.100:0/1170239651 conn(0x7fcd5809fc40 msgr2=0x7fcd580a0730 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:48.565 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.469+0000 7fcd66ca7640 1 -- 192.168.123.100:0/1170239651 shutdown_connections 2026-03-10T07:19:48.566 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 
2026-03-10T07:19:48.469+0000 7fcd66ca7640 1 -- 192.168.123.100:0/1170239651 wait complete. 2026-03-10T07:19:48.831 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:48 vm00 bash[20971]: ignoring --setuser ceph since I am not root 2026-03-10T07:19:48.831 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:48 vm00 bash[20971]: ignoring --setgroup ceph since I am not root 2026-03-10T07:19:48.831 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:48 vm00 bash[20971]: debug 2026-03-10T07:19:48.653+0000 7f5291625140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T07:19:48.832 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:48 vm00 bash[20971]: debug 2026-03-10T07:19:48.689+0000 7f5291625140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 8, 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "active_name": "y", 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7917cd2640 1 Processor -- start 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7917cd2640 1 -- start start 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7917cd2640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7910074150 0x7f7910074550 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7917cd2640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f7910074b20 con 0x7f7910074150 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7915a47640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7910074150 0x7f7910074550 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7915a47640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7910074150 0x7f7910074550 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:53990/0 (socket says 192.168.123.100:53990) 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7915a47640 1 -- 192.168.123.100:0/3376570995 learned_addr learned my addr 192.168.123.100:0/3376570995 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7915a47640 1 -- 192.168.123.100:0/3376570995 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) -- 0x7f7910074ca0 con 0x7f7910074150 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7915a47640 1 --2- 192.168.123.100:0/3376570995 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7910074150 0x7f7910074550 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f7900008970 tx=0x7f79000312f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=f7b5271a4d63ad62 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7914a45640 1 -- 192.168.123.100:0/3376570995 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f7900037070 con 0x7f7910074150 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7914a45640 1 -- 192.168.123.100:0/3376570995 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f7900031e30 con 0x7f7910074150 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7914a45640 1 -- 192.168.123.100:0/3376570995 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f790003a520 con 0x7f7910074150 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7917cd2640 1 -- 192.168.123.100:0/3376570995 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7910074150 msgr2=0x7f7910074550 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7917cd2640 1 --2- 192.168.123.100:0/3376570995 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7910074150 0x7f7910074550 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f7900008970 tx=0x7f79000312f0 comp rx=0 tx=0).stop 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7917cd2640 1 -- 192.168.123.100:0/3376570995 shutdown_connections 2026-03-10T07:19:48.952 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7917cd2640 1 --2- 192.168.123.100:0/3376570995 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7910074150 0x7f7910074550 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7917cd2640 1 -- 192.168.123.100:0/3376570995 >> 192.168.123.100:0/3376570995 conn(0x7f791006fa60 msgr2=0x7f7910071ea0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7917cd2640 1 -- 192.168.123.100:0/3376570995 shutdown_connections 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7917cd2640 1 -- 192.168.123.100:0/3376570995 wait complete. 
2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7917cd2640 1 Processor -- start 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7917cd2640 1 -- start start 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7917cd2640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7910074150 0x7f7910086af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7917cd2640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f791007bbe0 con 0x7f7910074150 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7915a47640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7910074150 0x7f7910086af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7915a47640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7910074150 0x7f7910086af0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:54004/0 (socket says 192.168.123.100:54004) 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.701+0000 7f7915a47640 1 -- 192.168.123.100:0/75861386 learned_addr learned my addr 192.168.123.100:0/75861386 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.705+0000 7f7915a47640 1 -- 192.168.123.100:0/75861386 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f7910089fb0 con 0x7f7910074150 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.705+0000 7f7915a47640 1 --2- 192.168.123.100:0/75861386 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7910074150 0x7f7910086af0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f790003bfd0 tx=0x7f790003b630 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.705+0000 7f78feffd640 1 -- 192.168.123.100:0/75861386 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f7900049070 con 0x7f7910074150 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.705+0000 7f7917cd2640 1 -- 192.168.123.100:0/75861386 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f7910087030 con 0x7f7910074150 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.705+0000 7f7917cd2640 1 -- 192.168.123.100:0/75861386 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f7910087510 con 0x7f7910074150 2026-03-10T07:19:48.953 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.705+0000 7f7917cd2640 1 -- 192.168.123.100:0/75861386 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f79100745d0 con 0x7f7910074150 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.705+0000 7f78feffd640 1 -- 192.168.123.100:0/75861386 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f790003ac60 con 0x7f7910074150 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.705+0000 7f78feffd640 1 -- 192.168.123.100:0/75861386 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f7900037040 con 0x7f7910074150 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.709+0000 7f78feffd640 1 -- 192.168.123.100:0/75861386 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 8) ==== 50260+0+0 (secure 0 0 0) 0x7f7900056080 con 0x7f7910074150 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.709+0000 7f78feffd640 1 --2- 192.168.123.100:0/75861386 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f78e403dce0 0x7f78e40401a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.709+0000 7f7915246640 1 -- 192.168.123.100:0/75861386 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f78e403dce0 msgr2=0x7f78e40401a0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/1944661180 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.709+0000 7f7915246640 1 --2- 192.168.123.100:0/75861386 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f78e403dce0 0x7f78e40401a0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.709+0000 7f78feffd640 1 -- 192.168.123.100:0/75861386 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7f7900077390 con 0x7f7910074150 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.709+0000 7f78feffd640 1 -- 192.168.123.100:0/75861386 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f7900077800 con 0x7f7910074150 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.833+0000 7f7917cd2640 1 -- 192.168.123.100:0/75861386 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "mgr stat"} v 0) -- 0x7f79101b7730 con 0x7f7910074150 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.833+0000 7f78feffd640 1 -- 192.168.123.100:0/75861386 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "mgr stat"}]=0 v8) ==== 56+0+88 (secure 0 0 0) 0x7f79000770f0 con 0x7f7910074150 2026-03-10T07:19:48.953 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.837+0000 7f78fcff9640 1 -- 192.168.123.100:0/75861386 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f78e403dce0 msgr2=0x7f78e40401a0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.837+0000 7f78fcff9640 1 --2- 192.168.123.100:0/75861386 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f78e403dce0 0x7f78e40401a0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.837+0000 7f78fcff9640 1 -- 192.168.123.100:0/75861386 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7910074150 msgr2=0x7f7910086af0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.837+0000 7f78fcff9640 1 --2- 192.168.123.100:0/75861386 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7910074150 0x7f7910086af0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f790003bfd0 tx=0x7f790003b630 comp rx=0 tx=0).stop 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.837+0000 7f78fcff9640 1 -- 192.168.123.100:0/75861386 shutdown_connections 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.837+0000 7f78fcff9640 1 --2- 192.168.123.100:0/75861386 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f78e403dce0 0x7f78e40401a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.837+0000 7f78fcff9640 1 --2- 192.168.123.100:0/75861386 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7910074150 0x7f7910086af0 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.837+0000 7f78fcff9640 1 -- 192.168.123.100:0/75861386 >> 192.168.123.100:0/75861386 conn(0x7f791006fa60 msgr2=0x7f7910070610 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.837+0000 7f78fcff9640 1 -- 192.168.123.100:0/75861386 shutdown_connections 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:48.837+0000 7f78fcff9640 1 -- 192.168.123.100:0/75861386 wait complete. 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for the mgr to restart... 2026-03-10T07:19:48.953 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr epoch 8... 
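
[The "Waiting for mgr epoch 8..." step polls the monitor until the mgrmap epoch advances past its pre-restart value, using the same `mgr stat` command that appears in the audit log above. A sketch of such a wait loop, not cephadm's exact implementation:]

    import json
    import subprocess
    import time

    def wait_for_mgr_epoch(target: int, timeout: float = 60.0, interval: float = 2.0) -> None:
        # Poll `ceph mgr stat` until the reported mgrmap epoch reaches the
        # target or the timeout expires; mirrors the bootstrap's wait step.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            out = subprocess.run(
                ["ceph", "mgr", "stat", "--format", "json"],
                capture_output=True, text=True, check=True,
            ).stdout
            if json.loads(out).get("epoch", 0) >= target:
                return
            time.sleep(interval)
        raise TimeoutError(f"mgr epoch did not reach {target} within {timeout}s")
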
2026-03-10T07:19:49.141 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:48 vm00 bash[20971]: debug 2026-03-10T07:19:48.821+0000 7f5291625140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T07:19:49.476 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:49 vm00 bash[20971]: debug 2026-03-10T07:19:49.153+0000 7f5291625140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T07:19:49.827 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:49 vm00 bash[20971]: debug 2026-03-10T07:19:49.601+0000 7f5291625140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T07:19:49.827 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:49 vm00 bash[20971]: debug 2026-03-10T07:19:49.693+0000 7f5291625140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T07:19:49.827 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:49 vm00 bash[20701]: audit 2026-03-10T07:19:48.468487+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.100:0/1170239651' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T07:19:49.827 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:49 vm00 bash[20701]: audit 2026-03-10T07:19:48.468487+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.100:0/1170239651' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T07:19:49.827 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:49 vm00 bash[20701]: cluster 2026-03-10T07:19:48.470864+0000 mon.a (mon.0) 70 : cluster [DBG] mgrmap e8: y(active, since 7s) 2026-03-10T07:19:49.827 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:49 vm00 bash[20701]: cluster 2026-03-10T07:19:48.470864+0000 mon.a (mon.0) 70 : cluster [DBG] mgrmap e8: y(active, since 7s) 2026-03-10T07:19:49.827 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:49 vm00 bash[20701]: audit 2026-03-10T07:19:48.839897+0000 mon.a (mon.0) 71 : audit [DBG] from='client.? 192.168.123.100:0/75861386' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T07:19:49.827 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:49 vm00 bash[20701]: audit 2026-03-10T07:19:48.839897+0000 mon.a (mon.0) 71 : audit [DBG] from='client.? 192.168.123.100:0/75861386' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T07:19:50.106 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:49 vm00 bash[20971]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T07:19:50.106 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:49 vm00 bash[20971]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
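
[The repeated "Module X has missing NOTIFY_TYPES member" warnings mean those mgr modules do not declare which cluster notifications they consume, so ceph-mgr logs this once per module at load time. Roughly, a module silences the warning by declaring the member; a sketch against the MgrModule API (only importable inside the ceph-mgr runtime, and details may vary by release):]

    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Declaring NOTIFY_TYPES tells ceph-mgr which notifications to route
        # to notify(); modules without it trigger the warning seen above.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            self.log.debug("got %s notification", notify_type)
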
2026-03-10T07:19:50.106 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:49 vm00 bash[20971]: from numpy import show_config as show_numpy_config 2026-03-10T07:19:50.106 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:49 vm00 bash[20971]: debug 2026-03-10T07:19:49.821+0000 7f5291625140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T07:19:50.106 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:49 vm00 bash[20971]: debug 2026-03-10T07:19:49.965+0000 7f5291625140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T07:19:50.107 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:50 vm00 bash[20971]: debug 2026-03-10T07:19:50.009+0000 7f5291625140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T07:19:50.107 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:50 vm00 bash[20971]: debug 2026-03-10T07:19:50.053+0000 7f5291625140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T07:19:50.391 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:50 vm00 bash[20971]: debug 2026-03-10T07:19:50.093+0000 7f5291625140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T07:19:50.391 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:50 vm00 bash[20971]: debug 2026-03-10T07:19:50.145+0000 7f5291625140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T07:19:50.866 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:50 vm00 bash[20971]: debug 2026-03-10T07:19:50.585+0000 7f5291625140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T07:19:50.866 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:50 vm00 bash[20971]: debug 2026-03-10T07:19:50.629+0000 7f5291625140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T07:19:50.866 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:50 vm00 bash[20971]: debug 2026-03-10T07:19:50.665+0000 7f5291625140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T07:19:50.866 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:50 vm00 bash[20971]: debug 2026-03-10T07:19:50.813+0000 7f5291625140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T07:19:51.141 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:50 vm00 bash[20971]: debug 2026-03-10T07:19:50.853+0000 7f5291625140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T07:19:51.141 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:50 vm00 bash[20971]: debug 2026-03-10T07:19:50.897+0000 7f5291625140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T07:19:51.141 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:51 vm00 bash[20971]: debug 2026-03-10T07:19:51.013+0000 7f5291625140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T07:19:51.461 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:51 vm00 bash[20971]: debug 2026-03-10T07:19:51.181+0000 7f5291625140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T07:19:51.462 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:51 vm00 bash[20971]: debug 2026-03-10T07:19:51.369+0000 7f5291625140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T07:19:51.462 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:51 vm00 bash[20971]: debug 2026-03-10T07:19:51.405+0000 7f5291625140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T07:19:51.857 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:51 vm00 bash[20971]: debug 
2026-03-10T07:19:51.449+0000 7f5291625140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T07:19:51.857 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:51 vm00 bash[20971]: debug 2026-03-10T07:19:51.605+0000 7f5291625140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T07:19:52.141 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:19:51 vm00 bash[20971]: debug 2026-03-10T07:19:51.845+0000 7f5291625140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T07:19:52.141 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: cluster 2026-03-10T07:19:51.853818+0000 mon.a (mon.0) 72 : cluster [INF] Active manager daemon y restarted 2026-03-10T07:19:52.141 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: cluster 2026-03-10T07:19:51.853818+0000 mon.a (mon.0) 72 : cluster [INF] Active manager daemon y restarted 2026-03-10T07:19:52.141 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: cluster 2026-03-10T07:19:51.854240+0000 mon.a (mon.0) 73 : cluster [INF] Activating manager daemon y 2026-03-10T07:19:52.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: cluster 2026-03-10T07:19:51.854240+0000 mon.a (mon.0) 73 : cluster [INF] Activating manager daemon y 2026-03-10T07:19:52.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: cluster 2026-03-10T07:19:51.859953+0000 mon.a (mon.0) 74 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-10T07:19:52.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: cluster 2026-03-10T07:19:51.859953+0000 mon.a (mon.0) 74 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-10T07:19:52.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: cluster 2026-03-10T07:19:51.860151+0000 mon.a (mon.0) 75 : cluster [DBG] mgrmap e9: y(active, starting, since 0.00600237s) 2026-03-10T07:19:52.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: cluster 2026-03-10T07:19:51.860151+0000 mon.a (mon.0) 75 : cluster [DBG] mgrmap e9: y(active, starting, since 0.00600237s) 2026-03-10T07:19:52.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: audit 2026-03-10T07:19:51.863538+0000 mon.a (mon.0) 76 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T07:19:52.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: audit 2026-03-10T07:19:51.863538+0000 mon.a (mon.0) 76 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T07:19:52.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: audit 2026-03-10T07:19:51.864404+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T07:19:52.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: audit 2026-03-10T07:19:51.864404+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T07:19:52.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: audit 2026-03-10T07:19:51.865394+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": 
"mds metadata"}]: dispatch 2026-03-10T07:19:52.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: audit 2026-03-10T07:19:51.865394+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T07:19:52.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: audit 2026-03-10T07:19:51.865855+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T07:19:52.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: audit 2026-03-10T07:19:51.865855+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T07:19:52.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: audit 2026-03-10T07:19:51.866295+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T07:19:52.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: audit 2026-03-10T07:19:51.866295+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T07:19:52.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: cluster 2026-03-10T07:19:51.872787+0000 mon.a (mon.0) 81 : cluster [INF] Manager daemon y is now available 2026-03-10T07:19:52.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: cluster 2026-03-10T07:19:51.872787+0000 mon.a (mon.0) 81 : cluster [INF] Manager daemon y is now available 2026-03-10T07:19:52.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: audit 2026-03-10T07:19:51.895007+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:19:52.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: audit 2026-03-10T07:19:51.895007+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:19:52.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: audit 2026-03-10T07:19:51.905194+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T07:19:52.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:51 vm00 bash[20701]: audit 2026-03-10T07:19:51.905194+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 10, 2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.085+0000 7f4d59883640 1 Processor -- 
2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.089+0000 7f4d59883640 1 -- start start
2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.089+0000 7f4d59883640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4d4c0a4910 0x7f4d4c0a4d10 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.089+0000 7f4d59883640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f4d4c0a52e0 con 0x7f4d4c0a4910
2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.089+0000 7f4d58881640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4d4c0a4910 0x7f4d4c0a4d10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.089+0000 7f4d58881640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4d4c0a4910 0x7f4d4c0a4d10 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:54016/0 (socket says 192.168.123.100:54016)
2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.089+0000 7f4d58881640 1 -- 192.168.123.100:0/3476142860 learned_addr learned my addr 192.168.123.100:0/3476142860 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d58881640 1 -- 192.168.123.100:0/3476142860 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4d4c0a5b10 con 0x7f4d4c0a4910
2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d58881640 1 --2- 192.168.123.100:0/3476142860 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4d4c0a4910 0x7f4d4c0a4d10 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f4d480089a0 tx=0x7f4d48031440 comp rx=0 tx=0).ready entity=mon.0 client_cookie=8ff6619f312ef42c server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d537fe640 1 -- 192.168.123.100:0/3476142860 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f4d4803c480 con 0x7f4d4c0a4910
2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d537fe640 1 -- 192.168.123.100:0/3476142860 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f4d4803ca60 con 0x7f4d4c0a4910
2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d537fe640 1 -- 192.168.123.100:0/3476142860 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f4d4803b950 con 0x7f4d4c0a4910
2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d59883640 1 -- 192.168.123.100:0/3476142860 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4d4c0a4910 msgr2=0x7f4d4c0a4d10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d59883640 1 --2- 192.168.123.100:0/3476142860 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4d4c0a4910 0x7f4d4c0a4d10 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f4d480089a0 tx=0x7f4d48031440 comp rx=0 tx=0).stop
2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d59883640 1 -- 192.168.123.100:0/3476142860 shutdown_connections
2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d59883640 1 --2- 192.168.123.100:0/3476142860 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4d4c0a4910 0x7f4d4c0a4d10 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d59883640 1 -- 192.168.123.100:0/3476142860 >> 192.168.123.100:0/3476142860 conn(0x7f4d4c09fc20 msgr2=0x7f4d4c0a2080 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:19:52.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d59883640 1 -- 192.168.123.100:0/3476142860 shutdown_connections
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d59883640 1 -- 192.168.123.100:0/3476142860 wait complete.
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d59883640 1 Processor -- start
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d59883640 1 -- start start
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d59883640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4d4c0a4910 0x7f4d4c13a2e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d59883640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f4d4c0a6590 con 0x7f4d4c0a4910
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d58881640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4d4c0a4910 0x7f4d4c13a2e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d58881640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4d4c0a4910 0x7f4d4c13a2e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:54028/0 (socket says 192.168.123.100:54028)
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d58881640 1 -- 192.168.123.100:0/3610825621 learned_addr learned my addr 192.168.123.100:0/3610825621 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d58881640 1 -- 192.168.123.100:0/3610825621 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4d4c13a820 con 0x7f4d4c0a4910
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d58881640 1 --2- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4d4c0a4910 0x7f4d4c13a2e0 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f4d48031970 tx=0x7f4d48008c70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d51ffb640 1 -- 192.168.123.100:0/3610825621 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f4d4803cc10 con 0x7f4d4c0a4910
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d59883640 1 -- 192.168.123.100:0/3610825621 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f4d4c13aab0 con 0x7f4d4c0a4910
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.093+0000 7f4d59883640 1 -- 192.168.123.100:0/3610825621 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f4d4c13af10 con 0x7f4d4c0a4910
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.097+0000 7f4d51ffb640 1 -- 192.168.123.100:0/3610825621 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f4d48008e30 con 0x7f4d4c0a4910
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.097+0000 7f4d51ffb640 1 -- 192.168.123.100:0/3610825621 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f4d4800b870 con 0x7f4d4c0a4910
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.097+0000 7f4d51ffb640 1 -- 192.168.123.100:0/3610825621 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 8) ==== 50260+0+0 (secure 0 0 0) 0x7f4d48045070 con 0x7f4d4c0a4910
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.097+0000 7f4d51ffb640 1 --2- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f4d3403dce0 0x7f4d340401a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.097+0000 7f4d53fff640 1 -- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f4d3403dce0 msgr2=0x7f4d340401a0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/1944661180
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.097+0000 7f4d53fff640 1 --2- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f4d3403dce0 0x7f4d340401a0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.097+0000 7f4d51ffb640 1 -- 192.168.123.100:0/3610825621 --> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7f4d340408b0 con 0x7f4d3403dce0
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.097+0000 7f4d51ffb640 1 -- 192.168.123.100:0/3610825621 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 940+0+0 (secure 0 0 0) 0x7f4d48076b30 con 0x7f4d4c0a4910
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.297+0000 7f4d53fff640 1 -- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f4d3403dce0 msgr2=0x7f4d340401a0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/1944661180
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.297+0000 7f4d53fff640 1 --2- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f4d3403dce0 0x7f4d340401a0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.400000
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.701+0000 7f4d53fff640 1 -- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f4d3403dce0 msgr2=0x7f4d340401a0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/1944661180
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:49.701+0000 7f4d53fff640 1 --2- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f4d3403dce0 0x7f4d340401a0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.800000
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:50.501+0000 7f4d53fff640 1 -- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f4d3403dce0 msgr2=0x7f4d340401a0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/1944661180
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:50.501+0000 7f4d53fff640 1 --2- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f4d3403dce0 0x7f4d340401a0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 1.600000
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:51.853+0000 7f4d51ffb640 1 -- 192.168.123.100:0/3610825621 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mgrmap(e 9) ==== 50027+0+0 (secure 0 0 0) 0x7f4d4800bce0 con 0x7f4d4c0a4910
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:51.853+0000 7f4d51ffb640 1 -- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f4d3403dce0 msgr2=0x7f4d340401a0 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:51.853+0000 7f4d51ffb640 1 --2- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f4d3403dce0 0x7f4d340401a0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:52.857+0000 7f4d51ffb640 1 -- 192.168.123.100:0/3610825621 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mgrmap(e 10) ==== 50154+0+0 (secure 0 0 0) 0x7f4d4804f430 con 0x7f4d4c0a4910
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:52.857+0000 7f4d51ffb640 1 --2- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f4d340417a0 0x7f4d34043b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:52.857+0000 7f4d51ffb640 1 -- 192.168.123.100:0/3610825621 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7f4d48041ec0 con 0x7f4d340417a0
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:52.857+0000 7f4d53fff640 1 --2- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f4d340417a0 0x7f4d34043b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:52.857+0000 7f4d53fff640 1 --2- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f4d340417a0 0x7f4d34043b90 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f4d540664d0 tx=0x7f4d540519c0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:52.861+0000 7f4d51ffb640 1 -- 192.168.123.100:0/3610825621 <== mgr.14150 v2:192.168.123.100:6800/2669938860 1 ==== command_reply(tid 0: 0 ) ==== 8+0+8901 (secure 0 0 0) 0x7f4d48041ec0 con 0x7f4d340417a0
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:52.865+0000 7f4d59883640 1 -- 192.168.123.100:0/3610825621 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- command(tid 1: {"prefix": "mgr_status"}) -- 0x7f4d24002670 con 0x7f4d340417a0
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:52.865+0000 7f4d51ffb640 1 -- 192.168.123.100:0/3610825621 <== mgr.14150 v2:192.168.123.100:6800/2669938860 2 ==== command_reply(tid 1: 0 ) ==== 8+0+52 (secure 0 0 0) 0x7f4d24002670 con 0x7f4d340417a0
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:52.865+0000 7f4d59883640 1 -- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f4d340417a0 msgr2=0x7f4d34043b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:52.865+0000 7f4d59883640 1 --2- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f4d340417a0 0x7f4d34043b90 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f4d540664d0 tx=0x7f4d540519c0 comp rx=0 tx=0).stop
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:52.865+0000 7f4d59883640 1 -- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4d4c0a4910 msgr2=0x7f4d4c13a2e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:52.865+0000 7f4d59883640 1 --2- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4d4c0a4910 0x7f4d4c13a2e0 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f4d48031970 tx=0x7f4d48008c70 comp rx=0 tx=0).stop
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:52.865+0000 7f4d59883640 1 -- 192.168.123.100:0/3610825621 shutdown_connections
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:52.865+0000 7f4d59883640 1 --2- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f4d340417a0 0x7f4d34043b90 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:52.865+0000 7f4d59883640 1 --2- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:6800/1944661180,v1:192.168.123.100:6801/1944661180] conn(0x7f4d3403dce0 0x7f4d340401a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:52.865+0000 7f4d59883640 1 --2- 192.168.123.100:0/3610825621 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4d4c0a4910 0x7f4d4c13a2e0 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:52.865+0000 7f4d59883640 1 -- 192.168.123.100:0/3610825621 >> 192.168.123.100:0/3610825621 conn(0x7f4d4c09fc20 msgr2=0x7f4d4c0a0710 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:52.865+0000 7f4d59883640 1 -- 192.168.123.100:0/3610825621 shutdown_connections
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:52.865+0000 7f4d59883640 1 -- 192.168.123.100:0/3610825621 wait complete.
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:mgr epoch 8 is available
2026-03-10T07:19:52.920 INFO:teuthology.orchestra.run.vm00.stdout:Generating a dashboard self-signed certificate...
2026-03-10T07:19:53.141 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:52 vm00 bash[20701]: audit 2026-03-10T07:19:51.923636+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T07:19:53.141 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:52 vm00 bash[20701]: cluster 2026-03-10T07:19:52.863775+0000 mon.a (mon.0) 85 : cluster [DBG] mgrmap e10: y(active, since 1.00962s)
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Self-signed certificate created
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.045+0000 7ff392537640 1 Processor -- start
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.045+0000 7ff392537640 1 -- start start
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.045+0000 7ff392537640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff38c1065b0 0x7ff38c1069b0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.045+0000 7ff392537640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7ff38c106f80 con 0x7ff38c1065b0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.045+0000 7ff38bfff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff38c1065b0 0x7ff38c1069b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.045+0000 7ff38bfff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff38c1065b0 0x7ff38c1069b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:54102/0 (socket says 192.168.123.100:54102)
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.045+0000 7ff38bfff640 1 -- 192.168.123.100:0/379035416 learned_addr learned my addr 192.168.123.100:0/379035416 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff38bfff640 1 -- 192.168.123.100:0/379035416 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff38c1077b0 con 0x7ff38c1065b0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff38bfff640 1 --2- 192.168.123.100:0/379035416 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff38c1065b0 0x7ff38c1069b0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7ff370009b80 tx=0x7ff37002f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=15d9e00fcea34cb1 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff38affd640 1 -- 192.168.123.100:0/379035416 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ff37003c070 con 0x7ff38c1065b0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff38affd640 1 -- 192.168.123.100:0/379035416 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7ff370037440 con 0x7ff38c1065b0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff38affd640 1 -- 192.168.123.100:0/379035416 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ff370035340 con 0x7ff38c1065b0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff392537640 1 -- 192.168.123.100:0/379035416 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff38c1065b0 msgr2=0x7ff38c1069b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff392537640 1 --2- 192.168.123.100:0/379035416 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff38c1065b0 0x7ff38c1069b0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7ff370009b80 tx=0x7ff37002f190 comp rx=0 tx=0).stop
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff392537640 1 -- 192.168.123.100:0/379035416 shutdown_connections
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff392537640 1 --2- 192.168.123.100:0/379035416 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff38c1065b0 0x7ff38c1069b0 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff392537640 1 -- 192.168.123.100:0/379035416 >> 192.168.123.100:0/379035416 conn(0x7ff38c101d20 msgr2=0x7ff38c104180 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff392537640 1 -- 192.168.123.100:0/379035416 shutdown_connections
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff392537640 1 -- 192.168.123.100:0/379035416 wait complete.
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff392537640 1 Processor -- start
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff392537640 1 -- start start
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff392537640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff38c1065b0 0x7ff38c19c390 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff392537640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7ff38c10a9e0 con 0x7ff38c1065b0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff38bfff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff38c1065b0 0x7ff38c19c390 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff38bfff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff38c1065b0 0x7ff38c19c390 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:54104/0 (socket says 192.168.123.100:54104)
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff38bfff640 1 -- 192.168.123.100:0/3880802023 learned_addr learned my addr 192.168.123.100:0/3880802023 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff38bfff640 1 -- 192.168.123.100:0/3880802023 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff38c19c8d0 con 0x7ff38c1065b0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff38bfff640 1 --2- 192.168.123.100:0/3880802023 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff38c1065b0 0x7ff38c19c390 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7ff370009cb0 tx=0x7ff370036940 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff3897fa640 1 -- 192.168.123.100:0/3880802023 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ff3700375f0 con 0x7ff38c1065b0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff3897fa640 1 -- 192.168.123.100:0/3880802023 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7ff370036ec0 con 0x7ff38c1065b0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff3897fa640 1 -- 192.168.123.100:0/3880802023 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ff370048390 con 0x7ff38c1065b0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff392537640 1 -- 192.168.123.100:0/3880802023 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7ff38c19cb60 con 0x7ff38c1065b0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.049+0000 7ff392537640 1 -- 192.168.123.100:0/3880802023 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7ff38c07a950 con 0x7ff38c1065b0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.053+0000 7ff392537640 1 -- 192.168.123.100:0/3880802023 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff350005180 con 0x7ff38c1065b0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.057+0000 7ff3897fa640 1 -- 192.168.123.100:0/3880802023 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 10) ==== 50154+0+0 (secure 0 0 0) 0x7ff370036a80 con 0x7ff38c1065b0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.057+0000 7ff3897fa640 1 --2- 192.168.123.100:0/3880802023 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7ff36003dbc0 0x7ff360040080 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.057+0000 7ff3897fa640 1 -- 192.168.123.100:0/3880802023 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1155+0+0 (secure 0 0 0) 0x7ff37007b400 con 0x7ff38c1065b0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.057+0000 7ff3897fa640 1 -- 192.168.123.100:0/3880802023 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7ff37007b8c0 con 0x7ff38c1065b0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.057+0000 7ff38b7fe640 1 --2- 192.168.123.100:0/3880802023 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7ff36003dbc0 0x7ff360040080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.057+0000 7ff38b7fe640 1 --2- 192.168.123.100:0/3880802023 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7ff36003dbc0 0x7ff360040080 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7ff378009a10 tx=0x7ff378006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:53.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.161+0000 7ff392537640 1 -- 192.168.123.100:0/3880802023 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- mgr_command(tid 0: {"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}) -- 0x7ff350002bf0 con 0x7ff36003dbc0
2026-03-10T07:19:53.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.229+0000 7ff3897fa640 1 -- 192.168.123.100:0/3880802023 <== mgr.14150 v2:192.168.123.100:6800/2669938860 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7ff350002bf0 con 0x7ff36003dbc0
2026-03-10T07:19:53.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.233+0000 7ff392537640 1 -- 192.168.123.100:0/3880802023 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7ff36003dbc0 msgr2=0x7ff360040080 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:19:53.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.233+0000 7ff392537640 1 --2- 192.168.123.100:0/3880802023 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7ff36003dbc0 0x7ff360040080 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7ff378009a10 tx=0x7ff378006eb0 comp rx=0 tx=0).stop
2026-03-10T07:19:53.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.233+0000 7ff392537640 1 -- 192.168.123.100:0/3880802023 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff38c1065b0 msgr2=0x7ff38c19c390 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:19:53.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.233+0000 7ff392537640 1 --2- 192.168.123.100:0/3880802023 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff38c1065b0 0x7ff38c19c390 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7ff370009cb0 tx=0x7ff370036940 comp rx=0 tx=0).stop
2026-03-10T07:19:53.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.233+0000 7ff392537640 1 -- 192.168.123.100:0/3880802023 shutdown_connections
2026-03-10T07:19:53.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.233+0000 7ff392537640 1 --2- 192.168.123.100:0/3880802023 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7ff36003dbc0 0x7ff360040080 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:53.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.233+0000 7ff392537640 1 --2- 192.168.123.100:0/3880802023 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff38c1065b0 0x7ff38c19c390 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:53.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.233+0000 7ff392537640 1 -- 192.168.123.100:0/3880802023 >> 192.168.123.100:0/3880802023 conn(0x7ff38c101d20 msgr2=0x7ff38c102920 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:19:53.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.233+0000 7ff392537640 1 -- 192.168.123.100:0/3880802023 shutdown_connections
2026-03-10T07:19:53.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.233+0000 7ff392537640 1 -- 192.168.123.100:0/3880802023 wait complete.
2026-03-10T07:19:53.300 INFO:teuthology.orchestra.run.vm00.stdout:Creating initial admin user...
2026-03-10T07:19:53.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$2NaZOHTuePEwxzRMDTFwPOFFlklA0CuZz2gKrSZYPTz0tY/UI6N0C", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773127193, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true}
2026-03-10T07:19:53.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.421+0000 7f3adbca6640 1 Processor -- start
2026-03-10T07:19:53.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.421+0000 7f3adbca6640 1 -- start start
2026-03-10T07:19:53.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.421+0000 7f3adbca6640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ad4108830 0x7f3ad4108c30 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:53.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.421+0000 7f3adbca6640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f3ad4109200 con 0x7f3ad4108830
2026-03-10T07:19:53.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.421+0000 7f3ad9a1b640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ad4108830 0x7f3ad4108c30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:53.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.421+0000 7f3ad9a1b640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ad4108830 0x7f3ad4108c30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:54120/0 (socket says 192.168.123.100:54120)
2026-03-10T07:19:53.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.421+0000 7f3ad9a1b640 1 -- 192.168.123.100:0/3937888075 learned_addr learned my addr 192.168.123.100:0/3937888075 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:19:53.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.421+0000 7f3ad9a1b640 1 -- 192.168.123.100:0/3937888075 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3ad4109a30 con 0x7f3ad4108830
2026-03-10T07:19:53.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.421+0000 7f3ad9a1b640 1 --2- 192.168.123.100:0/3937888075 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ad4108830 0x7f3ad4108c30 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f3ac4009920 tx=0x7f3ac402ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=47c639edaef2975f server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:53.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.421+0000 7f3ad8a19640 1 -- 192.168.123.100:0/3937888075 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3ac403c070 con 0x7f3ad4108830
2026-03-10T07:19:53.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.421+0000 7f3ad8a19640 1 -- 192.168.123.100:0/3937888075 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f3ac4037440 con 0x7f3ad4108830
2026-03-10T07:19:53.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.421+0000 7f3adbca6640 1 -- 192.168.123.100:0/3937888075 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ad4108830 msgr2=0x7f3ad4108c30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:19:53.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.421+0000 7f3adbca6640 1 --2- 192.168.123.100:0/3937888075 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ad4108830 0x7f3ad4108c30 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f3ac4009920 tx=0x7f3ac402ef20 comp rx=0 tx=0).stop
2026-03-10T07:19:53.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.425+0000 7f3adbca6640 1 -- 192.168.123.100:0/3937888075 shutdown_connections
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.425+0000 7f3adbca6640 1 --2- 192.168.123.100:0/3937888075 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ad4108830 0x7f3ad4108c30 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.425+0000 7f3adbca6640 1 -- 192.168.123.100:0/3937888075 >> 192.168.123.100:0/3937888075 conn(0x7f3ad407bd50 msgr2=0x7f3ad407c1a0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.425+0000 7f3adbca6640 1 -- 192.168.123.100:0/3937888075 shutdown_connections
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.425+0000 7f3adbca6640 1 -- 192.168.123.100:0/3937888075 wait complete.
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.425+0000 7f3adbca6640 1 Processor -- start
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.425+0000 7f3adbca6640 1 -- start start
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.425+0000 7f3adbca6640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ad4108830 0x7f3ad419e780 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.425+0000 7f3adbca6640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f3ad410cc60 con 0x7f3ad4108830
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.425+0000 7f3ad9a1b640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ad4108830 0x7f3ad419e780 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.425+0000 7f3ad9a1b640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ad4108830 0x7f3ad419e780 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:54126/0 (socket says 192.168.123.100:54126)
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.425+0000 7f3ad9a1b640 1 -- 192.168.123.100:0/2634327584 learned_addr learned my addr 192.168.123.100:0/2634327584 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.425+0000 7f3ad9a1b640 1 -- 192.168.123.100:0/2634327584 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3ad419ecc0 con 0x7f3ad4108830
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.425+0000 7f3ad9a1b640 1 --2- 192.168.123.100:0/2634327584 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ad4108830 0x7f3ad419e780 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f3ac402f4d0 tx=0x7f3ac4035d00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.425+0000 7f3ac2ffd640 1 -- 192.168.123.100:0/2634327584 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3ac403c070 con 0x7f3ad4108830
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.425+0000 7f3ac2ffd640 1 -- 192.168.123.100:0/2634327584 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f3ac4045070 con 0x7f3ad4108830
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.425+0000 7f3ac2ffd640 1 -- 192.168.123.100:0/2634327584 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3ac4040aa0 con 0x7f3ad4108830
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.425+0000 7f3adbca6640 1 -- 192.168.123.100:0/2634327584 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f3ad419ef50 con 0x7f3ad4108830
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.425+0000 7f3adbca6640 1 -- 192.168.123.100:0/2634327584 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f3ad419f370 con 0x7f3ad4108830
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.425+0000 7f3adbca6640 1 -- 192.168.123.100:0/2634327584 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3a9c005180 con 0x7f3ad4108830
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.425+0000 7f3ac2ffd640 1 -- 192.168.123.100:0/2634327584 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 10) ==== 50154+0+0 (secure 0 0 0) 0x7f3ac4037780 con 0x7f3ad4108830
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.429+0000 7f3ac2ffd640 1 --2- 192.168.123.100:0/2634327584 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f3aac03dbc0 0x7f3aac040080 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.429+0000 7f3ac2ffd640 1 -- 192.168.123.100:0/2634327584 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1155+0+0 (secure 0 0 0) 0x7f3ac4076800 con 0x7f3ad4108830
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.429+0000 7f3ad921a640 1 --2- 192.168.123.100:0/2634327584 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f3aac03dbc0 0x7f3aac040080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.429+0000 7f3ac2ffd640 1 -- 192.168.123.100:0/2634327584 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f3ac4048dc0 con 0x7f3ad4108830
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.429+0000 7f3ad921a640 1 --2- 192.168.123.100:0/2634327584 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f3aac03dbc0 0x7f3aac040080 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f3ac80099c0 tx=0x7f3ac8006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.529+0000 7f3adbca6640 1 -- 192.168.123.100:0/2634327584 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- mgr_command(tid 0: {"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}) -- 0x7f3a9c003c00 con 0x7f3aac03dbc0
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.685+0000 7f3ac2ffd640 1 -- 192.168.123.100:0/2634327584 <== mgr.14150 v2:192.168.123.100:6800/2669938860 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+252 (secure 0 0 0) 0x7f3a9c003c00 con 0x7f3aac03dbc0
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.689+0000 7f3adbca6640 1 -- 192.168.123.100:0/2634327584 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f3aac03dbc0 msgr2=0x7f3aac040080 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.689+0000 7f3adbca6640 1 --2- 192.168.123.100:0/2634327584 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f3aac03dbc0 0x7f3aac040080 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f3ac80099c0 tx=0x7f3ac8006eb0 comp rx=0 tx=0).stop
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.689+0000 7f3adbca6640 1 -- 192.168.123.100:0/2634327584 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ad4108830 msgr2=0x7f3ad419e780 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.689+0000 7f3adbca6640 1 --2- 192.168.123.100:0/2634327584 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ad4108830 0x7f3ad419e780 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f3ac402f4d0 tx=0x7f3ac4035d00 comp rx=0 tx=0).stop
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.689+0000 7f3adbca6640 1 -- 192.168.123.100:0/2634327584 shutdown_connections
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.689+0000 7f3adbca6640 1 --2- 192.168.123.100:0/2634327584 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f3aac03dbc0 0x7f3aac040080 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.689+0000 7f3adbca6640 1 --2- 192.168.123.100:0/2634327584 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3ad4108830 0x7f3ad419e780 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.689+0000 7f3adbca6640 1 -- 192.168.123.100:0/2634327584 >> 192.168.123.100:0/2634327584 conn(0x7f3ad407bd50 msgr2=0x7f3ad41060f0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.689+0000 7f3adbca6640 1 -- 192.168.123.100:0/2634327584 shutdown_connections
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.689+0000 7f3adbca6640 1 -- 192.168.123.100:0/2634327584 wait complete.
2026-03-10T07:19:53.739 INFO:teuthology.orchestra.run.vm00.stdout:Fetching dashboard port number...
2026-03-10T07:19:54.089 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 8443
2026-03-10T07:19:54.089 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.881+0000 7f87a568d640 1 Processor -- start
2026-03-10T07:19:54.089 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.889+0000 7f87a568d640 1 -- start start
2026-03-10T07:19:54.089 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.889+0000 7f87a568d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f87a0108680 0x7f87a0108a80 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:54.089 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.889+0000 7f879ffff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f87a0108680 0x7f87a0108a80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.889+0000 7f879ffff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f87a0108680 0x7f87a0108a80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:52054/0 (socket says 192.168.123.100:52054)
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.889+0000 7f879ffff640 1 -- 192.168.123.100:0/2820046197 learned_addr learned my addr 192.168.123.100:0/2820046197 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.889+0000 7f87a568d640 1 -- 192.168.123.100:0/2820046197 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f87a0109050 con 0x7f87a0108680
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.889+0000 7f879ffff640 1 -- 192.168.123.100:0/2820046197 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f87a0109880 con 0x7f87a0108680
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.889+0000 7f879ffff640 1 --2- 192.168.123.100:0/2820046197 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f87a0108680 0x7f87a0108a80 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7f878c009920 tx=0x7f878c02ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=5859992ee34ab5cf server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.893+0000 7f879effd640 1 -- 192.168.123.100:0/2820046197 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f878c03c070 con 0x7f87a0108680
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.893+0000 7f879effd640 1 -- 192.168.123.100:0/2820046197 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f878c037440 con 0x7f87a0108680
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.893+0000 7f879effd640 1 -- 192.168.123.100:0/2820046197 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f878c035340 con 0x7f87a0108680
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.893+0000 7f87a568d640 1 -- 192.168.123.100:0/2820046197 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f87a0108680 msgr2=0x7f87a0108a80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.893+0000 7f87a568d640 1 --2- 192.168.123.100:0/2820046197 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f87a0108680 0x7f87a0108a80 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7f878c009920 tx=0x7f878c02ef20 comp rx=0 tx=0).stop
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.893+0000 7f87a568d640 1 -- 192.168.123.100:0/2820046197 shutdown_connections
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.893+0000 7f87a568d640 1 --2- 192.168.123.100:0/2820046197 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f87a0108680 0x7f87a0108a80 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.893+0000 7f87a568d640 1 -- 192.168.123.100:0/2820046197 >> 192.168.123.100:0/2820046197 conn(0x7f87a007bb60 msgr2=0x7f87a007bf70 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.893+0000 7f87a568d640 1 -- 192.168.123.100:0/2820046197 shutdown_connections
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.893+0000 7f87a568d640 1 -- 192.168.123.100:0/2820046197 wait complete.
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.893+0000 7f87a568d640 1 Processor -- start
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.897+0000 7f87a568d640 1 -- start start
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.897+0000 7f87a568d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f87a0108680 0x7f87a00803e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.897+0000 7f87a568d640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f87a010cab0 con 0x7f87a0108680
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.897+0000 7f879ffff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f87a0108680 0x7f87a00803e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.897+0000 7f879ffff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f87a0108680 0x7f87a00803e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:52064/0 (socket says 192.168.123.100:52064)
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.897+0000 7f879ffff640 1 -- 192.168.123.100:0/2147109762 learned_addr learned my addr 192.168.123.100:0/2147109762 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.897+0000 7f879ffff640 1 -- 192.168.123.100:0/2147109762 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f87a0080920 con 0x7f87a0108680
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.897+0000 7f879ffff640 1 --2- 192.168.123.100:0/2147109762 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f87a0108680 0x7f87a00803e0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f878c009a50 tx=0x7f878c037b00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.897+0000 7f879d7fa640 1 -- 192.168.123.100:0/2147109762 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f878c047070 con 0x7f87a0108680
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.897+0000 7f879d7fa640 1 -- 192.168.123.100:0/2147109762 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f878c035dc0 con 0x7f87a0108680
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.897+0000 7f879d7fa640 1 -- 192.168.123.100:0/2147109762 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f878c03c070 con 0x7f87a0108680
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.897+0000 7f87a568d640 1 -- 192.168.123.100:0/2147109762 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f87a0080bb0 con 0x7f87a0108680
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.897+0000 7f87a568d640 1 -- 192.168.123.100:0/2147109762 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f87a007cf20 con 0x7f87a0108680
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.897+0000 7f87a568d640 1 -- 192.168.123.100:0/2147109762 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f87a010cc30 con 0x7f87a0108680
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.897+0000 7f879d7fa640 1 -- 192.168.123.100:0/2147109762 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 10) ==== 50154+0+0 (secure 0 0 0) 0x7f878c054080 con 0x7f87a0108680
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.897+0000 7f879d7fa640 1 --2- 192.168.123.100:0/2147109762 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f877403db20 0x7f877403ffe0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.897+0000 7f879d7fa640 1 -- 192.168.123.100:0/2147109762 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1155+0+0 (secure 0 0 0) 0x7f878c0774b0 con 0x7f87a0108680
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.901+0000 7f879f7fe640 1 --2- 192.168.123.100:0/2147109762 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f877403db20 0x7f877403ffe0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.901+0000 7f879f7fe640 1 --2- 192.168.123.100:0/2147109762 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f877403db20 0x7f877403ffe0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f87900099c0 tx=0x7f8790006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:53.901+0000 7f879d7fa640 1 -- 192.168.123.100:0/2147109762 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f878c051200 con 0x7f87a0108680
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.033+0000 7f87a568d640 1 -- 192.168.123.100:0/2147109762 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"} v 0) -- 0x7f87a0108b00 con 0x7f87a0108680
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.033+0000 7f879d7fa640 1 -- 192.168.123.100:0/2147109762 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]=0 v8) ==== 112+0+5 (secure 0 0 0) 0x7f878c0425f0 con 0x7f87a0108680
2026-03-10T07:19:54.090 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.033+0000 7f87a568d640 1 -- 192.168.123.100:0/2147109762 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f877403db20 msgr2=0x7f877403ffe0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:19:54.091 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.037+0000 7f87a568d640 1 --2- 192.168.123.100:0/2147109762 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f877403db20 0x7f877403ffe0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f87900099c0 tx=0x7f8790006eb0 comp rx=0 tx=0).stop
2026-03-10T07:19:54.091 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.037+0000 7f87a568d640 1 -- 192.168.123.100:0/2147109762 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f87a0108680 msgr2=0x7f87a00803e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:19:54.091 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.037+0000 7f87a568d640 1 --2- 192.168.123.100:0/2147109762 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f87a0108680 0x7f87a00803e0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f878c009a50 tx=0x7f878c037b00 comp rx=0 tx=0).stop
2026-03-10T07:19:54.091 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.037+0000 7f87a568d640 1 -- 192.168.123.100:0/2147109762 shutdown_connections
2026-03-10T07:19:54.091 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.037+0000 7f87a568d640 1 --2- 192.168.123.100:0/2147109762 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f877403db20 0x7f877403ffe0 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:54.091 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.037+0000 7f87a568d640 1 --2- 192.168.123.100:0/2147109762 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f87a0108680 0x7f87a00803e0 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:19:54.091 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.037+0000 7f87a568d640 1 -- 192.168.123.100:0/2147109762 >> 192.168.123.100:0/2147109762 conn(0x7f87a007bb60 msgr2=0x7f87a0105ef0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:19:54.091 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.037+0000 7f87a568d640 1 -- 192.168.123.100:0/2147109762 shutdown_connections
2026-03-10T07:19:54.091 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.037+0000 7f87a568d640 1 -- 192.168.123.100:0/2147109762 wait complete.
2026-03-10T07:19:54.091 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present
2026-03-10T07:19:54.091 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to open ports <[8443]>.
firewalld.service is not available 2026-03-10T07:19:54.091 INFO:teuthology.orchestra.run.vm00.stdout:Ceph Dashboard is now available at: 2026-03-10T07:19:54.091 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T07:19:54.091 INFO:teuthology.orchestra.run.vm00.stdout: URL: https://vm00.local:8443/ 2026-03-10T07:19:54.091 INFO:teuthology.orchestra.run.vm00.stdout: User: admin 2026-03-10T07:19:54.091 INFO:teuthology.orchestra.run.vm00.stdout: Password: bgwuugdeev 2026-03-10T07:19:54.091 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T07:19:54.091 INFO:teuthology.orchestra.run.vm00.stdout:Saving cluster configuration to /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config directory 2026-03-10T07:19:54.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:54 vm00 bash[20701]: cephadm 2026-03-10T07:19:52.967172+0000 mgr.y (mgr.14150) 3 : cephadm [INF] [10/Mar/2026:07:19:52] ENGINE Bus STARTING 2026-03-10T07:19:54.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:54 vm00 bash[20701]: cephadm 2026-03-10T07:19:53.068730+0000 mgr.y (mgr.14150) 4 : cephadm [INF] [10/Mar/2026:07:19:53] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T07:19:54.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:54 vm00 bash[20701]: audit 2026-03-10T07:19:53.168121+0000 mgr.y (mgr.14150) 5 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:19:54.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:54 vm00 bash[20701]: cephadm 2026-03-10T07:19:53.177651+0000 mgr.y (mgr.14150) 6 : cephadm [INF] [10/Mar/2026:07:19:53] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T07:19:54.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:54 vm00 bash[20701]: cephadm 2026-03-10T07:19:53.177813+0000 mgr.y (mgr.14150) 7 : cephadm [INF] [10/Mar/2026:07:19:53] ENGINE Bus STARTED 2026-03-10T07:19:54.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:54 vm00 bash[20701]: cephadm 2026-03-10T07:19:53.178117+0000 mgr.y (mgr.14150) 8 : cephadm [INF] [10/Mar/2026:07:19:53] ENGINE Client ('192.168.123.100', 47470) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T07:19:54.392
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:54 vm00 bash[20701]: audit 2026-03-10T07:19:53.232624+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:19:54.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:54 vm00 bash[20701]: audit 2026-03-10T07:19:53.235245+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:19:54.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:54 vm00 bash[20701]: audit 2026-03-10T07:19:53.538451+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:19:54.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:54 vm00 bash[20701]: audit 2026-03-10T07:19:53.691419+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:19:54.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:54 vm00 bash[20701]: audit 2026-03-10T07:19:54.038598+0000 mon.a (mon.0) 89 : audit [DBG] from='client.? 192.168.123.100:0/2147109762' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-10T07:19:54.426 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.213+0000 7f412ff49640 1 Processor -- start 2026-03-10T07:19:54.426 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.213+0000 7f412ff49640 1 -- start start 2026-03-10T07:19:54.426 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.213+0000 7f412ff49640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4128108860 0x7f4128108c60 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:54.426 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.213+0000 7f412ff49640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f4128109230 con 0x7f4128108860 2026-03-10T07:19:54.426 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.213+0000 7f412dcbe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4128108860 0x7f4128108c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:54.426 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412dcbe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4128108860 0x7f4128108c60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:52078/0 (socket says 192.168.123.100:52078) 2026-03-10T07:19:54.426 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412dcbe640 1 -- 192.168.123.100:0/3322427845 learned_addr learned my addr 192.168.123.100:0/3322427845 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:54.426 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412dcbe640 1 -- 192.168.123.100:0/3322427845 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4128109a60 con 0x7f4128108860 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412dcbe640 1 --2- 192.168.123.100:0/3322427845 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4128108860 0x7f4128108c60 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7f4118009b80 tx=0x7f411802f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=ac3431987c7a1d69 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412ccbc640 1 -- 192.168.123.100:0/3322427845 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f411803c070 con 0x7f4128108860 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412ccbc640 1 -- 192.168.123.100:0/3322427845 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f4118037440 con 0x7f4128108860 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412ccbc640 1 -- 192.168.123.100:0/3322427845 <== 
mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f4118035340 con 0x7f4128108860 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412ff49640 1 -- 192.168.123.100:0/3322427845 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4128108860 msgr2=0x7f4128108c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412ff49640 1 --2- 192.168.123.100:0/3322427845 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4128108860 0x7f4128108c60 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7f4118009b80 tx=0x7f411802f190 comp rx=0 tx=0).stop 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412ff49640 1 -- 192.168.123.100:0/3322427845 shutdown_connections 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412ff49640 1 --2- 192.168.123.100:0/3322427845 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4128108860 0x7f4128108c60 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412ff49640 1 -- 192.168.123.100:0/3322427845 >> 192.168.123.100:0/3322427845 conn(0x7f412807bda0 msgr2=0x7f412807c1b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412ff49640 1 -- 192.168.123.100:0/3322427845 shutdown_connections 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412ff49640 1 -- 192.168.123.100:0/3322427845 wait complete. 
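The bootstrap output above prints a generated admin password ('bgwuugdeev'), and the ac-user-create audit entry shows it was created with pwd_update_required, so the first dashboard login forces a change. A hedged sketch of inspecting the endpoint and rotating that password from the CLI, assuming the admin keyring from this run (ac-user-set-password reads the new password from a file):

    # Confirm where the active mgr is serving the dashboard.
    sudo ceph mgr services
    # Rotate the bootstrap password for the admin user.
    echo -n 'Str0ng-Replacement-Pass' > /tmp/dash-pass
    sudo ceph dashboard ac-user-set-password admin -i /tmp/dash-pass
    rm /tmp/dash-pass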
2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412ff49640 1 Processor -- start 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412ff49640 1 -- start start 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412ff49640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4128108860 0x7f4128080470 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412ff49640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f412810cc90 con 0x7f4128108860 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412dcbe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4128108860 0x7f4128080470 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412dcbe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4128108860 0x7f4128080470 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:52086/0 (socket says 192.168.123.100:52086) 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412dcbe640 1 -- 192.168.123.100:0/4202400202 learned_addr learned my addr 192.168.123.100:0/4202400202 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.217+0000 7f412dcbe640 1 -- 192.168.123.100:0/4202400202 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f41280809b0 con 0x7f4128108860 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.221+0000 7f412dcbe640 1 --2- 192.168.123.100:0/4202400202 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4128108860 0x7f4128080470 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f4118009cb0 tx=0x7f4118035870 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.221+0000 7f4116ffd640 1 -- 192.168.123.100:0/4202400202 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f411803c070 con 0x7f4128108860 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.221+0000 7f4116ffd640 1 -- 192.168.123.100:0/4202400202 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f4118042440 con 0x7f4128108860 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.221+0000 7f412ff49640 1 -- 192.168.123.100:0/4202400202 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f4128080c40 con 0x7f4128108860 2026-03-10T07:19:54.427 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.221+0000 7f4116ffd640 1 -- 192.168.123.100:0/4202400202 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f41180413e0 con 0x7f4128108860 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.221+0000 7f412ff49640 1 -- 192.168.123.100:0/4202400202 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f412807cfb0 con 0x7f4128108860 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.221+0000 7f4116ffd640 1 -- 192.168.123.100:0/4202400202 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 10) ==== 50154+0+0 (secure 0 0 0) 0x7f4118041640 con 0x7f4128108860 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.221+0000 7f412ff49640 1 -- 192.168.123.100:0/4202400202 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f40f0005180 con 0x7f4128108860 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.221+0000 7f4116ffd640 1 --2- 192.168.123.100:0/4202400202 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f40f803d800 0x7f40f803fcc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.221+0000 7f4116ffd640 1 -- 192.168.123.100:0/4202400202 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1155+0+0 (secure 0 0 0) 0x7f4118078a60 con 0x7f4128108860 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.225+0000 7f412d4bd640 1 --2- 192.168.123.100:0/4202400202 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f40f803d800 0x7f40f803fcc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.225+0000 7f412d4bd640 1 --2- 192.168.123.100:0/4202400202 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f40f803d800 0x7f40f803fcc0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f411c0099c0 tx=0x7f411c006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.225+0000 7f4116ffd640 1 -- 192.168.123.100:0/4202400202 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f411803fd20 con 0x7f4128108860 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.369+0000 7f412ff49640 1 -- 192.168.123.100:0/4202400202 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) -- 0x7f40f0005470 con 0x7f4128108860 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.373+0000 7f4116ffd640 1 -- 192.168.123.100:0/4202400202 <== mon.0 v2:192.168.123.100:3300/0 7 ==== 
mon_command_ack([{prefix=config-key set, key=mgr/dashboard/cluster/status}]=0 set mgr/dashboard/cluster/status v24) ==== 153+0+0 (secure 0 0 0) 0x7f4118040e40 con 0x7f4128108860 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.377+0000 7f412ff49640 1 -- 192.168.123.100:0/4202400202 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f40f803d800 msgr2=0x7f40f803fcc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.377+0000 7f412ff49640 1 --2- 192.168.123.100:0/4202400202 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f40f803d800 0x7f40f803fcc0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f411c0099c0 tx=0x7f411c006eb0 comp rx=0 tx=0).stop 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.377+0000 7f412ff49640 1 -- 192.168.123.100:0/4202400202 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4128108860 msgr2=0x7f4128080470 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.377+0000 7f412ff49640 1 --2- 192.168.123.100:0/4202400202 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4128108860 0x7f4128080470 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f4118009cb0 tx=0x7f4118035870 comp rx=0 tx=0).stop 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.377+0000 7f412ff49640 1 -- 192.168.123.100:0/4202400202 shutdown_connections 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.377+0000 7f412ff49640 1 --2- 192.168.123.100:0/4202400202 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f40f803d800 0x7f40f803fcc0 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.377+0000 7f412ff49640 1 --2- 192.168.123.100:0/4202400202 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4128108860 0x7f4128080470 unknown :-1 s=CLOSED pgs=75 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.377+0000 7f412ff49640 1 -- 192.168.123.100:0/4202400202 >> 192.168.123.100:0/4202400202 conn(0x7f412807bda0 msgr2=0x7f4128105e60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.377+0000 7f412ff49640 1 -- 192.168.123.100:0/4202400202 shutdown_connections 2026-03-10T07:19:54.427 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr 2026-03-10T07:19:54.377+0000 7f412ff49640 1 -- 192.168.123.100:0/4202400202 wait complete. 
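The mon_command_ack above ('config-key set mgr/dashboard/cluster/status') is the bootstrap recording dashboard state in the monitors' config-key store. A small sketch of inspecting that store, assuming the same admin credentials:

    # Read back the key the bootstrap just wrote, and list related keys.
    sudo ceph config-key get mgr/dashboard/cluster/status
    sudo ceph config-key ls | grep dashboard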
2026-03-10T07:19:54.428 INFO:teuthology.orchestra.run.vm00.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-10T07:19:54.428 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T07:19:54.428 INFO:teuthology.orchestra.run.vm00.stdout: sudo /usr/sbin/cephadm shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-10T07:19:54.428 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T07:19:54.428 INFO:teuthology.orchestra.run.vm00.stdout:Or, if you are only running a single cluster on this host: 2026-03-10T07:19:54.428 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T07:19:54.428 INFO:teuthology.orchestra.run.vm00.stdout: sudo /usr/sbin/cephadm shell 2026-03-10T07:19:54.428 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T07:19:54.428 INFO:teuthology.orchestra.run.vm00.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-10T07:19:54.428 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T07:19:54.428 INFO:teuthology.orchestra.run.vm00.stdout: ceph telemetry on 2026-03-10T07:19:54.428 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T07:19:54.428 INFO:teuthology.orchestra.run.vm00.stdout:For more information see: 2026-03-10T07:19:54.428 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T07:19:54.428 INFO:teuthology.orchestra.run.vm00.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-10T07:19:54.428 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T07:19:54.428 INFO:teuthology.orchestra.run.vm00.stdout:Bootstrap complete. 2026-03-10T07:19:54.451 INFO:tasks.cephadm:Fetching config... 2026-03-10T07:19:54.451 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T07:19:54.451 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-10T07:19:54.455 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-10T07:19:54.455 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T07:19:54.455 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-10T07:19:54.502 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-10T07:19:54.502 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T07:19:54.502 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.a/keyring of=/dev/stdout 2026-03-10T07:19:54.555 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-10T07:19:54.555 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T07:19:54.556 DEBUG:teuthology.orchestra.run.vm00:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-10T07:19:54.602 INFO:tasks.cephadm:Installing pub ssh key for root users... 
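Besides the interactive shell form printed above, the cephadm task drives the cluster with one-shot invocations, appending `--` and a single ceph command (visible in the DEBUG lines that follow). A sketch of the same pattern, assuming this run's fsid:

    # Spawn the shell container, run one command inside it, and exit.
    sudo cephadm shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        -- ceph orch ps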
2026-03-10T07:19:54.602 DEBUG:teuthology.orchestra.run.vm00:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINYKws7A3YDjLcL26+Og+43ogqH5B/3kceilbHXDMvGQ ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T07:19:54.656 INFO:teuthology.orchestra.run.vm00.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINYKws7A3YDjLcL26+Og+43ogqH5B/3kceilbHXDMvGQ ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:19:54.662 DEBUG:teuthology.orchestra.run.vm03:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINYKws7A3YDjLcL26+Og+43ogqH5B/3kceilbHXDMvGQ ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T07:19:54.675 INFO:teuthology.orchestra.run.vm03.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINYKws7A3YDjLcL26+Og+43ogqH5B/3kceilbHXDMvGQ ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:19:54.680 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-10T07:19:55.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:55 vm00 bash[20701]: audit 2026-03-10T07:19:54.378990+0000 mon.a (mon.0) 90 : audit [INF] from='client.? 192.168.123.100:0/4202400202' entity='client.admin' 2026-03-10T07:19:55.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:55 vm00 bash[20701]: cluster 2026-03-10T07:19:54.695612+0000 mon.a (mon.0) 91 : cluster [DBG] mgrmap e11: y(active, since 2s) 2026-03-10T07:19:58.141 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:57 vm00 bash[20701]: audit 2026-03-10T07:19:56.682654+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:19:58.141 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:57 vm00 bash[20701]: audit 2026-03-10T07:19:57.337393+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:19:58.552 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.a/config 2026-03-10T07:19:58.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.689+0000 7fae4c8fc640 1 -- 192.168.123.100:0/1050217443 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae4806b740 msgr2=0x7fae48104cf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
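cephadm manages remote hosts over SSH as root, which is why the task appends the cluster's ed25519 public key to /root/.ssh/authorized_keys on both vm00 and vm03 above. An equivalent sketch using cephadm's own key export, assuming root SSH access to the target:

    # Export the cluster's public SSH key and authorize it on a new host.
    sudo ceph cephadm get-pub-key > ceph.pub
    ssh-copy-id -f -i ceph.pub root@vm03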
2026-03-10T07:19:58.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.689+0000 7fae4c8fc640 1 --2- 192.168.123.100:0/1050217443 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae4806b740 0x7fae48104cf0 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7fae3c0099b0 tx=0x7fae3c02f2b0 comp rx=0 tx=0).stop 2026-03-10T07:19:58.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.689+0000 7fae4c8fc640 1 -- 192.168.123.100:0/1050217443 shutdown_connections 2026-03-10T07:19:58.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.689+0000 7fae4c8fc640 1 --2- 192.168.123.100:0/1050217443 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae4806b740 0x7fae48104cf0 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:58.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.689+0000 7fae4c8fc640 1 -- 192.168.123.100:0/1050217443 >> 192.168.123.100:0/1050217443 conn(0x7fae480fc7c0 msgr2=0x7fae480febe0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:58.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.689+0000 7fae4c8fc640 1 -- 192.168.123.100:0/1050217443 shutdown_connections 2026-03-10T07:19:58.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.689+0000 7fae4c8fc640 1 -- 192.168.123.100:0/1050217443 wait complete. 2026-03-10T07:19:58.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.693+0000 7fae4c8fc640 1 Processor -- start 2026-03-10T07:19:58.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.693+0000 7fae4c8fc640 1 -- start start 2026-03-10T07:19:58.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.693+0000 7fae4c8fc640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae4806b740 0x7fae481024e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:58.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.693+0000 7fae4c8fc640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fae48107530 con 0x7fae4806b740 2026-03-10T07:19:58.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.693+0000 7fae46d76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae4806b740 0x7fae481024e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:58.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.693+0000 7fae46d76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae4806b740 0x7fae481024e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:52110/0 (socket says 192.168.123.100:52110) 2026-03-10T07:19:58.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.693+0000 7fae46d76640 1 -- 192.168.123.100:0/1748877265 learned_addr learned my addr 192.168.123.100:0/1748877265 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:19:58.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.693+0000 7fae46d76640 1 -- 192.168.123.100:0/1748877265 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fae48102a20 con 0x7fae4806b740 2026-03-10T07:19:58.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.693+0000 7fae46d76640 
1 --2- 192.168.123.100:0/1748877265 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae4806b740 0x7fae481024e0 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7fae3c004270 tx=0x7fae3c0042a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:58.705 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.693+0000 7fae27fff640 1 -- 192.168.123.100:0/1748877265 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fae3c0385a0 con 0x7fae4806b740 2026-03-10T07:19:58.705 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.693+0000 7fae27fff640 1 -- 192.168.123.100:0/1748877265 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fae3c046070 con 0x7fae4806b740 2026-03-10T07:19:58.705 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.693+0000 7fae27fff640 1 -- 192.168.123.100:0/1748877265 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fae3c041740 con 0x7fae4806b740 2026-03-10T07:19:58.705 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.693+0000 7fae4c8fc640 1 -- 192.168.123.100:0/1748877265 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fae48100c10 con 0x7fae4806b740 2026-03-10T07:19:58.705 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.693+0000 7fae4c8fc640 1 -- 192.168.123.100:0/1748877265 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fae481010f0 con 0x7fae4806b740 2026-03-10T07:19:58.705 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.693+0000 7fae27fff640 1 -- 192.168.123.100:0/1748877265 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 12) ==== 50306+0+0 (secure 0 0 0) 0x7fae3c0418e0 con 0x7fae4806b740 2026-03-10T07:19:58.705 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.693+0000 7fae27fff640 1 --2- 192.168.123.100:0/1748877265 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fae2003dd30 0x7fae200401f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:19:58.705 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.693+0000 7fae27fff640 1 -- 192.168.123.100:0/1748877265 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1155+0+0 (secure 0 0 0) 0x7fae3c077b90 con 0x7fae4806b740 2026-03-10T07:19:58.705 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.693+0000 7fae46575640 1 --2- 192.168.123.100:0/1748877265 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fae2003dd30 0x7fae200401f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:19:58.705 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.693+0000 7fae4c8fc640 1 -- 192.168.123.100:0/1748877265 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fae14005180 con 0x7fae4806b740 2026-03-10T07:19:58.709 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.697+0000 7fae46575640 1 --2- 192.168.123.100:0/1748877265 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fae2003dd30 0x7fae200401f0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fae300099c0 tx=0x7fae30006eb0 comp rx=0 tx=0).ready 
entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:19:58.709 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.697+0000 7fae27fff640 1 -- 192.168.123.100:0/1748877265 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fae3c035320 con 0x7fae4806b740 2026-03-10T07:19:58.805 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.793+0000 7fae4c8fc640 1 -- 192.168.123.100:0/1748877265 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command([{prefix=config set, name=mgr/cephadm/allow_ptrace}] v 0) -- 0x7fae14005470 con 0x7fae4806b740 2026-03-10T07:19:58.810 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.797+0000 7fae27fff640 1 -- 192.168.123.100:0/1748877265 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/cephadm/allow_ptrace}]=0 v9) ==== 125+0+0 (secure 0 0 0) 0x7fae3c0331d0 con 0x7fae4806b740 2026-03-10T07:19:58.815 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.805+0000 7fae4c8fc640 1 -- 192.168.123.100:0/1748877265 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fae2003dd30 msgr2=0x7fae200401f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:58.815 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.805+0000 7fae4c8fc640 1 --2- 192.168.123.100:0/1748877265 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fae2003dd30 0x7fae200401f0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fae300099c0 tx=0x7fae30006eb0 comp rx=0 tx=0).stop 2026-03-10T07:19:58.815 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.805+0000 7fae4c8fc640 1 -- 192.168.123.100:0/1748877265 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae4806b740 msgr2=0x7fae481024e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:19:58.815 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.805+0000 7fae4c8fc640 1 --2- 192.168.123.100:0/1748877265 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae4806b740 0x7fae481024e0 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7fae3c004270 tx=0x7fae3c0042a0 comp rx=0 tx=0).stop 2026-03-10T07:19:58.816 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.805+0000 7fae4c8fc640 1 -- 192.168.123.100:0/1748877265 shutdown_connections 2026-03-10T07:19:58.816 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.805+0000 7fae4c8fc640 1 --2- 192.168.123.100:0/1748877265 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fae2003dd30 0x7fae200401f0 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:58.816 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.805+0000 7fae4c8fc640 1 --2- 192.168.123.100:0/1748877265 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae4806b740 0x7fae481024e0 unknown :-1 s=CLOSED pgs=77 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:19:58.816 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.805+0000 7fae4c8fc640 1 -- 192.168.123.100:0/1748877265 >> 192.168.123.100:0/1748877265 conn(0x7fae480fc7c0 msgr2=0x7fae480fd240 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:19:58.816 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.805+0000 7fae4c8fc640 1 -- 192.168.123.100:0/1748877265 
shutdown_connections 2026-03-10T07:19:58.816 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:19:58.805+0000 7fae4c8fc640 1 -- 192.168.123.100:0/1748877265 wait complete. 2026-03-10T07:19:58.870 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-10T07:19:58.870 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-10T07:19:59.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:59 vm00 bash[20701]: cluster 2026-03-10T07:19:58.689835+0000 mon.a (mon.0) 94 : cluster [DBG] mgrmap e12: y(active, since 6s) 2026-03-10T07:19:59.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:19:59 vm00 bash[20701]: audit 2026-03-10T07:19:58.802573+0000 mon.a (mon.0) 95 : audit [INF] from='client.? 192.168.123.100:0/1748877265' entity='client.admin' 2026-03-10T07:20:03.564 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.a/config 2026-03-10T07:20:03.713 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.701+0000 7f878196e640 1 -- 192.168.123.100:0/3114039812 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f877c102e30 msgr2=0x7f877c105220 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:20:03.713 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.701+0000 7f878196e640 1 --2- 192.168.123.100:0/3114039812 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f877c102e30 0x7f877c105220 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f8770009a00 tx=0x7f877002f310 comp rx=0 tx=0).stop 2026-03-10T07:20:03.713 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.701+0000 7f878196e640 1 -- 192.168.123.100:0/3114039812 shutdown_connections 2026-03-10T07:20:03.713 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.701+0000 7f878196e640 1 --2- 192.168.123.100:0/3114039812 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f877c102e30 0x7f877c105220 unknown :-1 s=CLOSED pgs=78 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:20:03.713 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.701+0000 7f878196e640 1 -- 192.168.123.100:0/3114039812 >> 192.168.123.100:0/3114039812 conn(0x7f877c0fc9d0 msgr2=0x7f877c0fedf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:20:03.714 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.701+0000 7f878196e640 1 -- 192.168.123.100:0/3114039812 shutdown_connections 2026-03-10T07:20:03.714 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.701+0000 7f878196e640 1 -- 192.168.123.100:0/3114039812 wait complete. 
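`ceph orch client-keyring set client.admin '*' --mode 0755` above asks cephadm to keep /etc/ceph/ceph.client.admin.keyring present (world-readable, per the test's mode) on every host matching the '*' placement. A quick sketch for verifying what cephadm now manages:

    # Show managed client keyrings with their placement and mode.
    sudo ceph orch client-keyring ls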
2026-03-10T07:20:03.714 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.701+0000 7f878196e640 1 Processor -- start 2026-03-10T07:20:03.714 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.701+0000 7f878196e640 1 -- start start 2026-03-10T07:20:03.714 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.705+0000 7f878196e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f877c102e30 0x7f877c196b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:20:03.714 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.705+0000 7f877affd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f877c102e30 0x7f877c196b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:20:03.714 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.705+0000 7f878196e640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f877c1029b0 con 0x7f877c102e30 2026-03-10T07:20:03.715 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.705+0000 7f877affd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f877c102e30 0x7f877c196b00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:52136/0 (socket says 192.168.123.100:52136) 2026-03-10T07:20:03.715 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.705+0000 7f877affd640 1 -- 192.168.123.100:0/397678422 learned_addr learned my addr 192.168.123.100:0/397678422 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:20:03.715 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.705+0000 7f877affd640 1 -- 192.168.123.100:0/397678422 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f877c197040 con 0x7f877c102e30 2026-03-10T07:20:03.715 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.705+0000 7f877affd640 1 --2- 192.168.123.100:0/397678422 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f877c102e30 0x7f877c196b00 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f8770004600 tx=0x7f8770004630 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:20:03.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.705+0000 7f878096c640 1 -- 192.168.123.100:0/397678422 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f877003d070 con 0x7f877c102e30 2026-03-10T07:20:03.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.705+0000 7f878096c640 1 -- 192.168.123.100:0/397678422 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f8770045070 con 0x7f877c102e30 2026-03-10T07:20:03.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.705+0000 7f878096c640 1 -- 192.168.123.100:0/397678422 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f87700403f0 con 0x7f877c102e30 2026-03-10T07:20:03.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.705+0000 7f878196e640 1 -- 192.168.123.100:0/397678422 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f877c1972d0 con 0x7f877c102e30 2026-03-10T07:20:03.716 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.705+0000 7f878196e640 1 -- 192.168.123.100:0/397678422 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f877c199fc0 con 0x7f877c102e30 2026-03-10T07:20:03.717 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.705+0000 7f878196e640 1 -- 192.168.123.100:0/397678422 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f877c1977b0 con 0x7f877c102e30 2026-03-10T07:20:03.719 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.705+0000 7f878096c640 1 -- 192.168.123.100:0/397678422 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 12) ==== 50306+0+0 (secure 0 0 0) 0x7f8770038470 con 0x7f877c102e30 2026-03-10T07:20:03.719 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.705+0000 7f878096c640 1 --2- 192.168.123.100:0/397678422 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f875003d9c0 0x7f875003fe80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:20:03.719 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.705+0000 7f878096c640 1 -- 192.168.123.100:0/397678422 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1155+0+0 (secure 0 0 0) 0x7f8770076650 con 0x7f877c102e30 2026-03-10T07:20:03.719 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.705+0000 7f877a7fc640 1 --2- 192.168.123.100:0/397678422 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f875003d9c0 0x7f875003fe80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:20:03.719 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.705+0000 7f877a7fc640 1 --2- 192.168.123.100:0/397678422 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f875003d9c0 0x7f875003fe80 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7f8764009a10 tx=0x7f8764006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:20:03.720 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.709+0000 7f878096c640 1 -- 192.168.123.100:0/397678422 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f877c1977b0 con 0x7f877c102e30 2026-03-10T07:20:03.814 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.801+0000 7f878196e640 1 -- 192.168.123.100:0/397678422 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- mgr_command(tid 0: {"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}) -- 0x7f877c1902a0 con 0x7f875003d9c0 2026-03-10T07:20:03.819 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.809+0000 7f878096c640 1 -- 192.168.123.100:0/397678422 <== mgr.14150 v2:192.168.123.100:6800/2669938860 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+0 (secure 0 0 0) 0x7f877c1902a0 con 0x7f875003d9c0 2026-03-10T07:20:03.826 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.813+0000 7f878196e640 1 -- 192.168.123.100:0/397678422 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f875003d9c0 msgr2=0x7f875003fe80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
2026-03-10T07:20:03.826 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.813+0000 7f878196e640 1 --2- 192.168.123.100:0/397678422 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f875003d9c0 0x7f875003fe80 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7f8764009a10 tx=0x7f8764006eb0 comp rx=0 tx=0).stop 2026-03-10T07:20:03.826 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.813+0000 7f878196e640 1 -- 192.168.123.100:0/397678422 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f877c102e30 msgr2=0x7f877c196b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:20:03.826 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.813+0000 7f878196e640 1 --2- 192.168.123.100:0/397678422 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f877c102e30 0x7f877c196b00 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f8770004600 tx=0x7f8770004630 comp rx=0 tx=0).stop 2026-03-10T07:20:03.826 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.817+0000 7f878196e640 1 -- 192.168.123.100:0/397678422 shutdown_connections 2026-03-10T07:20:03.826 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.817+0000 7f878196e640 1 --2- 192.168.123.100:0/397678422 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f875003d9c0 0x7f875003fe80 unknown :-1 s=CLOSED pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:20:03.826 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.817+0000 7f878196e640 1 --2- 192.168.123.100:0/397678422 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f877c102e30 0x7f877c196b00 unknown :-1 s=CLOSED pgs=79 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:20:03.826 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.817+0000 7f878196e640 1 -- 192.168.123.100:0/397678422 >> 192.168.123.100:0/397678422 conn(0x7f877c0fc9d0 msgr2=0x7f877c0fd3f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:20:03.827 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.817+0000 7f878196e640 1 -- 192.168.123.100:0/397678422 shutdown_connections 2026-03-10T07:20:03.827 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:03.817+0000 7f878196e640 1 -- 192.168.123.100:0/397678422 wait complete. 2026-03-10T07:20:03.911 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm03 2026-03-10T07:20:03.911 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T07:20:03.911 DEBUG:teuthology.orchestra.run.vm03:> dd of=/etc/ceph/ceph.conf 2026-03-10T07:20:03.914 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T07:20:03.915 DEBUG:teuthology.orchestra.run.vm03:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:20:03.961 INFO:tasks.cephadm:Adding host vm03 to orchestrator... 
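With the minimal conf and admin keyring dd'd onto vm03, the host can be handed to the orchestrator; the DEBUG line that follows runs exactly this through a one-shot cephadm shell. The bare commands, as a sketch assuming the admin keyring is in place:

    # Register the prepared host and confirm it joined the inventory.
    sudo ceph orch host add vm03
    sudo ceph orch host ls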
2026-03-10T07:20:03.961 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph orch host add vm03 2026-03-10T07:20:04.122 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:04 vm00 bash[20701]: audit 2026-03-10T07:20:03.114317+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:04.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:04 vm00 bash[20701]: audit 2026-03-10T07:20:03.114317+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:04.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:04 vm00 bash[20701]: audit 2026-03-10T07:20:03.116824+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:04.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:04 vm00 bash[20701]: audit 2026-03-10T07:20:03.116824+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:04.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:04 vm00 bash[20701]: audit 2026-03-10T07:20:03.117624+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:20:04.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:04 vm00 bash[20701]: audit 2026-03-10T07:20:03.117624+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:20:04.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:04 vm00 bash[20701]: audit 2026-03-10T07:20:03.120030+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:04.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:04 vm00 bash[20701]: audit 2026-03-10T07:20:03.120030+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:04.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:04 vm00 bash[20701]: audit 2026-03-10T07:20:03.126290+0000 mon.a (mon.0) 100 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:04.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:04 vm00 bash[20701]: audit 2026-03-10T07:20:03.126290+0000 mon.a (mon.0) 100 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:04.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:04 vm00 bash[20701]: audit 2026-03-10T07:20:03.129849+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:04.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:04 vm00 bash[20701]: audit 2026-03-10T07:20:03.129849+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:04.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:04 vm00 bash[20701]: audit 2026-03-10T07:20:03.812645+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 
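
Every orchestrator call in this run uses the same wrapper, visible in the DEBUG line above: `cephadm shell` starts a one-off container from the pinned CI image and runs a single ceph command inside it against the bootstrapped cluster. The general shape (image tag shortened to a placeholder):

    sudo cephadm --image quay.ceph.io/ceph-ci/ceph:<sha1> shell \
        -c /etc/ceph/ceph.conf \
        -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 \
        -- ceph orch host add vm03
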
2026-03-10T07:20:04.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:04 vm00 bash[20701]: audit 2026-03-10T07:20:03.813640+0000 mon.a (mon.0) 103 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:20:04.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:04 vm00 bash[20701]: audit 2026-03-10T07:20:03.814896+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:04.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:04 vm00 bash[20701]: audit 2026-03-10T07:20:03.815537+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:20:04.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:04 vm00 bash[20701]: audit 2026-03-10T07:20:03.958983+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:04.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:04 vm00 bash[20701]: audit 2026-03-10T07:20:03.963358+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:04.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:04 vm00 bash[20701]: audit 2026-03-10T07:20:03.967915+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:05.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:05 vm00 bash[20701]: audit 2026-03-10T07:20:03.809757+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:20:05.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:05 vm00 bash[20701]: cephadm 2026-03-10T07:20:03.816336+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf
2026-03-10T07:20:05.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:05 vm00 bash[20701]: cephadm 2026-03-10T07:20:03.853348+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm00:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf
2026-03-10T07:20:05.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:05 vm00 bash[20701]: cephadm 2026-03-10T07:20:03.898670+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring
2026-03-10T07:20:05.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:05 vm00 bash[20701]: cephadm 2026-03-10T07:20:03.926333+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm00:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.client.admin.keyring
2026-03-10T07:20:08.578 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.a/config
2026-03-10T07:20:08.737 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.725+0000 7f5b65693640 1 -- 192.168.123.100:0/3929780300 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b6010a650 msgr2=0x7f5b6010aa30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:20:08.737 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.725+0000 7f5b65693640 1 --2- 192.168.123.100:0/3929780300 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b6010a650 0x7f5b6010aa30 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7f5b480099b0 tx=0x7f5b4802f2b0 comp rx=0 tx=0).stop
2026-03-10T07:20:08.737 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.725+0000 7f5b65693640 1 -- 192.168.123.100:0/3929780300 shutdown_connections
2026-03-10T07:20:08.737 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.725+0000 7f5b65693640 1 --2- 192.168.123.100:0/3929780300 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0]
conn(0x7f5b6010a650 0x7f5b6010aa30 unknown :-1 s=CLOSED pgs=80 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:20:08.737 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.725+0000 7f5b65693640 1 -- 192.168.123.100:0/3929780300 >> 192.168.123.100:0/3929780300 conn(0x7f5b60100280 msgr2=0x7f5b601026a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:20:08.737 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.725+0000 7f5b65693640 1 -- 192.168.123.100:0/3929780300 shutdown_connections 2026-03-10T07:20:08.737 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.725+0000 7f5b65693640 1 -- 192.168.123.100:0/3929780300 wait complete. 2026-03-10T07:20:08.738 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.725+0000 7f5b65693640 1 Processor -- start 2026-03-10T07:20:08.738 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.725+0000 7f5b65693640 1 -- start start 2026-03-10T07:20:08.738 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.725+0000 7f5b65693640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b6010a650 0x7f5b6019b450 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:20:08.738 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.725+0000 7f5b65693640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f5b6010df70 con 0x7f5b6010a650 2026-03-10T07:20:08.738 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.729+0000 7f5b5effd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b6010a650 0x7f5b6019b450 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:20:08.738 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.729+0000 7f5b5effd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b6010a650 0x7f5b6019b450 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:35754/0 (socket says 192.168.123.100:35754) 2026-03-10T07:20:08.738 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.729+0000 7f5b5effd640 1 -- 192.168.123.100:0/3061314799 learned_addr learned my addr 192.168.123.100:0/3061314799 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:20:08.738 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.729+0000 7f5b5effd640 1 -- 192.168.123.100:0/3061314799 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5b6019b990 con 0x7f5b6010a650 2026-03-10T07:20:08.739 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.729+0000 7f5b5effd640 1 --2- 192.168.123.100:0/3061314799 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b6010a650 0x7f5b6019b450 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f5b48004290 tx=0x7f5b480042c0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:20:08.739 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.729+0000 7f5b3ffff640 1 -- 192.168.123.100:0/3061314799 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5b48038470 con 0x7f5b6010a650 2026-03-10T07:20:08.739 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.729+0000 7f5b65693640 1 -- 192.168.123.100:0/3061314799 
--> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f5b6019bc20 con 0x7f5b6010a650 2026-03-10T07:20:08.739 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.729+0000 7f5b65693640 1 -- 192.168.123.100:0/3061314799 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f5b6019c080 con 0x7f5b6010a650 2026-03-10T07:20:08.740 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.729+0000 7f5b3ffff640 1 -- 192.168.123.100:0/3061314799 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f5b48046070 con 0x7f5b6010a650 2026-03-10T07:20:08.740 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.729+0000 7f5b3ffff640 1 -- 192.168.123.100:0/3061314799 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5b48041600 con 0x7f5b6010a650 2026-03-10T07:20:08.740 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.729+0000 7f5b3ffff640 1 -- 192.168.123.100:0/3061314799 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 12) ==== 50306+0+0 (secure 0 0 0) 0x7f5b480417a0 con 0x7f5b6010a650 2026-03-10T07:20:08.740 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.729+0000 7f5b3ffff640 1 --2- 192.168.123.100:0/3061314799 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f5b3003dd80 0x7f5b30040240 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:20:08.740 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.729+0000 7f5b3ffff640 1 -- 192.168.123.100:0/3061314799 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1155+0+0 (secure 0 0 0) 0x7f5b480779c0 con 0x7f5b6010a650 2026-03-10T07:20:08.741 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.729+0000 7f5b65693640 1 -- 192.168.123.100:0/3061314799 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5b60105cc0 con 0x7f5b6010a650 2026-03-10T07:20:08.744 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.733+0000 7f5b5e7fc640 1 --2- 192.168.123.100:0/3061314799 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f5b3003dd80 0x7f5b30040240 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:20:08.744 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.733+0000 7f5b5e7fc640 1 --2- 192.168.123.100:0/3061314799 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f5b3003dd80 0x7f5b30040240 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f5b540099c0 tx=0x7f5b54006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:20:08.744 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.733+0000 7f5b3ffff640 1 -- 192.168.123.100:0/3061314799 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f5b48035320 con 0x7f5b6010a650 2026-03-10T07:20:08.847 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:08.833+0000 7f5b65693640 1 -- 192.168.123.100:0/3061314799 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- mgr_command(tid 0: {"prefix": "orch host add", "hostname": "vm03", "target": ["mon-mgr", ""]}) -- 
0x7f5b6019c7b0 con 0x7f5b3003dd80 2026-03-10T07:20:09.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:09 vm00 bash[20701]: audit 2026-03-10T07:20:08.842294+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:09.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:09 vm00 bash[20701]: audit 2026-03-10T07:20:08.842294+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:09.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:09 vm00 bash[20701]: cephadm 2026-03-10T07:20:09.403007+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm03 2026-03-10T07:20:09.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:09 vm00 bash[20701]: cephadm 2026-03-10T07:20:09.403007+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm03 2026-03-10T07:20:10.717 INFO:teuthology.orchestra.run.vm00.stdout:Added host 'vm03' with addr '192.168.123.103' 2026-03-10T07:20:10.717 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:10.705+0000 7f5b3ffff640 1 -- 192.168.123.100:0/3061314799 <== mgr.14150 v2:192.168.123.100:6800/2669938860 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+46 (secure 0 0 0) 0x7f5b6019c7b0 con 0x7f5b3003dd80 2026-03-10T07:20:10.717 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:10.705+0000 7f5b65693640 1 -- 192.168.123.100:0/3061314799 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f5b3003dd80 msgr2=0x7f5b30040240 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:20:10.717 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:10.705+0000 7f5b65693640 1 --2- 192.168.123.100:0/3061314799 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f5b3003dd80 0x7f5b30040240 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f5b540099c0 tx=0x7f5b54006eb0 comp rx=0 tx=0).stop 2026-03-10T07:20:10.717 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:10.705+0000 7f5b65693640 1 -- 192.168.123.100:0/3061314799 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b6010a650 msgr2=0x7f5b6019b450 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:20:10.717 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:10.705+0000 7f5b65693640 1 --2- 192.168.123.100:0/3061314799 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b6010a650 0x7f5b6019b450 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f5b48004290 tx=0x7f5b480042c0 comp rx=0 tx=0).stop 2026-03-10T07:20:10.718 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:10.705+0000 7f5b65693640 1 -- 192.168.123.100:0/3061314799 shutdown_connections 2026-03-10T07:20:10.718 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:10.705+0000 7f5b65693640 1 --2- 192.168.123.100:0/3061314799 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f5b3003dd80 0x7f5b30040240 unknown :-1 s=CLOSED pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:20:10.718 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:10.705+0000 7f5b65693640 1 --2- 192.168.123.100:0/3061314799 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b6010a650 0x7f5b6019b450 unknown :-1 s=CLOSED pgs=81 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:20:10.718 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:10.705+0000 7f5b65693640 1 -- 192.168.123.100:0/3061314799 >> 192.168.123.100:0/3061314799 conn(0x7f5b60100280 msgr2=0x7f5b60101c90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:20:10.718 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:10.705+0000 7f5b65693640 1 -- 192.168.123.100:0/3061314799 shutdown_connections 2026-03-10T07:20:10.718 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:10.705+0000 7f5b65693640 1 -- 192.168.123.100:0/3061314799 wait complete. 2026-03-10T07:20:10.784 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph orch host ls --format=json 2026-03-10T07:20:12.141 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:11 vm00 bash[20701]: audit 2026-03-10T07:20:10.708707+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:12.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:11 vm00 bash[20701]: audit 2026-03-10T07:20:10.708707+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:12.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:11 vm00 bash[20701]: cephadm 2026-03-10T07:20:10.709241+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm03 2026-03-10T07:20:12.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:11 vm00 bash[20701]: cephadm 2026-03-10T07:20:10.709241+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm03 2026-03-10T07:20:12.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:11 vm00 bash[20701]: audit 2026-03-10T07:20:10.709764+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:12.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:11 vm00 bash[20701]: audit 2026-03-10T07:20:10.709764+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:12.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:11 vm00 bash[20701]: audit 2026-03-10T07:20:11.020016+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:12.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:11 vm00 bash[20701]: audit 2026-03-10T07:20:11.020016+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:13.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:13 vm00 bash[20701]: cluster 2026-03-10T07:20:11.867880+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:13.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:13 vm00 bash[20701]: cluster 2026-03-10T07:20:11.867880+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:13.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:13 vm00 bash[20701]: audit 2026-03-10T07:20:12.312929+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:13.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:13 vm00 bash[20701]: 
audit 2026-03-10T07:20:12.312929+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:13.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:13 vm00 bash[20701]: audit 2026-03-10T07:20:12.905235+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:13.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:13 vm00 bash[20701]: audit 2026-03-10T07:20:12.905235+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:15.403 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.a/config 2026-03-10T07:20:15.555 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.541+0000 7f7cb51ba640 1 -- 192.168.123.100:0/2695864222 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7cb0105fa0 msgr2=0x7f7cb0075410 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:20:15.555 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.541+0000 7f7cb51ba640 1 --2- 192.168.123.100:0/2695864222 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7cb0105fa0 0x7f7cb0075410 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f7c9c009a00 tx=0x7f7c9c02f3a0 comp rx=0 tx=0).stop 2026-03-10T07:20:15.555 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.545+0000 7f7cb51ba640 1 -- 192.168.123.100:0/2695864222 shutdown_connections 2026-03-10T07:20:15.555 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.545+0000 7f7cb51ba640 1 --2- 192.168.123.100:0/2695864222 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7cb0105fa0 0x7f7cb0075410 unknown :-1 s=CLOSED pgs=82 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:20:15.555 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.545+0000 7f7cb51ba640 1 -- 192.168.123.100:0/2695864222 >> 192.168.123.100:0/2695864222 conn(0x7f7cb00fddd0 msgr2=0x7f7cb01001f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:20:15.555 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.545+0000 7f7cb51ba640 1 -- 192.168.123.100:0/2695864222 shutdown_connections 2026-03-10T07:20:15.555 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.545+0000 7f7cb51ba640 1 -- 192.168.123.100:0/2695864222 wait complete. 
2026-03-10T07:20:15.555 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.545+0000 7f7cb51ba640 1 Processor -- start 2026-03-10T07:20:15.555 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.545+0000 7f7cb51ba640 1 -- start start 2026-03-10T07:20:15.555 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.545+0000 7f7cb51ba640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7cb0105fa0 0x7f7cb0072310 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:20:15.555 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.545+0000 7f7cb51ba640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f7cb0112240 con 0x7f7cb0105fa0 2026-03-10T07:20:15.556 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.545+0000 7f7caffff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7cb0105fa0 0x7f7cb0072310 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:20:15.556 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.545+0000 7f7caffff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7cb0105fa0 0x7f7cb0072310 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:41766/0 (socket says 192.168.123.100:41766) 2026-03-10T07:20:15.556 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.545+0000 7f7caffff640 1 -- 192.168.123.100:0/1555406846 learned_addr learned my addr 192.168.123.100:0/1555406846 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:20:15.556 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.545+0000 7f7caffff640 1 -- 192.168.123.100:0/1555406846 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f7cb0072850 con 0x7f7cb0105fa0 2026-03-10T07:20:15.556 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.545+0000 7f7caffff640 1 --2- 192.168.123.100:0/1555406846 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7cb0105fa0 0x7f7cb0072310 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7f7c9c0059c0 tx=0x7f7c9c002c80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:20:15.556 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.545+0000 7f7cad7fa640 1 -- 192.168.123.100:0/1555406846 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f7c9c038470 con 0x7f7cb0105fa0 2026-03-10T07:20:15.557 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.545+0000 7f7cad7fa640 1 -- 192.168.123.100:0/1555406846 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f7c9c046070 con 0x7f7cb0105fa0 2026-03-10T07:20:15.557 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.545+0000 7f7cad7fa640 1 -- 192.168.123.100:0/1555406846 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f7c9c041590 con 0x7f7cb0105fa0 2026-03-10T07:20:15.557 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.545+0000 7f7cb51ba640 1 -- 192.168.123.100:0/1555406846 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f7cb0072ae0 con 0x7f7cb0105fa0 2026-03-10T07:20:15.558 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.545+0000 7f7cad7fa640 1 -- 192.168.123.100:0/1555406846 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 12) ==== 50306+0+0 (secure 0 0 0) 0x7f7c9c038610 con 0x7f7cb0105fa0 2026-03-10T07:20:15.558 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.545+0000 7f7cb51ba640 1 -- 192.168.123.100:0/1555406846 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f7cb0072f40 con 0x7f7cb0105fa0 2026-03-10T07:20:15.558 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.549+0000 7f7cad7fa640 1 --2- 192.168.123.100:0/1555406846 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f7c8403d970 0x7f7c8403fe30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:20:15.559 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.549+0000 7f7cad7fa640 1 -- 192.168.123.100:0/1555406846 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1155+0+0 (secure 0 0 0) 0x7f7c9c07c920 con 0x7f7cb0105fa0 2026-03-10T07:20:15.559 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.549+0000 7f7caf7fe640 1 --2- 192.168.123.100:0/1555406846 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f7c8403d970 0x7f7c8403fe30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:20:15.559 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.549+0000 7f7cb51ba640 1 -- 192.168.123.100:0/1555406846 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f7cb0076b50 con 0x7f7cb0105fa0 2026-03-10T07:20:15.559 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.549+0000 7f7caf7fe640 1 --2- 192.168.123.100:0/1555406846 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f7c8403d970 0x7f7c8403fe30 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f7ca00099c0 tx=0x7f7ca0006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:20:15.562 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.553+0000 7f7cad7fa640 1 -- 192.168.123.100:0/1555406846 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f7c9c035da0 con 0x7f7cb0105fa0 2026-03-10T07:20:15.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:15 vm00 bash[20701]: cluster 2026-03-10T07:20:13.868143+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:15.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:15 vm00 bash[20701]: cluster 2026-03-10T07:20:13.868143+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:15.670 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.657+0000 7f7cb51ba640 1 -- 192.168.123.100:0/1555406846 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- mgr_command(tid 0: {"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}) -- 0x7f7cb0075490 con 0x7f7c8403d970 2026-03-10T07:20:15.672 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.661+0000 7f7cad7fa640 1 -- 192.168.123.100:0/1555406846 <== mgr.14150 v2:192.168.123.100:6800/2669938860 1 ==== 
mgr_command_reply(tid 0: 0 ) ==== 8+0+155 (secure 0 0 0) 0x7f7cb0075490 con 0x7f7c8403d970 2026-03-10T07:20:15.672 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T07:20:15.672 INFO:teuthology.orchestra.run.vm00.stdout:[{"addr": "192.168.123.100", "hostname": "vm00", "labels": [], "status": ""}, {"addr": "192.168.123.103", "hostname": "vm03", "labels": [], "status": ""}] 2026-03-10T07:20:15.675 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.665+0000 7f7cb51ba640 1 -- 192.168.123.100:0/1555406846 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f7c8403d970 msgr2=0x7f7c8403fe30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:20:15.675 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.665+0000 7f7cb51ba640 1 --2- 192.168.123.100:0/1555406846 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f7c8403d970 0x7f7c8403fe30 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f7ca00099c0 tx=0x7f7ca0006eb0 comp rx=0 tx=0).stop 2026-03-10T07:20:15.675 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.665+0000 7f7cb51ba640 1 -- 192.168.123.100:0/1555406846 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7cb0105fa0 msgr2=0x7f7cb0072310 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:20:15.675 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.665+0000 7f7cb51ba640 1 --2- 192.168.123.100:0/1555406846 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7cb0105fa0 0x7f7cb0072310 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7f7c9c0059c0 tx=0x7f7c9c002c80 comp rx=0 tx=0).stop 2026-03-10T07:20:15.676 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.665+0000 7f7cb51ba640 1 -- 192.168.123.100:0/1555406846 shutdown_connections 2026-03-10T07:20:15.676 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.665+0000 7f7cb51ba640 1 --2- 192.168.123.100:0/1555406846 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f7c8403d970 0x7f7c8403fe30 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:20:15.676 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.665+0000 7f7cb51ba640 1 --2- 192.168.123.100:0/1555406846 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7cb0105fa0 0x7f7cb0072310 unknown :-1 s=CLOSED pgs=83 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:20:15.676 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.665+0000 7f7cb51ba640 1 -- 192.168.123.100:0/1555406846 >> 192.168.123.100:0/1555406846 conn(0x7f7cb00fddd0 msgr2=0x7f7cb0105500 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:20:15.676 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.665+0000 7f7cb51ba640 1 -- 192.168.123.100:0/1555406846 shutdown_connections 2026-03-10T07:20:15.676 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:15.665+0000 7f7cb51ba640 1 -- 192.168.123.100:0/1555406846 wait complete. 
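
The stdout line above is the reply to `ceph orch host ls --format=json`: one object per managed host, with addr, hostname, labels, and status. A small sketch for pulling fields out of that output, assuming jq is available on the node (when -c/-k are omitted, cephadm infers the config, as the "Inferring config" lines show):

    sudo cephadm shell -- ceph orch host ls --format=json | jq -r '.[].hostname'
    # expected here: vm00 and vm03
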
2026-03-10T07:20:15.734 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-10T07:20:15.734 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph osd crush tunables default 2026-03-10T07:20:16.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.637976+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.637976+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.640234+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.640234+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.643184+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.643184+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.645764+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.645764+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.646397+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.646397+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.647147+0000 mon.a (mon.0) 119 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.647147+0000 mon.a (mon.0) 119 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.647607+0000 mon.a (mon.0) 120 
: audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.647607+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: cephadm 2026-03-10T07:20:15.648325+0000 mgr.y (mgr.14150) 20 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: cephadm 2026-03-10T07:20:15.648325+0000 mgr.y (mgr.14150) 20 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.666001+0000 mgr.y (mgr.14150) 21 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.666001+0000 mgr.y (mgr.14150) 21 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: cephadm 2026-03-10T07:20:15.683428+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: cephadm 2026-03-10T07:20:15.683428+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: cephadm 2026-03-10T07:20:15.733754+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: cephadm 2026-03-10T07:20:15.733754+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: cephadm 2026-03-10T07:20:15.766780+0000 mgr.y (mgr.14150) 24 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.client.admin.keyring 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: cephadm 2026-03-10T07:20:15.766780+0000 mgr.y (mgr.14150) 24 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.client.admin.keyring 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.801388+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.801388+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.803998+0000 mon.a 
(mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.803998+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.806715+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: audit 2026-03-10T07:20:15.806715+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: cluster 2026-03-10T07:20:15.868432+0000 mgr.y (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:16.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:16 vm00 bash[20701]: cluster 2026-03-10T07:20:15.868432+0000 mgr.y (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:19.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:18 vm00 bash[20701]: cluster 2026-03-10T07:20:17.868673+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:19.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:18 vm00 bash[20701]: cluster 2026-03-10T07:20:17.868673+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:19.412 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.a/config 2026-03-10T07:20:19.567 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f143523c640 1 -- 192.168.123.100:0/3948049419 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1430075410 msgr2=0x7f14300757f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:20:19.567 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f143523c640 1 --2- 192.168.123.100:0/3948049419 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1430075410 0x7f14300757f0 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7f1420009a00 tx=0x7f142002f3a0 comp rx=0 tx=0).stop 2026-03-10T07:20:19.567 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f143523c640 1 -- 192.168.123.100:0/3948049419 shutdown_connections 2026-03-10T07:20:19.567 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f143523c640 1 --2- 192.168.123.100:0/3948049419 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1430075410 0x7f14300757f0 unknown :-1 s=CLOSED pgs=84 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:20:19.567 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f143523c640 1 -- 192.168.123.100:0/3948049419 >> 192.168.123.100:0/3948049419 conn(0x7f14300fde00 msgr2=0x7f1430100220 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:20:19.567 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f143523c640 1 -- 192.168.123.100:0/3948049419 shutdown_connections 2026-03-10T07:20:19.567 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f143523c640 1 -- 192.168.123.100:0/3948049419 wait complete. 
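
The `ceph osd crush tunables default` call issued a few lines up pins the CRUSH tunables to the current default profile rather than whatever bootstrap left behind; the mon acknowledges it just below with "adjusted tunables profile to default". The command takes one of the named profiles from the CRUSH documentation, e.g.:

    # reset tunables to this release's defaults
    sudo cephadm shell -- ceph osd crush tunables default
    # or pin a legacy profile for older clients
    sudo cephadm shell -- ceph osd crush tunables hammer
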
2026-03-10T07:20:19.567 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f143523c640 1 Processor -- start 2026-03-10T07:20:19.568 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f143523c640 1 -- start start 2026-03-10T07:20:19.568 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f143523c640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1430075410 0x7f143019b2e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:20:19.568 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f143523c640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f143010dd70 con 0x7f1430075410 2026-03-10T07:20:19.568 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f142ffff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1430075410 0x7f143019b2e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:20:19.568 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f142ffff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1430075410 0x7f143019b2e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:41792/0 (socket says 192.168.123.100:41792) 2026-03-10T07:20:19.568 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f142ffff640 1 -- 192.168.123.100:0/4238596270 learned_addr learned my addr 192.168.123.100:0/4238596270 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:20:19.568 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f142ffff640 1 -- 192.168.123.100:0/4238596270 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f143019b820 con 0x7f1430075410 2026-03-10T07:20:19.568 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f142ffff640 1 --2- 192.168.123.100:0/4238596270 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1430075410 0x7f143019b2e0 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7f14200059c0 tx=0x7f1420002c80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:20:19.568 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f142d7fa640 1 -- 192.168.123.100:0/4238596270 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f1420038470 con 0x7f1430075410 2026-03-10T07:20:19.568 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f142d7fa640 1 -- 192.168.123.100:0/4238596270 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f1420046070 con 0x7f1430075410 2026-03-10T07:20:19.569 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f143523c640 1 -- 192.168.123.100:0/4238596270 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f143019bab0 con 0x7f1430075410 2026-03-10T07:20:19.569 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f142d7fa640 1 -- 192.168.123.100:0/4238596270 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f1420041590 con 0x7f1430075410 2026-03-10T07:20:19.569 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f143523c640 1 -- 192.168.123.100:0/4238596270 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f143019bf10 con 0x7f1430075410
2026-03-10T07:20:19.570 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f142d7fa640 1 -- 192.168.123.100:0/4238596270 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 12) ==== 50306+0+0 (secure 0 0 0) 0x7f1420038610 con 0x7f1430075410
2026-03-10T07:20:19.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f142d7fa640 1 --2- 192.168.123.100:0/4238596270 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f140403d9c0 0x7f140403fe80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:20:19.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f142d7fa640 1 -- 192.168.123.100:0/4238596270 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1155+0+0 (secure 0 0 0) 0x7f1420078a70 con 0x7f1430075410
2026-03-10T07:20:19.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.557+0000 7f143523c640 1 -- 192.168.123.100:0/4238596270 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f1430076b50 con 0x7f1430075410
2026-03-10T07:20:19.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.561+0000 7f142f7fe640 1 --2- 192.168.123.100:0/4238596270 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f140403d9c0 0x7f140403fe80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:20:19.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.561+0000 7f142d7fa640 1 -- 192.168.123.100:0/4238596270 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f1420038930 con 0x7f1430075410
2026-03-10T07:20:19.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.561+0000 7f142f7fe640 1 --2- 192.168.123.100:0/4238596270 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f140403d9c0 0x7f140403fe80 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f141c0099c0 tx=0x7f141c006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:20:19.668 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.657+0000 7f143523c640 1 -- 192.168.123.100:0/4238596270 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd crush tunables", "profile": "default"} v 0) -- 0x7f1430075870 con 0x7f1430075410
2026-03-10T07:20:19.931 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.921+0000 7f142d7fa640 1 -- 192.168.123.100:0/4238596270 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd crush tunables", "profile": "default"}]=0 adjusted tunables profile to default v4) ==== 124+0+0 (secure 0 0 0) 0x7f1420037cd0 con 0x7f1430075410
2026-03-10T07:20:19.932 INFO:teuthology.orchestra.run.vm00.stderr:adjusted tunables profile to default
2026-03-10T07:20:19.934 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.925+0000 7f143523c640 1 -- 192.168.123.100:0/4238596270 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f140403d9c0 msgr2=0x7f140403fe80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:20:19.934 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.925+0000 7f143523c640 1 --2- 192.168.123.100:0/4238596270 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f140403d9c0 0x7f140403fe80 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f141c0099c0 tx=0x7f141c006eb0 comp rx=0 tx=0).stop
2026-03-10T07:20:19.935 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.925+0000 7f143523c640 1 -- 192.168.123.100:0/4238596270 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1430075410 msgr2=0x7f143019b2e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:20:19.935 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.925+0000 7f143523c640 1 --2- 192.168.123.100:0/4238596270 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1430075410 0x7f143019b2e0 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7f14200059c0 tx=0x7f1420002c80 comp rx=0 tx=0).stop
2026-03-10T07:20:19.935 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.925+0000 7f143523c640 1 -- 192.168.123.100:0/4238596270 shutdown_connections
2026-03-10T07:20:19.935 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.925+0000 7f143523c640 1 --2- 192.168.123.100:0/4238596270 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f140403d9c0 0x7f140403fe80 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:19.935 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.925+0000 7f143523c640 1 --2- 192.168.123.100:0/4238596270 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1430075410 0x7f143019b2e0 unknown :-1 s=CLOSED pgs=85 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:19.935 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.925+0000 7f143523c640 1 -- 192.168.123.100:0/4238596270 >> 192.168.123.100:0/4238596270 conn(0x7f14300fde00 msgr2=0x7f143010bb30 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:20:19.935 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.925+0000 7f143523c640 1 -- 192.168.123.100:0/4238596270 shutdown_connections
2026-03-10T07:20:19.935 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:19.925+0000 7f143523c640 1 -- 192.168.123.100:0/4238596270 wait complete.
2026-03-10T07:20:19.990 INFO:tasks.cephadm:Adding mon.a on vm00
2026-03-10T07:20:19.990 INFO:tasks.cephadm:Adding mon.c on vm00
2026-03-10T07:20:19.990 INFO:tasks.cephadm:Adding mon.b on vm03
2026-03-10T07:20:19.990 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph orch apply mon '3;vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm03:192.168.123.103=b'
2026-03-10T07:20:20.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:19 vm00 bash[20701]: audit 2026-03-10T07:20:19.663626+0000 mon.a (mon.0) 124 : audit [INF] from='client.? 192.168.123.100:0/4238596270' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch
2026-03-10T07:20:20.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:19 vm00 bash[20701]: audit 2026-03-10T07:20:19.663626+0000 mon.a (mon.0) 124 : audit [INF] from='client.? 192.168.123.100:0/4238596270' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch
2026-03-10T07:20:21.101 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf
2026-03-10T07:20:21.253 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.243+0000 7f302737f640 1 -- 192.168.123.103:0/1241482893 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3020101070 msgr2=0x7f3020101450 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:20:21.253 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.243+0000 7f302737f640 1 --2- 192.168.123.103:0/1241482893 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3020101070 0x7f3020101450 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f30080099b0 tx=0x7f300802f2b0 comp rx=0 tx=0).stop
2026-03-10T07:20:21.253 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.243+0000 7f302737f640 1 -- 192.168.123.103:0/1241482893 shutdown_connections
2026-03-10T07:20:21.253 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.243+0000 7f302737f640 1 --2- 192.168.123.103:0/1241482893 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3020101070 0x7f3020101450 unknown :-1 s=CLOSED pgs=86 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:21.253 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.243+0000 7f302737f640 1 -- 192.168.123.103:0/1241482893 >> 192.168.123.103:0/1241482893 conn(0x7f30200fcec0 msgr2=0x7f30200ff2e0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:20:21.253 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.243+0000 7f302737f640 1 -- 192.168.123.103:0/1241482893 shutdown_connections
2026-03-10T07:20:21.253 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.243+0000 7f302737f640 1 -- 192.168.123.103:0/1241482893 wait complete.
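Every ceph CLI call in this run is executed inside `cephadm shell`, pinned to the CI image and to the cluster fsid, as in the `ceph orch apply mon` invocation above. A minimal sketch of that wrapper pattern in Python (the `cephadm_shell` helper is hypothetical, not teuthology's actual implementation):

    import subprocess

    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "534d9c8a-1c51-11f1-ac87-d1fb9a119953"

    def cephadm_shell(ceph_cmd):
        """Run one ceph CLI command inside `cephadm shell` on the pinned image/fsid."""
        argv = [
            "sudo", "cephadm", "--image", IMAGE, "shell",
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            "--fsid", FSID,
            "--",
        ] + list(ceph_cmd)
        return subprocess.check_output(argv, text=True)

Pinning the image keeps every shell on the exact build under test (sha1 e911bdeb...), and the explicit fsid disambiguates if more than one cluster exists on the host.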
2026-03-10T07:20:21.254 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.243+0000 7f302737f640 1 Processor -- start
2026-03-10T07:20:21.254 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.247+0000 7f302737f640 1 -- start start
2026-03-10T07:20:21.254 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.247+0000 7f302737f640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3020101070 0x7f302019b490 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:20:21.255 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.247+0000 7f302737f640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f302010df70 con 0x7f3020101070
2026-03-10T07:20:21.255 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.247+0000 7f30250f4640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3020101070 0x7f302019b490 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:20:21.255 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.247+0000 7f30250f4640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3020101070 0x7f302019b490 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.103:47128/0 (socket says 192.168.123.103:47128)
2026-03-10T07:20:21.255 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.247+0000 7f30250f4640 1 -- 192.168.123.103:0/1212123669 learned_addr learned my addr 192.168.123.103:0/1212123669 (peer_addr_for_me v2:192.168.123.103:0/0)
2026-03-10T07:20:21.255 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.247+0000 7f30250f4640 1 -- 192.168.123.103:0/1212123669 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f302019b9d0 con 0x7f3020101070
2026-03-10T07:20:21.255 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.247+0000 7f30250f4640 1 --2- 192.168.123.103:0/1212123669 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3020101070 0x7f302019b490 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7f3008004290 tx=0x7f30080042c0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:20:21.256 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.247+0000 7f30167fc640 1 -- 192.168.123.103:0/1212123669 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3008038470 con 0x7f3020101070
2026-03-10T07:20:21.256 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.247+0000 7f302737f640 1 -- 192.168.123.103:0/1212123669 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f302019bc60 con 0x7f3020101070
2026-03-10T07:20:21.256 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.247+0000 7f30167fc640 1 -- 192.168.123.103:0/1212123669 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f3008046070 con 0x7f3020101070
2026-03-10T07:20:21.256 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.247+0000 7f30167fc640 1 -- 192.168.123.103:0/1212123669 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3008041600 con 0x7f3020101070
2026-03-10T07:20:21.256 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.247+0000 7f302737f640 1 -- 192.168.123.103:0/1212123669 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f302019c0c0 con 0x7f3020101070
2026-03-10T07:20:21.257 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.247+0000 7f30167fc640 1 -- 192.168.123.103:0/1212123669 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 12) ==== 50306+0+0 (secure 0 0 0) 0x7f3008038610 con 0x7f3020101070
2026-03-10T07:20:21.257 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.247+0000 7f302737f640 1 -- 192.168.123.103:0/1212123669 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f302006b250 con 0x7f3020101070
2026-03-10T07:20:21.257 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.247+0000 7f30167fc640 1 --2- 192.168.123.103:0/1212123669 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2ffc03dd30 0x7f2ffc0401f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:20:21.257 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.247+0000 7f30167fc640 1 -- 192.168.123.103:0/1212123669 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 1155+0+0 (secure 0 0 0) 0x7f3008076c40 con 0x7f3020101070
2026-03-10T07:20:21.262 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.251+0000 7f30248f3640 1 --2- 192.168.123.103:0/1212123669 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2ffc03dd30 0x7f2ffc0401f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:20:21.262 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.251+0000 7f30167fc640 1 -- 192.168.123.103:0/1212123669 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f3008038c00 con 0x7f3020101070
2026-03-10T07:20:21.262 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.255+0000 7f30248f3640 1 --2- 192.168.123.103:0/1212123669 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2ffc03dd30 0x7f2ffc0401f0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f30100099c0 tx=0x7f3010006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:20:21.357 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.347+0000 7f302737f640 1 -- 192.168.123.103:0/1212123669 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mon", "placement": "3;vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm03:192.168.123.103=b", "target": ["mon-mgr", ""]}) -- 0x7f302019c7f0 con 0x7f2ffc03dd30
2026-03-10T07:20:21.363 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.355+0000 7f30167fc640 1 -- 192.168.123.103:0/1212123669 <== mgr.14150 v2:192.168.123.100:6800/2669938860 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (secure 0 0 0) 0x7f302019c7f0 con 0x7f2ffc03dd30
2026-03-10T07:20:21.363 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled mon update...
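The placement argument sent to the orchestrator above packs a daemon count and explicit `host[:address][=name]` entries into one semicolon-separated string. A small illustrative parser (hypothetical, for illustration only; not ceph's actual PlacementSpec code) showing how such a string decomposes:

    def parse_placement(spec):
        """Split "<count>;host[:addr][=name];..." into count + host entries."""
        count, hosts = None, []
        for token in spec.split(";"):
            if token.isdigit():
                count = int(token)                   # bare integer: daemon count
                continue
            name = addr = None
            if "=" in token:
                token, name = token.rsplit("=", 1)   # trailing "=a" names the daemon
            if ":" in token:
                host, addr = token.split(":", 1)     # "host:addr" pins the mon address(es)
            else:
                host = token
            hosts.append((host, addr, name))
        return count, hosts

    print(parse_placement("3;vm00:192.168.123.100=a;"
                          "vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;"
                          "vm03:192.168.123.103=b"))
    # (3, [('vm00', '192.168.123.100', 'a'),
    #      ('vm00', '[v2:192.168.123.100:3301,v1:192.168.123.100:6790]', 'c'),
    #      ('vm03', '192.168.123.103', 'b')])

The mgr echoes the same spec back below when it logs "Saving service mon spec with placement ...;count:3".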
2026-03-10T07:20:21.366 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.355+0000 7f302737f640 1 -- 192.168.123.103:0/1212123669 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2ffc03dd30 msgr2=0x7f2ffc0401f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:20:21.366 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.355+0000 7f302737f640 1 --2- 192.168.123.103:0/1212123669 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2ffc03dd30 0x7f2ffc0401f0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f30100099c0 tx=0x7f3010006eb0 comp rx=0 tx=0).stop
2026-03-10T07:20:21.366 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.355+0000 7f302737f640 1 -- 192.168.123.103:0/1212123669 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3020101070 msgr2=0x7f302019b490 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:20:21.366 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.355+0000 7f302737f640 1 --2- 192.168.123.103:0/1212123669 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3020101070 0x7f302019b490 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7f3008004290 tx=0x7f30080042c0 comp rx=0 tx=0).stop
2026-03-10T07:20:21.366 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.359+0000 7f302737f640 1 -- 192.168.123.103:0/1212123669 shutdown_connections
2026-03-10T07:20:21.366 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.359+0000 7f302737f640 1 --2- 192.168.123.103:0/1212123669 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2ffc03dd30 0x7f2ffc0401f0 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:21.366 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.359+0000 7f302737f640 1 --2- 192.168.123.103:0/1212123669 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3020101070 0x7f302019b490 unknown :-1 s=CLOSED pgs=87 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:21.366 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.359+0000 7f302737f640 1 -- 192.168.123.103:0/1212123669 >> 192.168.123.103:0/1212123669 conn(0x7f30200fcec0 msgr2=0x7f30200fe2b0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:20:21.366 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.359+0000 7f302737f640 1 -- 192.168.123.103:0/1212123669 shutdown_connections
2026-03-10T07:20:21.367 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:21.359+0000 7f302737f640 1 -- 192.168.123.103:0/1212123669 wait complete.
2026-03-10T07:20:21.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:20 vm00 bash[20701]: cluster 2026-03-10T07:20:19.868871+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:21.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:20 vm00 bash[20701]: cluster 2026-03-10T07:20:19.868871+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:21.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:20 vm00 bash[20701]: audit 2026-03-10T07:20:19.926069+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.100:0/4238596270' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished
2026-03-10T07:20:21.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:20 vm00 bash[20701]: audit 2026-03-10T07:20:19.926069+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.100:0/4238596270' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished
2026-03-10T07:20:21.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:20 vm00 bash[20701]: cluster 2026-03-10T07:20:19.927568+0000 mon.a (mon.0) 126 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T07:20:21.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:20 vm00 bash[20701]: cluster 2026-03-10T07:20:19.927568+0000 mon.a (mon.0) 126 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T07:20:21.469 DEBUG:teuthology.orchestra.run.vm00:mon.c> sudo journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mon.c.service
2026-03-10T07:20:21.470 DEBUG:teuthology.orchestra.run.vm03:mon.b> sudo journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mon.b.service
2026-03-10T07:20:21.471 INFO:tasks.cephadm:Waiting for 3 mons in monmap...
2026-03-10T07:20:21.471 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph mon dump -f json
2026-03-10T07:20:22.625 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.b/config
2026-03-10T07:20:22.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:22 vm00 bash[20701]: audit 2026-03-10T07:20:21.353081+0000 mgr.y (mgr.14150) 28 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm03:192.168.123.103=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:20:22.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:22 vm00 bash[20701]: audit 2026-03-10T07:20:21.353081+0000 mgr.y (mgr.14150) 28 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm03:192.168.123.103=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:20:22.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:22 vm00 bash[20701]: cephadm 2026-03-10T07:20:21.354663+0000 mgr.y (mgr.14150) 29 : cephadm [INF] Saving service mon spec with placement vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm03:192.168.123.103=b;count:3
2026-03-10T07:20:22.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:22 vm00 bash[20701]: cephadm 2026-03-10T07:20:21.354663+0000 mgr.y (mgr.14150) 29 : cephadm [INF] Saving service mon spec with placement vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm03:192.168.123.103=b;count:3
2026-03-10T07:20:22.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:22 vm00 bash[20701]: audit 2026-03-10T07:20:21.357638+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:22.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:22 vm00 bash[20701]: audit 2026-03-10T07:20:21.357638+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:22.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:22 vm00 bash[20701]: audit 2026-03-10T07:20:21.358319+0000 mon.a (mon.0) 128 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:20:22.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:22 vm00 bash[20701]: audit 2026-03-10T07:20:21.358319+0000 mon.a (mon.0) 128 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:20:22.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:22 vm00 bash[20701]: audit 2026-03-10T07:20:21.359311+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:22.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:22 vm00 bash[20701]: audit 2026-03-10T07:20:21.359311+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:22.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:22 vm00 bash[20701]: audit 2026-03-10T07:20:21.359697+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:20:22.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:22 vm00 bash[20701]: audit 2026-03-10T07:20:21.359697+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:20:22.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:22 vm00 bash[20701]: audit 2026-03-10T07:20:21.362646+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:22.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:22 vm00 bash[20701]: audit 2026-03-10T07:20:21.362646+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:22.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:22 vm00 bash[20701]: audit 2026-03-10T07:20:21.363762+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T07:20:22.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:22 vm00 bash[20701]: audit 2026-03-10T07:20:21.363762+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T07:20:22.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:22 vm00 bash[20701]: audit 2026-03-10T07:20:21.364119+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:22.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:22 vm00 bash[20701]: audit 2026-03-10T07:20:21.364119+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:22.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:22 vm00 bash[20701]: cephadm 2026-03-10T07:20:21.364637+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Deploying daemon mon.b on vm03
2026-03-10T07:20:22.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:22 vm00 bash[20701]: cephadm 2026-03-10T07:20:21.364637+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Deploying daemon mon.b on vm03
2026-03-10T07:20:22.867 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:22 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:20:22.955 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.947+0000 7fb416332640 1 -- 192.168.123.103:0/1990032980 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb41010e850 msgr2=0x7fb41010ec30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:20:22.956 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.947+0000 7fb416332640 1 --2- 192.168.123.103:0/1990032980 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb41010e850 0x7fb41010ec30 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7fb4000099b0 tx=0x7fb40002f2b0 comp rx=0 tx=0).stop
2026-03-10T07:20:22.956 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.947+0000 7fb416332640 1 -- 192.168.123.103:0/1990032980 shutdown_connections
2026-03-10T07:20:22.956 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.947+0000 7fb416332640 1 --2- 192.168.123.103:0/1990032980 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb41010e850 0x7fb41010ec30 unknown :-1 s=CLOSED pgs=88 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:22.956 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.947+0000 7fb416332640 1 -- 192.168.123.103:0/1990032980 >> 192.168.123.103:0/1990032980 conn(0x7fb41006d290 msgr2=0x7fb41006d6a0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:20:22.956 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.947+0000 7fb416332640 1 -- 192.168.123.103:0/1990032980 shutdown_connections
2026-03-10T07:20:22.956 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.947+0000 7fb416332640 1 -- 192.168.123.103:0/1990032980 wait complete.
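With the spec applied, the task polls `ceph mon dump -f json` until the monmap reports all three requested mons ("Waiting for 3 mons in monmap..." above). A minimal sketch of such a poll, assuming the hypothetical `cephadm_shell` helper sketched earlier:

    import json
    import time

    def wait_for_mons(expected, timeout=300.0, interval=5.0):
        """Poll the monmap until it lists `expected` mons or the timeout expires."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            monmap = json.loads(cephadm_shell(["ceph", "mon", "dump", "-f", "json"]))
            names = sorted(m["name"] for m in monmap.get("mons", []))
            if len(names) >= expected:
                return names        # e.g. ['a', 'b', 'c'] once mon.b and mon.c join
            time.sleep(interval)
        raise TimeoutError("monmap still short of %d mons" % expected)

The journalctl followers started above (one per new mon unit, ceph-<fsid>@mon.<id>.service) stream each daemon's startup into this log while the poll runs.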
2026-03-10T07:20:22.956 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.947+0000 7fb416332640 1 Processor -- start
2026-03-10T07:20:22.956 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.947+0000 7fb416332640 1 -- start start
2026-03-10T07:20:22.956 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.947+0000 7fb416332640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb41010e850 0x7fb4101a83b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:20:22.956 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.947+0000 7fb416332640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fb41011ae20 con 0x7fb41010e850
2026-03-10T07:20:22.956 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.947+0000 7fb40ffff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb41010e850 0x7fb4101a83b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:20:22.956 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.947+0000 7fb40ffff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb41010e850 0x7fb4101a83b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.103:47146/0 (socket says 192.168.123.103:47146)
2026-03-10T07:20:22.957 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.947+0000 7fb40ffff640 1 -- 192.168.123.103:0/3120653669 learned_addr learned my addr 192.168.123.103:0/3120653669 (peer_addr_for_me v2:192.168.123.103:0/0)
2026-03-10T07:20:22.957 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.947+0000 7fb40ffff640 1 -- 192.168.123.103:0/3120653669 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb4101a88f0 con 0x7fb41010e850
2026-03-10T07:20:22.958 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.951+0000 7fb40ffff640 1 --2- 192.168.123.103:0/3120653669 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb41010e850 0x7fb4101a83b0 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7fb400009ae0 tx=0x7fb400004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:20:22.958 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.951+0000 7fb40d7fa640 1 -- 192.168.123.103:0/3120653669 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fb4000045b0 con 0x7fb41010e850
2026-03-10T07:20:22.960 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.951+0000 7fb416332640 1 -- 192.168.123.103:0/3120653669 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fb4101a8b80 con 0x7fb41010e850
2026-03-10T07:20:22.960 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.951+0000 7fb416332640 1 -- 192.168.123.103:0/3120653669 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fb4101a8fe0 con 0x7fb41010e850
2026-03-10T07:20:22.960 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.951+0000 7fb40d7fa640 1 -- 192.168.123.103:0/3120653669 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fb400033040 con 0x7fb41010e850
2026-03-10T07:20:22.960 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.951+0000 7fb40d7fa640 1 -- 192.168.123.103:0/3120653669 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fb400031850 con 0x7fb41010e850
2026-03-10T07:20:22.961 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.951+0000 7fb40d7fa640 1 -- 192.168.123.103:0/3120653669 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 12) ==== 50306+0+0 (secure 0 0 0) 0x7fb400038470 con 0x7fb41010e850
2026-03-10T07:20:22.961 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.951+0000 7fb416332640 1 -- 192.168.123.103:0/3120653669 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fb410074f20 con 0x7fb41010e850
2026-03-10T07:20:22.961 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.951+0000 7fb40d7fa640 1 --2- 192.168.123.103:0/3120653669 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fb3d803da10 0x7fb3d803fed0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:20:22.961 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.951+0000 7fb40d7fa640 1 -- 192.168.123.103:0/3120653669 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 1155+0+0 (secure 0 0 0) 0x7fb40003e070 con 0x7fb41010e850
2026-03-10T07:20:22.966 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.955+0000 7fb40f7fe640 1 --2- 192.168.123.103:0/3120653669 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fb3d803da10 0x7fb3d803fed0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:20:22.966 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.955+0000 7fb40d7fa640 1 -- 192.168.123.103:0/3120653669 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fb400036370 con 0x7fb41010e850
2026-03-10T07:20:22.967 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:22.959+0000 7fb40f7fe640 1 --2- 192.168.123.103:0/3120653669 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fb3d803da10 0x7fb3d803fed0 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7fb40800ad30 tx=0x7fb4080093f0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:20:23.053 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:23.043+0000 7fb40d7fa640 1 -- 192.168.123.103:0/3120653669 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_map magic: 0 ==== 309+0+0 (secure 0 0 0) 0x7fb400049330 con 0x7fb41010e850
2026-03-10T07:20:23.122 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:23.111+0000 7fb416332640 1 -- 192.168.123.103:0/3120653669 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) -- 0x7fb41010ec30 con 0x7fb41010e850
2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:22 vm03 systemd[1]: Started Ceph mon.b for 534d9c8a-1c51-11f1-ac87-d1fb9a119953.
2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:22 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:22 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:22 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 0 pidfile_write: ignore empty --pid-file 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:22 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 0 load: jerasure load: lrc 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:22 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:22 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Git sha 0 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:22 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:22 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: DB SUMMARY 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:22 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: DB Session ID: 7DTHRKIBT72ZG72YC235 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:22 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: CURRENT file: CURRENT 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:22 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:22 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:22 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-b/store.db dir, Total Num: 0, files: 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:22 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-b/store.db: 000004.log size: 511 ; 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:22 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.error_if_exists: 0 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:22 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.create_if_missing: 0 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:22 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:22 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:22 vm03 
bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.env: 0x5556f6943dc0 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.info_log: 0x55571d708700 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.statistics: (nil) 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.use_fsync: 0 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.use_direct_reads: 0 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-10T07:20:23.274 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.db_log_dir: 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.wal_dir: 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T07:20:23.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.write_buffer_manager: 0x55571d70d900 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 
rocksdb: Options.enable_pipelined_write: 0 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.unordered_write: 0 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.row_cache: None 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.wal_filter: None 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.two_write_queues: 0 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.wal_compression: 0 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.atomic_flush: 0 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T07:20:23.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: 
debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_open_files: -1 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: 
Options.bytes_per_sync: 0 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Compression algorithms supported: 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: kZSTD supported: 0 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: kXpressCompression supported: 0 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: kZlibCompression supported: 1 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000005 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 
2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.merge_operator: 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compaction_filter: None 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55571d708640) 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cache_index_and_filter_blocks: 1 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: pin_top_level_index_and_filter: 1 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: index_type: 0 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: data_block_index_type: 0 2026-03-10T07:20:23.276 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: index_shortening: 1 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: data_block_hash_table_util_ratio: 0.750000 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: checksum: 4 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: no_block_cache: 0 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: block_cache: 0x55571d72f350 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: block_cache_name: BinnedLRUCache 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: block_cache_options: 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: capacity : 536870912 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: num_shard_bits : 4 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: strict_capacity_limit : 0 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: high_pri_pool_ratio: 0.000 2026-03-10T07:20:23.277 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: block_cache_compressed: (nil) 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: persistent_cache: (nil) 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: block_size: 4096 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: block_size_deviation: 10 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: block_restart_interval: 16 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: index_block_restart_interval: 1 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: metadata_block_size: 4096 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: partition_filters: 0 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: use_delta_encoding: 1 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: filter_policy: bloomfilter 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: whole_key_filtering: 1 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: verify_compression: 0 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: read_amp_bytes_per_bit: 0 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: format_version: 5 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: enable_index_compression: 1 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: block_align: 0 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: max_auto_readahead_size: 262144 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: prepopulate_block_cache: 0 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: initial_auto_readahead_size: 8192 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: num_file_reads_for_auto_readahead: 2 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compression: NoCompression 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.num_levels: 7 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T07:20:23.277 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T07:20:23.278 
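The dump of `Options.*` key/value pairs continues for dozens of entries. When comparing the settings of two daemons it can help to fold them into a dict; a rough sketch, assuming the journal has already been split back into one entry per line (`parse_rocksdb_options` is a hypothetical helper, not part of teuthology or Ceph):

```python
import re

# Hypothetical helper: collect the "Options.<name>: <value>" pairs that
# rocksdb prints at startup, as in the journalctl entries above.
OPT_RE = re.compile(r"rocksdb: (Options\.[\w.\[\]]+):\s*(.*?)\s*$")

def parse_rocksdb_options(lines):
    opts = {}
    for line in lines:
        m = OPT_RE.search(line)
        if m:
            opts[m.group(1)] = m.group(2)
    return opts

sample = [
    "... rocksdb: Options.write_buffer_size: 33554432",
    "... rocksdb: Options.compression: NoCompression",
]
print(parse_rocksdb_options(sample))
# -> {'Options.write_buffer_size': '33554432',
#     'Options.compression': 'NoCompression'}
```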
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 
7f729f579d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 
rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.bloom_locality: 0 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-10T07:20:23.278 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.ttl: 2592000 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T07:20:23.279 
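Among the options dumped above are Options.max_bytes_for_level_base: 268435456, Options.max_bytes_for_level_multiplier: 10 and Options.num_levels: 7. Under the classic static sizing rule, the target for level n (n >= 1) is base * multiplier**(n-1); note the dump also shows level_compaction_dynamic_level_bytes: 1, under which RocksDB redistributes these targets, so the ladder below is only the back-of-envelope view:

```python
# Back-of-envelope level sizing from the options dumped above
# (illustrative; dynamic level bytes changes the real layout).
base = 268435456          # max_bytes_for_level_base = 256 MiB
mult = 10.0               # max_bytes_for_level_multiplier
num_levels = 7
for level in range(1, num_levels):
    cap = base * mult ** (level - 1)
    print(f"L{level}: {cap / 2**30:.2f} GiB")
# -> L1: 0.25 GiB ... L6: 25000.00 GiB
```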
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.enable_blob_files: false 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.min_blob_size: 0 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.991+0000 7f729f579d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.995+0000 7f729f579d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.995+0000 7f729f579d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.995+0000 7f729f579d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 8649cc54-eea4-4e27-9480-2f0a796d0ab2 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.995+0000 7f729f579d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773127222998866, "job": 1, "event": "recovery_started", "wal_files": [4]} 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.995+0000 7f729f579d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 2026-03-10T07:20:23.279 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.995+0000 7f729f579d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773127223000420, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773127222, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8649cc54-eea4-4e27-9480-2f0a796d0ab2", "db_session_id": "7DTHRKIBT72ZG72YC235", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.995+0000 7f729f579d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773127223000481, "job": 1, "event": "recovery_finished"} 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:22.995+0000 7f729f579d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 10 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:23.003+0000 7f729f579d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-b/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:23.003+0000 7f729f579d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55571d730e00 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:23.003+0000 7f729f579d80 4 rocksdb: DB pointer 0x55571d846000 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:23.003+0000 7f729f579d80 0 mon.b does not exist in monmap, will attempt to join an existing cluster 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:23.003+0000 7f729f579d80 0 using public_addr v2:192.168.123.103:0/0 -> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:23.003+0000 7f7295343640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 
bash[23382]: debug 2026-03-10T07:20:23.003+0000 7f7295343640 4 rocksdb: [db/db_impl/db_impl.cc:1111]
2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: ** DB Stats **
2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: ** Compaction Stats [default] **
2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: L0 1/0 1.60 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.0 0.00 0.00 1 0.002 0 0 0.0 0.0
2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: Sum 1/0 1.60 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.0 0.00 0.00 1 0.002 0 0 0.0 0.0
2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.0 0.00 0.00 1 0.002 0 0 0.0 0.0
2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: ** Compaction Stats [default] **
2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.00 0.00 1 0.002 0 0 0.0 0.0
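The Compaction Stats rows above pair positionally with the header line, with one wrinkle: the Size field spans two whitespace-separated tokens ("1.60 KB"). A sketch of recovering one row into a dict (illustrative, whitespace-split only; real RocksDB output is fixed-width):

```python
# Illustrative parser for one row of the "Compaction Stats" table above.
# Re-join the two-token Size field before zipping with the header.
header = ("Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) "
          "Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) "
          "CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop "
          "Rblob(GB) Wblob(GB)").split()
row = ("L0 1/0 1.60 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.0 "
       "0.00 0.00 1 0.002 0 0 0.0 0.0").split()
row[2:4] = [" ".join(row[2:4])]          # "1.60 KB" -> one field
stats = dict(zip(header, row))
print(stats["Level"], stats["Size"], stats["W-Amp"])  # -> L0 1.60 KB 1.0
```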
2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: Flush(GB): cumulative 0.000, interval 0.000 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: AddFile(Total Files): cumulative 0, interval 0 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: AddFile(Keys): cumulative 0, interval 0 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: Cumulative compaction: 0.00 GB write, 0.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: Interval compaction: 0.00 GB write, 0.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: Block cache BinnedLRUCache@0x55571d72f350#7 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.4e-05 secs_since: 0 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%) 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: ** File Read Latency Histogram By Level [default] ** 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:23.003+0000 7f729f579d80 0 starting mon.b rank -1 at public addrs [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] at bind addrs [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon_data /var/lib/ceph/mon/ceph-b fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:20:23.279 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:23.003+0000 7f729f579d80 1 mon.b@-1(???) 
e0 preinit fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:23.031+0000 7f7298349640 0 mon.b@-1(synchronizing).mds e1 new map 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:23.031+0000 7f7298349640 0 mon.b@-1(synchronizing).mds e1 print_map 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: e1 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: btime 2026-03-10T07:19:29:469789+0000 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: legacy client fscid: -1 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: No filesystems configured 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:23.031+0000 7f7298349640 1 mon.b@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:23.031+0000 7f7298349640 1 mon.b@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:23.031+0000 7f7298349640 1 mon.b@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:23.031+0000 7f7298349640 1 mon.b@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:23.031+0000 7f7298349640 1 mon.b@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:23.031+0000 7f7298349640 1 mon.b@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:23.031+0000 7f7298349640 0 mon.b@-1(synchronizing).osd e4 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:23.031+0000 7f7298349640 0 mon.b@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:23.031+0000 7f7298349640 0 mon.b@-1(synchronizing).osd e4 
crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:23.031+0000 7f7298349640 0 mon.b@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:29.470311+0000 mon.a (mon.0) 0 : cluster [INF] mkfs 534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:29.470311+0000 mon.a (mon.0) 0 : cluster [INF] mkfs 534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:29.464238+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:29.464238+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:30.756185+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:30.756185+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T07:20:23.280 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:30.756210+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:30.756210+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:30.756213+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:30.756213+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:30.756215+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-10T07:19:27.999189+0000 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:30.756215+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-10T07:19:27.999189+0000 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:30.756222+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-10T07:19:27.999189+0000 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:30.756222+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-10T07:19:27.999189+0000 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:30.756225+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 
2026-03-10T07:19:30.756225+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:30.756227+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:30.756227+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:30.756230+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:30.756230+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:30.756454+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:30.756454+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:30.756465+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:30.756465+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:30.756929+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:30.756929+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:30.997564+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.100:0/1827183523' entity='client.admin' 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:30.997564+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.100:0/1827183523' entity='client.admin' 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:31.636954+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.100:0/3986233816' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:31.636954+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.100:0/3986233816' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:33.908523+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 
192.168.123.100:0/2084913738' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:33.908523+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.100:0/2084913738' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:35.024145+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:35.024145+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:35.030139+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00611743s) 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:35.030139+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00611743s) 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:35.036618+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:35.036618+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:35.036995+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:35.036995+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:35.037983+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:35.037983+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:35.038678+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:35.038678+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 
bash[23382]: audit 2026-03-10T07:19:35.039190+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:35.039190+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:35.047116+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:35.047116+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:35.088146+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:35.088146+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:35.091384+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' 2026-03-10T07:20:23.281 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:35.091384+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:35.091738+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:35.091738+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:35.094694+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:35.094694+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:35.099324+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:35.099324+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.100:0/374434072' entity='mgr.y' 2026-03-10T07:20:23.282 
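Each audit entry embeds the dispatched command as a JSON array after cmd=, so the stream of mgr.y metadata and config rm calls above can be summarized by command prefix. A rough sketch, again assuming one journal entry per line (`audit_prefixes` is a hypothetical helper):

```python
import json
import re

# Hypothetical helper: summarize mon audit entries by the command
# prefix each one dispatched, as in the audit lines above.
CMD_RE = re.compile(r"cmd=(\[.*?\]): dispatch")

def audit_prefixes(lines):
    for line in lines:
        for blob in CMD_RE.findall(line):
            for cmd in json.loads(blob):
                yield cmd.get("prefix")

line = ("audit ... entity='mgr.y' cmd=[{\"prefix\": \"mon metadata\", "
        "\"id\": \"a\"}]: dispatch")
print(list(audit_prefixes([line])))   # -> ['mon metadata']
```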
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:36.061876+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.03785s) 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:36.061876+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.03785s) 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:36.385665+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.100:0/3766816708' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:36.385665+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.100:0/3766816708' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:36.669382+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.100:0/2353862342' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:36.669382+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.100:0/2353862342' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:36.963997+0000 mon.a (mon.0) 31 : audit [INF] from='client.? 192.168.123.100:0/4069365900' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:36.963997+0000 mon.a (mon.0) 31 : audit [INF] from='client.? 192.168.123.100:0/4069365900' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:37.325136+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.100:0/4069365900' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:37.325136+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.100:0/4069365900' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:37.329840+0000 mon.a (mon.0) 33 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:37.329840+0000 mon.a (mon.0) 33 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:37.662312+0000 mon.a (mon.0) 34 : audit [DBG] from='client.? 
192.168.123.100:0/1536585838' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:37.662312+0000 mon.a (mon.0) 34 : audit [DBG] from='client.? 192.168.123.100:0/1536585838' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:40.638032+0000 mon.a (mon.0) 35 : cluster [INF] Active manager daemon y restarted 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:40.638032+0000 mon.a (mon.0) 35 : cluster [INF] Active manager daemon y restarted 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:40.638383+0000 mon.a (mon.0) 36 : cluster [INF] Activating manager daemon y 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:40.638383+0000 mon.a (mon.0) 36 : cluster [INF] Activating manager daemon y 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:40.642870+0000 mon.a (mon.0) 37 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:40.642870+0000 mon.a (mon.0) 37 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:40.642943+0000 mon.a (mon.0) 38 : cluster [DBG] mgrmap e5: y(active, starting, since 0.00472576s) 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:40.642943+0000 mon.a (mon.0) 38 : cluster [DBG] mgrmap e5: y(active, starting, since 0.00472576s) 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:40.645814+0000 mon.a (mon.0) 39 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:40.645814+0000 mon.a (mon.0) 39 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:40.646210+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:40.646210+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:40.646793+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 
2026-03-10T07:19:40.646793+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:40.646931+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:40.646931+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:40.647095+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:40.647095+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:40.653456+0000 mon.a (mon.0) 44 : cluster [INF] Manager daemon y is now available 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:40.653456+0000 mon.a (mon.0) 44 : cluster [INF] Manager daemon y is now available 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:40.662316+0000 mon.a (mon.0) 45 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:40.662316+0000 mon.a (mon.0) 45 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:40.665578+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:40.665578+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:40.677739+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:40.677739+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:40.679085+0000 mon.a (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:23.282 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:40.679085+0000 mon.a (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:40.681079+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:23.282 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:40.681079+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:40.660275+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:40.660275+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:40.694601+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:40.694601+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:41.118591+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:41.118591+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:41.121524+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:41.121524+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:41.652321+0000 mon.a (mon.0) 53 : cluster [DBG] mgrmap e6: y(active, since 1.0141s) 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:41.652321+0000 mon.a (mon.0) 53 : cluster [DBG] mgrmap e6: y(active, since 1.0141s) 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:41.641762+0000 mgr.y (mgr.14118) 2 : cephadm [INF] [10/Mar/2026:07:19:41] ENGINE Bus STARTING 2026-03-10T07:20:23.283 
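The ENGINE Bus STARTING entry marks the cephadm mgr module bringing up its CherryPy server; the entries just below show it serving on http://192.168.123.100:8765 and https://192.168.123.100:7150. A quick reachability probe from the test driver might look like the following (illustrative only; the addresses and ports are the ones this log reports):

```python
import socket

# Illustrative check that the two endpoints the cephadm mgr module
# reports are listening. Not a teuthology task; plain socket probe.
for host, port in [("192.168.123.100", 8765), ("192.168.123.100", 7150)]:
    try:
        with socket.create_connection((host, port), timeout=2):
            print(f"{host}:{port} reachable")
    except OSError as exc:
        print(f"{host}:{port} not reachable: {exc}")
```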
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:41.641762+0000 mgr.y (mgr.14118) 2 : cephadm [INF] [10/Mar/2026:07:19:41] ENGINE Bus STARTING 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:41.653098+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:41.653098+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:41.657577+0000 mgr.y (mgr.14118) 4 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:41.657577+0000 mgr.y (mgr.14118) 4 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:41.743167+0000 mgr.y (mgr.14118) 5 : cephadm [INF] [10/Mar/2026:07:19:41] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:41.743167+0000 mgr.y (mgr.14118) 5 : cephadm [INF] [10/Mar/2026:07:19:41] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:41.852692+0000 mgr.y (mgr.14118) 6 : cephadm [INF] [10/Mar/2026:07:19:41] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:41.852692+0000 mgr.y (mgr.14118) 6 : cephadm [INF] [10/Mar/2026:07:19:41] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:41.852777+0000 mgr.y (mgr.14118) 7 : cephadm [INF] [10/Mar/2026:07:19:41] ENGINE Bus STARTED 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:41.852777+0000 mgr.y (mgr.14118) 7 : cephadm [INF] [10/Mar/2026:07:19:41] ENGINE Bus STARTED 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:41.855366+0000 mon.a (mon.0) 54 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:41.855366+0000 mon.a (mon.0) 54 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:41.855980+0000 mgr.y (mgr.14118) 8 : cephadm [INF] [10/Mar/2026:07:19:41] ENGINE Client ('192.168.123.100', 56074) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed 
(EOF) (_ssl.c:1147)') 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:41.855980+0000 mgr.y (mgr.14118) 8 : cephadm [INF] [10/Mar/2026:07:19:41] ENGINE Client ('192.168.123.100', 56074) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:41.978812+0000 mgr.y (mgr.14118) 9 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:41.978812+0000 mgr.y (mgr.14118) 9 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:42.055510+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:42.055510+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:42.065236+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:42.065236+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:42.504687+0000 mgr.y (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:42.504687+0000 mgr.y (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:43.062726+0000 mon.a (mon.0) 57 : cluster [DBG] mgrmap e7: y(active, since 2s) 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:43.062726+0000 mon.a (mon.0) 57 : cluster [DBG] mgrmap e7: y(active, since 2s) 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:43.263574+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:43.263574+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' 
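The records above cover the first seconds of the cephadm mgr module: it comes up, the test harness points the orchestrator CLI at it and fixes the SSH user it will operate as. Reconstructed from the cmd=[...] payloads in the audit entries (the commands themselves are not captured verbatim in this log), the corresponding ceph invocations would be roughly:
    ceph orch set backend cephadm
    ceph cephadm set-user root
    ceph cephadm generate-key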
entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:43.263810+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key... 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:43.263810+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key... 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:43.281493+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:43.281493+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:43.283919+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:43.283919+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:43.564658+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:43.564658+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:43.836016+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm00", "addr": "192.168.123.100", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:43.836016+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm00", "addr": "192.168.123.100", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:44.511428+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm00 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:44.511428+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm00 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:45.819023+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:45.819023+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.283 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:45.820354+0000 mon.a (mon.0) 61 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:45.820354+0000 mon.a (mon.0) 61 : audit [DBG] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:45.819848+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm00 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:45.819848+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm00 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:46.178931+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:46.178931+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:46.180070+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:46.180070+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:46.183718+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:46.183718+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:46.474799+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:46.474799+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:46.769705+0000 mon.a (mon.0) 64 : audit [INF] from='client.? 192.168.123.100:0/1899807121' entity='client.admin' 2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:46.769705+0000 mon.a (mon.0) 64 : audit [INF] from='client.? 
192.168.123.100:0/1899807121' entity='client.admin' 2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:47.072790+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.100:0/2058834360' entity='client.admin' 2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:47.072790+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.100:0/2058834360' entity='client.admin' 2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:46.471537+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:46.471537+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:46.472256+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:46.472256+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:47.422830+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:47.422830+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:47.468499+0000 mon.a (mon.0) 67 : audit [INF] from='client.? 192.168.123.100:0/1170239651' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:47.468499+0000 mon.a (mon.0) 67 : audit [INF] from='client.? 192.168.123.100:0/1170239651' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:47.746383+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:47.746383+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.100:0/3853501618' entity='mgr.y' 2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:48.468487+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:48.468487+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.100:0/1170239651' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:48.470864+0000 mon.a (mon.0) 70 : cluster [DBG] mgrmap e8: y(active, since 7s)
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:48.839897+0000 mon.a (mon.0) 71 : audit [DBG] from='client.? 192.168.123.100:0/75861386' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:51.853818+0000 mon.a (mon.0) 72 : cluster [INF] Active manager daemon y restarted
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:51.854240+0000 mon.a (mon.0) 73 : cluster [INF] Activating manager daemon y
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:51.859953+0000 mon.a (mon.0) 74 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:51.860151+0000 mon.a (mon.0) 75 : cluster [DBG] mgrmap e9: y(active, starting, since 0.00600237s)
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:51.863538+0000 mon.a (mon.0) 76 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:51.864404+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:51.865394+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:51.865855+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:51.866295+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:51.872787+0000 mon.a (mon.0) 81 : cluster [INF] Manager daemon y is now available
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:51.895007+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:51.905194+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:51.923636+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:52.863775+0000 mon.a (mon.0) 85 : cluster [DBG] mgrmap e10: y(active, since 1.00962s)
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:52.967172+0000 mgr.y (mgr.14150) 3 : cephadm [INF] [10/Mar/2026:07:19:52] ENGINE Bus STARTING
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:53.068730+0000 mgr.y (mgr.14150) 4 : cephadm [INF] [10/Mar/2026:07:19:53] ENGINE Serving on http://192.168.123.100:8765
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:53.168121+0000 mgr.y (mgr.14150) 5 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:53.177651+0000 mgr.y (mgr.14150) 6 : cephadm [INF] [10/Mar/2026:07:19:53] ENGINE Serving on https://192.168.123.100:7150
2026-03-10T07:20:23.284 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:53.177813+0000 mgr.y (mgr.14150) 7 : cephadm [INF] [10/Mar/2026:07:19:53] ENGINE Bus STARTED
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:19:53.178117+0000 mgr.y (mgr.14150) 8 : cephadm [INF] [10/Mar/2026:07:19:53] ENGINE Client ('192.168.123.100', 47470) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:53.232624+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:53.235245+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:53.538451+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:53.691419+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
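The "Active manager daemon y restarted" line in this span is expected: enabling the dashboard module respawns the active mgr (mgr.14118 becomes mgr.14150), and the second CherryPy ENGINE startup that follows is the same daemon coming back. The dashboard setup the audit entries record would look roughly like this from the CLI (the password file path is a placeholder, and the exact flag spellings can vary by release):
    ceph dashboard create-self-signed-cert
    ceph dashboard ac-user-create admin -i /path/to/password-file administrator --force-password --pwd_update_required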
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:54.038598+0000 mon.a (mon.0) 89 : audit [DBG] from='client.? 192.168.123.100:0/2147109762' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:54.378990+0000 mon.a (mon.0) 90 : audit [INF] from='client.? 192.168.123.100:0/4202400202' entity='client.admin'
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:54.695612+0000 mon.a (mon.0) 91 : cluster [DBG] mgrmap e11: y(active, since 2s)
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:56.682654+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:57.337393+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:19:58.689835+0000 mon.a (mon.0) 94 : cluster [DBG] mgrmap e12: y(active, since 6s)
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:19:58.802573+0000 mon.a (mon.0) 95 : audit [INF] from='client.? 192.168.123.100:0/1748877265' entity='client.admin'
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:03.114317+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:03.116824+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:03.117624+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:03.120030+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:03.126290+0000 mon.a (mon.0) 100 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:03.129849+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:03.812645+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:03.813640+0000 mon.a (mon.0) 103 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:03.814896+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:03.815537+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:03.958983+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:03.963358+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:03.967915+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:03.809757+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch
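This block is cephadm refreshing per-host configuration for vm00: it drops any stale osd_memory_target override for the host, regenerates a minimal ceph.conf, fetches the admin keyring, and registers the keyring for distribution. The client-keyring registration seen in the last audit payload corresponds to approximately:
    ceph orch client-keyring set client.admin '*' --mode 0755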
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:20:03.816336+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:20:03.853348+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm00:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:20:03.898670+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:20:03.926333+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm00:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.client.admin.keyring
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:08.842294+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:20:09.403007+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm03
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:10.708707+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:20:10.709241+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm03
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:10.709764+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:11.020016+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:20:11.867880+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:12.312929+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:12.905235+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:20:13.868143+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:15.637976+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:15.640234+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:15.643184+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:15.645764+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:15.646397+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:15.647147+0000 mon.a (mon.0) 119 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:15.647607+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
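The same host-registration and config-refresh sequence then repeats for the second node; each "Updating vm03:..." line is cephadm pushing the minimal conf and admin keyring both to /etc/ceph and to the per-fsid config directory on that host. The triggering command, per the audit payload (no explicit addr this time), would be roughly:
    ceph orch host add vm03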
2026-03-10T07:20:15.648325+0000 mgr.y (mgr.14150) 20 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:20:15.648325+0000 mgr.y (mgr.14150) 20 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T07:20:23.286 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:15.666001+0000 mgr.y (mgr.14150) 21 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:15.666001+0000 mgr.y (mgr.14150) 21 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:20:15.683428+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf 2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:20:15.683428+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf 2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:20:15.733754+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:20:15.733754+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:20:15.766780+0000 mgr.y (mgr.14150) 24 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.client.admin.keyring 2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:20:15.766780+0000 mgr.y (mgr.14150) 24 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.client.admin.keyring 2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:15.801388+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:15.801388+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:15.803998+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:15.803998+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:15.806715+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 
2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:20:15.868432+0000 mgr.y (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:20:17.868673+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:19.663626+0000 mon.a (mon.0) 124 : audit [INF] from='client.? 192.168.123.100:0/4238596270' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch
2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:20:19.868871+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:19.926069+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.100:0/4238596270' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished
2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cluster 2026-03-10T07:20:19.927568+0000 mon.a (mon.0) 126 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:21.353081+0000 mgr.y (mgr.14150) 28 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm03:192.168.123.103=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:20:21.354663+0000 mgr.y (mgr.14150) 29 : cephadm [INF] Saving service mon spec with placement vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm03:192.168.123.103=b;count:3
2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:21.357638+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:21.358319+0000 mon.a (mon.0) 128 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:21.359311+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:21.359697+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:21.362646+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:21.363762+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: audit 2026-03-10T07:20:21.364119+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: cephadm 2026-03-10T07:20:21.364637+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Deploying daemon mon.b on vm03
2026-03-10T07:20:23.287 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:23 vm03 bash[23382]: debug 2026-03-10T07:20:23.035+0000 7f7298349640 1 mon.b@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3
2026-03-10T07:20:23.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:23 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:20:23.707 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:20:23 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:20:24.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:23 vm00 bash[28005]: debug 2026-03-10T07:20:23.905+0000 7f310f95c640 1 mon.c@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3
2026-03-10T07:20:28.067 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:28.059+0000 7fb40d7fa640 1 -- 192.168.123.103:0/3120653669 <== mon.0 v2:192.168.123.100:3300/0 8 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 2 v2) ==== 95+0+1031 (secure 0 0 0) 0x7fb400038790 con 0x7fb41010e850
2026-03-10T07:20:28.067 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T07:20:28.067 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":2,"fsid":"534d9c8a-1c51-11f1-ac87-d1fb9a119953","modified":"2026-03-10T07:20:23.045655Z","created":"2026-03-10T07:19:27.999189Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:3300","nonce":0},{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]}
2026-03-10T07:20:28.067 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 2
2026-03-10T07:20:28.070 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:28.059+0000 7fb416332640 1 -- 192.168.123.103:0/3120653669 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fb3d803da10 msgr2=0x7fb3d803fed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:20:28.070 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:28.059+0000 7fb416332640 1 --2- 192.168.123.103:0/3120653669 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fb3d803da10 0x7fb3d803fed0 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7fb40800ad30 tx=0x7fb4080093f0 comp rx=0 tx=0).stop
2026-03-10T07:20:28.070 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:28.059+0000 7fb416332640 1 -- 192.168.123.103:0/3120653669 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb41010e850 msgr2=0x7fb4101a83b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:20:28.070 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:28.059+0000 7fb416332640 1 --2- 192.168.123.103:0/3120653669 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb41010e850 0x7fb4101a83b0 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7fb400009ae0 tx=0x7fb400004290 comp rx=0 tx=0).stop
2026-03-10T07:20:28.070 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:28.063+0000 7fb416332640 1 -- 192.168.123.103:0/3120653669 shutdown_connections
2026-03-10T07:20:28.070 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:28.063+0000 7fb416332640 1 --2- 192.168.123.103:0/3120653669 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fb3d803da10 0x7fb3d803fed0 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:28.070 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:28.063+0000 7fb416332640 1 --2- 192.168.123.103:0/3120653669 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb41010e850 0x7fb4101a83b0 unknown :-1 s=CLOSED pgs=89 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:28.070 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:28.063+0000 7fb416332640 1 -- 192.168.123.103:0/3120653669 >> 192.168.123.103:0/3120653669 conn(0x7fb41006d290 msgr2=0x7fb41010dca0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:20:28.071 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:28.063+0000 7fb416332640 1 -- 192.168.123.103:0/3120653669 shutdown_connections
2026-03-10T07:20:28.071 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:28.063+0000 7fb416332640 1 -- 192.168.123.103:0/3120653669 wait complete.
2026-03-10T07:20:28.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: cephadm 2026-03-10T07:20:22.899885+0000 mgr.y (mgr.14150) 32 : cephadm [INF] Deploying daemon mon.c on vm00
2026-03-10T07:20:28.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: audit 2026-03-10T07:20:23.049055+0000 mon.a (mon.0) 140 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T07:20:28.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: cluster 2026-03-10T07:20:23.049657+0000 mon.a (mon.0) 141 : cluster [INF] mon.a calling monitor election
2026-03-10T07:20:28.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: audit 2026-03-10T07:20:23.051500+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:28.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: audit 2026-03-10T07:20:23.117802+0000 mon.a (mon.0) 143 : audit [DBG] from='client.? 192.168.123.103:0/3120653669' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T07:20:28.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: cluster 2026-03-10T07:20:23.869220+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:28.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: audit 2026-03-10T07:20:23.916063+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:28.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: audit 2026-03-10T07:20:24.044308+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:28.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: audit 2026-03-10T07:20:24.915765+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:28.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: audit 2026-03-10T07:20:25.044631+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:28.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: cluster 2026-03-10T07:20:25.047600+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election
2026-03-10T07:20:28.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: cluster 2026-03-10T07:20:25.869518+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:28.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: audit 2026-03-10T07:20:25.916121+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:28.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: audit 2026-03-10T07:20:26.044630+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:28.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: audit 2026-03-10T07:20:26.915955+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:28.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: audit 2026-03-10T07:20:27.044643+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:28.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: audit 2026-03-10T07:20:27.916098+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:28.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: audit 2026-03-10T07:20:28.044745+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:28.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: cluster 2026-03-10T07:20:28.054775+0000 mon.a (mon.0) 154 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1)
2026-03-10T07:20:28.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: cluster 2026-03-10T07:20:28.059496+0000 mon.a (mon.0) 155 : cluster [DBG] monmap epoch 2
2026-03-10T07:20:28.393 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: cluster 2026-03-10T07:20:28.059556+0000 mon.a (mon.0) 156 : cluster [DBG] fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953
2026-03-10T07:20:28.393 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: cluster 2026-03-10T07:20:28.059596+0000 mon.a (mon.0) 157 : cluster [DBG] last_changed 2026-03-10T07:20:23.045655+0000
2026-03-10T07:20:28.393 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: cluster 2026-03-10T07:20:28.059635+0000 mon.a (mon.0) 158 : cluster [DBG] created 2026-03-10T07:19:27.999189+0000
2026-03-10T07:20:28.393 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: cluster 2026-03-10T07:20:28.059674+0000 mon.a (mon.0) 159 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T07:20:28.393 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: cluster 2026-03-10T07:20:28.059714+0000 mon.a (mon.0) 160 : cluster [DBG] election_strategy: 1
2026-03-10T07:20:28.393 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: cluster 2026-03-10T07:20:28.059753+0000 mon.a (mon.0) 161 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a
2026-03-10T07:20:28.393 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: cluster 2026-03-10T07:20:28.059798+0000 mon.a (mon.0) 162 : cluster [DBG] 1: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.b
2026-03-10T07:20:28.393 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: cluster 2026-03-10T07:20:28.060293+0000 mon.a (mon.0) 163 : cluster [DBG] fsmap
2026-03-10T07:20:28.393 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: cluster 2026-03-10T07:20:28.060374+0000 mon.a (mon.0) 164 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T07:20:28.393 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: cluster 2026-03-10T07:20:28.060625+0000 mon.a (mon.0) 165 : cluster [DBG] mgrmap e12: y(active, since 36s)
2026-03-10T07:20:28.393 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: cluster 2026-03-10T07:20:28.060898+0000 mon.a (mon.0) 166 : cluster [INF] overall HEALTH_OK
2026-03-10T07:20:28.393 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: audit 2026-03-10T07:20:28.069032+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:28.393 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: audit 2026-03-10T07:20:28.073806+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:28.393 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: audit 2026-03-10T07:20:28.078656+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:28.393 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: audit 2026-03-10T07:20:28.083309+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:28.393 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:28 vm00 bash[20701]: audit 2026-03-10T07:20:28.093416+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:20:28.523 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: cephadm 2026-03-10T07:20:22.899885+0000 mgr.y (mgr.14150) 32 : cephadm [INF] Deploying daemon mon.c on vm00
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: audit 2026-03-10T07:20:23.049055+0000 mon.a (mon.0) 140 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: cluster 2026-03-10T07:20:23.049657+0000 mon.a (mon.0) 141 : cluster [INF] mon.a calling monitor election
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: audit 2026-03-10T07:20:23.051500+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: audit 2026-03-10T07:20:23.117802+0000 mon.a (mon.0) 143 : audit [DBG] from='client.? 192.168.123.103:0/3120653669' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: cluster 2026-03-10T07:20:23.869220+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: audit 2026-03-10T07:20:23.916063+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: audit 2026-03-10T07:20:24.044308+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: audit 2026-03-10T07:20:24.915765+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: audit 2026-03-10T07:20:25.044631+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: cluster 2026-03-10T07:20:25.047600+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: cluster 2026-03-10T07:20:25.869518+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: audit 2026-03-10T07:20:25.916121+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: audit 2026-03-10T07:20:26.044630+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: audit 2026-03-10T07:20:26.915955+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: audit 2026-03-10T07:20:27.044643+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: audit 2026-03-10T07:20:27.916098+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: audit 2026-03-10T07:20:28.044745+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: cluster 2026-03-10T07:20:28.054775+0000 mon.a (mon.0) 154 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1)
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: cluster 2026-03-10T07:20:28.059496+0000 mon.a (mon.0) 155 : cluster [DBG] monmap epoch 2
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: cluster 2026-03-10T07:20:28.059556+0000 mon.a (mon.0) 156 : cluster [DBG] fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: cluster 2026-03-10T07:20:28.059596+0000 mon.a (mon.0) 157 : cluster [DBG] last_changed 2026-03-10T07:20:23.045655+0000
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: cluster 2026-03-10T07:20:28.059635+0000 mon.a (mon.0) 158 : cluster [DBG] created 2026-03-10T07:19:27.999189+0000
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: cluster 2026-03-10T07:20:28.059674+0000 mon.a (mon.0) 159 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: cluster 2026-03-10T07:20:28.059714+0000 mon.a (mon.0) 160 : cluster [DBG] election_strategy: 1
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: cluster 2026-03-10T07:20:28.059753+0000 mon.a (mon.0) 161 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: cluster 2026-03-10T07:20:28.059798+0000 mon.a (mon.0) 162 : cluster [DBG] 1: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.b
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: cluster 2026-03-10T07:20:28.060293+0000 mon.a (mon.0) 163 : cluster [DBG] fsmap
2026-03-10T07:20:28.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: cluster 2026-03-10T07:20:28.060374+0000 mon.a (mon.0) 164 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T07:20:28.525 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: cluster 2026-03-10T07:20:28.060625+0000 mon.a (mon.0) 165 : cluster [DBG] mgrmap e12: y(active, since 36s)
2026-03-10T07:20:28.525 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: cluster 2026-03-10T07:20:28.060898+0000 mon.a (mon.0) 166 : cluster [INF] overall HEALTH_OK
2026-03-10T07:20:28.525 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: audit 2026-03-10T07:20:28.069032+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:28.525 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: audit 2026-03-10T07:20:28.073806+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:28.525 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: audit 2026-03-10T07:20:28.078656+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:28.525 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: audit 2026-03-10T07:20:28.083309+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:28.525 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:28 vm03 bash[23382]: audit 2026-03-10T07:20:28.093416+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:20:29.144 INFO:tasks.cephadm:Waiting for 3 mons in monmap...
2026-03-10T07:20:29.144 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph mon dump -f json
2026-03-10T07:20:29.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:29 vm00 bash[20701]: cluster 2026-03-10T07:20:27.869734+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:29.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:29 vm00 bash[20701]: audit 2026-03-10T07:20:28.916012+0000 mon.a (mon.0) 172 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:29.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:29 vm00 bash[20701]: audit 2026-03-10T07:20:29.044718+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:29.523 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:29 vm03 bash[23382]: cluster 2026-03-10T07:20:27.869734+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:29.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:29 vm03 bash[23382]: audit 2026-03-10T07:20:28.916012+0000 mon.a (mon.0) 172 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T07:20:29.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:29 vm03 bash[23382]: audit 2026-03-10T07:20:29.044718+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T07:20:29.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:29 vm03 bash[23382]: audit 2026-03-10T07:20:29.044718+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T07:20:30.391 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:20:30 vm00 bash[20971]: debug 2026-03-10T07:20:30.041+0000 7f525d991640 -1 mgr.server handle_report got status from non-daemon mon.b 2026-03-10T07:20:32.884 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.b/config 2026-03-10T07:20:35.273 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:29.869911+0000 mgr.y (mgr.14150) 36 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:29.869911+0000 mgr.y (mgr.14150) 36 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:29.944585+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:29.944585+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:29.944662+0000 mon.a (mon.0) 176 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:29.944662+0000 mon.a (mon.0) 176 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:29.944711+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:29.944711+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:29.944781+0000 mon.a (mon.0) 178 : cluster [INF] mon.a calling monitor election 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:29.944781+0000 mon.a (mon.0) 178 : cluster [INF] mon.a calling monitor election 2026-03-10T07:20:35.274 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:29.947023+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:29.947023+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:30.916534+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:30.916534+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:31.870128+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:31.870128+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:31.916186+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:31.916186+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:31.918791+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:31.918791+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:32.917574+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:32.917574+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:33.870293+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:33.870293+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:33.916261+0000 mon.a (mon.0) 182 : audit [DBG] 
from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:33.916261+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:34.916556+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:34.916556+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.949592+0000 mon.a (mon.0) 184 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.949592+0000 mon.a (mon.0) 184 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.957167+0000 mon.a (mon.0) 185 : cluster [DBG] monmap epoch 3 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.957167+0000 mon.a (mon.0) 185 : cluster [DBG] monmap epoch 3 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.958349+0000 mon.a (mon.0) 186 : cluster [DBG] fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.958349+0000 mon.a (mon.0) 186 : cluster [DBG] fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.958366+0000 mon.a (mon.0) 187 : cluster [DBG] last_changed 2026-03-10T07:20:29.917791+0000 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.958366+0000 mon.a (mon.0) 187 : cluster [DBG] last_changed 2026-03-10T07:20:29.917791+0000 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.958379+0000 mon.a (mon.0) 188 : cluster [DBG] created 2026-03-10T07:19:27.999189+0000 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.958379+0000 mon.a (mon.0) 188 : cluster [DBG] created 2026-03-10T07:19:27.999189+0000 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.958393+0000 mon.a (mon.0) 189 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.958393+0000 mon.a (mon.0) 189 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T07:20:35.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 
bash[23382]: cluster 2026-03-10T07:20:34.958406+0000 mon.a (mon.0) 190 : cluster [DBG] election_strategy: 1 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.958406+0000 mon.a (mon.0) 190 : cluster [DBG] election_strategy: 1 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.958418+0000 mon.a (mon.0) 191 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.958418+0000 mon.a (mon.0) 191 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.958428+0000 mon.a (mon.0) 192 : cluster [DBG] 1: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.b 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.958428+0000 mon.a (mon.0) 192 : cluster [DBG] 1: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.b 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.958441+0000 mon.a (mon.0) 193 : cluster [DBG] 2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.958441+0000 mon.a (mon.0) 193 : cluster [DBG] 2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.958761+0000 mon.a (mon.0) 194 : cluster [DBG] fsmap 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.958761+0000 mon.a (mon.0) 194 : cluster [DBG] fsmap 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.958784+0000 mon.a (mon.0) 195 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.958784+0000 mon.a (mon.0) 195 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.959151+0000 mon.a (mon.0) 196 : cluster [DBG] mgrmap e12: y(active, since 43s) 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.959151+0000 mon.a (mon.0) 196 : cluster [DBG] mgrmap e12: y(active, since 43s) 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.959296+0000 mon.a (mon.0) 197 : cluster [INF] overall HEALTH_OK 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: cluster 2026-03-10T07:20:34.959296+0000 mon.a (mon.0) 197 : cluster [INF] overall HEALTH_OK 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:34.964221+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 
vm03 bash[23382]: audit 2026-03-10T07:20:34.964221+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:34.968998+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:34.968998+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:34.973585+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:34.973585+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:34.977878+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:34.977878+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:34.982467+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:34.982467+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:34.983335+0000 mon.a (mon.0) 203 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:34.983335+0000 mon.a (mon.0) 203 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:34.983928+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:20:35.275 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:35 vm03 bash[23382]: audit 2026-03-10T07:20:34.983928+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:20:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: cephadm 2026-03-10T07:20:22.899885+0000 mgr.y (mgr.14150) 32 : cephadm [INF] Deploying daemon mon.c on vm00 2026-03-10T07:20:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: cephadm 2026-03-10T07:20:22.899885+0000 mgr.y (mgr.14150) 32 : cephadm [INF] Deploying daemon mon.c on vm00 2026-03-10T07:20:35.380 
2026-03-10T07:20:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: audit 2026-03-10T07:20:23.049055+0000 mon.a (mon.0) 140 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T07:20:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: cluster 2026-03-10T07:20:23.049657+0000 mon.a (mon.0) 141 : cluster [INF] mon.a calling monitor election
2026-03-10T07:20:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: audit 2026-03-10T07:20:23.051500+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: audit 2026-03-10T07:20:23.117802+0000 mon.a (mon.0) 143 : audit [DBG] from='client.? 192.168.123.103:0/3120653669' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T07:20:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: cluster 2026-03-10T07:20:23.869220+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: audit 2026-03-10T07:20:23.916063+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: audit 2026-03-10T07:20:24.044308+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: audit 2026-03-10T07:20:24.915765+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: audit 2026-03-10T07:20:25.044631+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: cluster 2026-03-10T07:20:25.047600+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election
2026-03-10T07:20:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: cluster 2026-03-10T07:20:25.869518+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: audit 2026-03-10T07:20:25.916121+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: audit 2026-03-10T07:20:26.044630+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: audit 2026-03-10T07:20:26.915955+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: audit 2026-03-10T07:20:27.044643+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: audit 2026-03-10T07:20:27.916098+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: audit 2026-03-10T07:20:28.044745+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: cluster 2026-03-10T07:20:28.054775+0000 mon.a (mon.0) 154 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1)
2026-03-10T07:20:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: cluster 2026-03-10T07:20:28.059496+0000 mon.a (mon.0) 155 : cluster [DBG] monmap epoch 2
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: cluster 2026-03-10T07:20:28.059556+0000 mon.a (mon.0) 156 : cluster [DBG] fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: cluster 2026-03-10T07:20:28.059596+0000 mon.a (mon.0) 157 : cluster [DBG] last_changed 2026-03-10T07:20:23.045655+0000
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: cluster 2026-03-10T07:20:28.059635+0000 mon.a (mon.0) 158 : cluster [DBG] created 2026-03-10T07:19:27.999189+0000
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: cluster 2026-03-10T07:20:28.059674+0000 mon.a (mon.0) 159 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: cluster 2026-03-10T07:20:28.059714+0000 mon.a (mon.0) 160 : cluster [DBG] election_strategy: 1
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: cluster 2026-03-10T07:20:28.059753+0000 mon.a (mon.0) 161 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: cluster 2026-03-10T07:20:28.059798+0000 mon.a (mon.0) 162 : cluster [DBG] 1: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.b
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: cluster 2026-03-10T07:20:28.060293+0000 mon.a (mon.0) 163 : cluster [DBG] fsmap
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: cluster 2026-03-10T07:20:28.060374+0000 mon.a (mon.0) 164 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: cluster 2026-03-10T07:20:28.060625+0000 mon.a (mon.0) 165 : cluster [DBG] mgrmap e12: y(active, since 36s)
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: cluster 2026-03-10T07:20:28.060898+0000 mon.a (mon.0) 166 : cluster [INF] overall HEALTH_OK
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: audit 2026-03-10T07:20:28.069032+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: audit 2026-03-10T07:20:28.073806+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: audit 2026-03-10T07:20:28.078656+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: audit 2026-03-10T07:20:28.083309+0000 mon.a (mon.0) 170 : audit [INF]
from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: audit 2026-03-10T07:20:28.093416+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: cluster 2026-03-10T07:20:27.869734+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: audit 2026-03-10T07:20:28.916012+0000 mon.a (mon.0) 172 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:34 vm00 bash[28005]: audit 2026-03-10T07:20:29.044718+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: cluster 2026-03-10T07:20:29.869911+0000 mgr.y (mgr.14150) 36 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: audit 2026-03-10T07:20:29.944585+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: audit 2026-03-10T07:20:29.944662+0000 mon.a (mon.0) 176 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: audit 2026-03-10T07:20:29.944711+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: cluster 2026-03-10T07:20:29.944781+0000 mon.a (mon.0) 178 : cluster [INF] mon.a calling monitor election
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: cluster 2026-03-10T07:20:29.947023+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: audit 2026-03-10T07:20:30.916534+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: cluster 2026-03-10T07:20:31.870128+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: audit 2026-03-10T07:20:31.916186+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: cluster 2026-03-10T07:20:31.918791+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: audit 2026-03-10T07:20:32.917574+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: cluster 2026-03-10T07:20:33.870293+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: audit 2026-03-10T07:20:33.916261+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: audit 2026-03-10T07:20:34.916556+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: cluster 2026-03-10T07:20:34.949592+0000 mon.a (mon.0) 184 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2)
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: cluster 2026-03-10T07:20:34.957167+0000 mon.a (mon.0) 185 : cluster [DBG] monmap epoch 3
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: cluster 2026-03-10T07:20:34.958349+0000 mon.a (mon.0) 186 : cluster [DBG] fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: cluster 2026-03-10T07:20:34.958366+0000 mon.a (mon.0) 187 : cluster [DBG] last_changed 2026-03-10T07:20:29.917791+0000
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: cluster 2026-03-10T07:20:34.958379+0000 mon.a (mon.0) 188 : cluster [DBG] created 2026-03-10T07:19:27.999189+0000
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: cluster 2026-03-10T07:20:34.958393+0000 mon.a (mon.0) 189 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: cluster 2026-03-10T07:20:34.958406+0000 mon.a (mon.0) 190 : cluster [DBG] election_strategy: 1
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: cluster 2026-03-10T07:20:34.958418+0000 mon.a (mon.0) 191 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: cluster 2026-03-10T07:20:34.958428+0000 mon.a (mon.0) 192 : cluster [DBG] 1: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.b
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: cluster 2026-03-10T07:20:34.958441+0000 mon.a (mon.0) 193 : cluster [DBG] 2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: cluster 2026-03-10T07:20:34.958761+0000 mon.a (mon.0) 194 : cluster [DBG] fsmap
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: cluster 2026-03-10T07:20:34.958784+0000 mon.a (mon.0) 195 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: cluster 2026-03-10T07:20:34.959151+0000 mon.a (mon.0) 196 : cluster [DBG] mgrmap e12: y(active, since 43s)
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: cluster 2026-03-10T07:20:34.959296+0000 mon.a (mon.0) 197 : cluster [INF] overall HEALTH_OK
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: audit 2026-03-10T07:20:34.964221+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: audit 2026-03-10T07:20:34.968998+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: audit 2026-03-10T07:20:34.973585+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: audit 2026-03-10T07:20:34.977878+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: audit 2026-03-10T07:20:34.982467+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: audit 2026-03-10T07:20:34.983335+0000 mon.a (mon.0) 203 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:35 vm00 bash[28005]: audit 2026-03-10T07:20:34.983928+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: cluster 2026-03-10T07:20:29.869911+0000 mgr.y (mgr.14150) 36 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: audit 2026-03-10T07:20:29.944585+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: audit 2026-03-10T07:20:29.944662+0000 mon.a (mon.0) 176 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: audit 2026-03-10T07:20:29.944711+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: cluster 2026-03-10T07:20:29.944781+0000 mon.a (mon.0) 178 : cluster [INF] mon.a calling monitor election
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: cluster 2026-03-10T07:20:29.947023+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: audit 2026-03-10T07:20:30.916534+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: cluster 2026-03-10T07:20:31.870128+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: audit 2026-03-10T07:20:31.916186+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: cluster 2026-03-10T07:20:31.918791+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: audit 2026-03-10T07:20:32.917574+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: cluster 2026-03-10T07:20:33.870293+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: audit 2026-03-10T07:20:33.916261+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: audit 2026-03-10T07:20:34.916556+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: cluster 2026-03-10T07:20:34.949592+0000 mon.a (mon.0) 184 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2)
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: cluster 2026-03-10T07:20:34.957167+0000 mon.a (mon.0) 185 : cluster [DBG] monmap epoch 3
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: cluster 2026-03-10T07:20:34.958349+0000 mon.a (mon.0) 186 : cluster [DBG] fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: cluster 2026-03-10T07:20:34.958366+0000 mon.a (mon.0) 187 : cluster [DBG] last_changed 2026-03-10T07:20:29.917791+0000
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: cluster 2026-03-10T07:20:34.958379+0000 mon.a (mon.0) 188 : cluster [DBG] created 2026-03-10T07:19:27.999189+0000
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: cluster 2026-03-10T07:20:34.958393+0000 mon.a (mon.0) 189 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: cluster 2026-03-10T07:20:34.958406+0000 mon.a (mon.0) 190 : cluster [DBG] election_strategy: 1
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: cluster 2026-03-10T07:20:34.958418+0000 mon.a (mon.0) 191 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: cluster 2026-03-10T07:20:34.958428+0000 mon.a (mon.0) 192 : cluster [DBG] 1: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.b
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: cluster 2026-03-10T07:20:34.958441+0000 mon.a (mon.0) 193 : cluster [DBG] 2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: cluster 2026-03-10T07:20:34.958761+0000 mon.a (mon.0) 194 : cluster [DBG] fsmap
2026-03-10T07:20:35.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: cluster 2026-03-10T07:20:34.958784+0000 mon.a (mon.0) 195 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T07:20:35.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: cluster 2026-03-10T07:20:34.959151+0000 mon.a (mon.0) 196 : cluster [DBG] mgrmap e12: y(active, since 43s)
2026-03-10T07:20:35.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: cluster 2026-03-10T07:20:34.959296+0000 mon.a (mon.0) 197 : cluster [INF] overall HEALTH_OK
2026-03-10T07:20:35.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: audit 2026-03-10T07:20:34.964221+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:35.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: audit 2026-03-10T07:20:34.968998+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:35.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: audit 2026-03-10T07:20:34.973585+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:35.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: audit 2026-03-10T07:20:34.977878+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:35.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: audit 2026-03-10T07:20:34.982467+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:35.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: audit 2026-03-10T07:20:34.983335+0000 mon.a (mon.0) 203 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:35.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:35 vm00 bash[20701]: audit 2026-03-10T07:20:34.983928+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:20:36.050 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f61634640 1 -- 192.168.123.103:0/3437693552 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9f5c10a640 msgr2=0x7f9f3c005680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:20:36.050 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f61634640 1 --2- 192.168.123.103:0/3437693552 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9f5c10a640 0x7f9f3c005680
secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7f9f44002a00 tx=0x7f9f44030570 comp rx=0 tx=0).stop 2026-03-10T07:20:36.050 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f61634640 1 -- 192.168.123.103:0/3437693552 shutdown_connections 2026-03-10T07:20:36.050 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f61634640 1 --2- 192.168.123.103:0/3437693552 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9f5c10a640 0x7f9f3c005680 unknown :-1 s=CLOSED pgs=96 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:20:36.050 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f61634640 1 -- 192.168.123.103:0/3437693552 >> 192.168.123.103:0/3437693552 conn(0x7f9f5c1006d0 msgr2=0x7f9f5c102ac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:20:36.050 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f61634640 1 -- 192.168.123.103:0/3437693552 shutdown_connections 2026-03-10T07:20:36.050 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f61634640 1 -- 192.168.123.103:0/3437693552 wait complete. 2026-03-10T07:20:36.050 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f61634640 1 Processor -- start 2026-03-10T07:20:36.050 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f61634640 1 -- start start 2026-03-10T07:20:36.050 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f61634640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9f5c10a640 0x7f9f5c19b680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f61634640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9f5c19bbc0 0x7f9f5c19ff50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f61634640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9f5c1a0490 0x7f9f5c1a0940 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f61634640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f9f5c10ba20 con 0x7f9f5c19bbc0 2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f61634640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f9f5c10b8a0 con 0x7f9f5c10a640 2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f61634640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f9f5c10bba0 con 0x7f9f5c1a0490 2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f5affd640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9f5c10a640 0x7f9f5c19b680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f5affd640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9f5c10a640 0x7f9f5c19b680 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.103:3300/0 says I am v2:192.168.123.103:40250/0 (socket says 192.168.123.103:40250) 2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f5affd640 1 -- 192.168.123.103:0/3554278011 learned_addr learned my addr 192.168.123.103:0/3554278011 (peer_addr_for_me v2:192.168.123.103:0/0) 2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f5b7fe640 1 --2- 192.168.123.103:0/3554278011 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9f5c1a0490 0x7f9f5c1a0940 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f5affd640 1 -- 192.168.123.103:0/3554278011 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9f5c10a640 msgr2=0x7f9f5c19b680 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 13 2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f5affd640 1 -- 192.168.123.103:0/3554278011 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9f5c10a640 msgr2=0x7f9f5c19b680 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed 2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f5affd640 1 --2- 192.168.123.103:0/3554278011 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9f5c10a640 0x7f9f5c19b680 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_read_frame_preamble_main read frame preamble failed r=-1 2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f5b7fe640 1 -- 192.168.123.103:0/3554278011 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9f5c1a0490 msgr2=0x7f9f5c1a0940 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 14 2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f5b7fe640 1 -- 192.168.123.103:0/3554278011 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9f5c1a0490 msgr2=0x7f9f5c1a0940 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed 2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f5b7fe640 1 --2- 192.168.123.103:0/3554278011 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9f5c1a0490 0x7f9f5c1a0940 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_read_frame_preamble_main read frame preamble failed r=-1 2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f5affd640 1 --2- 192.168.123.103:0/3554278011 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9f5c10a640 0x7f9f5c19b680 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f5b7fe640 1 --2- 192.168.123.103:0/3554278011 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9f5c1a0490 0x7f9f5c1a0940 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 
2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.039+0000 7f9f5a7fc640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9f5c19bbc0 0x7f9f5c19ff50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.043+0000 7f9f5a7fc640 1 -- 192.168.123.103:0/3554278011 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9f5c1a0490 msgr2=0x7f9f5c1a0940 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.043+0000 7f9f5a7fc640 1 --2- 192.168.123.103:0/3554278011 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9f5c1a0490 0x7f9f5c1a0940 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.043+0000 7f9f5a7fc640 1 -- 192.168.123.103:0/3554278011 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9f5c10a640 msgr2=0x7f9f5c19b680 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.043+0000 7f9f5a7fc640 1 --2- 192.168.123.103:0/3554278011 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9f5c10a640 0x7f9f5c19b680 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.043+0000 7f9f5a7fc640 1 -- 192.168.123.103:0/3554278011 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f9f5c1a8200 con 0x7f9f5c19bbc0
2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.043+0000 7f9f5a7fc640 1 --2- 192.168.123.103:0/3554278011 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9f5c19bbc0 0x7f9f5c19ff50 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7f9f5000b550 tx=0x7f9f5000ba20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.043+0000 7f9f3bfff640 1 -- 192.168.123.103:0/3554278011 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f9f50004230 con 0x7f9f5c19bbc0
2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.043+0000 7f9f3bfff640 1 -- 192.168.123.103:0/3554278011 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f9f500043d0 con 0x7f9f5c19bbc0
2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.043+0000 7f9f3bfff640 1 -- 192.168.123.103:0/3554278011 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f9f5000fa70 con 0x7f9f5c19bbc0
2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.043+0000 7f9f61634640 1 -- 192.168.123.103:0/3554278011 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f9f5c1a8490 con 0x7f9f5c19bbc0
2026-03-10T07:20:36.051 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.043+0000 7f9f61634640 1 -- 192.168.123.103:0/3554278011 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f9f5c1a8950 con 0x7f9f5c19bbc0
2026-03-10T07:20:36.053 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.043+0000 7f9f3bfff640 1 -- 192.168.123.103:0/3554278011 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 12) ==== 50306+0+0 (secure 0 0 0) 0x7f9f5001f050 con 0x7f9f5c19bbc0
2026-03-10T07:20:36.053 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.043+0000 7f9f3bfff640 1 --2- 192.168.123.103:0/3554278011 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f9f3003dd60 0x7f9f30040220 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:20:36.053 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.043+0000 7f9f3bfff640 1 -- 192.168.123.103:0/3554278011 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 1155+0+0 (secure 0 0 0) 0x7f9f500517f0 con 0x7f9f5c19bbc0
2026-03-10T07:20:36.054 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.043+0000 7f9f5affd640 1 --2- 192.168.123.103:0/3554278011 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f9f3003dd60 0x7f9f30040220 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:20:36.054 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.043+0000 7f9f61634640 1 -- 192.168.123.103:0/3554278011 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f9f5c105d80 con 0x7f9f5c19bbc0
2026-03-10T07:20:36.062 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.051+0000 7f9f3bfff640 1 -- 192.168.123.103:0/3554278011 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f9f50013070 con 0x7f9f5c19bbc0
2026-03-10T07:20:36.062 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.051+0000 7f9f5affd640 1 --2- 192.168.123.103:0/3554278011 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f9f3003dd60 0x7f9f30040220 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f9f44002840 tx=0x7f9f44038450 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:20:36.213 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.203+0000 7f9f61634640 1 -- 192.168.123.103:0/3554278011 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) -- 0x7f9f5c19c9e0 con 0x7f9f5c19bbc0
2026-03-10T07:20:36.213 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.203+0000 7f9f3bfff640 1 -- 192.168.123.103:0/3554278011 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 3 v3) ==== 95+0+1309 (secure 0 0 0) 0x7f9f5001ee20 con 0x7f9f5c19bbc0
2026-03-10T07:20:36.213 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T07:20:36.213 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":3,"fsid":"534d9c8a-1c51-11f1-ac87-d1fb9a119953","modified":"2026-03-10T07:20:29.917791Z","created":"2026-03-10T07:19:27.999189Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:3300","nonce":0},{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3301","nonce":0},{"type":"v1","addr":"192.168.123.100:6790","nonce":0}]},"addr":"192.168.123.100:6790/0","public_addr":"192.168.123.100:6790/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
2026-03-10T07:20:36.213 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 3
2026-03-10T07:20:36.216 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.207+0000 7f9f39ffb640 1 -- 192.168.123.103:0/3554278011 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f9f3003dd60 msgr2=0x7f9f30040220 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:20:36.216 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.207+0000 7f9f39ffb640 1 --2- 192.168.123.103:0/3554278011 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f9f3003dd60 0x7f9f30040220 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f9f44002840 tx=0x7f9f44038450 comp rx=0 tx=0).stop
2026-03-10T07:20:36.216 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.207+0000 7f9f39ffb640 1 -- 192.168.123.103:0/3554278011 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9f5c19bbc0 msgr2=0x7f9f5c19ff50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:20:36.216 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.207+0000 7f9f39ffb640 1 --2- 192.168.123.103:0/3554278011 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9f5c19bbc0 0x7f9f5c19ff50 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7f9f5000b550 tx=0x7f9f5000ba20 comp rx=0 tx=0).stop
2026-03-10T07:20:36.216 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.207+0000 7f9f39ffb640 1 -- 192.168.123.103:0/3554278011 shutdown_connections
2026-03-10T07:20:36.216 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.207+0000 7f9f39ffb640 1 --2- 192.168.123.103:0/3554278011 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f9f3003dd60 0x7f9f30040220 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:36.216 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.207+0000 7f9f39ffb640 1 --2- 192.168.123.103:0/3554278011 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9f5c1a0490 0x7f9f5c1a0940 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:36.217 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.207+0000 7f9f39ffb640 1 --2- 192.168.123.103:0/3554278011 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9f5c19bbc0 0x7f9f5c19ff50 unknown :-1 s=CLOSED pgs=97 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:36.217 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.207+0000 7f9f39ffb640 1 --2- 192.168.123.103:0/3554278011 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9f5c10a640 0x7f9f5c19b680 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:36.217 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.207+0000 7f9f39ffb640 1 -- 192.168.123.103:0/3554278011 >> 192.168.123.103:0/3554278011 conn(0x7f9f5c1006d0 msgr2=0x7f9f5c107980 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:20:36.217 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.207+0000 7f9f39ffb640 1 -- 192.168.123.103:0/3554278011 shutdown_connections
2026-03-10T07:20:36.217 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:36.207+0000 7f9f39ffb640 1 -- 192.168.123.103:0/3554278011 wait complete.
2026-03-10T07:20:36.267 INFO:tasks.cephadm:Generating final ceph.conf file...
2026-03-10T07:20:36.267 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph config generate-minimal-conf
2026-03-10T07:20:36.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: cephadm 2026-03-10T07:20:34.984504+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf
2026-03-10T07:20:36.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: cephadm 2026-03-10T07:20:34.984608+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf
2026-03-10T07:20:36.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: cephadm 2026-03-10T07:20:35.033455+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm00:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf
2026-03-10T07:20:36.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: cephadm 2026-03-10T07:20:35.038371+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf
2026-03-10T07:20:36.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.071867+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.077332+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.087886+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.094711+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.098290+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.111502+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.114929+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.118685+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.122415+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: cephadm 2026-03-10T07:20:35.122986+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)...
2026-03-10T07:20:36.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.123132+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T07:20:36.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.123617+0000 mon.a (mon.0) 215 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.124046+0000 mon.a (mon.0) 216 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: cephadm 2026-03-10T07:20:35.124710+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm00
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.505721+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.510474+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: cephadm 2026-03-10T07:20:35.511416+0000 mgr.y (mgr.14150) 45 : cephadm [INF] Reconfiguring mon.a (unknown last config time)...
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.511641+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.512078+0000 mon.a (mon.0) 220 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.512475+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: cephadm 2026-03-10T07:20:35.513036+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring daemon mon.a on vm00
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.882140+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.886734+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.887855+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.888301+0000 mon.a (mon.0) 225 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.888641+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:36 vm00 bash[20701]: audit 2026-03-10T07:20:35.916633+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: cephadm 2026-03-10T07:20:34.984504+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: cephadm 2026-03-10T07:20:34.984608+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: cephadm 2026-03-10T07:20:35.033455+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm00:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: cephadm 2026-03-10T07:20:35.038371+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.071867+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.077332+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.087886+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.094711+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.098290+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.111502+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.114929+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.118685+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.122415+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: cephadm 2026-03-10T07:20:35.122986+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)...
2026-03-10T07:20:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.123132+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T07:20:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.123617+0000 mon.a (mon.0) 215 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T07:20:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.124046+0000 mon.a (mon.0) 216 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: cephadm 2026-03-10T07:20:35.124710+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm00
2026-03-10T07:20:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.505721+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.510474+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: cephadm 2026-03-10T07:20:35.511416+0000 mgr.y (mgr.14150) 45 : cephadm [INF] Reconfiguring mon.a (unknown last config time)...
2026-03-10T07:20:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.511641+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T07:20:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.512078+0000 mon.a (mon.0) 220 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T07:20:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.512475+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: cephadm 2026-03-10T07:20:35.513036+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring daemon mon.a on vm00
2026-03-10T07:20:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.882140+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.886734+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.887855+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T07:20:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.888301+0000 mon.a (mon.0) 225 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T07:20:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.888641+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:36 vm00 bash[28005]: audit 2026-03-10T07:20:35.916633+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:20:36.523 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: cephadm 2026-03-10T07:20:34.984504+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf
2026-03-10T07:20:36.523 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: cephadm 2026-03-10T07:20:34.984608+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: cephadm 2026-03-10T07:20:35.033455+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm00:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: cephadm 2026-03-10T07:20:35.038371+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.071867+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.077332+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.087886+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.094711+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.098290+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.111502+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.114929+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.118685+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.122415+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: cephadm 2026-03-10T07:20:35.122986+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)...
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.123132+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.123617+0000 mon.a (mon.0) 215 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.124046+0000 mon.a (mon.0) 216 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: cephadm 2026-03-10T07:20:35.124710+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm00
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.505721+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.510474+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: cephadm 2026-03-10T07:20:35.511416+0000 mgr.y (mgr.14150) 45 : cephadm [INF] Reconfiguring mon.a (unknown last config time)...
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.511641+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.512078+0000 mon.a (mon.0) 220 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.512475+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: cephadm 2026-03-10T07:20:35.513036+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring daemon mon.a on vm00
2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.882140+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:36.524
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.882140+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.886734+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.886734+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.887855+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.887855+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.888301+0000 mon.a (mon.0) 225 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.888301+0000 mon.a (mon.0) 225 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.888641+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.888641+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.916633+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T07:20:36.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:36 vm03 bash[23382]: audit 2026-03-10T07:20:35.916633+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T07:20:37.391 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:20:36 vm00 bash[20971]: debug 2026-03-10T07:20:36.913+0000 7f525d991640 -1 mgr.server handle_report got status from non-daemon mon.c 2026-03-10T07:20:37.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:37 vm00 bash[20701]: cluster 2026-03-10T07:20:35.870503+0000 mgr.y (mgr.14150) 47 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:37.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:37 vm00 bash[20701]: cluster 2026-03-10T07:20:35.870503+0000 mgr.y 
(mgr.14150) 47 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:37.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:37 vm00 bash[20701]: cephadm 2026-03-10T07:20:35.887380+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T07:20:37.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:37 vm00 bash[20701]: cephadm 2026-03-10T07:20:35.887380+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T07:20:37.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:37 vm00 bash[20701]: cephadm 2026-03-10T07:20:35.889103+0000 mgr.y (mgr.14150) 49 : cephadm [INF] Reconfiguring daemon mon.b on vm03 2026-03-10T07:20:37.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:37 vm00 bash[20701]: cephadm 2026-03-10T07:20:35.889103+0000 mgr.y (mgr.14150) 49 : cephadm [INF] Reconfiguring daemon mon.b on vm03 2026-03-10T07:20:37.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:37 vm00 bash[20701]: audit 2026-03-10T07:20:36.208130+0000 mon.a (mon.0) 228 : audit [DBG] from='client.? 192.168.123.103:0/3554278011' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T07:20:37.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:37 vm00 bash[20701]: audit 2026-03-10T07:20:36.208130+0000 mon.a (mon.0) 228 : audit [DBG] from='client.? 192.168.123.103:0/3554278011' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T07:20:37.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:37 vm00 bash[20701]: audit 2026-03-10T07:20:36.307843+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:37 vm00 bash[20701]: audit 2026-03-10T07:20:36.307843+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:37 vm00 bash[20701]: audit 2026-03-10T07:20:36.315507+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:37 vm00 bash[20701]: audit 2026-03-10T07:20:36.315507+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:37 vm00 bash[20701]: audit 2026-03-10T07:20:36.317387+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:37 vm00 bash[20701]: audit 2026-03-10T07:20:36.317387+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:37 vm00 bash[20701]: audit 2026-03-10T07:20:36.318488+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:37 vm00 bash[20701]: audit 2026-03-10T07:20:36.318488+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:37 vm00 bash[20701]: audit 2026-03-10T07:20:36.318981+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:37 vm00 bash[20701]: audit 2026-03-10T07:20:36.318981+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:37 vm00 bash[20701]: audit 2026-03-10T07:20:36.324433+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:37 vm00 bash[20701]: audit 2026-03-10T07:20:36.324433+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:37 vm00 bash[28005]: cluster 2026-03-10T07:20:35.870503+0000 mgr.y (mgr.14150) 47 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:37 vm00 bash[28005]: cluster 2026-03-10T07:20:35.870503+0000 mgr.y (mgr.14150) 47 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:37 vm00 bash[28005]: cephadm 2026-03-10T07:20:35.887380+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:37 vm00 bash[28005]: cephadm 2026-03-10T07:20:35.887380+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:37 vm00 bash[28005]: cephadm 2026-03-10T07:20:35.889103+0000 mgr.y (mgr.14150) 49 : cephadm [INF] Reconfiguring daemon mon.b on vm03 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:37 vm00 bash[28005]: cephadm 2026-03-10T07:20:35.889103+0000 mgr.y (mgr.14150) 49 : cephadm [INF] Reconfiguring daemon mon.b on vm03 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:37 vm00 bash[28005]: audit 2026-03-10T07:20:36.208130+0000 mon.a (mon.0) 228 : audit [DBG] from='client.? 192.168.123.103:0/3554278011' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:37 vm00 bash[28005]: audit 2026-03-10T07:20:36.208130+0000 mon.a (mon.0) 228 : audit [DBG] from='client.? 
192.168.123.103:0/3554278011' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:37 vm00 bash[28005]: audit 2026-03-10T07:20:36.307843+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:37 vm00 bash[28005]: audit 2026-03-10T07:20:36.307843+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:37 vm00 bash[28005]: audit 2026-03-10T07:20:36.315507+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:37 vm00 bash[28005]: audit 2026-03-10T07:20:36.315507+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:37 vm00 bash[28005]: audit 2026-03-10T07:20:36.317387+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:37 vm00 bash[28005]: audit 2026-03-10T07:20:36.317387+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:37 vm00 bash[28005]: audit 2026-03-10T07:20:36.318488+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:37 vm00 bash[28005]: audit 2026-03-10T07:20:36.318488+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:37 vm00 bash[28005]: audit 2026-03-10T07:20:36.318981+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:37 vm00 bash[28005]: audit 2026-03-10T07:20:36.318981+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:37 vm00 bash[28005]: audit 2026-03-10T07:20:36.324433+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:37.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:37 vm00 bash[28005]: audit 2026-03-10T07:20:36.324433+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:37.523 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:37 vm03 bash[23382]: cluster 2026-03-10T07:20:35.870503+0000 mgr.y (mgr.14150) 47 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:37.523 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:37 vm03 bash[23382]: cluster 
2026-03-10T07:20:35.870503+0000 mgr.y (mgr.14150) 47 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:37.523 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:37 vm03 bash[23382]: cephadm 2026-03-10T07:20:35.887380+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T07:20:37.523 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:37 vm03 bash[23382]: cephadm 2026-03-10T07:20:35.887380+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T07:20:37.523 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:37 vm03 bash[23382]: cephadm 2026-03-10T07:20:35.889103+0000 mgr.y (mgr.14150) 49 : cephadm [INF] Reconfiguring daemon mon.b on vm03 2026-03-10T07:20:37.523 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:37 vm03 bash[23382]: cephadm 2026-03-10T07:20:35.889103+0000 mgr.y (mgr.14150) 49 : cephadm [INF] Reconfiguring daemon mon.b on vm03 2026-03-10T07:20:37.523 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:37 vm03 bash[23382]: audit 2026-03-10T07:20:36.208130+0000 mon.a (mon.0) 228 : audit [DBG] from='client.? 192.168.123.103:0/3554278011' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T07:20:37.523 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:37 vm03 bash[23382]: audit 2026-03-10T07:20:36.208130+0000 mon.a (mon.0) 228 : audit [DBG] from='client.? 192.168.123.103:0/3554278011' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T07:20:37.523 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:37 vm03 bash[23382]: audit 2026-03-10T07:20:36.307843+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:37.523 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:37 vm03 bash[23382]: audit 2026-03-10T07:20:36.307843+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:37.523 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:37 vm03 bash[23382]: audit 2026-03-10T07:20:36.315507+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:37.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:37 vm03 bash[23382]: audit 2026-03-10T07:20:36.315507+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:37.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:37 vm03 bash[23382]: audit 2026-03-10T07:20:36.317387+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:37.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:37 vm03 bash[23382]: audit 2026-03-10T07:20:36.317387+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:37.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:37 vm03 bash[23382]: audit 2026-03-10T07:20:36.318488+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:20:37.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:37 vm03 bash[23382]: audit 2026-03-10T07:20:36.318488+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:20:37.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:37 vm03 bash[23382]: audit 2026-03-10T07:20:36.318981+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:20:37.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:37 vm03 bash[23382]: audit 2026-03-10T07:20:36.318981+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:20:37.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:37 vm03 bash[23382]: audit 2026-03-10T07:20:36.324433+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:37.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:37 vm03 bash[23382]: audit 2026-03-10T07:20:36.324433+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:20:39.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:39 vm00 bash[20701]: cluster 2026-03-10T07:20:37.870670+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:39.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:39 vm00 bash[20701]: cluster 2026-03-10T07:20:37.870670+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:39.391 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:39 vm00 bash[28005]: cluster 2026-03-10T07:20:37.870670+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:39.391 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:39 vm00 bash[28005]: cluster 2026-03-10T07:20:37.870670+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:39.523 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:39 vm03 bash[23382]: cluster 2026-03-10T07:20:37.870670+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:39.523 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:39 vm03 bash[23382]: cluster 2026-03-10T07:20:37.870670+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:40.898 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config 2026-03-10T07:20:41.043 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3ba1755640 1 -- 192.168.123.100:0/2637589066 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3b9c077f40 msgr2=0x7f3b9c113640 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:20:41.043 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3ba1755640 1 --2- 192.168.123.100:0/2637589066 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3b9c077f40 0x7f3b9c113640 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f3b84009960 tx=0x7f3b8402f140 comp rx=0 tx=0).stop 2026-03-10T07:20:41.043 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3ba1755640 1 -- 192.168.123.100:0/2637589066 shutdown_connections 2026-03-10T07:20:41.043 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3ba1755640 1 --2- 
192.168.123.100:0/2637589066 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3b9c113b80 0x7f3b9c115f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:20:41.043 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3ba1755640 1 --2- 192.168.123.100:0/2637589066 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3b9c077f40 0x7f3b9c113640 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:20:41.044 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3ba1755640 1 --2- 192.168.123.100:0/2637589066 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3b9c077620 0x7f3b9c077a00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:20:41.044 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3ba1755640 1 -- 192.168.123.100:0/2637589066 >> 192.168.123.100:0/2637589066 conn(0x7f3b9c1009e0 msgr2=0x7f3b9c102e00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:20:41.044 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3ba1755640 1 -- 192.168.123.100:0/2637589066 shutdown_connections 2026-03-10T07:20:41.044 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3ba1755640 1 -- 192.168.123.100:0/2637589066 wait complete. 2026-03-10T07:20:41.044 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3ba1755640 1 Processor -- start 2026-03-10T07:20:41.044 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3ba1755640 1 -- start start 2026-03-10T07:20:41.044 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3ba1755640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3b9c077620 0x7f3b9c1a0e50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:20:41.044 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3ba1755640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3b9c077f40 0x7f3b9c1a1390 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:20:41.045 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3ba1755640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3b9c113b80 0x7f3b9c1a5720 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:20:41.045 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3ba1755640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f3b9c118860 con 0x7f3b9c077620 2026-03-10T07:20:41.045 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3ba1755640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f3b9c1186e0 con 0x7f3b9c113b80 2026-03-10T07:20:41.045 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3ba1755640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f3b9c1189e0 con 0x7f3b9c077f40 2026-03-10T07:20:41.045 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3b9a7fc640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3b9c077f40 0x7f3b9c1a1390 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload 
supported=3 required=0 2026-03-10T07:20:41.045 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3b9a7fc640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3b9c077f40 0x7f3b9c1a1390 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:50772/0 (socket says 192.168.123.100:50772) 2026-03-10T07:20:41.045 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3b9a7fc640 1 -- 192.168.123.100:0/1072442234 learned_addr learned my addr 192.168.123.100:0/1072442234 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:20:41.045 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3b9a7fc640 1 -- 192.168.123.100:0/1072442234 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3b9c113b80 msgr2=0x7f3b9c1a5720 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:20:41.045 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3b9b7fe640 1 --2- 192.168.123.100:0/1072442234 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3b9c113b80 0x7f3b9c1a5720 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:20:41.045 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3b9a7fc640 1 --2- 192.168.123.100:0/1072442234 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3b9c113b80 0x7f3b9c1a5720 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:20:41.045 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3b9a7fc640 1 -- 192.168.123.100:0/1072442234 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3b9c077620 msgr2=0x7f3b9c1a0e50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:20:41.045 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3b9affd640 1 --2- 192.168.123.100:0/1072442234 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3b9c077620 0x7f3b9c1a0e50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:20:41.045 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3b9a7fc640 1 --2- 192.168.123.100:0/1072442234 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3b9c077620 0x7f3b9c1a0e50 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:20:41.045 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3b9a7fc640 1 -- 192.168.123.100:0/1072442234 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3b9c1a5ea0 con 0x7f3b9c077f40 2026-03-10T07:20:41.045 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3b9b7fe640 1 --2- 192.168.123.100:0/1072442234 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3b9c113b80 0x7f3b9c1a5720 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-10T07:20:41.046 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3b9affd640 1 --2- 192.168.123.100:0/1072442234 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3b9c077620 0x7f3b9c1a0e50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T07:20:41.046 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3b9a7fc640 1 --2- 192.168.123.100:0/1072442234 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3b9c077f40 0x7f3b9c1a1390 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f3b840058c0 tx=0x7f3b84004340 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:20:41.046 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3b7bfff640 1 -- 192.168.123.100:0/1072442234 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3b84046070 con 0x7f3b9c077f40
2026-03-10T07:20:41.046 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3ba1755640 1 -- 192.168.123.100:0/1072442234 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f3b9c1a6130 con 0x7f3b9c077f40
2026-03-10T07:20:41.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.033+0000 7f3ba1755640 1 -- 192.168.123.100:0/1072442234 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f3b9c1ad9d0 con 0x7f3b9c077f40
2026-03-10T07:20:41.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.037+0000 7f3b7bfff640 1 -- 192.168.123.100:0/1072442234 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f3b84002d80 con 0x7f3b9c077f40
2026-03-10T07:20:41.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.037+0000 7f3b7bfff640 1 -- 192.168.123.100:0/1072442234 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3b84038dd0 con 0x7f3b9c077f40
2026-03-10T07:20:41.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.037+0000 7f3b7bfff640 1 -- 192.168.123.100:0/1072442234 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 12) ==== 50306+0+0 (secure 0 0 0) 0x7f3b8404b430 con 0x7f3b9c077f40
2026-03-10T07:20:41.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.037+0000 7f3b7bfff640 1 --2- 192.168.123.100:0/1072442234 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f3b6803de40 0x7f3b68040300 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:20:41.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.037+0000 7f3b7bfff640 1 -- 192.168.123.100:0/1072442234 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(4..4 src has 1..4) ==== 1155+0+0 (secure 0 0 0) 0x7f3b84077460 con 0x7f3b9c077f40
2026-03-10T07:20:41.048 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.037+0000 7f3b9affd640 1 --2- 192.168.123.100:0/1072442234 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f3b6803de40 0x7f3b68040300 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:20:41.048 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.037+0000 7f3b9affd640 1 --2- 192.168.123.100:0/1072442234 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f3b6803de40 0x7f3b68040300 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f3b90004500 tx=0x7f3b90009290 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:20:41.048 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.037+0000 7f3ba1755640 1 -- 192.168.123.100:0/1072442234 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3b5c005180 con 0x7f3b9c077f40
2026-03-10T07:20:41.051 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.041+0000 7f3b7bfff640 1 -- 192.168.123.100:0/1072442234 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f3b8403dd60 con 0x7f3b9c077f40
2026-03-10T07:20:41.157 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.145+0000 7f3ba1755640 1 -- 192.168.123.100:0/1072442234 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "config generate-minimal-conf"} v 0) -- 0x7f3b5c005470 con 0x7f3b9c077f40
2026-03-10T07:20:41.158 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.145+0000 7f3b7bfff640 1 -- 192.168.123.100:0/1072442234 <== mon.2 v2:192.168.123.100:3301/0 7 ==== mon_command_ack([{"prefix": "config generate-minimal-conf"}]=0 v9) ==== 76+0+289 (secure 0 0 0) 0x7f3b84038650 con 0x7f3b9c077f40
2026-03-10T07:20:41.158 INFO:teuthology.orchestra.run.vm00.stdout:# minimal ceph.conf for 534d9c8a-1c51-11f1-ac87-d1fb9a119953
2026-03-10T07:20:41.159 INFO:teuthology.orchestra.run.vm00.stdout:[global]
2026-03-10T07:20:41.159 INFO:teuthology.orchestra.run.vm00.stdout: fsid = 534d9c8a-1c51-11f1-ac87-d1fb9a119953
2026-03-10T07:20:41.159 INFO:teuthology.orchestra.run.vm00.stdout: mon_host = [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0]
2026-03-10T07:20:41.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.149+0000 7f3ba1755640 1 -- 192.168.123.100:0/1072442234 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f3b6803de40 msgr2=0x7f3b68040300 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:20:41.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.149+0000 7f3ba1755640 1 --2- 192.168.123.100:0/1072442234 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f3b6803de40 0x7f3b68040300 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f3b90004500 tx=0x7f3b90009290 comp rx=0 tx=0).stop
2026-03-10T07:20:41.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.149+0000 7f3ba1755640 1 -- 192.168.123.100:0/1072442234 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3b9c077f40 msgr2=0x7f3b9c1a1390 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:20:41.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.149+0000 7f3ba1755640 1 --2- 192.168.123.100:0/1072442234 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3b9c077f40 0x7f3b9c1a1390 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f3b840058c0 tx=0x7f3b84004340 comp rx=0 tx=0).stop
2026-03-10T07:20:41.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.149+0000 7f3ba1755640 1 -- 192.168.123.100:0/1072442234 shutdown_connections
2026-03-10T07:20:41.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.149+0000 7f3ba1755640 1 --2- 192.168.123.100:0/1072442234 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f3b6803de40 0x7f3b68040300 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:41.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.149+0000 7f3ba1755640 1 --2- 192.168.123.100:0/1072442234 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3b9c113b80 0x7f3b9c1a5720 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:41.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.149+0000 7f3ba1755640 1 --2- 192.168.123.100:0/1072442234 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3b9c077f40 0x7f3b9c1a1390 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:41.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.149+0000 7f3ba1755640 1 --2- 192.168.123.100:0/1072442234 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3b9c077620 0x7f3b9c1a0e50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:41.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.149+0000 7f3ba1755640 1 -- 192.168.123.100:0/1072442234 >> 192.168.123.100:0/1072442234 conn(0x7f3b9c1009e0 msgr2=0x7f3b9c102dd0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:20:41.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.149+0000 7f3ba1755640 1 -- 192.168.123.100:0/1072442234 shutdown_connections
2026-03-10T07:20:41.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:41.149+0000 7f3ba1755640 1 -- 192.168.123.100:0/1072442234 wait complete.
2026-03-10T07:20:41.169 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:41 vm00 bash[20701]: cluster 2026-03-10T07:20:39.870829+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:41.169 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:41 vm00 bash[28005]: cluster 2026-03-10T07:20:39.870829+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:41.208 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring...
2026-03-10T07:20:41.209 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T07:20:41.209 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.conf
2026-03-10T07:20:41.216 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T07:20:41.216 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T07:20:41.265 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T07:20:41.265 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.conf
2026-03-10T07:20:41.272 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T07:20:41.272 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T07:20:41.319 INFO:tasks.cephadm:Adding mgr.y on vm00
2026-03-10T07:20:41.320 INFO:tasks.cephadm:Adding mgr.x on vm03
2026-03-10T07:20:41.320 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph orch apply mgr '2;vm00=y;vm03=x'
2026-03-10T07:20:41.366 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:41 vm03 bash[23382]: cluster 2026-03-10T07:20:39.870829+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:42.391 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:42 vm00 bash[28005]: audit 2026-03-10T07:20:41.152775+0000 mon.c (mon.2) 2 : audit [DBG] from='client.? 192.168.123.100:0/1072442234' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:42.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:42 vm00 bash[20701]: audit 2026-03-10T07:20:41.152775+0000 mon.c (mon.2) 2 : audit [DBG] from='client.? 192.168.123.100:0/1072442234' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:42.523 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:42 vm03 bash[23382]: audit 2026-03-10T07:20:41.152775+0000 mon.c (mon.2) 2 : audit [DBG] from='client.? 192.168.123.100:0/1072442234' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:43.391 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:43 vm00 bash[28005]: cluster 2026-03-10T07:20:41.870973+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:43.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:43 vm00 bash[20701]: cluster 2026-03-10T07:20:41.870973+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:43.523 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:43 vm03 bash[23382]: cluster 2026-03-10T07:20:41.870973+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:44.969 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.b/config
2026-03-10T07:20:45.110 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.099+0000 7f6b4b3b1640 1 -- 192.168.123.103:0/1616777543 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b44104d80 msgr2=0x7f6b44105180 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:20:45.110 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.099+0000 7f6b4b3b1640 1 --2- 192.168.123.103:0/1616777543 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b44104d80 0x7f6b44105180 secure :-1 s=READY pgs=98 cs=0 l=1 rev1=1 crypto rx=0x7f6b34009a30 tx=0x7f6b3402f220 comp rx=0 tx=0).stop
2026-03-10T07:20:45.110 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.099+0000 7f6b4b3b1640 1 -- 192.168.123.103:0/1616777543 shutdown_connections
2026-03-10T07:20:45.110 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.099+0000 7f6b4b3b1640 1 --2- 192.168.123.103:0/1616777543 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6b44106940 0x7f6b4410d1d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:45.110 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.099+0000 7f6b4b3b1640 1 --2- 192.168.123.103:0/1616777543 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6b44105f80 0x7f6b44106400 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:45.110 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.099+0000 7f6b4b3b1640 1 --2- 192.168.123.103:0/1616777543 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b44104d80 0x7f6b44105180 unknown :-1 s=CLOSED pgs=98 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:45.110 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.099+0000 7f6b4b3b1640 1 -- 192.168.123.103:0/1616777543 >> 192.168.123.103:0/1616777543 conn(0x7f6b44100510 msgr2=0x7f6b44102950 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:20:45.110 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.099+0000 7f6b4b3b1640 1 -- 192.168.123.103:0/1616777543 shutdown_connections
2026-03-10T07:20:45.110 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b4b3b1640 1 -- 192.168.123.103:0/1616777543 wait complete.
2026-03-10T07:20:45.111 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b4b3b1640 1 Processor -- start
2026-03-10T07:20:45.111 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b4b3b1640 1 -- start start
2026-03-10T07:20:45.111 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b4b3b1640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b44104d80 0x7f6b4419c3e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:20:45.111 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b4b3b1640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6b44105f80 0x7f6b4419c920 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:20:45.111 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b4b3b1640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6b44106940 0x7f6b441a39a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:20:45.111 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b4b3b1640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f6b4410fbf0 con 0x7f6b44104d80
2026-03-10T07:20:45.111 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b4b3b1640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f6b4410fa70 con 0x7f6b44105f80
2026-03-10T07:20:45.112 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b4b3b1640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f6b4410fd70 con 0x7f6b44106940
2026-03-10T07:20:45.112 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b49126640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b44104d80 0x7f6b4419c3e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:20:45.112 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b49126640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b44104d80 0x7f6b4419c3e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.103:52526/0 (socket says 192.168.123.103:52526)
2026-03-10T07:20:45.112 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b49126640 1 -- 192.168.123.103:0/14972112 learned_addr learned my addr 192.168.123.103:0/14972112 (peer_addr_for_me v2:192.168.123.103:0/0)
2026-03-10T07:20:45.112 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b49927640 1 --2- 192.168.123.103:0/14972112 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6b44106940 0x7f6b441a39a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:20:45.112 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b49126640 1 -- 192.168.123.103:0/14972112 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6b44106940 msgr2=0x7f6b441a39a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:20:45.112 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b48925640 1 --2- 192.168.123.103:0/14972112 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6b44105f80 0x7f6b4419c920 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:20:45.112 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b49126640 1 --2- 192.168.123.103:0/14972112 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6b44106940 0x7f6b441a39a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:45.112 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b49126640 1 -- 192.168.123.103:0/14972112 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6b44105f80 msgr2=0x7f6b4419c920 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:20:45.112 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b49126640 1 --2- 192.168.123.103:0/14972112 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6b44105f80 0x7f6b4419c920 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:45.112 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b49126640 1 -- 192.168.123.103:0/14972112 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6b441a40a0 con 0x7f6b44104d80
2026-03-10T07:20:45.112 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b48925640 1 --2- 192.168.123.103:0/14972112 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6b44105f80 0x7f6b4419c920 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T07:20:45.112 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b49126640 1 --2- 192.168.123.103:0/14972112 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b44104d80 0x7f6b4419c3e0 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7f6b34009a00 tx=0x7f6b34002750 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:20:45.112 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b327fc640 1 -- 192.168.123.103:0/14972112 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f6b340043b0 con 0x7f6b44104d80 2026-03-10T07:20:45.112 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b4b3b1640 1 -- 192.168.123.103:0/14972112 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f6b441a4330 con 0x7f6b44104d80 2026-03-10T07:20:45.113 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b4b3b1640 1 -- 192.168.123.103:0/14972112 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f6b441a4810 con 0x7f6b44104d80 2026-03-10T07:20:45.113 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b327fc640 1 -- 192.168.123.103:0/14972112 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f6b34038950 con 0x7f6b44104d80 2026-03-10T07:20:45.113 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b327fc640 1 -- 192.168.123.103:0/14972112 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f6b340417a0 con 0x7f6b44104d80 2026-03-10T07:20:45.113 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b327fc640 1 -- 192.168.123.103:0/14972112 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 12) ==== 50306+0+0 (secure 0 0 0) 0x7f6b34041940 con 0x7f6b44104d80 2026-03-10T07:20:45.113 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b327fc640 1 --2- 192.168.123.103:0/14972112 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f6b1803de70 0x7f6b18040330 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:20:45.114 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b327fc640 1 -- 192.168.123.103:0/14972112 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 1155+0+0 (secure 0 0 0) 0x7f6b34076b50 con 0x7f6b44104d80 2026-03-10T07:20:45.114 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b4b3b1640 1 -- 192.168.123.103:0/14972112 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6b0c005180 con 0x7f6b44104d80 2026-03-10T07:20:45.114 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.103+0000 7f6b48925640 1 --2- 192.168.123.103:0/14972112 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f6b1803de70 0x7f6b18040330 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:20:45.117 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.107+0000 7f6b48925640 1 --2- 192.168.123.103:0/14972112 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f6b1803de70 0x7f6b18040330 secure :-1 s=READY pgs=42 cs=0 
l=1 rev1=1 crypto rx=0x7f6b38005e00 tx=0x7f6b3800a250 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:20:45.117 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.107+0000 7f6b327fc640 1 -- 192.168.123.103:0/14972112 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f6b34035320 con 0x7f6b44104d80 2026-03-10T07:20:45.218 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.207+0000 7f6b4b3b1640 1 -- 192.168.123.103:0/14972112 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=y;vm03=x", "target": ["mon-mgr", ""]}) -- 0x7f6b0c002bf0 con 0x7f6b1803de70 2026-03-10T07:20:45.225 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.215+0000 7f6b327fc640 1 -- 192.168.123.103:0/14972112 <== mgr.14150 v2:192.168.123.100:6800/2669938860 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (secure 0 0 0) 0x7f6b0c002bf0 con 0x7f6b1803de70 2026-03-10T07:20:45.226 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled mgr update... 2026-03-10T07:20:45.227 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.219+0000 7f6b4b3b1640 1 -- 192.168.123.103:0/14972112 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f6b1803de70 msgr2=0x7f6b18040330 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:20:45.227 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.219+0000 7f6b4b3b1640 1 --2- 192.168.123.103:0/14972112 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f6b1803de70 0x7f6b18040330 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f6b38005e00 tx=0x7f6b3800a250 comp rx=0 tx=0).stop 2026-03-10T07:20:45.227 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.219+0000 7f6b4b3b1640 1 -- 192.168.123.103:0/14972112 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b44104d80 msgr2=0x7f6b4419c3e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:20:45.227 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.219+0000 7f6b4b3b1640 1 --2- 192.168.123.103:0/14972112 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b44104d80 0x7f6b4419c3e0 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7f6b34009a00 tx=0x7f6b34002750 comp rx=0 tx=0).stop 2026-03-10T07:20:45.228 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.219+0000 7f6b4b3b1640 1 -- 192.168.123.103:0/14972112 shutdown_connections 2026-03-10T07:20:45.228 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.219+0000 7f6b4b3b1640 1 --2- 192.168.123.103:0/14972112 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f6b1803de70 0x7f6b18040330 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:20:45.228 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.219+0000 7f6b4b3b1640 1 --2- 192.168.123.103:0/14972112 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6b44106940 0x7f6b441a39a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:20:45.228 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.219+0000 7f6b4b3b1640 1 --2- 192.168.123.103:0/14972112 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6b44105f80 
0x7f6b4419c920 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:20:45.228 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.219+0000 7f6b4b3b1640 1 --2- 192.168.123.103:0/14972112 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b44104d80 0x7f6b4419c3e0 unknown :-1 s=CLOSED pgs=99 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:20:45.228 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.219+0000 7f6b4b3b1640 1 -- 192.168.123.103:0/14972112 >> 192.168.123.103:0/14972112 conn(0x7f6b44100510 msgr2=0x7f6b44101e10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:20:45.228 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.219+0000 7f6b4b3b1640 1 -- 192.168.123.103:0/14972112 shutdown_connections 2026-03-10T07:20:45.228 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:20:45.219+0000 7f6b4b3b1640 1 -- 192.168.123.103:0/14972112 wait complete. 2026-03-10T07:20:45.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:45 vm03 bash[23382]: cluster 2026-03-10T07:20:43.871142+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:45.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:45 vm03 bash[23382]: cluster 2026-03-10T07:20:43.871142+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:45.324 DEBUG:teuthology.orchestra.run.vm03:mgr.x> sudo journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mgr.x.service 2026-03-10T07:20:45.367 INFO:tasks.cephadm:Deploying OSDs... 2026-03-10T07:20:45.367 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T07:20:45.367 DEBUG:teuthology.orchestra.run.vm00:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T07:20:45.370 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T07:20:45.370 DEBUG:teuthology.orchestra.run.vm00:> ls /dev/[sv]d? 
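Editor's note: the mgr_command in the session above carries the orchestrator request as JSON: service_type "mgr" with placement "2;vm00=y;vm03=x" (count 2, mgr.y pinned to vm00, mgr.x to vm03), acknowledged as "Scheduled mgr update...". A minimal sketch of issuing the same request by hand, assuming an admin keyring under /etc/ceph on the node; the placement string and daemon names are copied from this run:

    # Re-issue the mgr placement seen in the mgr_command above
    # (illustrative reproduction, not teuthology's own code).
    import subprocess

    placement = "2;vm00=y;vm03=x"  # count 2; pin mgr.y to vm00, mgr.x to vm03
    subprocess.run(["sudo", "ceph", "orch", "apply", "mgr", placement], check=True)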
2026-03-10T07:20:45.391 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:45 vm00 bash[20701]: cluster 2026-03-10T07:20:43.871142+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:45.391 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:45 vm00 bash[28005]: cluster 2026-03-10T07:20:43.871142+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:45.394 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vda
2026-03-10T07:20:45.394 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdb
2026-03-10T07:20:45.394 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdc
2026-03-10T07:20:45.394 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdd
2026-03-10T07:20:45.394 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vde
2026-03-10T07:20:45.394 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T07:20:45.394 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T07:20:45.394 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdb
2026-03-10T07:20:45.437 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdb
2026-03-10T07:20:45.437 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T07:20:45.437 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10
2026-03-10T07:20:45.437 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:20:45.437 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 07:13:15.413437024 +0000
2026-03-10T07:20:45.437 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 07:13:14.061437024 +0000
2026-03-10T07:20:45.437 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 07:13:14.061437024 +0000
2026-03-10T07:20:45.437 INFO:teuthology.orchestra.run.vm00.stdout: Birth: -
2026-03-10T07:20:45.437 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T07:20:45.485 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-03-10T07:20:45.485 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-03-10T07:20:45.485 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000157465 s, 3.3 MB/s
2026-03-10T07:20:45.486 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T07:20:45.500 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:45 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
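Editor's note: the probe sequence running here and below (ls /dev/[sv]d?, drop the root disk, stat each candidate, read one sector with dd, confirm it is not mounted) is how the run vets scratch devices before handing them to cephadm. A rough Python equivalent of the observed shell steps (a sketch of what the log shows, not teuthology's own implementation):

    import glob
    import subprocess

    def usable_scratch_devs(root_dev="/dev/vda"):
        # Enumerate candidate disks the same way the log does, minus the root device.
        devs = [d for d in sorted(glob.glob("/dev/[sv]d?")) if d != root_dev]
        usable = []
        for dev in devs:
            subprocess.run(["stat", dev], check=True)                  # device node exists
            subprocess.run(["sudo", "dd", "if=" + dev, "of=/dev/null",
                            "count=1"], check=True)                    # first sector readable
            mounts = subprocess.run(["mount"], capture_output=True,
                                    text=True, check=True).stdout
            in_use = any(dev in line for line in mounts.splitlines()
                         if "devtmpfs" not in line)                    # mirrors the grep pipeline
            if not in_use:
                usable.append(dev)
        return usable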
2026-03-10T07:20:45.530 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdc
2026-03-10T07:20:45.573 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdc
2026-03-10T07:20:45.573 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T07:20:45.573 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20
2026-03-10T07:20:45.573 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:20:45.573 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 07:13:15.421437024 +0000
2026-03-10T07:20:45.573 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 07:13:14.081437024 +0000
2026-03-10T07:20:45.573 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 07:13:14.081437024 +0000
2026-03-10T07:20:45.573 INFO:teuthology.orchestra.run.vm00.stdout: Birth: -
2026-03-10T07:20:45.573 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T07:20:45.621 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-03-10T07:20:45.621 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-03-10T07:20:45.621 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.00020847 s, 2.5 MB/s
2026-03-10T07:20:45.622 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T07:20:45.666 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdd
2026-03-10T07:20:45.713 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdd
2026-03-10T07:20:45.713 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T07:20:45.713 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-10T07:20:45.713 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:20:45.713 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 07:13:15.413437024 +0000
2026-03-10T07:20:45.713 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 07:13:14.089437024 +0000
2026-03-10T07:20:45.713 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 07:13:14.089437024 +0000
2026-03-10T07:20:45.713 INFO:teuthology.orchestra.run.vm00.stdout: Birth: -
2026-03-10T07:20:45.713 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T07:20:45.761 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-03-10T07:20:45.761 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-03-10T07:20:45.761 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000236071 s, 2.2 MB/s
2026-03-10T07:20:45.762 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T07:20:45.810 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vde
2026-03-10T07:20:45.857 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vde
2026-03-10T07:20:45.857 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T07:20:45.857 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40
2026-03-10T07:20:45.857 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:20:45.857 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 07:13:15.421437024 +0000
2026-03-10T07:20:45.857 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 07:13:14.085437024 +0000
2026-03-10T07:20:45.857 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 07:13:14.085437024 +0000
2026-03-10T07:20:45.857 INFO:teuthology.orchestra.run.vm00.stdout: Birth: -
2026-03-10T07:20:45.857 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T07:20:45.905 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-03-10T07:20:45.905 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-03-10T07:20:45.905 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.00016611 s, 3.1 MB/s
2026-03-10T07:20:45.906 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T07:20:45.951 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T07:20:45.951 DEBUG:teuthology.orchestra.run.vm03:> dd if=/scratch_devs of=/dev/stdout
2026-03-10T07:20:45.954 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T07:20:45.954 DEBUG:teuthology.orchestra.run.vm03:> ls /dev/[sv]d?
2026-03-10T07:20:45.997 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vda
2026-03-10T07:20:45.997 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdb
2026-03-10T07:20:45.997 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdc
2026-03-10T07:20:45.997 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdd
2026-03-10T07:20:45.997 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vde
2026-03-10T07:20:45.997 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T07:20:45.997 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T07:20:45.997 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdb
2026-03-10T07:20:46.041 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdb
2026-03-10T07:20:46.042 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T07:20:46.042 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10
2026-03-10T07:20:46.042 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:20:46.042 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-10 07:13:46.165229461 +0000
2026-03-10T07:20:46.042 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-10 07:13:45.093229461 +0000
2026-03-10T07:20:46.042 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-10 07:13:45.093229461 +0000
2026-03-10T07:20:46.042 INFO:teuthology.orchestra.run.vm03.stdout: Birth: -
2026-03-10T07:20:46.042 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T07:20:46.043 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:45 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:20:46.044 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:45 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:20:46.051 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in
2026-03-10T07:20:46.052 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-03-10T07:20:46.052 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000186408 s, 2.7 MB/s
2026-03-10T07:20:46.052 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T07:20:46.101 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdc
2026-03-10T07:20:46.145 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdc
2026-03-10T07:20:46.145 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T07:20:46.145 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20
2026-03-10T07:20:46.145 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:20:46.145 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-10 07:13:46.173229461 +0000
2026-03-10T07:20:46.145 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-10 07:13:45.077229461 +0000
2026-03-10T07:20:46.145 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-10 07:13:45.077229461 +0000
2026-03-10T07:20:46.145 INFO:teuthology.orchestra.run.vm03.stdout: Birth: -
2026-03-10T07:20:46.145 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T07:20:46.206 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in
2026-03-10T07:20:46.207 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-03-10T07:20:46.207 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.00327163 s, 156 kB/s
2026-03-10T07:20:46.207 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T07:20:46.265 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdd
2026-03-10T07:20:46.304 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:46 vm03 bash[23382]: audit 2026-03-10T07:20:45.213947+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.14196 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=y;vm03=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:20:46.304 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:46 vm03 bash[23382]: cephadm 2026-03-10T07:20:45.215141+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm00=y;vm03=x;count:2
2026-03-10T07:20:46.304 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:46 vm03 bash[23382]: audit 2026-03-10T07:20:45.219973+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:46.304 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:46 vm03 bash[23382]: audit 2026-03-10T07:20:45.220729+0000 mon.a (mon.0) 236 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:20:46.304 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:46 vm03 bash[23382]: audit 2026-03-10T07:20:45.221871+0000 mon.a (mon.0) 237 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:46.304 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:46 vm03 bash[23382]: audit 2026-03-10T07:20:45.222288+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:20:46.304 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:46 vm03 bash[23382]: audit 2026-03-10T07:20:45.227221+0000 mon.a (mon.0) 239 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:46.304 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:46 vm03 bash[23382]: audit 2026-03-10T07:20:45.228421+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T07:20:46.304 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:46 vm03 bash[23382]: audit 2026-03-10T07:20:45.230304+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-10T07:20:46.304 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:46 vm03 bash[23382]: audit 2026-03-10T07:20:45.233349+0000 mon.a (mon.0) 242 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T07:20:46.304 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:46 vm03 bash[23382]: audit 2026-03-10T07:20:45.233910+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:46.304 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:46 vm03 bash[23382]: cephadm 2026-03-10T07:20:45.234593+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm03
2026-03-10T07:20:46.304 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:46 vm03 bash[23382]: audit 2026-03-10T07:20:46.086594+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:46.304 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:46 vm03 bash[23382]: audit 2026-03-10T07:20:46.093019+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:46.304 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:46 vm03 bash[23382]: audit 2026-03-10T07:20:46.098766+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:46.304 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:46 vm03 bash[23382]: audit 2026-03-10T07:20:46.104624+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:46.304 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:46 vm03 bash[23382]: audit 2026-03-10T07:20:46.117030+0000 mon.a (mon.0) 248 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:20:46.305 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:46 vm03 systemd[1]: Started Ceph mgr.x for 534d9c8a-1c51-11f1-ac87-d1fb9a119953.
2026-03-10T07:20:46.305 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:46 vm03 bash[24092]: debug 2026-03-10T07:20:46.251+0000 7f7374505140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T07:20:46.305 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:46 vm03 bash[24092]: debug 2026-03-10T07:20:46.295+0000 7f7374505140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T07:20:46.315 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdd
2026-03-10T07:20:46.316 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T07:20:46.316 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-10T07:20:46.316 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:20:46.316 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-10 07:13:46.165229461 +0000
2026-03-10T07:20:46.316 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-10 07:13:45.081229461 +0000
2026-03-10T07:20:46.316 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-10 07:13:45.081229461 +0000
2026-03-10T07:20:46.316 INFO:teuthology.orchestra.run.vm03.stdout: Birth: -
2026-03-10T07:20:46.316 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T07:20:46.378 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in
2026-03-10T07:20:46.379 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-03-10T07:20:46.379 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.00731137 s, 70.0 kB/s
2026-03-10T07:20:46.383 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T07:20:46.436 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vde
2026-03-10T07:20:46.481 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vde
2026-03-10T07:20:46.481 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T07:20:46.482 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40
2026-03-10T07:20:46.482 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:20:46.482 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-10 07:13:46.173229461 +0000
2026-03-10T07:20:46.482 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-10 07:13:45.093229461 +0000
2026-03-10T07:20:46.482 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-10 07:13:45.093229461 +0000
2026-03-10T07:20:46.482 INFO:teuthology.orchestra.run.vm03.stdout: Birth: -
2026-03-10T07:20:46.482 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T07:20:46.530 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in
2026-03-10T07:20:46.530 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-03-10T07:20:46.530 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000196136 s, 2.6 MB/s
2026-03-10T07:20:46.531 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T07:20:46.581 INFO:tasks.cephadm:Deploying osd.0 on vm00 with /dev/vde...
2026-03-10T07:20:46.581 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- lvm zap /dev/vde
2026-03-10T07:20:46.720 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:46 vm03 bash[24092]: debug 2026-03-10T07:20:46.419+0000 7f7374505140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T07:20:46.721 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:46 vm03 bash[24092]: debug 2026-03-10T07:20:46.711+0000 7f7374505140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T07:20:47.504 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:47 vm03 bash[23382]: cluster 2026-03-10T07:20:45.871334+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:47.504 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:47 vm03 bash[24092]: debug 2026-03-10T07:20:47.159+0000 7f7374505140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T07:20:47.504 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:47 vm03 bash[24092]: debug 2026-03-10T07:20:47.239+0000 7f7374505140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T07:20:47.504 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:47 vm03 bash[24092]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T07:20:47.504 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:47 vm03 bash[24092]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
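Editor's note: the recurring "Module <name> has missing NOTIFY_TYPES member" lines are the mgr observing that these modules do not declare which cluster-map notifications they consume. A minimal module sketch with the member declared, based on the upstream MgrModule interface (illustrative only; mgr_module is importable only inside a running ceph-mgr, and the class name here is hypothetical):

    # Hypothetical mgr module declaring NOTIFY_TYPES.
    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Declare the notification types this module wants to receive.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            # Called only for the notification types declared above.
            self.log.debug("notify %s %s", notify_type, notify_id)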
2026-03-10T07:20:47.504 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:47 vm03 bash[24092]: from numpy import show_config as show_numpy_config
2026-03-10T07:20:47.504 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:47 vm03 bash[24092]: debug 2026-03-10T07:20:47.359+0000 7f7374505140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T07:20:47.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:47 vm00 bash[20701]: cluster 2026-03-10T07:20:45.871334+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:47.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:47 vm00 bash[28005]: cluster 2026-03-10T07:20:45.871334+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:47.773 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:47 vm03 bash[24092]: debug 2026-03-10T07:20:47.495+0000 7f7374505140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T07:20:47.773 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:47 vm03 bash[24092]: debug 2026-03-10T07:20:47.531+0000 7f7374505140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T07:20:47.774 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:47 vm03 bash[24092]: debug 2026-03-10T07:20:47.567+0000 7f7374505140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T07:20:47.774 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:47 vm03 bash[24092]: debug 2026-03-10T07:20:47.607+0000 7f7374505140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T07:20:47.774 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:47 vm03 bash[24092]: debug 2026-03-10T07:20:47.655+0000 7f7374505140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T07:20:48.354 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:48 vm03 bash[24092]: debug 2026-03-10T07:20:48.087+0000 7f7374505140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T07:20:48.354 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:48 vm03 bash[24092]: debug 2026-03-10T07:20:48.123+0000 7f7374505140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T07:20:48.354 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:48 vm03 bash[24092]: debug 2026-03-10T07:20:48.159+0000 7f7374505140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T07:20:48.354 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:48 vm03 bash[24092]: debug 2026-03-10T07:20:48.303+0000 7f7374505140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T07:20:48.656 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:48 vm03 bash[24092]: debug 2026-03-10T07:20:48.343+0000 7f7374505140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T07:20:48.656 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:48 vm03 bash[24092]: debug 2026-03-10T07:20:48.383+0000 7f7374505140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T07:20:48.656 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:48 vm03 bash[24092]: debug 2026-03-10T07:20:48.491+0000 7f7374505140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T07:20:48.953 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:48 vm03 bash[24092]: debug 2026-03-10T07:20:48.647+0000 7f7374505140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T07:20:48.953 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:48 vm03 bash[24092]: debug 2026-03-10T07:20:48.847+0000 7f7374505140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T07:20:48.953 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:48 vm03 bash[24092]: debug 2026-03-10T07:20:48.887+0000 7f7374505140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T07:20:48.953 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:48 vm03 bash[24092]: debug 2026-03-10T07:20:48.943+0000 7f7374505140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T07:20:49.254 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:49 vm03 bash[24092]: debug 2026-03-10T07:20:49.127+0000 7f7374505140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T07:20:49.523 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:49 vm03 bash[23382]: cluster 2026-03-10T07:20:47.871511+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:49.524 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:20:49 vm03 bash[24092]: debug 2026-03-10T07:20:49.407+0000 7f7374505140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T07:20:49.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:49 vm00 bash[20701]: cluster 2026-03-10T07:20:47.871511+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:49.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:49 vm00 bash[28005]: cluster 2026-03-10T07:20:47.871511+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:50.523 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:50 vm03 bash[23382]: audit 2026-03-10T07:20:49.413624+0000 mon.c (mon.2) 3 : audit [DBG] from='mgr.? 192.168.123.103:0/412333876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
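The repeated "Module <name> has missing NOTIFY_TYPES member" lines above come from the mgr module loader on the newly started mgr.x: a module may declare a NOTIFY_TYPES class attribute listing which notify() events it consumes, so the mgr can skip dispatching everything else; modules that omit it still load, but the loader logs this per-module warning. A minimal sketch of the declaration being checked for, assuming the MgrModule base class and NotifyType enum from Ceph's mgr_module.py (importable only inside ceph-mgr); the module body itself is hypothetical:

```python
# Minimal sketch of a mgr module declaring NOTIFY_TYPES. MgrModule and
# NotifyType are assumed from Ceph's mgr_module.py; the body is hypothetical.
from mgr_module import MgrModule, NotifyType


class Module(MgrModule):
    # Listing the notify() events this module consumes lets the mgr skip
    # dispatching the rest; omitting the attribute produces the
    # "has missing NOTIFY_TYPES member" warning logged above.
    NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

    def notify(self, notify_type: NotifyType, notify_id: str) -> None:
        if notify_type == NotifyType.osd_map:
            self.log.debug("osdmap changed: %s", notify_id)
```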
2026-03-10T07:20:50.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:50 vm03 bash[23382]: cluster 2026-03-10T07:20:49.413905+0000 mon.a (mon.0) 249 : cluster [DBG] Standby manager daemon x started
2026-03-10T07:20:50.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:50 vm03 bash[23382]: audit 2026-03-10T07:20:49.414367+0000 mon.c (mon.2) 4 : audit [DBG] from='mgr.? 192.168.123.103:0/412333876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T07:20:50.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:50 vm03 bash[23382]: audit 2026-03-10T07:20:49.415091+0000 mon.c (mon.2) 5 : audit [DBG] from='mgr.? 192.168.123.103:0/412333876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T07:20:50.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:50 vm03 bash[23382]: audit 2026-03-10T07:20:49.415481+0000 mon.c (mon.2) 6 : audit [DBG] from='mgr.? 192.168.123.103:0/412333876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T07:20:50.524 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:50 vm03 bash[23382]: audit 2026-03-10T07:20:49.968605+0000 mon.a (mon.0) 250 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:50.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:50 vm00 bash[20701]: audit 2026-03-10T07:20:49.413624+0000 mon.c (mon.2) 3 : audit [DBG] from='mgr.? 192.168.123.103:0/412333876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T07:20:50.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:50 vm00 bash[20701]: cluster 2026-03-10T07:20:49.413905+0000 mon.a (mon.0) 249 : cluster [DBG] Standby manager daemon x started
2026-03-10T07:20:50.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:50 vm00 bash[20701]: audit 2026-03-10T07:20:49.414367+0000 mon.c (mon.2) 4 : audit [DBG] from='mgr.? 192.168.123.103:0/412333876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T07:20:50.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:50 vm00 bash[20701]: audit 2026-03-10T07:20:49.415091+0000 mon.c (mon.2) 5 : audit [DBG] from='mgr.? 192.168.123.103:0/412333876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T07:20:50.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:50 vm00 bash[20701]: audit 2026-03-10T07:20:49.415481+0000 mon.c (mon.2) 6 : audit [DBG] from='mgr.? 192.168.123.103:0/412333876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T07:20:50.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:50 vm00 bash[20701]: audit 2026-03-10T07:20:49.968605+0000 mon.a (mon.0) 250 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:50.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:50 vm00 bash[28005]: audit 2026-03-10T07:20:49.413624+0000 mon.c (mon.2) 3 : audit [DBG] from='mgr.? 192.168.123.103:0/412333876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T07:20:50.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:50 vm00 bash[28005]: cluster 2026-03-10T07:20:49.413905+0000 mon.a (mon.0) 249 : cluster [DBG] Standby manager daemon x started
2026-03-10T07:20:50.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:50 vm00 bash[28005]: audit 2026-03-10T07:20:49.414367+0000 mon.c (mon.2) 4 : audit [DBG] from='mgr.? 192.168.123.103:0/412333876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T07:20:50.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:50 vm00 bash[28005]: audit 2026-03-10T07:20:49.415091+0000 mon.c (mon.2) 5 : audit [DBG] from='mgr.? 192.168.123.103:0/412333876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T07:20:50.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:50 vm00 bash[28005]: audit 2026-03-10T07:20:49.415481+0000 mon.c (mon.2) 6 : audit [DBG] from='mgr.? 192.168.123.103:0/412333876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T07:20:50.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:50 vm00 bash[28005]: audit 2026-03-10T07:20:49.968605+0000 mon.a (mon.0) 250 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:51.195 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config
2026-03-10T07:20:51.478 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:51 vm00 bash[20701]: cluster 2026-03-10T07:20:49.871703+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:51.478 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:51 vm00 bash[20701]: cluster 2026-03-10T07:20:50.276447+0000 mon.a (mon.0) 251 : cluster [DBG] mgrmap e13: y(active, since 58s), standbys: x
2026-03-10T07:20:51.478 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:51 vm00 bash[20701]: audit 2026-03-10T07:20:50.276553+0000 mon.a (mon.0) 252 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T07:20:51.478 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:51 vm00 bash[20701]: audit 2026-03-10T07:20:51.108797+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:51.478 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:51 vm00 bash[20701]: audit 2026-03-10T07:20:51.113909+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:51.478 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:51 vm00 bash[20701]: audit 2026-03-10T07:20:51.114698+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
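The four config-key gets in the audit trail above are the dashboard on the freshly started standby mgr.x probing the mon config-key store for TLS material, trying the daemon-specific keys (mgr/dashboard/x/crt, mgr/dashboard/x/key) before the cluster-wide ones (mgr/dashboard/crt, mgr/dashboard/key). A sketch of the same lookup order via the librados Python bindings, assuming a reachable cluster and the admin keyring under /etc/ceph (error handling trimmed):

```python
# Sketch: replay the dashboard's config-key lookup order seen in the audit
# lines above, using the librados Python bindings. A nonzero return code
# (e.g. -ENOENT) means the key is unset.
import json
import rados

def config_key_get(cluster, key):
    cmd = json.dumps({"prefix": "config-key get", "key": key})
    ret, out, errs = cluster.mon_command(cmd, b"")
    return out.decode() if ret == 0 else None

with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
    # Daemon-specific key first ("x" is the mgr name from the log), then global.
    for key in ("mgr/dashboard/x/crt", "mgr/dashboard/crt"):
        if config_key_get(cluster, key) is not None:
            print(f"dashboard certificate found under {key}")
            break
    else:
        print("no dashboard certificate configured")
```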
2026-03-10T07:20:51.478 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:51 vm00 bash[20701]: audit 2026-03-10T07:20:51.115316+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:20:51.478 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:51 vm00 bash[20701]: audit 2026-03-10T07:20:51.120019+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:51.478 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:51 vm00 bash[20701]: audit 2026-03-10T07:20:51.131679+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T07:20:51.478 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:51 vm00 bash[20701]: audit 2026-03-10T07:20:51.132291+0000 mon.a (mon.0) 259 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T07:20:51.478 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:51 vm00 bash[20701]: audit 2026-03-10T07:20:51.132808+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:51.478 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:51 vm00 bash[28005]: cluster 2026-03-10T07:20:49.871703+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:51.478 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:51 vm00 bash[28005]: cluster 2026-03-10T07:20:50.276447+0000 mon.a (mon.0) 251 : cluster [DBG] mgrmap e13: y(active, since 58s), standbys: x
2026-03-10T07:20:51.478 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:51 vm00 bash[28005]: audit 2026-03-10T07:20:50.276553+0000 mon.a (mon.0) 252 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T07:20:51.478 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:51 vm00 bash[28005]: audit 2026-03-10T07:20:51.108797+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:51.478 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:51 vm00 bash[28005]: audit 2026-03-10T07:20:51.113909+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:51.478 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:51 vm00 bash[28005]: audit 2026-03-10T07:20:51.114698+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:51.478 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:51 vm00 bash[28005]: audit 2026-03-10T07:20:51.115316+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:20:51.478 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:51 vm00 bash[28005]: audit 2026-03-10T07:20:51.120019+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:51.478 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:51 vm00 bash[28005]: audit 2026-03-10T07:20:51.131679+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T07:20:51.478 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:51 vm00 bash[28005]: audit 2026-03-10T07:20:51.132291+0000 mon.a (mon.0) 259 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T07:20:51.479 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:51 vm00 bash[28005]: audit 2026-03-10T07:20:51.132808+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:51.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:51 vm03 bash[23382]: cluster 2026-03-10T07:20:49.871703+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:51.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:51 vm03 bash[23382]: cluster 2026-03-10T07:20:50.276447+0000 mon.a (mon.0) 251 : cluster [DBG] mgrmap e13: y(active, since 58s), standbys: x
2026-03-10T07:20:51.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:51 vm03 bash[23382]: audit 2026-03-10T07:20:50.276553+0000 mon.a (mon.0) 252 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T07:20:51.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:51 vm03 bash[23382]: audit 2026-03-10T07:20:51.108797+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:51.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:51 vm03 bash[23382]: audit 2026-03-10T07:20:51.113909+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:51.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:51 vm03 bash[23382]: audit 2026-03-10T07:20:51.114698+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:51.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:51 vm03 bash[23382]: audit 2026-03-10T07:20:51.115316+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:20:51.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:51 vm03 bash[23382]: audit 2026-03-10T07:20:51.120019+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:51.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:51 vm03 bash[23382]: audit 2026-03-10T07:20:51.131679+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T07:20:51.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:51 vm03 bash[23382]: audit 2026-03-10T07:20:51.132291+0000 mon.a (mon.0) 259 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T07:20:51.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:51 vm03 bash[23382]: audit 2026-03-10T07:20:51.132808+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:52.194 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T07:20:52.209 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph orch daemon add osd vm00:/dev/vde
2026-03-10T07:20:52.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:52 vm00 bash[20701]: cephadm 2026-03-10T07:20:51.131386+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)...
2026-03-10T07:20:52.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:52 vm00 bash[20701]: cephadm 2026-03-10T07:20:51.133304+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm00
2026-03-10T07:20:52.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:52 vm00 bash[28005]: cephadm 2026-03-10T07:20:51.131386+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)...
2026-03-10T07:20:52.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:52 vm00 bash[28005]: cephadm 2026-03-10T07:20:51.133304+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm00
2026-03-10T07:20:52.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:52 vm03 bash[23382]: cephadm 2026-03-10T07:20:51.131386+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)...
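The DEBUG line above is the test driver creating the first OSD: every ceph CLI call is wrapped in `cephadm shell`, pinned to the CI image and the cluster fsid, and the cephadm "Reconfiguring mgr.y" messages that follow are the orchestrator module reacting. A sketch of the same invocation pattern, with the image, fsid and device spec copied from the DEBUG line; the wrapper function itself is a hypothetical convenience, not teuthology's own API:

```python
# Sketch: re-issue the logged command through `cephadm shell`, as the test
# driver does. IMAGE, FSID and the device spec come from the DEBUG line above.
import subprocess

IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
FSID = "534d9c8a-1c51-11f1-ac87-d1fb9a119953"

def run_cephadm_shell(*ceph_args: str) -> str:
    """Run a ceph CLI command inside a cephadm shell container."""
    cmd = [
        "sudo", "cephadm", "--image", IMAGE, "shell",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "--fsid", FSID, "--", "ceph", *ceph_args,
    ]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# The OSD-add step recorded in the log: one raw device on one host.
print(run_cephadm_shell("orch", "daemon", "add", "osd", "vm00:/dev/vde"))
```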
2026-03-10T07:20:52.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:52 vm03 bash[23382]: cephadm 2026-03-10T07:20:51.133304+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm00
2026-03-10T07:20:53.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:53 vm03 bash[23382]: cluster 2026-03-10T07:20:51.871881+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:53.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:53 vm03 bash[23382]: audit 2026-03-10T07:20:52.431179+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:53.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:53 vm03 bash[23382]: audit 2026-03-10T07:20:52.435826+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:53.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:53 vm03 bash[23382]: audit 2026-03-10T07:20:52.436865+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:20:53.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:53 vm03 bash[23382]: audit 2026-03-10T07:20:52.437899+0000 mon.a (mon.0) 264 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:53.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:53 vm03 bash[23382]: audit 2026-03-10T07:20:52.438296+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:20:53.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:53 vm03 bash[23382]: audit 2026-03-10T07:20:52.442425+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:53.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:53 vm00 bash[20701]: cluster 2026-03-10T07:20:51.871881+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:53.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:53 vm00 bash[20701]: audit 2026-03-10T07:20:52.431179+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:53.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:53 vm00 bash[20701]: audit 2026-03-10T07:20:52.435826+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:53.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:53 vm00 bash[20701]: audit 2026-03-10T07:20:52.436865+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:20:53.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:53 vm00 bash[20701]: audit 2026-03-10T07:20:52.437899+0000 mon.a (mon.0) 264 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:53.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:53 vm00 bash[20701]: audit 2026-03-10T07:20:52.438296+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:20:53.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:53 vm00 bash[20701]: audit 2026-03-10T07:20:52.442425+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:53.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:53 vm00 bash[28005]: cluster 2026-03-10T07:20:51.871881+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:53.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:53 vm00 bash[28005]: audit 2026-03-10T07:20:52.431179+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:53.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:53 vm00 bash[28005]: audit 2026-03-10T07:20:52.435826+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:53.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:53 vm00 bash[28005]: audit 2026-03-10T07:20:52.436865+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:20:53.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:53 vm00 bash[28005]: audit 2026-03-10T07:20:52.437899+0000 mon.a (mon.0) 264 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:53.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:53 vm00 bash[28005]: audit 2026-03-10T07:20:52.438296+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:20:53.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:53 vm00 bash[28005]: audit 2026-03-10T07:20:52.442425+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:20:55.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:55 vm03 bash[23382]: cluster 2026-03-10T07:20:53.872106+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:55.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:55 vm00 bash[28005]: cluster 2026-03-10T07:20:53.872106+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:55.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:55 vm00 bash[20701]: cluster 2026-03-10T07:20:53.872106+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:56.899 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config
2026-03-10T07:20:57.059 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb60b39640 1 -- 192.168.123.100:0/3220025205 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feb5c0770a0 msgr2=0x7feb5c075500 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:20:57.059 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb60b39640 1 --2- 192.168.123.100:0/3220025205 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feb5c0770a0 0x7feb5c075500 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7feb50009a30 tx=0x7feb5002f260 comp rx=0 tx=0).stop
2026-03-10T07:20:57.060 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb60b39640 1 -- 192.168.123.100:0/3220025205 shutdown_connections
2026-03-10T07:20:57.060 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb60b39640 1 --2- 192.168.123.100:0/3220025205 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7feb5c1064c0 0x7feb5c1113d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:57.060 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb60b39640 1 --2- 192.168.123.100:0/3220025205 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7feb5c075a40 0x7feb5c075ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:57.060 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb60b39640 1 --2- 192.168.123.100:0/3220025205 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feb5c0770a0 0x7feb5c075500 unknown :-1 s=CLOSED pgs=100 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
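Each monitor runs its own journalctl tailer, so a single cluster-log record (one pgmap update from mgr.y, say) appears once per journalctl@ceph.mon.* channel above; when correlating events it is more reliable to key on the embedded entity and sequence number than on the capture timestamps. A sketch of that reduction, with the regex fitted to the line format shown here (not a stable format guarantee):

```python
# Sketch: collapse the per-monitor journalctl fan-out by keying each
# cluster-log record on its (entity, seq) pair, so the three copies of
# "mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25 ..." reduce to one.
import re

CLUSTER_RE = re.compile(
    r"journalctl@ceph\.[\w.]+\.stdout:.* (?P<entity>\w+\.\w+) "
    r"\((?P<name>[\w.]+)\) (?P<seq>\d+) : cluster \[DBG\] (?P<msg>.+)$"
)

def dedupe_cluster_log(lines):
    seen, records = set(), []
    for line in lines:
        m = CLUSTER_RE.search(line)
        if not m:
            continue
        key = (m["entity"], m["seq"])
        if key not in seen:  # first tailer to echo the record wins
            seen.add(key)
            records.append((key, m["msg"]))
    return records
```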
2026-03-10T07:20:57.060 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb60b39640 1 -- 192.168.123.100:0/3220025205 >> 192.168.123.100:0/3220025205 conn(0x7feb5c0fe290 msgr2=0x7feb5c1006b0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:20:57.060 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb60b39640 1 -- 192.168.123.100:0/3220025205 shutdown_connections
2026-03-10T07:20:57.060 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb60b39640 1 -- 192.168.123.100:0/3220025205 wait complete.
2026-03-10T07:20:57.060 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb60b39640 1 Processor -- start
2026-03-10T07:20:57.060 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb60b39640 1 -- start start
2026-03-10T07:20:57.060 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb60b39640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7feb5c075a40 0x7feb5c1a08b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:20:57.061 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb60b39640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7feb5c0770a0 0x7feb5c1a0df0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:20:57.061 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb5a575640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7feb5c075a40 0x7feb5c1a08b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:20:57.061 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb5a575640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7feb5c075a40 0x7feb5c1a08b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:55644/0 (socket says 192.168.123.100:55644)
2026-03-10T07:20:57.061 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb60b39640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feb5c1064c0 0x7feb5c1a7e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:20:57.061 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb60b39640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7feb5c114160 con 0x7feb5c1064c0
2026-03-10T07:20:57.061 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb60b39640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7feb5c113fe0 con 0x7feb5c0770a0
2026-03-10T07:20:57.061 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb60b39640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7feb5c1142e0 con 0x7feb5c075a40
2026-03-10T07:20:57.061 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb5a575640 1 -- 192.168.123.100:0/4271221021 learned_addr learned my addr 192.168.123.100:0/4271221021 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:20:57.061 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb5a575640 1 -- 192.168.123.100:0/4271221021 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7feb5c0770a0 msgr2=0x7feb5c1a0df0 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T07:20:57.061 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb59d74640 1 --2- 192.168.123.100:0/4271221021 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7feb5c0770a0 0x7feb5c1a0df0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:20:57.061 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb5a575640 1 --2- 192.168.123.100:0/4271221021 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7feb5c0770a0 0x7feb5c1a0df0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:57.061 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb5a575640 1 -- 192.168.123.100:0/4271221021 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feb5c1064c0 msgr2=0x7feb5c1a7e70 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T07:20:57.061 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb5ad76640 1 --2- 192.168.123.100:0/4271221021 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feb5c1064c0 0x7feb5c1a7e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:20:57.061 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb5a575640 1 --2- 192.168.123.100:0/4271221021 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feb5c1064c0 0x7feb5c1a7e70 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:20:57.061 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb5a575640 1 -- 192.168.123.100:0/4271221021 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7feb5c1a84e0 con 0x7feb5c075a40
2026-03-10T07:20:57.061 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb5ad76640 1 --2- 192.168.123.100:0/4271221021 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feb5c1064c0 0x7feb5c1a7e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T07:20:57.061 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb5a575640 1 --2- 192.168.123.100:0/4271221021 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7feb5c075a40 0x7feb5c1a08b0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7feb50009a00 tx=0x7feb5002fcb0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:20:57.062 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb437fe640 1 -- 192.168.123.100:0/4271221021 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7feb5002fe50 con 0x7feb5c075a40
2026-03-10T07:20:57.062 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb60b39640 1 -- 192.168.123.100:0/4271221021 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7feb5c1a8770 con 0x7feb5c075a40
2026-03-10T07:20:57.063 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.049+0000 7feb60b39640 1 -- 192.168.123.100:0/4271221021 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7feb5c1a8c50 con 0x7feb5c075a40
2026-03-10T07:20:57.063 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.053+0000 7feb437fe640 1 -- 192.168.123.100:0/4271221021 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7feb50002e00 con 0x7feb5c075a40
2026-03-10T07:20:57.063 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.053+0000 7feb437fe640 1 -- 192.168.123.100:0/4271221021 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7feb50036810 con 0x7feb5c075a40
2026-03-10T07:20:57.064 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.053+0000 7feb437fe640 1 -- 192.168.123.100:0/4271221021 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 13) ==== 99979+0+0 (secure 0 0 0) 0x7feb50038680 con 0x7feb5c075a40
2026-03-10T07:20:57.064 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.053+0000 7feb437fe640 1 --2- 192.168.123.100:0/4271221021 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7feb2c077820 0x7feb2c079ce0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:20:57.064 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.053+0000 7feb59d74640 1 --2- 192.168.123.100:0/4271221021 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7feb2c077820 0x7feb2c079ce0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:20:57.064 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.053+0000 7feb437fe640 1 -- 192.168.123.100:0/4271221021 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(4..4 src has 1..4) ==== 1155+0+0 (secure 0 0 0) 0x7feb500bcf00 con 0x7feb5c075a40
2026-03-10T07:20:57.065 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.053+0000 7feb59d74640 1 --2- 192.168.123.100:0/4271221021 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7feb2c077820 0x7feb2c079ce0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7feb44005fd0 tx=0x7feb44005d60 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:20:57.065 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.053+0000 7feb60b39640 1 -- 192.168.123.100:0/4271221021 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7feb20005180 con 0x7feb5c075a40
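The stderr burst above is the CLI client's messenger being torn down and rebuilt: mark_down/stop close the old mon sessions, each new connection walks the msgr2 handshake (s=NONE, BANNER_CONNECTING, HELLO_CONNECTING, AUTH_CONNECTING, READY), the redundant mon probes are dropped once one session reaches READY on mon.2, and the client then subscribes to the monmap, mgrmap and osdmap it needs. A sketch that extracts those per-peer state transitions from such lines; the regex targets the "--2-" debug format shown here, which is not a stable interface:

```python
# Sketch: trace msgr2 connection state transitions per peer from debug lines
# like the ones above (s=NONE, s=BANNER_CONNECTING, ..., s=READY).
import re

STATE_RE = re.compile(
    r"--2- \S* ?>> \[(?P<peer>[^\]]+)\] conn\([^)]*? s=(?P<state>[A-Z_]+)"
)

def connection_states(lines):
    history = {}
    for line in lines:
        m = STATE_RE.search(line)
        if m and (not history.get(m["peer"]) or history[m["peer"]][-1] != m["state"]):
            history.setdefault(m["peer"], []).append(m["state"])
    return history  # peer -> ["NONE", "BANNER_CONNECTING", "HELLO_CONNECTING", ...]
```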
192.168.123.100:0/4271221021 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7feb20005180 con 0x7feb5c075a40 2026-03-10T07:20:57.068 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.057+0000 7feb437fe640 1 -- 192.168.123.100:0/4271221021 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7feb50087530 con 0x7feb5c075a40 2026-03-10T07:20:57.167 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:20:57.157+0000 7feb60b39640 1 -- 192.168.123.100:0/4271221021 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}) -- 0x7feb20002bf0 con 0x7feb2c077820 2026-03-10T07:20:57.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:57 vm03 bash[23382]: cluster 2026-03-10T07:20:55.872310+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:57.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:57 vm03 bash[23382]: cluster 2026-03-10T07:20:55.872310+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:57.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:57 vm03 bash[23382]: audit 2026-03-10T07:20:57.164431+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:20:57.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:57 vm03 bash[23382]: audit 2026-03-10T07:20:57.164431+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:20:57.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:57 vm03 bash[23382]: audit 2026-03-10T07:20:57.165809+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:20:57.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:57 vm03 bash[23382]: audit 2026-03-10T07:20:57.165809+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:20:57.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:57 vm03 bash[23382]: audit 2026-03-10T07:20:57.166235+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:20:57.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:57 vm03 bash[23382]: audit 2026-03-10T07:20:57.166235+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:20:57.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:57 vm00 bash[28005]: cluster 2026-03-10T07:20:55.872310+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:20:57.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:57 vm00 bash[28005]: cluster 2026-03-10T07:20:55.872310+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B 
2026-03-10T07:20:57.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:57 vm00 bash[28005]: audit 2026-03-10T07:20:57.164431+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:20:57.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:57 vm00 bash[28005]: audit 2026-03-10T07:20:57.165809+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T07:20:57.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:57 vm00 bash[28005]: audit 2026-03-10T07:20:57.166235+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:57.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:57 vm00 bash[20701]: cluster 2026-03-10T07:20:55.872310+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:57.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:57 vm00 bash[20701]: audit 2026-03-10T07:20:57.164431+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:20:57.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:57 vm00 bash[20701]: audit 2026-03-10T07:20:57.165809+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T07:20:57.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:57 vm00 bash[20701]: audit 2026-03-10T07:20:57.166235+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:58.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:58 vm03 bash[23382]: audit 2026-03-10T07:20:57.162996+0000 mgr.y (mgr.14150) 65 : audit [DBG] from='client.24116 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:20:58.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:58 vm00 bash[28005]: audit 2026-03-10T07:20:57.162996+0000 mgr.y (mgr.14150) 65 : audit [DBG] from='client.24116 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:20:58.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:58 vm00 bash[20701]: audit 2026-03-10T07:20:57.162996+0000 mgr.y (mgr.14150) 65 : audit [DBG] from='client.24116 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:20:59.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:20:59 vm03 bash[23382]: cluster 2026-03-10T07:20:57.872505+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:59.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:20:59 vm00 bash[28005]: cluster 2026-03-10T07:20:57.872505+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:20:59.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:20:59 vm00 bash[20701]: cluster 2026-03-10T07:20:57.872505+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
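
Before acting on that request, cephadm's audit trail above shows its standard preparation queries: `osd tree` with state "destroyed" (looking for a destroyed OSD id to reuse), `auth get client.bootstrap-osd` (the key ceph-volume will authenticate with), and `config generate-minimal-conf` (the minimal ceph.conf shipped to the target host). A rough sketch of the first query, not cephadm's actual code:

    import json
    import subprocess

    # Ask the mons for destroyed-but-not-purged OSDs, whose ids may be reused.
    out = subprocess.run(
        ["ceph", "osd", "tree", "destroyed", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    nodes = json.loads(out).get("nodes", [])
    print("reusable ids:", [n["id"] for n in nodes if n.get("type") == "osd"])
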
2026-03-10T07:21:01.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:01 vm03 bash[23382]: cluster 2026-03-10T07:20:59.872685+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:01.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:01 vm00 bash[28005]: cluster 2026-03-10T07:20:59.872685+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:01.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:01 vm00 bash[20701]: cluster 2026-03-10T07:20:59.872685+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:03.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:03 vm03 bash[23382]: cluster 2026-03-10T07:21:01.872880+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:03.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:03 vm03 bash[23382]: audit 2026-03-10T07:21:02.554423+0000 mon.c (mon.2) 7 : audit [INF] from='client.? 192.168.123.100:0/2753183030' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "103cba6f-bd9d-4169-adab-61ce873b1107"}]: dispatch
2026-03-10T07:21:03.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:03 vm03 bash[23382]: audit 2026-03-10T07:21:02.554681+0000 mon.a (mon.0) 270 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "103cba6f-bd9d-4169-adab-61ce873b1107"}]: dispatch
2026-03-10T07:21:03.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:03 vm03 bash[23382]: audit 2026-03-10T07:21:02.556831+0000 mon.a (mon.0) 271 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "103cba6f-bd9d-4169-adab-61ce873b1107"}]': finished
2026-03-10T07:21:03.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:03 vm03 bash[23382]: cluster 2026-03-10T07:21:02.559507+0000 mon.a (mon.0) 272 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-10T07:21:03.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:03 vm03 bash[23382]: audit 2026-03-10T07:21:02.559667+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:21:03.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:03 vm03 bash[23382]: audit 2026-03-10T07:21:03.151981+0000 mon.c (mon.2) 8 : audit [DBG] from='client.? 192.168.123.100:0/4139170275' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:21:03.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:03 vm00 bash[20701]: cluster 2026-03-10T07:21:01.872880+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:03.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:03 vm00 bash[20701]: audit 2026-03-10T07:21:02.554423+0000 mon.c (mon.2) 7 : audit [INF] from='client.? 192.168.123.100:0/2753183030' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "103cba6f-bd9d-4169-adab-61ce873b1107"}]: dispatch
2026-03-10T07:21:03.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:03 vm00 bash[20701]: audit 2026-03-10T07:21:02.554681+0000 mon.a (mon.0) 270 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "103cba6f-bd9d-4169-adab-61ce873b1107"}]: dispatch
2026-03-10T07:21:03.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:03 vm00 bash[20701]: audit 2026-03-10T07:21:02.556831+0000 mon.a (mon.0) 271 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "103cba6f-bd9d-4169-adab-61ce873b1107"}]': finished
2026-03-10T07:21:03.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:03 vm00 bash[20701]: cluster 2026-03-10T07:21:02.559507+0000 mon.a (mon.0) 272 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-10T07:21:03.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:03 vm00 bash[20701]: audit 2026-03-10T07:21:02.559667+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:21:03.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:03 vm00 bash[20701]: audit 2026-03-10T07:21:03.151981+0000 mon.c (mon.2) 8 : audit [DBG] from='client.? 192.168.123.100:0/4139170275' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:21:03.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:03 vm00 bash[28005]: cluster 2026-03-10T07:21:01.872880+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:03.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:03 vm00 bash[28005]: audit 2026-03-10T07:21:02.554423+0000 mon.c (mon.2) 7 : audit [INF] from='client.? 192.168.123.100:0/2753183030' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "103cba6f-bd9d-4169-adab-61ce873b1107"}]: dispatch
2026-03-10T07:21:03.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:03 vm00 bash[28005]: audit 2026-03-10T07:21:02.554681+0000 mon.a (mon.0) 270 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "103cba6f-bd9d-4169-adab-61ce873b1107"}]: dispatch
2026-03-10T07:21:03.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:03 vm00 bash[28005]: audit 2026-03-10T07:21:02.556831+0000 mon.a (mon.0) 271 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "103cba6f-bd9d-4169-adab-61ce873b1107"}]': finished
2026-03-10T07:21:03.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:03 vm00 bash[28005]: cluster 2026-03-10T07:21:02.559507+0000 mon.a (mon.0) 272 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-10T07:21:03.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:03 vm00 bash[28005]: audit 2026-03-10T07:21:02.559667+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:21:03.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:03 vm00 bash[28005]: audit 2026-03-10T07:21:03.151981+0000 mon.c (mon.2) 8 : audit [DBG] from='client.? 192.168.123.100:0/4139170275' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
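
The `osd new` dispatches above are ceph-volume registering the new OSD: authenticated as client.bootstrap-osd, it binds the bluestore fsid 103cba6f-bd9d-4169-adab-61ce873b1107 to a fresh OSD id (id 0 here, which is why osdmap e5 reports "1 total, 0 up, 1 in"). A sketch of that step in isolation, assuming the bootstrap-osd keyring is in place on the host:

    import subprocess

    osd_fsid = "103cba6f-bd9d-4169-adab-61ce873b1107"  # uuid taken from this run's log
    osd_id = subprocess.run(
        ["ceph", "--name", "client.bootstrap-osd", "osd", "new", osd_fsid],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print("allocated osd id:", osd_id)  # "0" in this run
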
2026-03-10T07:21:05.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:05 vm03 bash[23382]: cluster 2026-03-10T07:21:03.873041+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:05.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:05 vm00 bash[20701]: cluster 2026-03-10T07:21:03.873041+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:05.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:05 vm00 bash[28005]: cluster 2026-03-10T07:21:03.873041+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:07.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:07 vm03 bash[23382]: cluster 2026-03-10T07:21:05.873191+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:07.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:07 vm00 bash[20701]: cluster 2026-03-10T07:21:05.873191+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:07.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:07 vm00 bash[28005]: cluster 2026-03-10T07:21:05.873191+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:09.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:09 vm03 bash[23382]: cluster 2026-03-10T07:21:07.873395+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:09.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:09 vm00 bash[28005]: cluster 2026-03-10T07:21:07.873395+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:09.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:09 vm00 bash[20701]: cluster 2026-03-10T07:21:07.873395+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:11.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:11 vm03 bash[23382]: cluster 2026-03-10T07:21:09.873568+0000 mgr.y (mgr.14150) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:11.791 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:11 vm00 bash[28005]: cluster 2026-03-10T07:21:09.873568+0000 mgr.y (mgr.14150) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:11.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:11 vm00 bash[20701]: cluster 2026-03-10T07:21:09.873568+0000 mgr.y (mgr.14150) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:12.641 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:21:12 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:21:12.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:12 vm00 bash[28005]: audit 2026-03-10T07:21:11.824587+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T07:21:12.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:12 vm00 bash[28005]: audit 2026-03-10T07:21:11.825161+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:12.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:12 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:21:12.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:12 vm00 bash[20701]: audit 2026-03-10T07:21:11.824587+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T07:21:12.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:12 vm00 bash[20701]: audit 2026-03-10T07:21:11.825161+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:12.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:12 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:21:12.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:12 vm03 bash[23382]: audit 2026-03-10T07:21:11.824587+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T07:21:12.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:12 vm03 bash[23382]: audit 2026-03-10T07:21:11.825161+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:13.073 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:12 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:21:13.073 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:21:12 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:21:13.073 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:12 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
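
The repeated systemd complaint above comes from line 23 of the unit template cephadm installs for this cluster (ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service), which still sets KillMode=none; it is a warning only and does not fail the unit. Were one to silence it on a real deployment, a drop-in override along these lines (hypothetical, not part of this run) is what the message asks for:

    # /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service.d/10-killmode.conf
    [Service]
    KillMode=mixed

followed by `systemctl daemon-reload`.
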
2026-03-10T07:21:13.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:13 vm03 bash[23382]: cephadm 2026-03-10T07:21:11.825634+0000 mgr.y (mgr.14150) 73 : cephadm [INF] Deploying daemon osd.0 on vm00
2026-03-10T07:21:13.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:13 vm03 bash[23382]: cluster 2026-03-10T07:21:11.873736+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:13.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:13 vm03 bash[23382]: audit 2026-03-10T07:21:12.819010+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:21:13.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:13 vm03 bash[23382]: audit 2026-03-10T07:21:12.824200+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:13.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:13 vm03 bash[23382]: audit 2026-03-10T07:21:12.829872+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:13.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:13 vm00 bash[28005]: cephadm 2026-03-10T07:21:11.825634+0000 mgr.y (mgr.14150) 73 : cephadm [INF] Deploying daemon osd.0 on vm00
2026-03-10T07:21:13.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:13 vm00 bash[28005]: cluster 2026-03-10T07:21:11.873736+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:13.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:13 vm00 bash[28005]: audit 2026-03-10T07:21:12.819010+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:21:13.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:13 vm00 bash[28005]: audit 2026-03-10T07:21:12.824200+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:13.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:13 vm00 bash[28005]: audit 2026-03-10T07:21:12.829872+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:13.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:13 vm00 bash[20701]: cephadm 2026-03-10T07:21:11.825634+0000 mgr.y (mgr.14150) 73 : cephadm [INF] Deploying daemon osd.0 on vm00
2026-03-10T07:21:13.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:13 vm00 bash[20701]: cluster 2026-03-10T07:21:11.873736+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:13.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:13 vm00 bash[20701]: audit 2026-03-10T07:21:12.819010+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:21:13.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:13 vm00 bash[20701]: audit 2026-03-10T07:21:12.824200+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:13.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:13 vm00 bash[20701]: audit 2026-03-10T07:21:12.829872+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:15.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:15 vm00 bash[28005]: cluster 2026-03-10T07:21:13.873944+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:15.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:15 vm00 bash[20701]: cluster 2026-03-10T07:21:13.873944+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:16.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:15 vm03 bash[23382]: cluster 2026-03-10T07:21:13.873944+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:16.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:16 vm00 bash[20701]: audit 2026-03-10T07:21:16.368372+0000 mon.c (mon.2) 9 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/944390886,v1:192.168.123.100:6803/944390886]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T07:21:16.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:16 vm00 bash[20701]: audit 2026-03-10T07:21:16.368702+0000 mon.a (mon.0) 279 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T07:21:16.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:16 vm00 bash[28005]: audit 2026-03-10T07:21:16.368372+0000 mon.c (mon.2) 9 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/944390886,v1:192.168.123.100:6803/944390886]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T07:21:16.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:16 vm00 bash[28005]: audit 2026-03-10T07:21:16.368702+0000 mon.a (mon.0) 279 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T07:21:17.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:16 vm03 bash[23382]: audit 2026-03-10T07:21:16.368372+0000 mon.c (mon.2) 9 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/944390886,v1:192.168.123.100:6803/944390886]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T07:21:17.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:16 vm03 bash[23382]: audit 2026-03-10T07:21:16.368702+0000 mon.a (mon.0) 279 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T07:21:17.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:17 vm00 bash[28005]: cluster 2026-03-10T07:21:15.874210+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:17.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:17 vm00 bash[28005]: audit 2026-03-10T07:21:16.527310+0000 mon.a (mon.0) 280 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T07:21:17.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:17 vm00 bash[28005]: audit 2026-03-10T07:21:16.530280+0000 mon.c (mon.2) 10 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/944390886,v1:192.168.123.100:6803/944390886]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T07:21:17.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:17 vm00 bash[28005]: cluster 2026-03-10T07:21:16.530454+0000 mon.a (mon.0) 281 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in
2026-03-10T07:21:17.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:17 vm00 bash[28005]: audit 2026-03-10T07:21:16.530770+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:21:17.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:17 vm00 bash[28005]: audit 2026-03-10T07:21:16.530915+0000 mon.a (mon.0) 283 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T07:21:17.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:17 vm00 bash[20701]: cluster 2026-03-10T07:21:15.874210+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:17.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:17 vm00 bash[20701]: audit 2026-03-10T07:21:16.527310+0000 mon.a (mon.0) 280 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T07:21:17.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:17 vm00 bash[20701]: audit 2026-03-10T07:21:16.530280+0000 mon.c (mon.2) 10 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/944390886,v1:192.168.123.100:6803/944390886]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T07:21:17.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:17 vm00 bash[20701]: cluster 2026-03-10T07:21:16.530454+0000 mon.a (mon.0) 281 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in
2026-03-10T07:21:17.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:17 vm00 bash[20701]: audit 2026-03-10T07:21:16.530770+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:21:17.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:17 vm00 bash[20701]: audit 2026-03-10T07:21:16.530915+0000 mon.a (mon.0) 283 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T07:21:18.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:17 vm03 bash[23382]: cluster 2026-03-10T07:21:15.874210+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:18.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:17 vm03 bash[23382]: audit 2026-03-10T07:21:16.527310+0000 mon.a (mon.0) 280 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T07:21:18.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:17 vm03 bash[23382]: audit 2026-03-10T07:21:16.530280+0000 mon.c (mon.2) 10 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/944390886,v1:192.168.123.100:6803/944390886]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T07:21:18.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:17 vm03 bash[23382]: cluster 2026-03-10T07:21:16.530454+0000 mon.a (mon.0) 281 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in
2026-03-10T07:21:18.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:17 vm03 bash[23382]: audit 2026-03-10T07:21:16.530770+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:21:18.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:17 vm03 bash[23382]: audit 2026-03-10T07:21:16.530915+0000 mon.a (mon.0) 283 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T07:21:18.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:18 vm00 bash[20701]: audit 2026-03-10T07:21:17.532957+0000 mon.a (mon.0) 284 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T07:21:18.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:18 vm00 bash[20701]: cluster 2026-03-10T07:21:17.535970+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in
2026-03-10T07:21:18.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:18 vm00 bash[20701]: audit 2026-03-10T07:21:17.539943+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:21:18.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:18 vm00 bash[20701]: audit 2026-03-10T07:21:17.540751+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:21:18.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:18 vm00 bash[20701]: audit 2026-03-10T07:21:18.461089+0000 mon.a (mon.0) 288 : audit [INF] from='osd.0 ' entity='osd.0'
2026-03-10T07:21:18.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:18 vm00 bash[20701]: audit 2026-03-10T07:21:18.539898+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:21:18.791 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:18 vm00 bash[28005]: audit 2026-03-10T07:21:17.532957+0000 mon.a (mon.0) 284 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T07:21:18.791 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:18 vm00 bash[28005]: cluster 2026-03-10T07:21:17.535970+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in
2026-03-10T07:21:18.791 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:18 vm00 bash[28005]: audit 2026-03-10T07:21:17.539943+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:21:18.791 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:18 vm00 bash[28005]: audit 2026-03-10T07:21:17.540751+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:21:18.791 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:18 vm00 bash[28005]: audit 2026-03-10T07:21:18.461089+0000 mon.a (mon.0) 288 : audit [INF] from='osd.0 ' entity='osd.0'
2026-03-10T07:21:18.791 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:18 vm00 bash[28005]: audit 2026-03-10T07:21:18.539898+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:21:19.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:18 vm03 bash[23382]: audit 2026-03-10T07:21:17.532957+0000 mon.a (mon.0) 284 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T07:21:19.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:18 vm03 bash[23382]: cluster 2026-03-10T07:21:17.535970+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in
2026-03-10T07:21:19.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:18 vm03 bash[23382]: audit 2026-03-10T07:21:17.539943+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:21:19.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:18 vm03 bash[23382]: audit 2026-03-10T07:21:17.540751+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:21:19.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:18 vm03 bash[23382]: audit 2026-03-10T07:21:18.461089+0000 mon.a (mon.0) 288 : audit [INF] from='osd.0 ' entity='osd.0'
2026-03-10T07:21:19.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:18 vm03 bash[23382]: audit 2026-03-10T07:21:18.539898+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:21:19.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:19 vm00 bash[28005]: cluster 2026-03-10T07:21:17.397876+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:19 vm00 bash[28005]: cluster 2026-03-10T07:21:17.397940+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:19 vm00 bash[28005]: cluster 2026-03-10T07:21:17.874428+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:19 vm00 bash[28005]: audit 2026-03-10T07:21:19.007694+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:19 vm00 bash[28005]: audit 2026-03-10T07:21:19.014056+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:19 vm00 bash[28005]: audit 2026-03-10T07:21:19.391870+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:19 vm00 bash[28005]: audit 2026-03-10T07:21:19.393493+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:19 vm00 bash[28005]: audit 2026-03-10T07:21:19.398606+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:19 vm00 bash[28005]: cluster 2026-03-10T07:21:19.464918+0000 mon.a (mon.0) 295 : cluster [INF] osd.0 [v2:192.168.123.100:6802/944390886,v1:192.168.123.100:6803/944390886] boot 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:19 vm00 bash[28005]: cluster 2026-03-10T07:21:19.464918+0000 mon.a (mon.0) 295 : cluster [INF] osd.0 [v2:192.168.123.100:6802/944390886,v1:192.168.123.100:6803/944390886] boot 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:19 vm00 bash[28005]: cluster 2026-03-10T07:21:19.466021+0000 mon.a (mon.0) 296 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:19 vm00 bash[28005]: cluster 2026-03-10T07:21:19.466021+0000 mon.a (mon.0) 296 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:19 vm00 bash[28005]: audit 2026-03-10T07:21:19.466247+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:19 vm00 bash[28005]: audit 2026-03-10T07:21:19.466247+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: cluster 2026-03-10T07:21:17.397876+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: cluster 2026-03-10T07:21:17.397876+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: cluster 2026-03-10T07:21:17.397940+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: cluster 2026-03-10T07:21:17.397940+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: cluster 2026-03-10T07:21:17.874428+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: cluster 2026-03-10T07:21:17.874428+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: audit 2026-03-10T07:21:19.007694+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: audit 2026-03-10T07:21:19.007694+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: audit 2026-03-10T07:21:19.014056+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: audit 
2026-03-10T07:21:19.014056+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: audit 2026-03-10T07:21:19.391870+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: audit 2026-03-10T07:21:19.391870+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: audit 2026-03-10T07:21:19.393493+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: audit 2026-03-10T07:21:19.393493+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: audit 2026-03-10T07:21:19.398606+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: audit 2026-03-10T07:21:19.398606+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: cluster 2026-03-10T07:21:19.464918+0000 mon.a (mon.0) 295 : cluster [INF] osd.0 [v2:192.168.123.100:6802/944390886,v1:192.168.123.100:6803/944390886] boot 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: cluster 2026-03-10T07:21:19.464918+0000 mon.a (mon.0) 295 : cluster [INF] osd.0 [v2:192.168.123.100:6802/944390886,v1:192.168.123.100:6803/944390886] boot 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: cluster 2026-03-10T07:21:19.466021+0000 mon.a (mon.0) 296 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: cluster 2026-03-10T07:21:19.466021+0000 mon.a (mon.0) 296 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: audit 2026-03-10T07:21:19.466247+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T07:21:19.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:19 vm00 bash[20701]: audit 2026-03-10T07:21:19.466247+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T07:21:20.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:19 vm03 bash[23382]: cluster 2026-03-10T07:21:17.397876+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T07:21:20.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:19 vm03 bash[23382]: cluster 2026-03-10T07:21:17.397876+0000 osd.0 (osd.0) 1 : 
cluster [DBG] purged_snaps scrub starts 2026-03-10T07:21:20.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:19 vm03 bash[23382]: cluster 2026-03-10T07:21:17.397940+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T07:21:20.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:19 vm03 bash[23382]: cluster 2026-03-10T07:21:17.397940+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T07:21:20.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:19 vm03 bash[23382]: cluster 2026-03-10T07:21:17.874428+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:21:20.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:19 vm03 bash[23382]: cluster 2026-03-10T07:21:17.874428+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:21:20.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:19 vm03 bash[23382]: audit 2026-03-10T07:21:19.007694+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:21:20.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:19 vm03 bash[23382]: audit 2026-03-10T07:21:19.007694+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:21:20.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:19 vm03 bash[23382]: audit 2026-03-10T07:21:19.014056+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:21:20.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:19 vm03 bash[23382]: audit 2026-03-10T07:21:19.014056+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:21:20.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:19 vm03 bash[23382]: audit 2026-03-10T07:21:19.391870+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:21:20.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:19 vm03 bash[23382]: audit 2026-03-10T07:21:19.391870+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:21:20.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:19 vm03 bash[23382]: audit 2026-03-10T07:21:19.393493+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:21:20.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:19 vm03 bash[23382]: audit 2026-03-10T07:21:19.393493+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:21:20.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:19 vm03 bash[23382]: audit 2026-03-10T07:21:19.398606+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:21:20.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:19 vm03 bash[23382]: audit 2026-03-10T07:21:19.398606+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:21:20.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:19 vm03 bash[23382]: cluster 2026-03-10T07:21:19.464918+0000 mon.a (mon.0) 295 
2026-03-10T07:21:20.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:19 vm03 bash[23382]: cluster 2026-03-10T07:21:19.464918+0000 mon.a (mon.0) 295 : cluster [INF] osd.0 [v2:192.168.123.100:6802/944390886,v1:192.168.123.100:6803/944390886] boot
2026-03-10T07:21:20.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:19 vm03 bash[23382]: cluster 2026-03-10T07:21:19.466021+0000 mon.a (mon.0) 296 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in
2026-03-10T07:21:20.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:19 vm03 bash[23382]: audit 2026-03-10T07:21:19.466247+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:21:20.062 INFO:teuthology.orchestra.run.vm00.stdout:Created osd(s) 0 on host 'vm00'
2026-03-10T07:21:20.062 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:20.049+0000 7feb437fe640 1 -- 192.168.123.100:0/4271221021 <== mgr.14150 v2:192.168.123.100:6800/2669938860 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7feb20002bf0 con 0x7feb2c077820
2026-03-10T07:21:20.066 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:20.053+0000 7feb60b39640 1 -- 192.168.123.100:0/4271221021 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7feb2c077820 msgr2=0x7feb2c079ce0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:21:20.066 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:20.053+0000 7feb60b39640 1 --2- 192.168.123.100:0/4271221021 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7feb2c077820 0x7feb2c079ce0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7feb44005fd0 tx=0x7feb44005d60 comp rx=0 tx=0).stop
2026-03-10T07:21:20.066 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:20.053+0000 7feb60b39640 1 -- 192.168.123.100:0/4271221021 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7feb5c075a40 msgr2=0x7feb5c1a08b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:21:20.066 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:20.053+0000 7feb60b39640 1 --2- 192.168.123.100:0/4271221021 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7feb5c075a40 0x7feb5c1a08b0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7feb50009a00 tx=0x7feb5002fcb0 comp rx=0 tx=0).stop
2026-03-10T07:21:20.066 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:20.053+0000 7feb60b39640 1 -- 192.168.123.100:0/4271221021 shutdown_connections
2026-03-10T07:21:20.066 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:20.053+0000 7feb60b39640 1 --2- 192.168.123.100:0/4271221021 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7feb2c077820 0x7feb2c079ce0 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:21:20.066 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:20.053+0000 7feb60b39640 1 --2- 192.168.123.100:0/4271221021 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feb5c1064c0 0x7feb5c1a7e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:21:20.066 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:20.053+0000 7feb60b39640 1 --2- 192.168.123.100:0/4271221021 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7feb5c0770a0 0x7feb5c1a0df0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:21:20.066 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:20.053+0000 7feb60b39640 1 --2- 192.168.123.100:0/4271221021 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7feb5c075a40 0x7feb5c1a08b0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:21:20.067 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:20.053+0000 7feb60b39640 1 -- 192.168.123.100:0/4271221021 >> 192.168.123.100:0/4271221021 conn(0x7feb5c0fe290 msgr2=0x7feb5c1006a0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:21:20.067 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:20.053+0000 7feb60b39640 1 -- 192.168.123.100:0/4271221021 shutdown_connections
2026-03-10T07:21:20.067 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:20.053+0000 7feb60b39640 1 -- 192.168.123.100:0/4271221021 wait complete.
2026-03-10T07:21:20.158 DEBUG:teuthology.orchestra.run.vm00:osd.0> sudo journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.0.service
2026-03-10T07:21:20.159 INFO:tasks.cephadm:Deploying osd.1 on vm00 with /dev/vdd...
2026-03-10T07:21:20.159 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- lvm zap /dev/vdd
2026-03-10T07:21:20.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:20 vm00 bash[20701]: audit 2026-03-10T07:21:20.039360+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:21:20.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:20 vm00 bash[20701]: audit 2026-03-10T07:21:20.046167+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:20.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:20 vm00 bash[20701]: audit 2026-03-10T07:21:20.052671+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
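
A note on the ceph-volume call above: before reusing a block device for a new OSD, the cephadm task first wipes it with "lvm zap", which clears leftover LVM metadata and partition labels from any previous deployment. A minimal standalone sketch of the same step, assuming direct host access rather than teuthology (the --destroy flag and the device name are illustrative, not taken from this run):

    # Destructive: wipes the device so ceph-volume can re-provision it.
    sudo ceph-volume lvm zap --destroy /dev/vdd
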
2026-03-10T07:21:20.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:20 vm00 bash[28005]: audit 2026-03-10T07:21:20.039360+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:21:20.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:20 vm00 bash[28005]: audit 2026-03-10T07:21:20.046167+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:20.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:20 vm00 bash[28005]: audit 2026-03-10T07:21:20.052671+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:21.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:20 vm03 bash[23382]: audit 2026-03-10T07:21:20.039360+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:21:21.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:20 vm03 bash[23382]: audit 2026-03-10T07:21:20.046167+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:21.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:20 vm03 bash[23382]: audit 2026-03-10T07:21:20.052671+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:21.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:21 vm00 bash[28005]: cluster 2026-03-10T07:21:19.874672+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:21.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:21 vm00 bash[28005]: cluster 2026-03-10T07:21:21.054733+0000 mon.a (mon.0) 301 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-10T07:21:21.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:21 vm00 bash[20701]: cluster 2026-03-10T07:21:19.874672+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:21.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:21 vm00 bash[20701]: cluster 2026-03-10T07:21:21.054733+0000 mon.a (mon.0) 301 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-10T07:21:22.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:21 vm03 bash[23382]: cluster 2026-03-10T07:21:19.874672+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:22.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:21 vm03 bash[23382]: cluster 2026-03-10T07:21:21.054733+0000 mon.a (mon.0) 301 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-10T07:21:22.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:22 vm00 bash[28005]: cluster 2026-03-10T07:21:21.874888+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:22.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:22 vm00 bash[20701]: cluster 2026-03-10T07:21:21.874888+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:23.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:22 vm03 bash[23382]: cluster 2026-03-10T07:21:21.874888+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
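
The osdmap epochs in the cluster log above trace the usual lifecycle of one new OSD: e7 with "1 total, 0 up, 1 in" while osd.0 exists but has not booted, e8 with "1 up" once it boots, then e9 shortly after. A sketch of watching the same state by hand, assuming an admin keyring on the node (both are standard ceph CLI commands; the sample output shape is approximate):

    ceph osd stat   # e.g. "1 osds: 1 up, 1 in; epoch: e9"
    ceph osd tree   # shows the CRUSH position set by "osd crush create-or-move"
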
2026-03-10T07:21:24.821 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config
2026-03-10T07:21:25.141 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:24 vm00 bash[20701]: cluster 2026-03-10T07:21:23.875099+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:25.141 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:24 vm00 bash[28005]: cluster 2026-03-10T07:21:23.875099+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:25.273 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:24 vm03 bash[23382]: cluster 2026-03-10T07:21:23.875099+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:25.642 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T07:21:25.658 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph orch daemon add osd vm00:/dev/vdd
2026-03-10T07:21:27.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:27 vm03 bash[23382]: cluster 2026-03-10T07:21:25.875350+0000 mgr.y (mgr.14150) 81 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:27.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:27 vm03 bash[23382]: cephadm 2026-03-10T07:21:26.385944+0000 mgr.y (mgr.14150) 82 : cephadm [INF] Detected new or changed devices on vm00
2026-03-10T07:21:27.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:27 vm03 bash[23382]: audit 2026-03-10T07:21:26.393802+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:27.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:27 vm03 bash[23382]: audit 2026-03-10T07:21:26.400822+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
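
The "ceph orch daemon add osd vm00:/dev/vdd" invocation above asks the active mgr's orchestrator module to create one OSD on that host:device pair, which is why the audit entries that follow are issued by mgr.y on the client's behalf rather than by the client itself. A hedged sketch of the same flow run by hand (standard cephadm/orchestrator commands; host and device names are this run's):

    sudo cephadm shell -- ceph orch daemon add osd vm00:/dev/vdd
    sudo cephadm shell -- ceph orch ps --daemon-type osd   # confirm the daemon appeared
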
2026-03-10T07:21:27.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:27 vm03 bash[23382]: audit 2026-03-10T07:21:26.402145+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:21:27.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:27 vm03 bash[23382]: audit 2026-03-10T07:21:26.402735+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:27.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:27 vm03 bash[23382]: audit 2026-03-10T07:21:26.403086+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:21:27.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:27 vm03 bash[23382]: audit 2026-03-10T07:21:26.408876+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:27.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:27 vm00 bash[28005]: cluster 2026-03-10T07:21:25.875350+0000 mgr.y (mgr.14150) 81 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:27.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:27 vm00 bash[28005]: cephadm 2026-03-10T07:21:26.385944+0000 mgr.y (mgr.14150) 82 : cephadm [INF] Detected new or changed devices on vm00
2026-03-10T07:21:27.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:27 vm00 bash[28005]: audit 2026-03-10T07:21:26.393802+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:27.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:27 vm00 bash[28005]: audit 2026-03-10T07:21:26.400822+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:27.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:27 vm00 bash[28005]: audit 2026-03-10T07:21:26.402145+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:21:27.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:27 vm00 bash[28005]: audit 2026-03-10T07:21:26.402735+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:27.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:27 vm00 bash[28005]: audit 2026-03-10T07:21:26.403086+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:21:27.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:27 vm00 bash[28005]: audit 2026-03-10T07:21:26.408876+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:27.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:27 vm00 bash[20701]: cluster 2026-03-10T07:21:25.875350+0000 mgr.y (mgr.14150) 81 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:27.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:27 vm00 bash[20701]: cephadm 2026-03-10T07:21:26.385944+0000 mgr.y (mgr.14150) 82 : cephadm [INF] Detected new or changed devices on vm00
2026-03-10T07:21:27.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:27 vm00 bash[20701]: audit 2026-03-10T07:21:26.393802+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:27.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:27 vm00 bash[20701]: audit 2026-03-10T07:21:26.400822+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:27.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:27 vm00 bash[20701]: audit 2026-03-10T07:21:26.402145+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:21:27.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:27 vm00 bash[20701]: audit 2026-03-10T07:21:26.402735+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:27.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:27 vm00 bash[20701]: audit 2026-03-10T07:21:26.403086+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:21:27.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:27 vm00 bash[20701]: audit 2026-03-10T07:21:26.408876+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
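
The "Detected new or changed devices on vm00" cephadm line and the "config rm ... osd_memory_target" audit entries above appear to be the module reacting to the reprovisioned disk: it refreshes its inventory for the host and drops the per-host osd_memory_target override so the autotuned value can be recomputed for the new daemon set. A sketch for inspecting the same state manually (standard orchestrator/config commands; osd.0 is this run's daemon):

    ceph orch device ls vm00 --refresh       # force a fresh inventory of vm00
    ceph config get osd.0 osd_memory_target  # current effective value
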
2026-03-10T07:21:29.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:29 vm03 bash[23382]: cluster 2026-03-10T07:21:27.875568+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:29.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:29 vm00 bash[28005]: cluster 2026-03-10T07:21:27.875568+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:29.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:29 vm00 bash[20701]: cluster 2026-03-10T07:21:27.875568+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:30.293 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config
2026-03-10T07:21:30.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac86d02640 1 -- 192.168.123.100:0/1677692646 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fac80102b90 msgr2=0x7fac80103010 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:21:30.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac86d02640 1 --2- 192.168.123.100:0/1677692646 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fac80102b90 0x7fac80103010 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7fac70009a30 tx=0x7fac7002f220 comp rx=0 tx=0).stop
2026-03-10T07:21:30.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac86d02640 1 -- 192.168.123.100:0/1677692646 shutdown_connections
2026-03-10T07:21:30.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac86d02640 1 --2- 192.168.123.100:0/1677692646 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fac80103550 0x7fac80109de0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:21:30.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac86d02640 1 --2- 192.168.123.100:0/1677692646 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fac80102b90 0x7fac80103010 unknown :-1 s=CLOSED pgs=102 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:21:30.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac86d02640 1 --2- 192.168.123.100:0/1677692646 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fac80101990 0x7fac80101d90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:21:30.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac86d02640 1 -- 192.168.123.100:0/1677692646 >> 192.168.123.100:0/1677692646 conn(0x7fac800fd120 msgr2=0x7fac800ff560 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:21:30.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac86d02640 1 -- 192.168.123.100:0/1677692646 shutdown_connections
2026-03-10T07:21:30.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac86d02640 1 -- 192.168.123.100:0/1677692646 wait complete.
2026-03-10T07:21:30.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac86d02640 1 Processor -- start
2026-03-10T07:21:30.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac86d02640 1 -- start start
2026-03-10T07:21:30.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac86d02640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fac80101990 0x7fac8019b440 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:21:30.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac84a77640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fac80101990 0x7fac8019b440 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:21:30.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac84a77640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fac80101990 0x7fac8019b440 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:60772/0 (socket says 192.168.123.100:60772)
2026-03-10T07:21:30.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac84a77640 1 -- 192.168.123.100:0/3895228635 learned_addr learned my addr 192.168.123.100:0/3895228635 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:21:30.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac86d02640 1 --2- 192.168.123.100:0/3895228635 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fac80102b90 0x7fac8019b980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:21:30.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac86d02640 1 --2- 192.168.123.100:0/3895228635 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fac80103550 0x7fac80197ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:21:30.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac86d02640 1 -- 192.168.123.100:0/3895228635 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fac8010b780 con 0x7fac80101990
2026-03-10T07:21:30.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac86d02640 1 -- 192.168.123.100:0/3895228635 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7fac8010b600 con 0x7fac80103550
2026-03-10T07:21:30.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac86d02640 1 -- 192.168.123.100:0/3895228635 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fac8010b900 con 0x7fac80102b90
2026-03-10T07:21:30.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac85278640 1 --2- 192.168.123.100:0/3895228635 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fac80103550 0x7fac80197ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:21:30.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac77fff640 1 --2- 192.168.123.100:0/3895228635 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fac80102b90 0x7fac8019b980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:21:30.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac84a77640 1 -- 192.168.123.100:0/3895228635 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fac80102b90 msgr2=0x7fac8019b980 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:21:30.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac84a77640 1 --2- 192.168.123.100:0/3895228635 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fac80102b90 0x7fac8019b980 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:21:30.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac84a77640 1 -- 192.168.123.100:0/3895228635 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fac80103550 msgr2=0x7fac80197ea0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:21:30.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac84a77640 1 --2- 192.168.123.100:0/3895228635 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fac80103550 0x7fac80197ea0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:21:30.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac84a77640 1 -- 192.168.123.100:0/3895228635 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fac80198760 con 0x7fac80101990
2026-03-10T07:21:30.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac84a77640 1 --2- 192.168.123.100:0/3895228635 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fac80101990 0x7fac8019b440 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7fac6800cc70 tx=0x7fac68007590 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:21:30.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac77fff640 1 --2- 192.168.123.100:0/3895228635 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fac80102b90 0x7fac8019b980 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T07:21:30.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac75ffb640 1 -- 192.168.123.100:0/3895228635 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fac68013070 con 0x7fac80101990
2026-03-10T07:21:30.454 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac75ffb640 1 -- 192.168.123.100:0/3895228635 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fac680044b0 con 0x7fac80101990
2026-03-10T07:21:30.454 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac75ffb640 1 -- 192.168.123.100:0/3895228635 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fac68002e20 con 0x7fac80101990
2026-03-10T07:21:30.454 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac85278640 1 --2- 192.168.123.100:0/3895228635 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fac80103550 0x7fac80197ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed!
2026-03-10T07:21:30.454 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac86d02640 1 -- 192.168.123.100:0/3895228635 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fac80198a50 con 0x7fac80101990
2026-03-10T07:21:30.454 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.441+0000 7fac86d02640 1 -- 192.168.123.100:0/3895228635 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fac801a8ca0 con 0x7fac80101990
2026-03-10T07:21:30.455 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.445+0000 7fac75ffb640 1 -- 192.168.123.100:0/3895228635 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 13) ==== 99979+0+0 (secure 0 0 0) 0x7fac680040a0 con 0x7fac80101990
2026-03-10T07:21:30.456 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.445+0000 7fac75ffb640 1 --2- 192.168.123.100:0/3895228635 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fac48077650 0x7fac48079b10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:21:30.456 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.445+0000 7fac77fff640 1 --2- 192.168.123.100:0/3895228635 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fac48077650 0x7fac48079b10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:21:30.456 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.445+0000 7fac75ffb640 1 -- 192.168.123.100:0/3895228635 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(9..9 src has 1..9) ==== 1757+0+0 (secure 0 0 0) 0x7fac68098890 con 0x7fac80101990
2026-03-10T07:21:30.456 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.445+0000 7fac86d02640 1 -- 192.168.123.100:0/3895228635 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fac8010bf00 con 0x7fac80101990
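
The stderr block above is the ceph CLI (an ephemeral client, here 192.168.123.100:0/3895228635) bootstrapping over msgr2: it connects to the mons, learns its own address from the hello exchange, subscribes to config/monmap/mgrmap/osdmap, fetches the command descriptions, and then forwards the "orch daemon add osd" request to the active mgr it found in mgrmap e 13 (mgr.14150, i.e. mgr.y). A sketch of querying the same routing data directly (both are standard ceph CLI commands):

    ceph mgr stat   # name and availability of the active mgr
    ceph mgr dump   # full mgrmap, including the epoch seen above
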
rx=0x7fac700097c0 tx=0x7fac700023d0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:21:30.460 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.449+0000 7fac75ffb640 1 -- 192.168.123.100:0/3895228635 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fac68062c70 con 0x7fac80101990 2026-03-10T07:21:30.560 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:30.549+0000 7fac86d02640 1 -- 192.168.123.100:0/3895228635 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}) -- 0x7fac80108a70 con 0x7fac48077650 2026-03-10T07:21:31.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:31 vm03 bash[23382]: cluster 2026-03-10T07:21:29.875836+0000 mgr.y (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:31.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:31 vm03 bash[23382]: cluster 2026-03-10T07:21:29.875836+0000 mgr.y (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:31.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:31 vm03 bash[23382]: audit 2026-03-10T07:21:30.555598+0000 mgr.y (mgr.14150) 85 : audit [DBG] from='client.14214 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:21:31.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:31 vm03 bash[23382]: audit 2026-03-10T07:21:30.555598+0000 mgr.y (mgr.14150) 85 : audit [DBG] from='client.14214 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:21:31.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:31 vm03 bash[23382]: audit 2026-03-10T07:21:30.556944+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:21:31.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:31 vm03 bash[23382]: audit 2026-03-10T07:21:30.556944+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:21:31.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:31 vm03 bash[23382]: audit 2026-03-10T07:21:30.558248+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:21:31.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:31 vm03 bash[23382]: audit 2026-03-10T07:21:30.558248+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:21:31.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:31 vm03 bash[23382]: audit 2026-03-10T07:21:30.558679+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:21:31.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:31 vm03 bash[23382]: audit 
2026-03-10T07:21:30.558679+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:21:31.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:31 vm00 bash[28005]: cluster 2026-03-10T07:21:29.875836+0000 mgr.y (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:31.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:31 vm00 bash[28005]: cluster 2026-03-10T07:21:29.875836+0000 mgr.y (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:31.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:31 vm00 bash[28005]: audit 2026-03-10T07:21:30.555598+0000 mgr.y (mgr.14150) 85 : audit [DBG] from='client.14214 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:21:31.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:31 vm00 bash[28005]: audit 2026-03-10T07:21:30.555598+0000 mgr.y (mgr.14150) 85 : audit [DBG] from='client.14214 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:21:31.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:31 vm00 bash[28005]: audit 2026-03-10T07:21:30.556944+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:21:31.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:31 vm00 bash[28005]: audit 2026-03-10T07:21:30.556944+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:21:31.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:31 vm00 bash[28005]: audit 2026-03-10T07:21:30.558248+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:21:31.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:31 vm00 bash[28005]: audit 2026-03-10T07:21:30.558248+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:21:31.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:31 vm00 bash[28005]: audit 2026-03-10T07:21:30.558679+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:21:31.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:31 vm00 bash[28005]: audit 2026-03-10T07:21:30.558679+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:21:31.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:31 vm00 bash[20701]: cluster 2026-03-10T07:21:29.875836+0000 mgr.y (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:31.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:31 vm00 bash[20701]: cluster 2026-03-10T07:21:29.875836+0000 mgr.y (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 
20 GiB avail 2026-03-10T07:21:31.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:31 vm00 bash[20701]: audit 2026-03-10T07:21:30.555598+0000 mgr.y (mgr.14150) 85 : audit [DBG] from='client.14214 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:21:31.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:31 vm00 bash[20701]: audit 2026-03-10T07:21:30.555598+0000 mgr.y (mgr.14150) 85 : audit [DBG] from='client.14214 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:21:31.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:31 vm00 bash[20701]: audit 2026-03-10T07:21:30.556944+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:21:31.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:31 vm00 bash[20701]: audit 2026-03-10T07:21:30.556944+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:21:31.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:31 vm00 bash[20701]: audit 2026-03-10T07:21:30.558248+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:21:31.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:31 vm00 bash[20701]: audit 2026-03-10T07:21:30.558248+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:21:31.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:31 vm00 bash[20701]: audit 2026-03-10T07:21:30.558679+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:21:31.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:31 vm00 bash[20701]: audit 2026-03-10T07:21:30.558679+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:21:33.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:33 vm03 bash[23382]: cluster 2026-03-10T07:21:31.876056+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:33.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:33 vm03 bash[23382]: cluster 2026-03-10T07:21:31.876056+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:33.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:33 vm00 bash[28005]: cluster 2026-03-10T07:21:31.876056+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:33.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:33 vm00 bash[28005]: cluster 2026-03-10T07:21:31.876056+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:33.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:33 vm00 bash[20701]: cluster 2026-03-10T07:21:31.876056+0000 mgr.y (mgr.14150) 86 
: cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:33.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:33 vm00 bash[20701]: cluster 2026-03-10T07:21:31.876056+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:35.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:35 vm00 bash[28005]: cluster 2026-03-10T07:21:33.876312+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:35.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:35 vm00 bash[28005]: cluster 2026-03-10T07:21:33.876312+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:35.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:35 vm00 bash[20701]: cluster 2026-03-10T07:21:33.876312+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:35.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:35 vm00 bash[20701]: cluster 2026-03-10T07:21:33.876312+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:35.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:35 vm03 bash[23382]: cluster 2026-03-10T07:21:33.876312+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:35.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:35 vm03 bash[23382]: cluster 2026-03-10T07:21:33.876312+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:36.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:36 vm00 bash[28005]: audit 2026-03-10T07:21:35.947803+0000 mon.a (mon.0) 311 : audit [INF] from='client.? 192.168.123.100:0/3661151104' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "99ca2b37-ae0a-4199-ac17-e89aa50eb255"}]: dispatch 2026-03-10T07:21:36.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:36 vm00 bash[28005]: audit 2026-03-10T07:21:35.947803+0000 mon.a (mon.0) 311 : audit [INF] from='client.? 192.168.123.100:0/3661151104' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "99ca2b37-ae0a-4199-ac17-e89aa50eb255"}]: dispatch 2026-03-10T07:21:36.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:36 vm00 bash[28005]: audit 2026-03-10T07:21:35.951497+0000 mon.a (mon.0) 312 : audit [INF] from='client.? 192.168.123.100:0/3661151104' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "99ca2b37-ae0a-4199-ac17-e89aa50eb255"}]': finished 2026-03-10T07:21:36.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:36 vm00 bash[28005]: audit 2026-03-10T07:21:35.951497+0000 mon.a (mon.0) 312 : audit [INF] from='client.? 
192.168.123.100:0/3661151104' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "99ca2b37-ae0a-4199-ac17-e89aa50eb255"}]': finished 2026-03-10T07:21:36.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:36 vm00 bash[28005]: cluster 2026-03-10T07:21:35.954010+0000 mon.a (mon.0) 313 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T07:21:36.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:36 vm00 bash[28005]: cluster 2026-03-10T07:21:35.954010+0000 mon.a (mon.0) 313 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T07:21:36.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:36 vm00 bash[28005]: audit 2026-03-10T07:21:35.954571+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:21:36.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:36 vm00 bash[28005]: audit 2026-03-10T07:21:35.954571+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:21:36.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:36 vm00 bash[20701]: audit 2026-03-10T07:21:35.947803+0000 mon.a (mon.0) 311 : audit [INF] from='client.? 192.168.123.100:0/3661151104' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "99ca2b37-ae0a-4199-ac17-e89aa50eb255"}]: dispatch 2026-03-10T07:21:36.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:36 vm00 bash[20701]: audit 2026-03-10T07:21:35.947803+0000 mon.a (mon.0) 311 : audit [INF] from='client.? 192.168.123.100:0/3661151104' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "99ca2b37-ae0a-4199-ac17-e89aa50eb255"}]: dispatch 2026-03-10T07:21:36.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:36 vm00 bash[20701]: audit 2026-03-10T07:21:35.951497+0000 mon.a (mon.0) 312 : audit [INF] from='client.? 192.168.123.100:0/3661151104' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "99ca2b37-ae0a-4199-ac17-e89aa50eb255"}]': finished 2026-03-10T07:21:36.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:36 vm00 bash[20701]: audit 2026-03-10T07:21:35.951497+0000 mon.a (mon.0) 312 : audit [INF] from='client.? 
192.168.123.100:0/3661151104' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "99ca2b37-ae0a-4199-ac17-e89aa50eb255"}]': finished 2026-03-10T07:21:36.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:36 vm00 bash[20701]: cluster 2026-03-10T07:21:35.954010+0000 mon.a (mon.0) 313 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T07:21:36.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:36 vm00 bash[20701]: cluster 2026-03-10T07:21:35.954010+0000 mon.a (mon.0) 313 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T07:21:36.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:36 vm00 bash[20701]: audit 2026-03-10T07:21:35.954571+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:21:36.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:36 vm00 bash[20701]: audit 2026-03-10T07:21:35.954571+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:21:36.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:36 vm03 bash[23382]: audit 2026-03-10T07:21:35.947803+0000 mon.a (mon.0) 311 : audit [INF] from='client.? 192.168.123.100:0/3661151104' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "99ca2b37-ae0a-4199-ac17-e89aa50eb255"}]: dispatch 2026-03-10T07:21:36.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:36 vm03 bash[23382]: audit 2026-03-10T07:21:35.947803+0000 mon.a (mon.0) 311 : audit [INF] from='client.? 192.168.123.100:0/3661151104' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "99ca2b37-ae0a-4199-ac17-e89aa50eb255"}]: dispatch 2026-03-10T07:21:36.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:36 vm03 bash[23382]: audit 2026-03-10T07:21:35.951497+0000 mon.a (mon.0) 312 : audit [INF] from='client.? 192.168.123.100:0/3661151104' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "99ca2b37-ae0a-4199-ac17-e89aa50eb255"}]': finished 2026-03-10T07:21:36.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:36 vm03 bash[23382]: audit 2026-03-10T07:21:35.951497+0000 mon.a (mon.0) 312 : audit [INF] from='client.? 
192.168.123.100:0/3661151104' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "99ca2b37-ae0a-4199-ac17-e89aa50eb255"}]': finished 2026-03-10T07:21:36.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:36 vm03 bash[23382]: cluster 2026-03-10T07:21:35.954010+0000 mon.a (mon.0) 313 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T07:21:36.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:36 vm03 bash[23382]: cluster 2026-03-10T07:21:35.954010+0000 mon.a (mon.0) 313 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T07:21:36.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:36 vm03 bash[23382]: audit 2026-03-10T07:21:35.954571+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:21:36.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:36 vm03 bash[23382]: audit 2026-03-10T07:21:35.954571+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:21:37.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:37 vm03 bash[23382]: cluster 2026-03-10T07:21:35.876570+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:37.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:37 vm03 bash[23382]: cluster 2026-03-10T07:21:35.876570+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:37.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:37 vm03 bash[23382]: audit 2026-03-10T07:21:36.572031+0000 mon.c (mon.2) 11 : audit [DBG] from='client.? 192.168.123.100:0/3520139306' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:21:37.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:37 vm03 bash[23382]: audit 2026-03-10T07:21:36.572031+0000 mon.c (mon.2) 11 : audit [DBG] from='client.? 192.168.123.100:0/3520139306' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:21:37.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:37 vm00 bash[28005]: cluster 2026-03-10T07:21:35.876570+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:37.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:37 vm00 bash[28005]: cluster 2026-03-10T07:21:35.876570+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:37.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:37 vm00 bash[28005]: audit 2026-03-10T07:21:36.572031+0000 mon.c (mon.2) 11 : audit [DBG] from='client.? 192.168.123.100:0/3520139306' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:21:37.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:37 vm00 bash[28005]: audit 2026-03-10T07:21:36.572031+0000 mon.c (mon.2) 11 : audit [DBG] from='client.? 
192.168.123.100:0/3520139306' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:21:37.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:37 vm00 bash[20701]: cluster 2026-03-10T07:21:35.876570+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:37.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:37 vm00 bash[20701]: cluster 2026-03-10T07:21:35.876570+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:37.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:37 vm00 bash[20701]: audit 2026-03-10T07:21:36.572031+0000 mon.c (mon.2) 11 : audit [DBG] from='client.? 192.168.123.100:0/3520139306' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:21:37.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:37 vm00 bash[20701]: audit 2026-03-10T07:21:36.572031+0000 mon.c (mon.2) 11 : audit [DBG] from='client.? 192.168.123.100:0/3520139306' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:21:39.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:39 vm03 bash[23382]: cluster 2026-03-10T07:21:37.876787+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:39.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:39 vm03 bash[23382]: cluster 2026-03-10T07:21:37.876787+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:39.785 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:39 vm00 bash[28005]: cluster 2026-03-10T07:21:37.876787+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:39.785 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:39 vm00 bash[28005]: cluster 2026-03-10T07:21:37.876787+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:39.785 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:39 vm00 bash[20701]: cluster 2026-03-10T07:21:37.876787+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:39.785 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:39 vm00 bash[20701]: cluster 2026-03-10T07:21:37.876787+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:41.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:41 vm03 bash[23382]: cluster 2026-03-10T07:21:39.877000+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:41.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:41 vm03 bash[23382]: cluster 2026-03-10T07:21:39.877000+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:41.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:41 vm00 bash[28005]: cluster 2026-03-10T07:21:39.877000+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:21:41.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:41 vm00 bash[28005]: cluster 2026-03-10T07:21:39.877000+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 
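The mgr_command dispatched above is the wire form of the orchestrator call driving this step of the test; assuming the standard cephadm CLI syntax, the equivalent shell invocation on the client would be:

    # sketch: add a standalone OSD on host vm00 backed by device /dev/vdd
    ceph orch daemon add osd vm00:/dev/vdd

The mon.a audit entries 308-310 that follow show the mgr servicing the request: checking for destroyed OSD ids that could be reused ("osd tree" with states ["destroyed"]), fetching the client.bootstrap-osd key, and generating a minimal ceph.conf for the new daemon.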
2026-03-10T07:21:41.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:41 vm00 bash[20701]: cluster 2026-03-10T07:21:39.877000+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:43.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:43 vm03 bash[23382]: cluster 2026-03-10T07:21:41.877212+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:43.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:43 vm00 bash[28005]: cluster 2026-03-10T07:21:41.877212+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:43.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:43 vm00 bash[20701]: cluster 2026-03-10T07:21:41.877212+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:45.551 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:45 vm00 bash[20701]: cluster 2026-03-10T07:21:43.877524+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:45.551 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:45 vm00 bash[20701]: audit 2026-03-10T07:21:44.998728+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T07:21:45.551 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:45 vm00 bash[20701]: audit 2026-03-10T07:21:44.999210+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:45.551 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:45 vm00 bash[28005]: cluster 2026-03-10T07:21:43.877524+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:45.551 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:45 vm00 bash[28005]: audit 2026-03-10T07:21:44.998728+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T07:21:45.551 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:45 vm00 bash[28005]: audit 2026-03-10T07:21:44.999210+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:45.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:45 vm03 bash[23382]: cluster 2026-03-10T07:21:43.877524+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:45.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:45 vm03 bash[23382]: audit 2026-03-10T07:21:44.998728+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T07:21:45.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:45 vm03 bash[23382]: audit 2026-03-10T07:21:44.999210+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:45.834 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:45 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:21:45.834 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:21:45 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:21:45.835 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:45 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:21:45.835 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 07:21:45 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
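The KillMode=none warning above is expected noise on cephadm deployments: the generated unit template sets KillMode=none on purpose, because the unit wraps a container process tree whose lifecycle cephadm manages itself. For a unit that should not opt out of systemd process management, the remedy systemd asks for would be a drop-in along these lines (a sketch only, not something to apply to cephadm-managed units):

    # override.conf drop-in, e.g. under /etc/systemd/system/<unit>.d/
    [Service]
    KillMode=mixed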
2026-03-10T07:21:46.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:46 vm03 bash[23382]: cephadm 2026-03-10T07:21:44.999606+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm00
2026-03-10T07:21:46.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:46 vm03 bash[23382]: audit 2026-03-10T07:21:46.092394+0000 mon.a (mon.0) 317 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:21:46.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:46 vm03 bash[23382]: audit 2026-03-10T07:21:46.096974+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:46.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:46 vm03 bash[23382]: audit 2026-03-10T07:21:46.103109+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:46.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:46 vm00 bash[28005]: cephadm 2026-03-10T07:21:44.999606+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm00
2026-03-10T07:21:46.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:46 vm00 bash[28005]: audit 2026-03-10T07:21:46.092394+0000 mon.a (mon.0) 317 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:21:46.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:46 vm00 bash[28005]: audit 2026-03-10T07:21:46.096974+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:46.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:46 vm00 bash[28005]: audit 2026-03-10T07:21:46.103109+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:46.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:46 vm00 bash[20701]: cephadm 2026-03-10T07:21:44.999606+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm00
2026-03-10T07:21:46.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:46 vm00 bash[20701]: audit 2026-03-10T07:21:46.092394+0000 mon.a (mon.0) 317 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:21:46.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:46 vm00 bash[20701]: audit 2026-03-10T07:21:46.096974+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:46.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:46 vm00 bash[20701]: audit 2026-03-10T07:21:46.103109+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:47.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:47 vm00 bash[28005]: cluster 2026-03-10T07:21:45.877756+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:47.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:47 vm00 bash[20701]: cluster 2026-03-10T07:21:45.877756+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:47.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:47 vm03 bash[23382]: cluster 2026-03-10T07:21:45.877756+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:49.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:49 vm03 bash[23382]: cluster 2026-03-10T07:21:47.877970+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:49.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:49 vm00 bash[28005]: cluster 2026-03-10T07:21:47.877970+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:49.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:49 vm00 bash[20701]: cluster 2026-03-10T07:21:47.877970+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:50.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:50 vm03 bash[23382]: audit 2026-03-10T07:21:50.009648+0000 mon.a (mon.0) 320 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T07:21:50.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:50 vm00 bash[20701]: audit 2026-03-10T07:21:50.009648+0000 mon.a (mon.0) 320 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T07:21:50.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:50 vm00 bash[28005]: audit 2026-03-10T07:21:50.009648+0000 mon.a (mon.0) 320 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T07:21:51.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:51 vm03 bash[23382]: cluster 2026-03-10T07:21:49.878233+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:51.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:51 vm03 bash[23382]: audit 2026-03-10T07:21:50.483311+0000 mon.a (mon.0) 321 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T07:21:51.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:51 vm03 bash[23382]: cluster 2026-03-10T07:21:50.487204+0000 mon.a (mon.0) 322 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in
2026-03-10T07:21:51.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:51 vm03 bash[23382]: audit 2026-03-10T07:21:50.487390+0000 mon.a (mon.0) 323 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T07:21:51.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:51 vm03 bash[23382]: audit 2026-03-10T07:21:50.487476+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:21:51.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:51 vm00 bash[28005]: cluster 2026-03-10T07:21:49.878233+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:51.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:51 vm00 bash[28005]: audit 2026-03-10T07:21:50.483311+0000 mon.a (mon.0) 321 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T07:21:51.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:51 vm00 bash[28005]: cluster 2026-03-10T07:21:50.487204+0000 mon.a (mon.0) 322 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in
2026-03-10T07:21:51.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:51 vm00 bash[28005]: audit 2026-03-10T07:21:50.487390+0000 mon.a (mon.0) 323 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T07:21:51.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:51 vm00 bash[28005]: audit 2026-03-10T07:21:50.487476+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:21:51.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:51 vm00 bash[20701]: cluster 2026-03-10T07:21:49.878233+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:51.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:51 vm00 bash[20701]: audit 2026-03-10T07:21:50.483311+0000 mon.a (mon.0) 321 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
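The osd.1 audit entries above are the CRUSH registration a freshly deployed OSD performs at startup: it first sets its device class, then weights itself into the hierarchy under its host. The weight 0.0195 is the device capacity expressed in TiB, the usual CRUSH convention (20 GiB / 1024 ≈ 0.0195). Assuming the standard ceph CLI forms, the equivalent manual commands would be:

    # sketch: classify osd.1 and place it under host vm00 in the default root
    ceph osd crush set-device-class hdd 1
    ceph osd crush create-or-move 1 0.0195 host=vm00 root=default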
2026-03-10T07:21:51.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:51 vm00 bash[20701]: cluster 2026-03-10T07:21:50.487204+0000 mon.a (mon.0) 322 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in
2026-03-10T07:21:51.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:51 vm00 bash[20701]: audit 2026-03-10T07:21:50.487390+0000 mon.a (mon.0) 323 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T07:21:51.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:51 vm00 bash[20701]: audit 2026-03-10T07:21:50.487476+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:21:52.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:52 vm03 bash[23382]: audit 2026-03-10T07:21:51.486082+0000 mon.a (mon.0) 325 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T07:21:52.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:52 vm03 bash[23382]: cluster 2026-03-10T07:21:51.490038+0000 mon.a (mon.0) 326 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-10T07:21:52.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:52 vm03 bash[23382]: audit 2026-03-10T07:21:51.491257+0000 mon.a (mon.0) 327 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:21:52.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:52 vm03 bash[23382]: audit 2026-03-10T07:21:51.498527+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:21:52.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:52 vm03 bash[23382]: audit 2026-03-10T07:21:52.334557+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:52.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:52 vm03 bash[23382]: audit 2026-03-10T07:21:52.341049+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:52.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:52 vm03 bash[23382]: audit 2026-03-10T07:21:52.343642+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:52.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:52 vm03 bash[23382]: audit 2026-03-10T07:21:52.344310+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:21:52.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:52 vm03 bash[23382]: audit 2026-03-10T07:21:52.348656+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:52.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:52 vm00 bash[28005]: audit 2026-03-10T07:21:51.486082+0000 mon.a (mon.0) 325 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T07:21:52.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:52 vm00 bash[28005]: cluster 2026-03-10T07:21:51.490038+0000 mon.a (mon.0) 326 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-10T07:21:52.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:52 vm00 bash[28005]: audit 2026-03-10T07:21:51.491257+0000 mon.a (mon.0) 327 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:21:52.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:52 vm00 bash[28005]: audit 2026-03-10T07:21:51.498527+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:21:52.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:52 vm00 bash[28005]: audit 2026-03-10T07:21:52.334557+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:52.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:52 vm00 bash[28005]: audit 2026-03-10T07:21:52.341049+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:52.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:52 vm00 bash[28005]: audit 2026-03-10T07:21:52.343642+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:52.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:52 vm00 bash[28005]: audit 2026-03-10T07:21:52.344310+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:21:52.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:52 vm00 bash[28005]: audit 2026-03-10T07:21:52.348656+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:52.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:52 vm00 bash[20701]: audit 2026-03-10T07:21:51.486082+0000 mon.a (mon.0) 325 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T07:21:52.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:52 vm00 bash[20701]: cluster 2026-03-10T07:21:51.490038+0000 mon.a (mon.0) 326 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-10T07:21:52.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:52 vm00 bash[20701]: audit 2026-03-10T07:21:51.491257+0000 mon.a (mon.0) 327 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:21:52.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:52 vm00 bash[20701]: audit 2026-03-10T07:21:51.498527+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:21:52.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:52 vm00 bash[20701]: audit 2026-03-10T07:21:52.334557+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:52.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:52
vm00 bash[20701]: audit 2026-03-10T07:21:52.341049+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:21:52.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:52 vm00 bash[20701]: audit 2026-03-10T07:21:52.341049+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:21:52.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:52 vm00 bash[20701]: audit 2026-03-10T07:21:52.343642+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:21:52.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:52 vm00 bash[20701]: audit 2026-03-10T07:21:52.343642+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:21:52.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:52 vm00 bash[20701]: audit 2026-03-10T07:21:52.344310+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:21:52.893 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:52 vm00 bash[20701]: audit 2026-03-10T07:21:52.344310+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:21:52.893 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:52 vm00 bash[20701]: audit 2026-03-10T07:21:52.348656+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:21:52.893 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:52 vm00 bash[20701]: audit 2026-03-10T07:21:52.348656+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:21:53.315 INFO:teuthology.orchestra.run.vm00.stdout:Created osd(s) 1 on host 'vm00' 2026-03-10T07:21:53.316 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:53.301+0000 7fac75ffb640 1 -- 192.168.123.100:0/3895228635 <== mgr.14150 v2:192.168.123.100:6800/2669938860 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7fac80108a70 con 0x7fac48077650 2026-03-10T07:21:53.316 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:53.301+0000 7fac86d02640 1 -- 192.168.123.100:0/3895228635 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fac48077650 msgr2=0x7fac48079b10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:21:53.316 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:53.301+0000 7fac86d02640 1 --2- 192.168.123.100:0/3895228635 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fac48077650 0x7fac48079b10 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7fac700097c0 tx=0x7fac700023d0 comp rx=0 tx=0).stop 2026-03-10T07:21:53.316 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:53.301+0000 7fac86d02640 1 -- 192.168.123.100:0/3895228635 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fac80101990 msgr2=0x7fac8019b440 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:21:53.316 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:53.301+0000 7fac86d02640 1 --2- 192.168.123.100:0/3895228635 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] 
conn(0x7fac80101990 0x7fac8019b440 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7fac6800cc70 tx=0x7fac68007590 comp rx=0 tx=0).stop 2026-03-10T07:21:53.316 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:53.301+0000 7fac86d02640 1 -- 192.168.123.100:0/3895228635 shutdown_connections 2026-03-10T07:21:53.316 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:53.301+0000 7fac86d02640 1 --2- 192.168.123.100:0/3895228635 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fac48077650 0x7fac48079b10 unknown :-1 s=CLOSED pgs=51 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:21:53.316 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:53.301+0000 7fac86d02640 1 --2- 192.168.123.100:0/3895228635 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fac80103550 0x7fac80197ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:21:53.316 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:53.301+0000 7fac86d02640 1 --2- 192.168.123.100:0/3895228635 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fac80102b90 0x7fac8019b980 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:21:53.316 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:53.301+0000 7fac86d02640 1 --2- 192.168.123.100:0/3895228635 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fac80101990 0x7fac8019b440 unknown :-1 s=CLOSED pgs=103 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:21:53.316 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:53.301+0000 7fac86d02640 1 -- 192.168.123.100:0/3895228635 >> 192.168.123.100:0/3895228635 conn(0x7fac800fd120 msgr2=0x7fac801060c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:21:53.316 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:53.301+0000 7fac86d02640 1 -- 192.168.123.100:0/3895228635 shutdown_connections 2026-03-10T07:21:53.316 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:21:53.301+0000 7fac86d02640 1 -- 192.168.123.100:0/3895228635 wait complete. 2026-03-10T07:21:53.394 DEBUG:teuthology.orchestra.run.vm00:osd.1> sudo journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.1.service 2026-03-10T07:21:53.395 INFO:tasks.cephadm:Deploying osd.2 on vm00 with /dev/vdc... 
2026-03-10T07:21:53.395 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- lvm zap /dev/vdc
2026-03-10T07:21:53.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:53 vm00 bash[28005]: cluster 2026-03-10T07:21:51.044056+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:21:53.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:53 vm00 bash[28005]: cluster 2026-03-10T07:21:51.044128+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:21:53.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:53 vm00 bash[28005]: cluster 2026-03-10T07:21:51.878440+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:53.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:53 vm00 bash[28005]: audit 2026-03-10T07:21:52.493851+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:21:53.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:53 vm00 bash[28005]: cluster 2026-03-10T07:21:52.507923+0000 mon.a (mon.0) 335 : cluster [INF] osd.1 [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331] boot
2026-03-10T07:21:53.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:53 vm00 bash[28005]: cluster 2026-03-10T07:21:52.507988+0000 mon.a (mon.0) 336 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-10T07:21:53.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:53 vm00 bash[28005]: audit 2026-03-10T07:21:52.508186+0000 mon.a (mon.0) 337 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:21:53.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:53 vm00 bash[28005]: audit 2026-03-10T07:21:53.293542+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:21:53.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:53 vm00 bash[28005]: audit 2026-03-10T07:21:53.298490+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:53.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:53 vm00 bash[28005]: audit 2026-03-10T07:21:53.304038+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:53.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:53 vm00 bash[20701]: cluster 2026-03-10T07:21:51.044056+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:21:53.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:53 vm00 bash[20701]: cluster 2026-03-10T07:21:51.044128+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:21:53.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:53 vm00 bash[20701]: cluster 2026-03-10T07:21:51.878440+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:53.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:53 vm00 bash[20701]: audit 2026-03-10T07:21:52.493851+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:21:53.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:53 vm00 bash[20701]: cluster 2026-03-10T07:21:52.507923+0000 mon.a (mon.0) 335 : cluster [INF] osd.1 [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331] boot
2026-03-10T07:21:53.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:53 vm00 bash[20701]: cluster 2026-03-10T07:21:52.507988+0000 mon.a (mon.0) 336 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-10T07:21:53.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:53 vm00 bash[20701]: audit 2026-03-10T07:21:52.508186+0000 mon.a (mon.0) 337 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:21:53.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:53 vm00 bash[20701]: audit 2026-03-10T07:21:53.293542+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:21:53.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:53 vm00 bash[20701]: audit 2026-03-10T07:21:53.298490+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:53.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:53 vm00 bash[20701]: audit 2026-03-10T07:21:53.304038+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:53.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:53 vm03 bash[23382]: cluster 2026-03-10T07:21:51.044056+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:21:53.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:53 vm03 bash[23382]: cluster 2026-03-10T07:21:51.044128+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:21:53.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:53 vm03 bash[23382]: cluster 2026-03-10T07:21:51.878440+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T07:21:53.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:53 vm03 bash[23382]: audit 2026-03-10T07:21:52.493851+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:21:53.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:53 vm03 bash[23382]: cluster 2026-03-10T07:21:52.507923+0000 mon.a (mon.0) 335 : cluster [INF] osd.1 [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331] boot
2026-03-10T07:21:53.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:53 vm03 bash[23382]: cluster 2026-03-10T07:21:52.507988+0000 mon.a (mon.0) 336 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-10T07:21:53.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:53 vm03 bash[23382]: audit 2026-03-10T07:21:52.508186+0000 mon.a (mon.0) 337 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:21:53.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:53 vm03 bash[23382]: audit 2026-03-10T07:21:53.293542+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:21:53.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:53 vm03 bash[23382]: audit 2026-03-10T07:21:53.298490+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:53.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:53 vm03 bash[23382]: audit 2026-03-10T07:21:53.304038+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:21:54.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:54 vm00 bash[28005]: cluster 2026-03-10T07:21:53.523874+0000 mon.a (mon.0) 341 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-10T07:21:54.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:54 vm00 bash[20701]: cluster 2026-03-10T07:21:53.523874+0000 mon.a (mon.0) 341 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-10T07:21:55.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:54 vm03 bash[23382]: cluster 2026-03-10T07:21:53.523874+0000 mon.a (mon.0) 341 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-10T07:21:55.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:55 vm00 bash[28005]: cluster 2026-03-10T07:21:53.878693+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:21:55.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:55 vm00 bash[20701]: cluster 2026-03-10T07:21:53.878693+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:21:56.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:55 vm03 bash[23382]: cluster 2026-03-10T07:21:53.878693+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:21:57.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:57 vm00 bash[28005]: cluster 2026-03-10T07:21:55.878932+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:21:57.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:57 vm00 bash[20701]: cluster 2026-03-10T07:21:55.878932+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:21:58.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:57 vm03 bash[23382]: cluster 2026-03-10T07:21:55.878932+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:21:58.073 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config
2026-03-10T07:21:58.970 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T07:21:58.991 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph orch daemon add osd vm00:/dev/vdc
2026-03-10T07:21:59.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:21:59 vm00 bash[28005]: cluster 2026-03-10T07:21:57.879166+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:21:59.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:21:59 vm00 bash[20701]: cluster 2026-03-10T07:21:57.879166+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:22:00.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:21:59 vm03 bash[23382]: cluster 2026-03-10T07:21:57.879166+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:22:01.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:00 vm03 bash[23382]: cephadm 2026-03-10T07:21:59.755545+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm00
2026-03-10T07:22:01.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:00 vm03 bash[23382]: audit 2026-03-10T07:21:59.761910+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:01.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:00 vm03 bash[23382]: audit 2026-03-10T07:21:59.761910+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:22:01.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:00 vm03 bash[23382]: audit 2026-03-10T07:21:59.768165+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:22:01.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:00 vm03 bash[23382]: audit 2026-03-10T07:21:59.768165+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:22:01.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:00 vm03 bash[23382]: audit 2026-03-10T07:21:59.769005+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:22:01.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:00 vm03 bash[23382]: audit 2026-03-10T07:21:59.769005+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:22:01.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:00 vm03 bash[23382]: audit 2026-03-10T07:21:59.771503+0000 mon.a (mon.0) 345 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:01.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:00 vm03 bash[23382]: audit 2026-03-10T07:21:59.771503+0000 mon.a (mon.0) 345 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:01.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:00 vm03 bash[23382]: audit 2026-03-10T07:21:59.771985+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:22:01.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:00 vm03 bash[23382]: audit 2026-03-10T07:21:59.771985+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:22:01.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:00 vm03 bash[23382]: audit 2026-03-10T07:21:59.776478+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:22:01.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:00 vm03 bash[23382]: audit 2026-03-10T07:21:59.776478+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:22:01.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:00 vm03 bash[23382]: cluster 2026-03-10T07:21:59.879377+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:22:01.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:00 vm03 bash[23382]: cluster 2026-03-10T07:21:59.879377+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:00 vm00 bash[20701]: cephadm 2026-03-10T07:21:59.755545+0000 mgr.y 
(mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm00 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:00 vm00 bash[20701]: cephadm 2026-03-10T07:21:59.755545+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm00 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:00 vm00 bash[20701]: audit 2026-03-10T07:21:59.761910+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:00 vm00 bash[20701]: audit 2026-03-10T07:21:59.761910+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:00 vm00 bash[20701]: audit 2026-03-10T07:21:59.768165+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:00 vm00 bash[20701]: audit 2026-03-10T07:21:59.768165+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:00 vm00 bash[20701]: audit 2026-03-10T07:21:59.769005+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:00 vm00 bash[20701]: audit 2026-03-10T07:21:59.769005+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:00 vm00 bash[20701]: audit 2026-03-10T07:21:59.771503+0000 mon.a (mon.0) 345 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:00 vm00 bash[20701]: audit 2026-03-10T07:21:59.771503+0000 mon.a (mon.0) 345 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:00 vm00 bash[20701]: audit 2026-03-10T07:21:59.771985+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:00 vm00 bash[20701]: audit 2026-03-10T07:21:59.771985+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:00 vm00 bash[20701]: audit 2026-03-10T07:21:59.776478+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:00 vm00 bash[20701]: audit 2026-03-10T07:21:59.776478+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:00 vm00 bash[20701]: cluster 
2026-03-10T07:21:59.879377+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:00 vm00 bash[20701]: cluster 2026-03-10T07:21:59.879377+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:00 vm00 bash[28005]: cephadm 2026-03-10T07:21:59.755545+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm00 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:00 vm00 bash[28005]: cephadm 2026-03-10T07:21:59.755545+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm00 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:00 vm00 bash[28005]: audit 2026-03-10T07:21:59.761910+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:00 vm00 bash[28005]: audit 2026-03-10T07:21:59.761910+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:00 vm00 bash[28005]: audit 2026-03-10T07:21:59.768165+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:00 vm00 bash[28005]: audit 2026-03-10T07:21:59.768165+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:00 vm00 bash[28005]: audit 2026-03-10T07:21:59.769005+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:00 vm00 bash[28005]: audit 2026-03-10T07:21:59.769005+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:00 vm00 bash[28005]: audit 2026-03-10T07:21:59.771503+0000 mon.a (mon.0) 345 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:00 vm00 bash[28005]: audit 2026-03-10T07:21:59.771503+0000 mon.a (mon.0) 345 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:00 vm00 bash[28005]: audit 2026-03-10T07:21:59.771985+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:00 vm00 bash[28005]: audit 2026-03-10T07:21:59.771985+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:22:01.142 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:00 vm00 bash[28005]: audit 2026-03-10T07:21:59.776478+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:00 vm00 bash[28005]: audit 2026-03-10T07:21:59.776478+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:00 vm00 bash[28005]: cluster 2026-03-10T07:21:59.879377+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:22:01.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:00 vm00 bash[28005]: cluster 2026-03-10T07:21:59.879377+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:22:03.273 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:02 vm03 bash[23382]: cluster 2026-03-10T07:22:01.879628+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:22:03.273 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:02 vm03 bash[23382]: cluster 2026-03-10T07:22:01.879628+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:22:03.391 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:02 vm00 bash[28005]: cluster 2026-03-10T07:22:01.879628+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:22:03.391 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:02 vm00 bash[28005]: cluster 2026-03-10T07:22:01.879628+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:22:03.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:02 vm00 bash[20701]: cluster 2026-03-10T07:22:01.879628+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:22:03.392 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:02 vm00 bash[20701]: cluster 2026-03-10T07:22:01.879628+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:22:03.668 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config 2026-03-10T07:22:03.838 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.825+0000 7ff8943b4640 1 -- 192.168.123.100:0/2378193724 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff88c108800 msgr2=0x7ff88c10ac10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:22:03.838 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.825+0000 7ff8943b4640 1 --2- 192.168.123.100:0/2378193724 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff88c108800 0x7ff88c10ac10 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7ff88800b0a0 tx=0x7ff88802f450 comp rx=0 tx=0).stop 2026-03-10T07:22:03.838 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.825+0000 7ff8943b4640 1 -- 192.168.123.100:0/2378193724 shutdown_connections 2026-03-10T07:22:03.838 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.825+0000 7ff8943b4640 1 --2- 192.168.123.100:0/2378193724 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff88c108800 0x7ff88c10ac10 unknown :-1 
s=CLOSED pgs=109 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:22:03.838 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.825+0000 7ff8943b4640 1 --2- 192.168.123.100:0/2378193724 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff88c105ed0 0x7ff88c1082c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:22:03.838 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.825+0000 7ff8943b4640 1 --2- 192.168.123.100:0/2378193724 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff88c1035a0 0x7ff88c105990 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:22:03.838 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.825+0000 7ff8943b4640 1 -- 192.168.123.100:0/2378193724 >> 192.168.123.100:0/2378193724 conn(0x7ff88c0fd120 msgr2=0x7ff88c0ff560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:22:03.838 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.825+0000 7ff8943b4640 1 -- 192.168.123.100:0/2378193724 shutdown_connections 2026-03-10T07:22:03.838 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.825+0000 7ff8943b4640 1 -- 192.168.123.100:0/2378193724 wait complete. 2026-03-10T07:22:03.838 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff8943b4640 1 Processor -- start 2026-03-10T07:22:03.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff8943b4640 1 -- start start 2026-03-10T07:22:03.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff8943b4640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff88c1035a0 0x7ff88c19c320 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:22:03.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff8943b4640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff88c105ed0 0x7ff88c19c860 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:22:03.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff891928640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff88c105ed0 0x7ff88c19c860 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:22:03.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff891928640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff88c105ed0 0x7ff88c19c860 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:54426/0 (socket says 192.168.123.100:54426) 2026-03-10T07:22:03.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff892129640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff88c1035a0 0x7ff88c19c320 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:22:03.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff8943b4640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff88c108800 0x7ff88c1a38e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:22:03.839 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff8943b4640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7ff88c10d9b0 con 0x7ff88c105ed0
2026-03-10T07:22:03.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff8943b4640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7ff88c10d830 con 0x7ff88c1035a0
2026-03-10T07:22:03.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff8943b4640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7ff88c10db30 con 0x7ff88c108800
2026-03-10T07:22:03.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff892129640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff88c1035a0 0x7ff88c19c320 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.103:3300/0 says I am v2:192.168.123.100:60952/0 (socket says 192.168.123.100:60952)
2026-03-10T07:22:03.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff892129640 1 -- 192.168.123.100:0/3337222417 learned_addr learned my addr 192.168.123.100:0/3337222417 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:22:03.840 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff89292a640 1 --2- 192.168.123.100:0/3337222417 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff88c108800 0x7ff88c1a38e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:22:03.840 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff89292a640 1 -- 192.168.123.100:0/3337222417 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff88c1035a0 msgr2=0x7ff88c19c320 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:22:03.840 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff89292a640 1 --2- 192.168.123.100:0/3337222417 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff88c1035a0 0x7ff88c19c320 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:22:03.840 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff89292a640 1 -- 192.168.123.100:0/3337222417 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff88c105ed0 msgr2=0x7ff88c19c860 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:22:03.840 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff89292a640 1 --2- 192.168.123.100:0/3337222417 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff88c105ed0 0x7ff88c19c860 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:22:03.840 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff89292a640 1 -- 192.168.123.100:0/3337222417 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff88c1a3f50 con 0x7ff88c108800
2026-03-10T07:22:03.840 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff892129640 1 --2- 192.168.123.100:0/3337222417 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff88c1035a0 0x7ff88c19c320 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-10T07:22:03.840 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff89292a640 1 --2- 192.168.123.100:0/3337222417 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff88c108800 0x7ff88c1a38e0 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7ff888004450 tx=0x7ff888004750 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:22:03.840 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff87b7fe640 1 -- 192.168.123.100:0/3337222417 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff888047070 con 0x7ff88c108800
2026-03-10T07:22:03.840 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff8943b4640 1 -- 192.168.123.100:0/3337222417 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7ff88c1a41e0 con 0x7ff88c108800
2026-03-10T07:22:03.842 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff8943b4640 1 -- 192.168.123.100:0/3337222417 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7ff88c1a4720 con 0x7ff88c108800
2026-03-10T07:22:03.842 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff87b7fe640 1 -- 192.168.123.100:0/3337222417 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7ff888002c60 con 0x7ff88c108800
2026-03-10T07:22:03.842 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff87b7fe640 1 -- 192.168.123.100:0/3337222417 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff888042420 con 0x7ff88c108800
2026-03-10T07:22:03.842 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.829+0000 7ff8943b4640 1 -- 192.168.123.100:0/3337222417 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff85c005180 con 0x7ff88c108800
2026-03-10T07:22:03.846 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.833+0000 7ff87b7fe640 1 -- 192.168.123.100:0/3337222417 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 13) ==== 99979+0+0 (secure 0 0 0) 0x7ff888007660 con 0x7ff88c108800
2026-03-10T07:22:03.846 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.837+0000 7ff87b7fe640 1 --2- 192.168.123.100:0/3337222417 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7ff858077680 0x7ff858079b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:22:03.847 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.837+0000 7ff892129640 1 --2- 192.168.123.100:0/3337222417 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7ff858077680 0x7ff858079b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:22:03.847 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.837+0000 7ff87b7fe640 1 -- 192.168.123.100:0/3337222417 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(14..14 src has 1..14) ==== 2189+0+0 (secure 0 0 0) 0x7ff8880bda30 con 0x7ff88c108800
2026-03-10T07:22:03.847 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.837+0000 7ff87b7fe640 1 -- 192.168.123.100:0/3337222417 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7ff88803d070 con 0x7ff88c108800
2026-03-10T07:22:03.847 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.837+0000 7ff892129640 1 --2- 192.168.123.100:0/3337222417 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7ff858077680 0x7ff858079b40 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7ff880004240 tx=0x7ff88000a480 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:22:03.952 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:03.941+0000 7ff8943b4640 1 -- 192.168.123.100:0/3337222417 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdc", "target": ["mon-mgr", ""]}) -- 0x7ff85c002bf0 con 0x7ff858077680
2026-03-10T07:22:05.391 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:05 vm00 bash[28005]: cluster 2026-03-10T07:22:03.879897+0000 mgr.y (mgr.14150) 104 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:22:05.391 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:05 vm00 bash[28005]: audit 2026-03-10T07:22:03.947383+0000 mgr.y (mgr.14150) 105 : audit [DBG] from='client.24152 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:05.391 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:05 vm00 bash[28005]: audit 2026-03-10T07:22:03.948640+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:22:05.391 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:05 vm00 bash[28005]: audit 2026-03-10T07:22:03.949831+0000 mon.a (mon.0) 349 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T07:22:05.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:05 vm00 bash[28005]: audit 2026-03-10T07:22:03.950168+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
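The mgr_command above is the wire form of the orchestrator CLI call that kicks off this whole sequence. A minimal sketch of the equivalent invocation, assuming a shell on a host that holds the client.admin keyring:

    # Ask the cephadm mgr module to create an OSD on vm00's /dev/vdc
    # (the same {"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdc"} payload seen above).
    ceph orch daemon add osd vm00:/dev/vdc

The request is accepted by the active mgr (mgr.y here), which then drives the deployment itself; that is why the subsequent audit entries are dispatched from='mgr.14150 ...' entity='mgr.y' rather than from the client.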
2026-03-10T07:22:07.391 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:07 vm00 bash[28005]: cluster 2026-03-10T07:22:05.880137+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
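The audit entries 348-350 above are cephadm's pre-flight queries before it creates the OSD: check for destroyed OSD ids that could be recycled, fetch the bootstrap-osd key the new daemon will authenticate with, and render a minimal ceph.conf to ship into the daemon's container. A sketch of the same queries run by hand, assuming admin access (output formats may vary by release):

    ceph osd tree destroyed -f json      # any destroyed ids available for reuse?
    ceph auth get client.bootstrap-osd   # key used while the OSD bootstraps
    ceph config generate-minimal-conf    # mon-addresses-only conf for the new daemon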
2026-03-10T07:22:09.391 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:09 vm00 bash[28005]: cluster 2026-03-10T07:22:07.880353+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:22:10.391 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:10 vm00 bash[28005]: audit 2026-03-10T07:22:09.325902+0000 mon.a (mon.0) 351 : audit [INF] from='client.? 192.168.123.100:0/2187340346' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7d09342f-42e2-41fc-9c97-fa4b821fa628"}]: dispatch
2026-03-10T07:22:10.391 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:10 vm00 bash[28005]: audit 2026-03-10T07:22:09.328853+0000 mon.a (mon.0) 352 : audit [INF] from='client.? 192.168.123.100:0/2187340346' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7d09342f-42e2-41fc-9c97-fa4b821fa628"}]': finished
2026-03-10T07:22:10.391 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:10 vm00 bash[28005]: cluster 2026-03-10T07:22:09.331952+0000 mon.a (mon.0) 353 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in
2026-03-10T07:22:10.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:10 vm00 bash[28005]: audit 2026-03-10T07:22:09.332083+0000 mon.a (mon.0) 354 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:22:10.392 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:10 vm00 bash[28005]: audit 2026-03-10T07:22:09.939869+0000 mon.a (mon.0) 355 : audit [DBG] from='client.? 192.168.123.100:0/3355182218' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:22:11.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:11 vm00 bash[28005]: cluster 2026-03-10T07:22:09.880619+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:22:13.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:13 vm00 bash[28005]: cluster 2026-03-10T07:22:11.880839+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
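Audit entries 351-352 show where the new OSD's id comes from: client.bootstrap-osd submits the fresh uuid generated for the data device, and the mon allocates the next free id, bumping the osdmap to e15 with "3 total". A sketch of the same steps, assuming the bootstrap-osd keyring is available; the uuid below is the one from this run (normally ceph-volume generates it during prepare):

    # Register a new OSD id against the device uuid; prints the allocated id (2 here).
    ceph -n client.bootstrap-osd osd new 7d09342f-42e2-41fc-9c97-fa4b821fa628
    # The preparing OSD then fetches the monmap, matching audit entry 355 above.
    ceph -n client.bootstrap-osd mon getmap -o /tmp/monmap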
2026-03-10T07:22:15.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:15 vm00 bash[20701]: cluster 2026-03-10T07:22:13.881090+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:22:17.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:17 vm00 bash[20701]: cluster 2026-03-10T07:22:15.881377+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:22:19.250 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 07:22:19 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
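The KillMode=none warning above comes from line 23 of the unit template that cephadm installs for this cluster (fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953); systemd prints it once per loaded unit, so the same text appears in the journals of osd.0, osd.1, mon.a, mon.c and mgr.y. It is harmless for the test, but if the noise mattered, a drop-in override would be the usual systemd-level fix. This is an illustrative sketch only, since cephadm owns these units and may regenerate them on redeploy:

    # Hypothetical drop-in for the cephadm-generated template unit (adjust the fsid to match):
    sudo systemctl edit ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service
    # and in the editor add:
    #   [Service]
    #   KillMode=mixed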
2026-03-10T07:22:19.505 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:19 vm00 bash[20701]: cluster 2026-03-10T07:22:17.881624+0000 mgr.y (mgr.14150) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:22:19.505 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:19 vm00 bash[20701]: audit 2026-03-10T07:22:18.398709+0000 mon.a (mon.0) 356 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T07:22:19.505 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:19 vm00 bash[20701]: audit 2026-03-10T07:22:18.399367+0000 mon.a (mon.0) 357 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:22:19.505 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:19 vm00 bash[20701]: cephadm 2026-03-10T07:22:18.399844+0000 mgr.y (mgr.14150) 113 : cephadm [INF] Deploying daemon osd.2 on vm00
2026-03-10T07:22:20.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:20 vm00 bash[28005]: audit 2026-03-10T07:22:19.500871+0000 mon.a (mon.0) 358 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:22:20.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:20 vm00 bash[28005]: audit 2026-03-10T07:22:19.509287+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:20.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:20 vm00 bash[28005]: audit 2026-03-10T07:22:19.522623+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
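At this point mgr.y has fetched the osd.2 key, shipped the minimal conf and started the new container ("Deploying daemon osd.2 on vm00" above). A quick way to confirm the daemon from any admin host, sketched with stock orchestrator commands:

    ceph orch ps --daemon-type osd   # should list osd.2 on vm00 once it is running
    ceph -s                          # cluster summary, including the up/in OSD counts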
2026-03-10T07:22:21.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:21 vm00 bash[28005]: cluster 2026-03-10T07:22:19.881885+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:22:23.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:23 vm00 bash[28005]: cluster 2026-03-10T07:22:21.882156+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:22:23.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:23 vm00 bash[28005]: audit 2026-03-10T07:22:22.861742+0000 mon.a (mon.0) 361 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/3026087437,v1:192.168.123.100:6819/3026087437]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T07:22:24.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:24 vm00 bash[28005]: audit 2026-03-10T07:22:23.529519+0000 mon.a (mon.0) 362 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/3026087437,v1:192.168.123.100:6819/3026087437]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T07:22:24.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:24 vm00 bash[28005]: cluster 2026-03-10T07:22:23.531978+0000 mon.a (mon.0) 363 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in
2026-03-10T07:22:24.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:24 vm00 bash[28005]: audit 2026-03-10T07:22:23.532651+0000 mon.a (mon.0) 364 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/3026087437,v1:192.168.123.100:6819/3026087437]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T07:22:24.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:24 vm00 bash[28005]: audit 2026-03-10T07:22:23.532730+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:22:24.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:24 vm00 bash[28005]: audit 2026-03-10T07:22:24.532271+0000 mon.a (mon.0) 366 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/3026087437,v1:192.168.123.100:6819/3026087437]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
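Audit entries 361-366 are osd.2 registering itself in the CRUSH map at startup: first its device class (hdd), then create-or-move into host=vm00 under root=default. CRUSH weights are the device capacity expressed in TiB, so the weight 0.0195 seen above works out to 0.0195 * 1024 GiB/TiB, roughly 20 GiB, consistent with the "40 GiB avail" reported while two such OSDs were up. The equivalent CLI calls, as a sketch:

    ceph osd crush set-device-class hdd osd.2
    ceph osd crush create-or-move osd.2 0.0195 host=vm00 root=default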
2026-03-10T07:22:24.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:24 vm00 bash[28005]: cluster 2026-03-10T07:22:24.538509+0000 mon.a (mon.0) 367 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in
2026-03-10T07:22:25.820 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:25 vm00 bash[28005]: cluster 2026-03-10T07:22:23.882416+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:22:25.820 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:25 vm00 bash[28005]: audit 2026-03-10T07:22:24.543371+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:22:25.820 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:25 vm00 bash[28005]: audit 2026-03-10T07:22:25.541091+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
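The repeated "osd metadata" dispatches for id 2 (audit entries 354, 365, 368 and 369) are the mgr polling the new daemon's registration while it starts up; cephadm uses the reported metadata to tie the daemon back to its host and devices. The same data can be inspected directly, assuming an admin shell:

    ceph osd metadata 2   # hostname, devices, container image, version, etc. for osd.2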
2026-03-10T07:22:25.821 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:25 vm00 bash[20701]: audit 2026-03-10T07:22:25.541091+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:22:26.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:25 vm03 bash[23382]: cluster 2026-03-10T07:22:23.882416+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:22:26.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:25 vm03 bash[23382]: audit 2026-03-10T07:22:24.543371+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:22:26.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:25 vm03 bash[23382]: audit 2026-03-10T07:22:25.541091+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:22:26.850 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:26 vm00 bash[28005]: cluster 2026-03-10T07:22:23.888218+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:22:26.850 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:26 vm00 bash[28005]: cluster 2026-03-10T07:22:23.888279+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:22:26.850 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:26 vm00 bash[28005]: audit 2026-03-10T07:22:25.640085+0000 mon.a (mon.0) 370 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/3026087437,v1:192.168.123.100:6819/3026087437]' entity='osd.2'
2026-03-10T07:22:26.850 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:26 vm00 bash[28005]: audit 2026-03-10T07:22:25.869944+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:26.850 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:26 vm00 bash[28005]: audit 2026-03-10T07:22:25.875174+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:26.850 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:26 vm00 bash[28005]: cluster 2026-03-10T07:22:25.882666+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:22:26.850 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:26 vm00 bash[28005]: audit 2026-03-10T07:22:26.316000+0000 mon.a (mon.0) 373 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:22:26.850 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:26 vm00 bash[28005]: audit 2026-03-10T07:22:26.316610+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:22:26.850 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:26 vm00 bash[28005]: audit 2026-03-10T07:22:26.321681+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:26.850 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:26 vm00 bash[28005]: audit 2026-03-10T07:22:26.541170+0000 mon.a (mon.0) 376 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:22:26.851 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:26 vm00 bash[20701]: cluster 2026-03-10T07:22:23.888218+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:22:26.851 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:26 vm00 bash[20701]: cluster 2026-03-10T07:22:23.888279+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:22:26.851 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:26 vm00 bash[20701]: audit 2026-03-10T07:22:25.640085+0000 mon.a (mon.0) 370 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/3026087437,v1:192.168.123.100:6819/3026087437]' entity='osd.2'
2026-03-10T07:22:26.851 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:26 vm00 bash[20701]: audit 2026-03-10T07:22:25.869944+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:26.851 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:26 vm00 bash[20701]: audit 2026-03-10T07:22:25.875174+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:26.851 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:26 vm00 bash[20701]: cluster 2026-03-10T07:22:25.882666+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:22:26.851 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:26 vm00 bash[20701]: audit 2026-03-10T07:22:26.316000+0000 mon.a (mon.0) 373 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:22:26.851 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:26 vm00 bash[20701]: audit 2026-03-10T07:22:26.316610+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:22:26.851 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:26 vm00 bash[20701]: audit 2026-03-10T07:22:26.321681+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:26.851 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:26 vm00 bash[20701]: audit 2026-03-10T07:22:26.541170+0000 mon.a (mon.0) 376 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:22:26.922 INFO:teuthology.orchestra.run.vm00.stdout:Created osd(s) 2 on host 'vm00'
2026-03-10T07:22:26.923 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:26.909+0000 7ff87b7fe640 1 -- 192.168.123.100:0/3337222417 <== mgr.14150 v2:192.168.123.100:6800/2669938860 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7ff85c002bf0 con 0x7ff858077680
2026-03-10T07:22:26.923 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:26.909+0000 7ff8943b4640 1 -- 192.168.123.100:0/3337222417 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7ff858077680 msgr2=0x7ff858079b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:22:26.923 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:26.909+0000 7ff8943b4640 1 --2- 192.168.123.100:0/3337222417 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7ff858077680 0x7ff858079b40 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7ff880004240 tx=0x7ff88000a480 comp rx=0 tx=0).stop
2026-03-10T07:22:26.923 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:26.909+0000 7ff8943b4640 1 -- 192.168.123.100:0/3337222417 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff88c108800 msgr2=0x7ff88c1a38e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:22:26.923 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:26.909+0000 7ff8943b4640 1 --2- 192.168.123.100:0/3337222417 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff88c108800 0x7ff88c1a38e0 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7ff888004450 tx=0x7ff888004750 comp rx=0 tx=0).stop
2026-03-10T07:22:26.925 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:26.913+0000 7ff8943b4640 1 -- 192.168.123.100:0/3337222417 shutdown_connections
2026-03-10T07:22:26.925 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:26.913+0000 7ff8943b4640 1 --2- 192.168.123.100:0/3337222417 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7ff858077680 0x7ff858079b40 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:22:26.925 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:26.913+0000 7ff8943b4640 1 --2- 192.168.123.100:0/3337222417 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff88c108800 0x7ff88c1a38e0 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:22:26.925 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:26.913+0000 7ff8943b4640 1 --2- 192.168.123.100:0/3337222417 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff88c105ed0 0x7ff88c19c860 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:22:26.925 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:26.913+0000 7ff8943b4640 1 --2- 192.168.123.100:0/3337222417 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff88c1035a0 0x7ff88c19c320 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:22:26.925 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:26.913+0000 7ff8943b4640 1 -- 192.168.123.100:0/3337222417 >> 192.168.123.100:0/3337222417 conn(0x7ff88c0fd120 msgr2=0x7ff88c1041a0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:22:26.925 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:26.913+0000 7ff8943b4640 1 -- 192.168.123.100:0/3337222417 shutdown_connections
2026-03-10T07:22:26.925 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:26.913+0000 7ff8943b4640 1 -- 192.168.123.100:0/3337222417 wait complete.
2026-03-10T07:22:27.023 DEBUG:teuthology.orchestra.run.vm00:osd.2> sudo journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.2.service
2026-03-10T07:22:27.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:26 vm03 bash[23382]: cluster 2026-03-10T07:22:23.888218+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:22:27.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:26 vm03 bash[23382]: cluster 2026-03-10T07:22:23.888279+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:22:27.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:26 vm03 bash[23382]: audit 2026-03-10T07:22:25.640085+0000 mon.a (mon.0) 370 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/3026087437,v1:192.168.123.100:6819/3026087437]' entity='osd.2'
2026-03-10T07:22:27.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:26 vm03 bash[23382]: audit 2026-03-10T07:22:25.869944+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:27.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:26 vm03 bash[23382]: audit 2026-03-10T07:22:25.875174+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:27.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:26 vm03 bash[23382]: cluster 2026-03-10T07:22:25.882666+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T07:22:27.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:26 vm03 bash[23382]: audit 2026-03-10T07:22:26.316000+0000 mon.a (mon.0) 373 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:22:27.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:26 vm03 bash[23382]: audit 2026-03-10T07:22:26.316610+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:22:27.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:26 vm03 bash[23382]: audit 2026-03-10T07:22:26.321681+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:27.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:26 vm03 bash[23382]: audit 2026-03-10T07:22:26.541170+0000 mon.a (mon.0) 376 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:22:27.024 INFO:tasks.cephadm:Deploying osd.3 on vm00 with /dev/vdb...
2026-03-10T07:22:27.024 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- lvm zap /dev/vdb
2026-03-10T07:22:28.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:27 vm03 bash[23382]: cluster 2026-03-10T07:22:26.658874+0000 mon.a (mon.0) 377 : cluster [INF] osd.2 [v2:192.168.123.100:6818/3026087437,v1:192.168.123.100:6819/3026087437] boot
2026-03-10T07:22:28.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:27 vm03 bash[23382]: cluster 2026-03-10T07:22:26.659476+0000 mon.a (mon.0) 378 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in
2026-03-10T07:22:28.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:27 vm03 bash[23382]: audit 2026-03-10T07:22:26.659622+0000 mon.a (mon.0) 379 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:22:28.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:27 vm03 bash[23382]: audit 2026-03-10T07:22:26.901129+0000 mon.a (mon.0) 380 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:22:28.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:27 vm03 bash[23382]: audit 2026-03-10T07:22:26.906423+0000 mon.a (mon.0) 381 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:28.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:27 vm03 bash[23382]: audit 2026-03-10T07:22:26.911965+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:28.141 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:27 vm00 bash[28005]: cluster 2026-03-10T07:22:26.658874+0000 mon.a (mon.0) 377 : cluster [INF] osd.2 [v2:192.168.123.100:6818/3026087437,v1:192.168.123.100:6819/3026087437] boot
2026-03-10T07:22:28.141 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:27 vm00 bash[28005]: cluster 2026-03-10T07:22:26.659476+0000 mon.a (mon.0) 378 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in
2026-03-10T07:22:28.141 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:27 vm00 bash[28005]: audit 2026-03-10T07:22:26.659622+0000 mon.a (mon.0) 379 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:22:28.141 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:27 vm00 bash[28005]: audit 2026-03-10T07:22:26.901129+0000 mon.a (mon.0) 380 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:22:28.141 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:27 vm00 bash[28005]: audit 2026-03-10T07:22:26.906423+0000 mon.a (mon.0) 381 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:28.141 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:27 vm00 bash[28005]: audit 2026-03-10T07:22:26.911965+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:28.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:27 vm00 bash[20701]: cluster 2026-03-10T07:22:26.658874+0000 mon.a (mon.0) 377 : cluster [INF] osd.2 [v2:192.168.123.100:6818/3026087437,v1:192.168.123.100:6819/3026087437] boot
2026-03-10T07:22:28.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:27 vm00 bash[20701]: cluster 2026-03-10T07:22:26.659476+0000 mon.a (mon.0) 378 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in
2026-03-10T07:22:28.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:27 vm00 bash[20701]: audit 2026-03-10T07:22:26.659622+0000 mon.a (mon.0) 379 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:22:28.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:27 vm00 bash[20701]: audit 2026-03-10T07:22:26.901129+0000 mon.a (mon.0) 380 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:22:28.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:27 vm00 bash[20701]: audit 2026-03-10T07:22:26.906423+0000 mon.a (mon.0) 381 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:28.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:27 vm00 bash[20701]: audit 2026-03-10T07:22:26.911965+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:29.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:28 vm03 bash[23382]: cluster 2026-03-10T07:22:27.667181+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in
2026-03-10T07:22:29.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:28 vm03 bash[23382]: cluster 2026-03-10T07:22:27.882853+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v87: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:29.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:28 vm03 bash[23382]: audit 2026-03-10T07:22:27.925227+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:22:29.141 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:28 vm00 bash[28005]: cluster 2026-03-10T07:22:27.667181+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in
2026-03-10T07:22:29.141 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:28 vm00 bash[28005]: cluster 2026-03-10T07:22:27.882853+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v87: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:29.141 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:28 vm00 bash[28005]: audit 2026-03-10T07:22:27.925227+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:22:29.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:28 vm00 bash[20701]: cluster 2026-03-10T07:22:27.667181+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in
2026-03-10T07:22:29.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:28 vm00 bash[20701]: cluster 2026-03-10T07:22:27.882853+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v87: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:29.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:28 vm00 bash[20701]: audit 2026-03-10T07:22:27.925227+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:22:30.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:29 vm03 bash[23382]: audit 2026-03-10T07:22:28.672866+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-10T07:22:30.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:29 vm03 bash[23382]: cluster 2026-03-10T07:22:28.680474+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in
2026-03-10T07:22:30.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:29 vm03 bash[23382]: audit 2026-03-10T07:22:28.682071+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:22:30.141 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:29 vm00 bash[28005]: audit 2026-03-10T07:22:28.672866+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-10T07:22:30.141 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:29 vm00 bash[28005]: cluster 2026-03-10T07:22:28.680474+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in
2026-03-10T07:22:30.141 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:29 vm00 bash[28005]: audit 2026-03-10T07:22:28.682071+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:22:30.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:29 vm00 bash[20701]: audit 2026-03-10T07:22:28.672866+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-10T07:22:30.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:29 vm00 bash[20701]: cluster 2026-03-10T07:22:28.680474+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in
2026-03-10T07:22:30.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:29 vm00 bash[20701]: audit 2026-03-10T07:22:28.682071+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:22:31.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:30 vm03 bash[23382]: audit 2026-03-10T07:22:29.680038+0000 mon.a (mon.0) 388 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-10T07:22:31.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:30 vm03 bash[23382]: cluster 2026-03-10T07:22:29.685052+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in
2026-03-10T07:22:31.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:30 vm03 bash[23382]: audit 2026-03-10T07:22:29.800694+0000 mon.a (mon.0) 390 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T07:22:31.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:30 vm03 bash[23382]: audit 2026-03-10T07:22:29.817671+0000 mon.a (mon.0) 391 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T07:22:31.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:30 vm03 bash[23382]: audit 2026-03-10T07:22:29.817842+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T07:22:31.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:30 vm03 bash[23382]: audit 2026-03-10T07:22:29.818218+0000 mon.a (mon.0) 393 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:22:31.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:30 vm03 bash[23382]: audit 2026-03-10T07:22:29.818293+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:22:31.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:30 vm03 bash[23382]: audit 2026-03-10T07:22:29.820441+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T07:22:31.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:30 vm03 bash[23382]: audit 2026-03-10T07:22:29.820494+0000 mon.a (mon.0) 396 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:22:31.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:30 vm03 bash[23382]: audit 2026-03-10T07:22:29.820691+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:22:31.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:30 vm03 bash[23382]: audit 2026-03-10T07:22:29.820972+0000 mon.b (mon.1) 3 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T07:22:31.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:30 vm03 bash[23382]: audit 2026-03-10T07:22:29.840186+0000 mon.b (mon.1) 4 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T07:22:31.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:30 vm03 bash[23382]: audit 2026-03-10T07:22:29.841265+0000 mon.c (mon.2) 12 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T07:22:31.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:30 vm03 bash[23382]: audit 2026-03-10T07:22:29.843039+0000 mon.a (mon.0) 398 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T07:22:31.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:30 vm03 bash[23382]: audit 2026-03-10T07:22:29.843097+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:22:31.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:30 vm03 bash[23382]: audit 2026-03-10T07:22:29.843137+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:22:31.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:30 vm03 bash[23382]: audit 2026-03-10T07:22:29.860048+0000 mon.c (mon.2) 13 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T07:22:31.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:30 vm03 bash[23382]: cluster 2026-03-10T07:22:29.883110+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v90: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:31.141 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.680038+0000 mon.a (mon.0) 388 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-10T07:22:31.141 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: cluster 2026-03-10T07:22:29.685052+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in
2026-03-10T07:22:31.141 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.800694+0000 mon.a (mon.0) 390 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T07:22:31.141 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.817671+0000 mon.a (mon.0) 391 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.817842+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.817842+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.818218+0000 mon.a (mon.0) 393 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.818218+0000 mon.a (mon.0) 393 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.818293+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.818293+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.820441+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.820441+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.820494+0000 mon.a (mon.0) 396 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.820494+0000 mon.a (mon.0) 396 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.820691+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.820691+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.820972+0000 mon.b (mon.1) 3 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.820972+0000 mon.b (mon.1) 3 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 
2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.840186+0000 mon.b (mon.1) 4 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.841265+0000 mon.c (mon.2) 12 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.843039+0000 mon.a (mon.0) 398 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.843097+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.843137+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: audit 2026-03-10T07:22:29.860048+0000 mon.c (mon.2) 13 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:30 vm00 bash[28005]: cluster 2026-03-10T07:22:29.883110+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v90: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:30 vm00 bash[20701]: audit 2026-03-10T07:22:29.680038+0000 mon.a (mon.0) 388 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:30 vm00 bash[20701]: cluster 2026-03-10T07:22:29.685052+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in
2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:30 vm00 bash[20701]: audit 2026-03-10T07:22:29.800694+0000 mon.a (mon.0) 390 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:30 vm00 bash[20701]: audit 2026-03-10T07:22:29.817671+0000 mon.a (mon.0) 391 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:30 vm00 bash[20701]: audit 2026-03-10T07:22:29.817842+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:30 vm00 bash[20701]: audit 2026-03-10T07:22:29.818218+0000 mon.a (mon.0) 393 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:30 vm00 bash[20701]: audit 2026-03-10T07:22:29.818293+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:30 vm00 bash[20701]: audit 2026-03-10T07:22:29.820441+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:30 vm00 bash[20701]: audit 2026-03-10T07:22:29.820494+0000 mon.a (mon.0) 396 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:30 vm00 bash[20701]: audit 2026-03-10T07:22:29.820691+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:30 vm00 bash[20701]: audit 2026-03-10T07:22:29.820972+0000 mon.b (mon.1) 3 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:30 vm00 bash[20701]: audit 2026-03-10T07:22:29.840186+0000 mon.b (mon.1) 4 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T07:22:31.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:30 vm00 bash[20701]: audit 2026-03-10T07:22:29.841265+0000 mon.c (mon.2) 12 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T07:22:31.143 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:30 vm00 bash[20701]: audit 2026-03-10T07:22:29.843039+0000 mon.a (mon.0) 398 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T07:22:31.143 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:30 vm00 bash[20701]: audit 2026-03-10T07:22:29.843097+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:22:31.143 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:30 vm00 bash[20701]: audit 2026-03-10T07:22:29.843137+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:22:31.143 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:30 vm00 bash[20701]: audit 2026-03-10T07:22:29.860048+0000 mon.c (mon.2) 13 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T07:22:31.143 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:30 vm00 bash[20701]: cluster 2026-03-10T07:22:29.883110+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v90: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:31.693 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config
2026-03-10T07:22:32.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:31 vm03 bash[23382]: cluster 2026-03-10T07:22:30.695804+0000 mon.a (mon.0) 401 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in
2026-03-10T07:22:32.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:31 vm03 bash[23382]: cluster 2026-03-10T07:22:30.713740+0000 mon.a (mon.0) 402 : cluster [DBG] mgrmap e14: y(active, since 2m), standbys: x
2026-03-10T07:22:32.141 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:31 vm00 bash[28005]: cluster 2026-03-10T07:22:30.695804+0000 mon.a (mon.0) 401 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in
2026-03-10T07:22:32.141 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:31 vm00 bash[28005]: cluster 2026-03-10T07:22:30.713740+0000 mon.a (mon.0) 402 : cluster [DBG] mgrmap e14: y(active, since 2m), standbys: x
2026-03-10T07:22:32.141 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:31 vm00 bash[20701]: cluster 2026-03-10T07:22:30.695804+0000 mon.a (mon.0) 401 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in
2026-03-10T07:22:32.141 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:31 vm00 bash[20701]: cluster 2026-03-10T07:22:30.713740+0000 mon.a (mon.0) 402 : cluster [DBG] mgrmap e14: y(active, since 2m), standbys: x
2026-03-10T07:22:32.598 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T07:22:32.609 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph orch daemon add osd vm00:/dev/vdb
2026-03-10T07:22:32.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:32 vm00 bash[20701]: cluster 2026-03-10T07:22:31.883356+0000 mgr.y (mgr.14150) 120 : cluster [DBG] pgmap v92: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:32.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:32 vm00 bash[28005]: cluster 2026-03-10T07:22:31.883356+0000 mgr.y (mgr.14150) 120 : cluster [DBG] pgmap v92: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:33.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:32 vm03 bash[23382]: cluster 2026-03-10T07:22:31.883356+0000 mgr.y (mgr.14150) 120 : cluster [DBG] pgmap v92: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
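Note: the DEBUG:teuthology.orchestra.run.vm00:> line above is the test driver invoking "ceph orch daemon add osd vm00:/dev/vdb" inside a cephadm shell. A minimal sketch of how the same step could be reproduced and checked by hand, assuming a root shell on vm00 and reusing the fsid and device printed in that command (the verification calls are standard Ceph CLI, not taken from this run):

    # Sketch only; fsid and device come from this run's log, everything else is illustrative.
    sudo cephadm shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph orch daemon add osd vm00:/dev/vdb
    # Afterwards, confirm the new daemon and its CRUSH entry:
    sudo cephadm shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph orch ps --daemon-type osd
    sudo cephadm shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph osd tree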
2026-03-10T07:22:34.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:34 vm03 bash[23382]: cephadm 2026-03-10T07:22:33.404434+0000 mgr.y (mgr.14150) 121 : cephadm [INF] Detected new or changed devices on vm00
2026-03-10T07:22:34.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:34 vm03 bash[23382]: audit 2026-03-10T07:22:33.411448+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:34.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:34 vm03 bash[23382]: audit 2026-03-10T07:22:33.418285+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:34.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:34 vm03 bash[23382]: audit 2026-03-10T07:22:33.419794+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:22:34.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:34 vm03 bash[23382]: audit 2026-03-10T07:22:33.420546+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:22:34.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:34 vm03 bash[23382]: audit 2026-03-10T07:22:33.420926+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:22:34.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:34 vm03 bash[23382]: audit 2026-03-10T07:22:33.425604+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:34.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:34 vm00 bash[20701]: cephadm 2026-03-10T07:22:33.404434+0000 mgr.y (mgr.14150) 121 : cephadm [INF] Detected new or changed devices on vm00
2026-03-10T07:22:34.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:34 vm00 bash[20701]: audit 2026-03-10T07:22:33.411448+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:34.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:34 vm00 bash[20701]: audit 2026-03-10T07:22:33.418285+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:34.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:34 vm00 bash[20701]: audit 2026-03-10T07:22:33.419794+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:22:34.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:34 vm00 bash[20701]: audit 2026-03-10T07:22:33.420546+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:22:34.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:34 vm00 bash[20701]: audit 2026-03-10T07:22:33.420926+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:22:34.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:34 vm00 bash[20701]: audit 2026-03-10T07:22:33.425604+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:34.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:34 vm00 bash[28005]: cephadm 2026-03-10T07:22:33.404434+0000 mgr.y (mgr.14150) 121 : cephadm [INF] Detected new or changed devices on vm00
2026-03-10T07:22:34.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:34 vm00 bash[28005]: audit 2026-03-10T07:22:33.411448+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:34.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:34 vm00 bash[28005]: audit 2026-03-10T07:22:33.418285+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:34.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:34 vm00 bash[28005]: audit 2026-03-10T07:22:33.419794+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:22:34.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:34 vm00 bash[28005]: audit 2026-03-10T07:22:33.420546+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:22:34.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:34 vm00 bash[28005]: audit 2026-03-10T07:22:33.420926+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:22:34.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:34 vm00 bash[28005]: audit 2026-03-10T07:22:33.425604+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:35.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:35 vm03 bash[23382]: cluster 2026-03-10T07:22:33.883690+0000 mgr.y (mgr.14150) 122 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:35.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:35 vm00 bash[20701]: cluster 2026-03-10T07:22:33.883690+0000 mgr.y (mgr.14150) 122 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:35.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:35 vm00 bash[28005]: cluster 2026-03-10T07:22:33.883690+0000 mgr.y (mgr.14150) 122 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:37.290 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config
2026-03-10T07:22:37.456 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cde7e640 1 -- 192.168.123.100:0/4135585317 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f63c8104550 msgr2=0x7f63c810ade0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:22:37.456 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cde7e640 1 --2- 192.168.123.100:0/4135585317 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f63c8104550 0x7f63c810ade0 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f63c400b0a0 tx=0x7f63c402f450 comp rx=0 tx=0).stop
2026-03-10T07:22:37.456 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cde7e640 1 -- 192.168.123.100:0/4135585317 shutdown_connections
2026-03-10T07:22:37.456 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cde7e640 1 --2- 192.168.123.100:0/4135585317 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f63c8104550 0x7f63c810ade0 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:22:37.456 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cde7e640 1 --2- 192.168.123.100:0/4135585317 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f63c8103b90 0x7f63c8104010 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:22:37.456 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cde7e640 1 --2- 192.168.123.100:0/4135585317 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f63c8102990 0x7f63c8102d90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:22:37.456 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cde7e640 1 -- 192.168.123.100:0/4135585317 >> 192.168.123.100:0/4135585317 conn(0x7f63c80fe140 msgr2=0x7f63c8100560 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:22:37.456 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cde7e640 1 -- 192.168.123.100:0/4135585317 shutdown_connections
2026-03-10T07:22:37.456 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cde7e640 1 -- 192.168.123.100:0/4135585317 wait complete.
2026-03-10T07:22:37.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cde7e640 1 Processor -- start
2026-03-10T07:22:37.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cde7e640 1 -- start start
2026-03-10T07:22:37.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cde7e640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f63c8102990 0x7f63c81a0600 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:22:37.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cde7e640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f63c8103b90 0x7f63c81a0b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:22:37.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cde7e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f63c8104550 0x7f63c81a7bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:22:37.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cde7e640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f63c8077140 con 0x7f63c8104550
2026-03-10T07:22:37.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cde7e640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f63c8076fc0 con 0x7f63c8103b90
2026-03-10T07:22:37.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cde7e640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f63c80772c0 con 0x7f63c8102990
2026-03-10T07:22:37.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cd67d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f63c8104550 0x7f63c81a7bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:22:37.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cd67d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f63c8104550 0x7f63c81a7bc0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:52764/0 (socket says 192.168.123.100:52764)
2026-03-10T07:22:37.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cd67d640 1 -- 192.168.123.100:0/1873358401 learned_addr learned my addr 192.168.123.100:0/1873358401 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:22:37.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63bffff640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f63c8103b90 0x7f63c81a0b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:22:37.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cd67d640 1 -- 192.168.123.100:0/1873358401 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f63c8102990 msgr2=0x7f63c81a0600 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:22:37.458 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cce7c640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f63c8102990 0x7f63c81a0600 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:22:37.458 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cd67d640 1 --2- 192.168.123.100:0/1873358401 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f63c8102990 0x7f63c81a0600 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:22:37.458 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cd67d640 1 -- 192.168.123.100:0/1873358401 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f63c8103b90 msgr2=0x7f63c81a0b40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:22:37.458 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cd67d640 1 --2- 192.168.123.100:0/1873358401 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f63c8103b90 0x7f63c81a0b40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:22:37.458 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cd67d640 1 -- 192.168.123.100:0/1873358401 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f63c81a8230 con 0x7f63c8104550
2026-03-10T07:22:37.458 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cce7c640 1 --2- 192.168.123.100:0/1873358401 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f63c8102990 0x7f63c81a0600 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T07:22:37.458 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63cd67d640 1 --2- 192.168.123.100:0/1873358401 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f63c8104550 0x7f63c81a7bc0 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7f63c402f960 tx=0x7f63c4005be0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:22:37.458 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63bffff640 1 --2- 192.168.123.100:0/1873358401 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f63c8103b90 0x7f63c81a0b40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T07:22:37.458 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.445+0000 7f63bdffb640 1 -- 192.168.123.100:0/1873358401 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f63c4047070 con 0x7f63c8104550
2026-03-10T07:22:37.458 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.449+0000 7f63bdffb640 1 -- 192.168.123.100:0/1873358401 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f63c4004810 con 0x7f63c8104550
2026-03-10T07:22:37.458 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.449+0000 7f63cde7e640 1 -- 192.168.123.100:0/1873358401 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f63c81a84c0 con 0x7f63c8104550
2026-03-10T07:22:37.459 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.449+0000 7f63bdffb640 1 -- 192.168.123.100:0/1873358401 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f63c4042420 con 0x7f63c8104550
2026-03-10T07:22:37.459 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.449+0000 7f63cde7e640 1 -- 192.168.123.100:0/1873358401 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f63c81a8a00 con 0x7f63c8104550
2026-03-10T07:22:37.462 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.449+0000 7f63cde7e640 1 -- 192.168.123.100:0/1873358401 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6390005180 con 0x7f63c8104550
2026-03-10T07:22:37.462 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.449+0000 7f63bdffb640 1 -- 192.168.123.100:0/1873358401 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7f63c4002a60 con 0x7f63c8104550
2026-03-10T07:22:37.462 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.449+0000 7f63bdffb640 1 --2- 192.168.123.100:0/1873358401 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f63a00775d0 0x7f63a0079a90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:22:37.463 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.453+0000 7f63bdffb640 1 -- 192.168.123.100:0/1873358401 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(22..22 src has 1..22) ==== 3007+0+0 (secure 0 0 0) 0x7f63c4082d20 con 0x7f63c8104550
2026-03-10T07:22:37.463 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.453+0000 7f63cce7c640 1 --2- 192.168.123.100:0/1873358401 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f63a00775d0 0x7f63a0079a90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:22:37.463 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.453+0000 7f63bdffb640 1 -- 192.168.123.100:0/1873358401 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f63c403d070 con 0x7f63c8104550
2026-03-10T07:22:37.464 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.453+0000 7f63cce7c640 1 --2- 192.168.123.100:0/1873358401 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f63a00775d0 0x7f63a0079a90 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f63c81039f0 tx=0x7f63b0005eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:22:37.564 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:22:37.553+0000 7f63cde7e640 1 -- 192.168.123.100:0/1873358401 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdb", "target": ["mon-mgr", ""]}) -- 0x7f6390002bf0 con 0x7f63a00775d0
2026-03-10T07:22:37.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:37 vm00 bash[28005]: cluster 2026-03-10T07:22:35.883941+0000 mgr.y (mgr.14150) 123 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:37.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:37 vm00 bash[20701]: cluster 2026-03-10T07:22:35.883941+0000 mgr.y (mgr.14150) 123 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:37.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:37 vm03 bash[23382]: cluster 2026-03-10T07:22:35.883941+0000 mgr.y (mgr.14150) 123 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:38.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:38 vm03 bash[23382]: audit 2026-03-10T07:22:37.559505+0000 mgr.y (mgr.14150) 124 : audit [DBG] from='client.14265 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:38.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:38 vm03 bash[23382]: audit 2026-03-10T07:22:37.560979+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:22:38.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:38 vm03 bash[23382]: audit 2026-03-10T07:22:37.562822+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
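Note: the mgr_command line above hands "orch daemon add osd" to the active mgr; the audit entries that follow show cephadm preparing the device under the client.bootstrap-osd key ("auth get client.bootstrap-osd", "config generate-minimal-conf", then "osd new" with a fresh uuid, visible below). A minimal sketch of that id-reservation step, assuming a cephadm shell on vm00 with this run's fsid; the uuid here is freshly generated, not the one the log records:

    # Sketch only: "ceph osd new <uuid>" reserves the next free OSD id for that uuid.
    uuid=$(uuidgen)
    sudo cephadm shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph osd new "$uuid"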
2026-03-10T07:22:38.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:38 vm03 bash[23382]: audit 2026-03-10T07:22:37.562822+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:22:38.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:38 vm03 bash[23382]: audit 2026-03-10T07:22:37.563693+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:38.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:38 vm03 bash[23382]: audit 2026-03-10T07:22:37.563693+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:38.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:38 vm00 bash[28005]: audit 2026-03-10T07:22:37.559505+0000 mgr.y (mgr.14150) 124 : audit [DBG] from='client.14265 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:38.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:38 vm00 bash[28005]: audit 2026-03-10T07:22:37.559505+0000 mgr.y (mgr.14150) 124 : audit [DBG] from='client.14265 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:38.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:38 vm00 bash[28005]: audit 2026-03-10T07:22:37.560979+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:22:38.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:38 vm00 bash[28005]: audit 2026-03-10T07:22:37.560979+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:22:38.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:38 vm00 bash[28005]: audit 2026-03-10T07:22:37.562822+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:22:38.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:38 vm00 bash[28005]: audit 2026-03-10T07:22:37.562822+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:22:38.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:38 vm00 bash[28005]: audit 2026-03-10T07:22:37.563693+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:38.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:38 vm00 bash[28005]: audit 2026-03-10T07:22:37.563693+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:38.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:38 vm00 bash[20701]: audit 2026-03-10T07:22:37.559505+0000 mgr.y (mgr.14150) 124 : audit [DBG] from='client.14265 -' entity='client.admin' cmd=[{"prefix": "orch daemon add 
osd", "svc_arg": "vm00:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:38.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:38 vm00 bash[20701]: audit 2026-03-10T07:22:37.559505+0000 mgr.y (mgr.14150) 124 : audit [DBG] from='client.14265 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:38.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:38 vm00 bash[20701]: audit 2026-03-10T07:22:37.560979+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:22:38.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:38 vm00 bash[20701]: audit 2026-03-10T07:22:37.560979+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:22:38.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:38 vm00 bash[20701]: audit 2026-03-10T07:22:37.562822+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:22:38.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:38 vm00 bash[20701]: audit 2026-03-10T07:22:37.562822+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:22:38.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:38 vm00 bash[20701]: audit 2026-03-10T07:22:37.563693+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:38.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:38 vm00 bash[20701]: audit 2026-03-10T07:22:37.563693+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:39.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:39 vm03 bash[23382]: cluster 2026-03-10T07:22:37.884182+0000 mgr.y (mgr.14150) 125 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T07:22:39.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:39 vm03 bash[23382]: cluster 2026-03-10T07:22:37.884182+0000 mgr.y (mgr.14150) 125 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T07:22:39.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:39 vm00 bash[28005]: cluster 2026-03-10T07:22:37.884182+0000 mgr.y (mgr.14150) 125 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T07:22:39.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:39 vm00 bash[28005]: cluster 2026-03-10T07:22:37.884182+0000 mgr.y (mgr.14150) 125 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T07:22:39.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:39 vm00 bash[20701]: cluster 2026-03-10T07:22:37.884182+0000 mgr.y (mgr.14150) 125 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T07:22:39.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:39 
vm00 bash[20701]: cluster 2026-03-10T07:22:37.884182+0000 mgr.y (mgr.14150) 125 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T07:22:41.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:41 vm03 bash[23382]: cluster 2026-03-10T07:22:39.884487+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T07:22:41.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:41 vm03 bash[23382]: cluster 2026-03-10T07:22:39.884487+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T07:22:41.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:41 vm00 bash[28005]: cluster 2026-03-10T07:22:39.884487+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T07:22:41.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:41 vm00 bash[28005]: cluster 2026-03-10T07:22:39.884487+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T07:22:41.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:41 vm00 bash[20701]: cluster 2026-03-10T07:22:39.884487+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T07:22:41.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:41 vm00 bash[20701]: cluster 2026-03-10T07:22:39.884487+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T07:22:43.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:43 vm00 bash[28005]: cluster 2026-03-10T07:22:41.884793+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T07:22:43.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:43 vm00 bash[28005]: cluster 2026-03-10T07:22:41.884793+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T07:22:43.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:43 vm00 bash[28005]: audit 2026-03-10T07:22:42.970389+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 192.168.123.100:0/1154705345' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "76d2f5e3-81b1-4e08-917a-1bb3561d67e1"}]: dispatch 2026-03-10T07:22:43.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:43 vm00 bash[28005]: audit 2026-03-10T07:22:42.970389+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 192.168.123.100:0/1154705345' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "76d2f5e3-81b1-4e08-917a-1bb3561d67e1"}]: dispatch 2026-03-10T07:22:43.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:43 vm00 bash[28005]: audit 2026-03-10T07:22:42.970656+0000 mon.a (mon.0) 412 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "76d2f5e3-81b1-4e08-917a-1bb3561d67e1"}]: dispatch 2026-03-10T07:22:43.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:43 vm00 bash[28005]: audit 2026-03-10T07:22:42.970656+0000 mon.a (mon.0) 412 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "76d2f5e3-81b1-4e08-917a-1bb3561d67e1"}]: dispatch
2026-03-10T07:22:43.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:43 vm00 bash[28005]: audit 2026-03-10T07:22:42.973289+0000 mon.a (mon.0) 413 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "76d2f5e3-81b1-4e08-917a-1bb3561d67e1"}]': finished
2026-03-10T07:22:43.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:43 vm00 bash[28005]: cluster 2026-03-10T07:22:42.976690+0000 mon.a (mon.0) 414 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in
2026-03-10T07:22:43.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:43 vm00 bash[28005]: audit 2026-03-10T07:22:42.976863+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:22:43.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:43 vm00 bash[20701]: cluster 2026-03-10T07:22:41.884793+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:43.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:43 vm00 bash[20701]: audit 2026-03-10T07:22:42.970389+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 192.168.123.100:0/1154705345' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "76d2f5e3-81b1-4e08-917a-1bb3561d67e1"}]: dispatch
2026-03-10T07:22:43.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:43 vm00 bash[20701]: audit 2026-03-10T07:22:42.970656+0000 mon.a (mon.0) 412 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "76d2f5e3-81b1-4e08-917a-1bb3561d67e1"}]: dispatch
2026-03-10T07:22:43.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:43 vm00 bash[20701]: audit 2026-03-10T07:22:42.973289+0000 mon.a (mon.0) 413 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "76d2f5e3-81b1-4e08-917a-1bb3561d67e1"}]': finished
2026-03-10T07:22:43.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:43 vm00 bash[20701]: cluster 2026-03-10T07:22:42.976690+0000 mon.a (mon.0) 414 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in
2026-03-10T07:22:43.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:43 vm00 bash[20701]: audit 2026-03-10T07:22:42.976863+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:22:43.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:43 vm03 bash[23382]: cluster 2026-03-10T07:22:41.884793+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:43.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:43 vm03 bash[23382]: audit 2026-03-10T07:22:42.970389+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 192.168.123.100:0/1154705345' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "76d2f5e3-81b1-4e08-917a-1bb3561d67e1"}]: dispatch
2026-03-10T07:22:43.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:43 vm03 bash[23382]: audit 2026-03-10T07:22:42.970656+0000 mon.a (mon.0) 412 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "76d2f5e3-81b1-4e08-917a-1bb3561d67e1"}]: dispatch
2026-03-10T07:22:43.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:43 vm03 bash[23382]: audit 2026-03-10T07:22:42.973289+0000 mon.a (mon.0) 413 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "76d2f5e3-81b1-4e08-917a-1bb3561d67e1"}]': finished
2026-03-10T07:22:43.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:43 vm03 bash[23382]: cluster 2026-03-10T07:22:42.976690+0000 mon.a (mon.0) 414 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in
2026-03-10T07:22:43.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:43 vm03 bash[23382]: audit 2026-03-10T07:22:42.976863+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:22:44.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:44 vm03 bash[23382]: audit 2026-03-10T07:22:43.588020+0000 mon.a (mon.0) 416 : audit [DBG] from='client.? 192.168.123.100:0/1393893102' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:22:44.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:44 vm00 bash[28005]: audit 2026-03-10T07:22:43.588020+0000 mon.a (mon.0) 416 : audit [DBG] from='client.? 192.168.123.100:0/1393893102' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:22:44.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:44 vm00 bash[20701]: audit 2026-03-10T07:22:43.588020+0000 mon.a (mon.0) 416 : audit [DBG] from='client.? 192.168.123.100:0/1393893102' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:22:45.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:45 vm03 bash[23382]: cluster 2026-03-10T07:22:43.885057+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:45.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:45 vm00 bash[28005]: cluster 2026-03-10T07:22:43.885057+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:45.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:45 vm00 bash[20701]: cluster 2026-03-10T07:22:43.885057+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:47.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:47 vm03 bash[23382]: cluster 2026-03-10T07:22:45.885336+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:47.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:47 vm00 bash[28005]: cluster 2026-03-10T07:22:45.885336+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:47.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:47 vm00 bash[20701]: cluster 2026-03-10T07:22:45.885336+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:49.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:49 vm03 bash[23382]: cluster 2026-03-10T07:22:47.885668+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:49.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:49 vm00 bash[28005]: cluster 2026-03-10T07:22:47.885668+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:49.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:49 vm00 bash[20701]: cluster 2026-03-10T07:22:47.885668+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:51.756 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:51 vm00 bash[28005]: cluster 2026-03-10T07:22:49.886002+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:51.756 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:51 vm00 bash[20701]: cluster 2026-03-10T07:22:49.886002+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:51.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:51 vm03 bash[23382]: cluster 2026-03-10T07:22:49.886002+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:52.687 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:52 vm00 bash[20701]: audit 2026-03-10T07:22:52.137064+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T07:22:52.687 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:52 vm00 bash[20701]: audit 2026-03-10T07:22:52.137817+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:22:52.687 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:52 vm00 bash[28005]: audit 2026-03-10T07:22:52.137064+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T07:22:52.687 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:52 vm00 bash[28005]: audit 2026-03-10T07:22:52.137817+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:22:52.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:52 vm03 bash[23382]: audit 2026-03-10T07:22:52.137064+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T07:22:52.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:52 vm03 bash[23382]: audit 2026-03-10T07:22:52.137817+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:22:53.072 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:53 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:22:53.072 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 07:22:52 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:22:53.072 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 07:22:52 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:22:53.073 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:22:53 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:22:53.073 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:53 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:22:53.073 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 07:22:52 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:22:53.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:53 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:22:53.325 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:22:53 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:22:53.325 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:53 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:22:53.325 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 07:22:53 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:22:53.325 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 07:22:53 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:22:53.325 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 07:22:53 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:22:53.601 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:53 vm00 bash[20701]: cluster 2026-03-10T07:22:51.886288+0000 mgr.y (mgr.14150) 132 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:53.601 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:53 vm00 bash[20701]: cephadm 2026-03-10T07:22:52.138410+0000 mgr.y (mgr.14150) 133 : cephadm [INF] Deploying daemon osd.3 on vm00
2026-03-10T07:22:53.601 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:53 vm00 bash[20701]: audit 2026-03-10T07:22:53.292958+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:22:53.601 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:53 vm00 bash[20701]: audit 2026-03-10T07:22:53.298398+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:53.601 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:53 vm00 bash[20701]: audit 2026-03-10T07:22:53.307809+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:53.601 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:53 vm00 bash[28005]: cluster 2026-03-10T07:22:51.886288+0000 mgr.y (mgr.14150) 132 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:53.602 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:53 vm00 bash[28005]: cephadm 2026-03-10T07:22:52.138410+0000 mgr.y (mgr.14150) 133 : cephadm [INF] Deploying daemon osd.3 on vm00
2026-03-10T07:22:53.602 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:53 vm00 bash[28005]: audit 2026-03-10T07:22:53.292958+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:22:53.602 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:53 vm00 bash[28005]: audit 2026-03-10T07:22:53.298398+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:53.602 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:53 vm00 bash[28005]: audit 2026-03-10T07:22:53.307809+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:53.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:53 vm03 bash[23382]: cluster 2026-03-10T07:22:51.886288+0000 mgr.y (mgr.14150) 132 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:53.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:53 vm03 bash[23382]: cephadm 2026-03-10T07:22:52.138410+0000 mgr.y (mgr.14150) 133 : cephadm [INF] Deploying daemon osd.3 on vm00
2026-03-10T07:22:53.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:53 vm03 bash[23382]: audit 2026-03-10T07:22:53.292958+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:22:53.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:53 vm03 bash[23382]: audit 2026-03-10T07:22:53.298398+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:53.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:53 vm03 bash[23382]: audit 2026-03-10T07:22:53.307809+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:22:55.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:55 vm03 bash[23382]: cluster 2026-03-10T07:22:53.886693+0000 mgr.y (mgr.14150) 134 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:55.891 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:55 vm00 bash[28005]: cluster 2026-03-10T07:22:53.886693+0000 mgr.y (mgr.14150) 134 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:55.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:55 vm00 bash[20701]: cluster 2026-03-10T07:22:53.886693+0000 mgr.y (mgr.14150) 134 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:57.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:57 vm03 bash[23382]: cluster 2026-03-10T07:22:55.886958+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:57.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:57 vm03 bash[23382]: audit 2026-03-10T07:22:56.703072+0000 mon.c (mon.2) 15 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/2171328275,v1:192.168.123.100:6827/2171328275]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T07:22:57.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:57 vm03 bash[23382]: audit 2026-03-10T07:22:56.703370+0000 mon.a (mon.0) 422 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T07:22:57.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:57 vm00 bash[28005]: cluster 2026-03-10T07:22:55.886958+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:57.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:57 vm00 bash[28005]: audit 2026-03-10T07:22:56.703072+0000 mon.c (mon.2) 15 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/2171328275,v1:192.168.123.100:6827/2171328275]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T07:22:57.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:57 vm00 bash[28005]: audit 2026-03-10T07:22:56.703370+0000 mon.a (mon.0) 422 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T07:22:57.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:57 vm00 bash[20701]: cluster 2026-03-10T07:22:55.886958+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:57.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:57 vm00 bash[20701]: audit 2026-03-10T07:22:56.703072+0000 mon.c (mon.2) 15 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/2171328275,v1:192.168.123.100:6827/2171328275]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T07:22:57.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:57 vm00 bash[20701]: audit 2026-03-10T07:22:56.703370+0000 mon.a (mon.0) 422 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T07:22:58.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:58 vm00 bash[28005]: audit 2026-03-10T07:22:57.506529+0000 mon.a (mon.0) 423 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-10T07:22:58.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:58 vm00 bash[28005]: cluster 2026-03-10T07:22:57.511953+0000 mon.a (mon.0) 424 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in
2026-03-10T07:22:58.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:58 vm00 bash[28005]: audit 2026-03-10T07:22:57.512518+0000 mon.c (mon.2) 16 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/2171328275,v1:192.168.123.100:6827/2171328275]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T07:22:58.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:58 vm00 bash[28005]: audit 2026-03-10T07:22:57.513049+0000 mon.a (mon.0) 425 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:22:58.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:58 vm00 bash[28005]: audit 2026-03-10T07:22:57.513165+0000 mon.a (mon.0) 426 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T07:22:58.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:58 vm00 bash[28005]: audit 2026-03-10T07:22:58.509797+0000 mon.a (mon.0) 427 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T07:22:58.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:58 vm00 bash[28005]: cluster 2026-03-10T07:22:58.512711+0000 mon.a (mon.0) 428 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in
2026-03-10T07:22:58.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:58 vm00 bash[20701]: audit 2026-03-10T07:22:57.506529+0000 mon.a (mon.0) 423 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-10T07:22:58.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:58 vm00 bash[20701]: cluster 2026-03-10T07:22:57.511953+0000 mon.a (mon.0) 424 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in
2026-03-10T07:22:58.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:58 vm00 bash[20701]: audit 2026-03-10T07:22:57.512518+0000 mon.c (mon.2) 16 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/2171328275,v1:192.168.123.100:6827/2171328275]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T07:22:58.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:58 vm00 bash[20701]: audit 2026-03-10T07:22:57.513049+0000 mon.a (mon.0) 425 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T07:22:58.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:58 vm00 bash[20701]: audit 2026-03-10T07:22:57.513049+0000 mon.a (mon.0) 425 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T07:22:58.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:58 vm00 bash[20701]: audit 2026-03-10T07:22:57.513165+0000 mon.a (mon.0) 426 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T07:22:58.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:58 vm00 bash[20701]: audit 2026-03-10T07:22:57.513165+0000 mon.a (mon.0) 426 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T07:22:58.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:58 vm00 bash[20701]: audit 2026-03-10T07:22:58.509797+0000 mon.a (mon.0) 427 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T07:22:58.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:58 vm00 bash[20701]: audit 2026-03-10T07:22:58.509797+0000 mon.a (mon.0) 427 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T07:22:58.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:58 vm00 bash[20701]: cluster 2026-03-10T07:22:58.512711+0000 mon.a (mon.0) 428 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in 2026-03-10T07:22:58.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:58 vm00 bash[20701]: cluster 2026-03-10T07:22:58.512711+0000 mon.a (mon.0) 428 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in 2026-03-10T07:22:59.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:58 vm03 bash[23382]: audit 2026-03-10T07:22:57.506529+0000 mon.a (mon.0) 423 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T07:22:59.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:58 vm03 bash[23382]: audit 2026-03-10T07:22:57.506529+0000 mon.a (mon.0) 423 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T07:22:59.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:58 vm03 bash[23382]: cluster 2026-03-10T07:22:57.511953+0000 mon.a (mon.0) 424 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-10T07:22:59.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:58 vm03 bash[23382]: cluster 2026-03-10T07:22:57.511953+0000 mon.a (mon.0) 424 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-10T07:22:59.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:58 vm03 bash[23382]: audit 2026-03-10T07:22:57.512518+0000 mon.c (mon.2) 16 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/2171328275,v1:192.168.123.100:6827/2171328275]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T07:22:59.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:58 vm03 bash[23382]: audit 2026-03-10T07:22:57.512518+0000 mon.c (mon.2) 16 : audit [INF] from='osd.3 
[v2:192.168.123.100:6826/2171328275,v1:192.168.123.100:6827/2171328275]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T07:22:59.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:58 vm03 bash[23382]: audit 2026-03-10T07:22:57.513049+0000 mon.a (mon.0) 425 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T07:22:59.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:58 vm03 bash[23382]: audit 2026-03-10T07:22:57.513049+0000 mon.a (mon.0) 425 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T07:22:59.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:58 vm03 bash[23382]: audit 2026-03-10T07:22:57.513165+0000 mon.a (mon.0) 426 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T07:22:59.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:58 vm03 bash[23382]: audit 2026-03-10T07:22:57.513165+0000 mon.a (mon.0) 426 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T07:22:59.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:58 vm03 bash[23382]: audit 2026-03-10T07:22:58.509797+0000 mon.a (mon.0) 427 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T07:22:59.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:58 vm03 bash[23382]: audit 2026-03-10T07:22:58.509797+0000 mon.a (mon.0) 427 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T07:22:59.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:58 vm03 bash[23382]: cluster 2026-03-10T07:22:58.512711+0000 mon.a (mon.0) 428 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in 2026-03-10T07:22:59.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:58 vm03 bash[23382]: cluster 2026-03-10T07:22:58.512711+0000 mon.a (mon.0) 428 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in 2026-03-10T07:22:59.893 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:59 vm00 bash[28005]: cluster 2026-03-10T07:22:57.887236+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T07:22:59.894 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:59 vm00 bash[28005]: cluster 2026-03-10T07:22:57.887236+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T07:22:59.894 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:59 vm00 bash[28005]: audit 2026-03-10T07:22:58.516650+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T07:22:59.894 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:22:59 vm00 bash[28005]: audit 2026-03-10T07:22:58.516650+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T07:22:59.894 
2026-03-10T07:22:59.894 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:59 vm00 bash[20701]: cluster 2026-03-10T07:22:57.887236+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:22:59.894 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:59 vm00 bash[20701]: audit 2026-03-10T07:22:58.516650+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:22:59.894 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:22:59 vm00 bash[20701]: audit 2026-03-10T07:22:59.516075+0000 mon.a (mon.0) 430 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:23:00.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:59 vm03 bash[23382]: cluster 2026-03-10T07:22:57.887236+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:23:00.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:59 vm03 bash[23382]: audit 2026-03-10T07:22:58.516650+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:23:00.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:22:59 vm03 bash[23382]: audit 2026-03-10T07:22:59.516075+0000 mon.a (mon.0) 430 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:23:00.539 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:23:00.525+0000 7f63bdffb640 1 -- 192.168.123.100:0/1873358401 <== mgr.14150 v2:192.168.123.100:6800/2669938860 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f6390002bf0 con 0x7f63a00775d0
2026-03-10T07:23:00.539 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:23:00.529+0000 7f63cde7e640 1 -- 192.168.123.100:0/1873358401 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f63a00775d0 msgr2=0x7f63a0079a90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:23:00.539 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:23:00.529+0000 7f63cde7e640 1 --2- 192.168.123.100:0/1873358401 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f63a00775d0 0x7f63a0079a90 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f63c81039f0 tx=0x7f63b0005eb0 comp rx=0 tx=0).stop
2026-03-10T07:23:00.539 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:23:00.529+0000 7f63cde7e640 1 -- 192.168.123.100:0/1873358401 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f63c8104550 msgr2=0x7f63c81a7bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:23:00.539 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:23:00.529+0000 7f63cde7e640 1 --2- 192.168.123.100:0/1873358401 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f63c8104550 0x7f63c81a7bc0 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7f63c402f960 tx=0x7f63c4005be0 comp rx=0 tx=0).stop
2026-03-10T07:23:00.539 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:23:00.529+0000 7f63cde7e640 1 -- 192.168.123.100:0/1873358401 shutdown_connections
2026-03-10T07:23:00.539 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:23:00.529+0000 7f63cde7e640 1 --2- 192.168.123.100:0/1873358401 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f63a00775d0 0x7f63a0079a90 unknown :-1 s=CLOSED pgs=65 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:23:00.539 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:23:00.529+0000 7f63cde7e640 1 --2- 192.168.123.100:0/1873358401 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f63c8104550 0x7f63c81a7bc0 unknown :-1 s=CLOSED pgs=115 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:23:00.539 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:23:00.529+0000 7f63cde7e640 1 --2- 192.168.123.100:0/1873358401 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f63c8103b90 0x7f63c81a0b40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:23:00.539 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:23:00.529+0000 7f63cde7e640 1 --2- 192.168.123.100:0/1873358401 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f63c8102990 0x7f63c81a0600 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:23:00.540 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:23:00.529+0000 7f63cde7e640 1 -- 192.168.123.100:0/1873358401 >> 192.168.123.100:0/1873358401 conn(0x7f63c80fe140 msgr2=0x7f63c80ffbd0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:23:00.540 INFO:teuthology.orchestra.run.vm00.stdout:Created osd(s) 3 on host 'vm00'
2026-03-10T07:23:00.541 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:23:00.529+0000 7f63cde7e640 1 -- 192.168.123.100:0/1873358401 shutdown_connections
2026-03-10T07:23:00.541 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:23:00.529+0000 7f63cde7e640 1 -- 192.168.123.100:0/1873358401 wait complete.
2026-03-10T07:23:00.650 DEBUG:teuthology.orchestra.run.vm00:osd.3> sudo journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.3.service
2026-03-10T07:23:00.651 INFO:tasks.cephadm:Deploying osd.4 on vm03 with /dev/vde...
2026-03-10T07:23:00.651 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- lvm zap /dev/vde
2026-03-10T07:23:00.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:00 vm00 bash[28005]: cluster 2026-03-10T07:22:57.672251+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:23:00.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:00 vm00 bash[28005]: cluster 2026-03-10T07:22:57.672321+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:23:00.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:00 vm00 bash[28005]: cluster 2026-03-10T07:22:59.526555+0000 mon.a (mon.0) 431 : cluster [INF] osd.3 [v2:192.168.123.100:6826/2171328275,v1:192.168.123.100:6827/2171328275] boot
2026-03-10T07:23:00.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:00 vm00 bash[28005]: cluster 2026-03-10T07:22:59.527036+0000 mon.a (mon.0) 432 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in
2026-03-10T07:23:00.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:00 vm00 bash[28005]: audit 2026-03-10T07:22:59.531738+0000 mon.a (mon.0) 433 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:23:00.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:00 vm00 bash[28005]: audit 2026-03-10T07:22:59.563056+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:00.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:00 vm00 bash[28005]: audit 2026-03-10T07:22:59.568838+0000 mon.a (mon.0) 435 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:00.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:00 vm00 bash[28005]: audit 2026-03-10T07:22:59.570658+0000 mon.a (mon.0) 436 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:23:00.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:00 vm00 bash[28005]: audit 2026-03-10T07:22:59.571134+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:23:00.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:00 vm00 bash[28005]: audit 2026-03-10T07:22:59.574727+0000 mon.a (mon.0) 438 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:00.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:00 vm00 bash[28005]: audit 2026-03-10T07:23:00.517987+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:23:00.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:00 vm00 bash[28005]: audit 2026-03-10T07:23:00.524150+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:00.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:00 vm00 bash[28005]: audit 2026-03-10T07:23:00.529313+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:00.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:00 vm00 bash[20701]: cluster 2026-03-10T07:22:57.672251+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:23:00.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:00 vm00 bash[20701]: cluster 2026-03-10T07:22:57.672321+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:23:00.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:00 vm00 bash[20701]: cluster 2026-03-10T07:22:59.526555+0000 mon.a (mon.0) 431 : cluster [INF] osd.3 [v2:192.168.123.100:6826/2171328275,v1:192.168.123.100:6827/2171328275] boot
2026-03-10T07:23:00.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:00 vm00 bash[20701]: cluster 2026-03-10T07:22:59.527036+0000 mon.a (mon.0) 432 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in
2026-03-10T07:23:00.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:00 vm00 bash[20701]: audit 2026-03-10T07:22:59.531738+0000 mon.a (mon.0) 433 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:23:00.893 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:00 vm00 bash[20701]: audit 2026-03-10T07:22:59.563056+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:00.893 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:00 vm00 bash[20701]: audit 2026-03-10T07:22:59.568838+0000 mon.a (mon.0) 435 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:00.893 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:00 vm00 bash[20701]: audit 2026-03-10T07:22:59.570658+0000 mon.a (mon.0) 436 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:23:00.893 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:00 vm00 bash[20701]: audit 2026-03-10T07:22:59.571134+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:23:00.893 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:00 vm00 bash[20701]: audit 2026-03-10T07:22:59.574727+0000 mon.a (mon.0) 438 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:00.893 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:00 vm00 bash[20701]: audit 2026-03-10T07:23:00.517987+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:23:00.893 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:00 vm00 bash[20701]: audit 2026-03-10T07:23:00.524150+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:00.893 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:00 vm00 bash[20701]: audit 2026-03-10T07:23:00.529313+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:01.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:00 vm03 bash[23382]: cluster 2026-03-10T07:22:57.672251+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:23:01.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:00 vm03 bash[23382]: cluster 2026-03-10T07:22:57.672321+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:23:01.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:00 vm03 bash[23382]: cluster 2026-03-10T07:22:59.526555+0000 mon.a (mon.0) 431 : cluster [INF] osd.3 [v2:192.168.123.100:6826/2171328275,v1:192.168.123.100:6827/2171328275] boot
2026-03-10T07:23:01.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:00 vm03 bash[23382]: cluster 2026-03-10T07:22:59.527036+0000 mon.a (mon.0) 432 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in
2026-03-10T07:23:01.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:00 vm03 bash[23382]: audit 2026-03-10T07:22:59.531738+0000 mon.a (mon.0) 433 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:23:01.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:00 vm03 bash[23382]: audit 2026-03-10T07:22:59.563056+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:01.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:00 vm03 bash[23382]: audit 2026-03-10T07:22:59.568838+0000 mon.a (mon.0) 435 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:01.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:00 vm03 bash[23382]: audit 2026-03-10T07:22:59.570658+0000 mon.a (mon.0) 436 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:23:01.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:00 vm03 bash[23382]: audit 2026-03-10T07:22:59.571134+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:23:01.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:00 vm03 bash[23382]: audit 2026-03-10T07:22:59.574727+0000 mon.a (mon.0) 438 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:01.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:00 vm03 bash[23382]: audit 2026-03-10T07:23:00.517987+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:23:01.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:00 vm03 bash[23382]: audit 2026-03-10T07:23:00.524150+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:01.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:00 vm03 bash[23382]: audit 2026-03-10T07:23:00.529313+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:01.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:01 vm00 bash[28005]: cluster 2026-03-10T07:22:59.887522+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:23:01.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:01 vm00 bash[28005]: cluster 2026-03-10T07:23:00.588215+0000 mon.a (mon.0) 442 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in
2026-03-10T07:23:01.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:01 vm00 bash[20701]: cluster 2026-03-10T07:22:59.887522+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:23:01.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:01 vm00 bash[20701]: cluster 2026-03-10T07:23:00.588215+0000 mon.a (mon.0) 442 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in
2026-03-10T07:23:02.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:01 vm03 bash[23382]: cluster 2026-03-10T07:22:59.887522+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:23:02.024 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:01 vm03 bash[23382]: cluster 2026-03-10T07:23:00.588215+0000 mon.a (mon.0) 442 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in
2026-03-10T07:23:02.891 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:02 vm00 bash[20701]: cluster 2026-03-10T07:23:01.887781+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:02.892 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:02 vm00 bash[28005]: cluster 2026-03-10T07:23:01.887781+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:03.023 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:02 vm03 bash[23382]: cluster 2026-03-10T07:23:01.887781+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:05.263 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.b/config
2026-03-10T07:23:05.274 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:04 vm03 bash[23382]: cluster 2026-03-10T07:23:03.888257+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:05.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:04 vm00 bash[20701]: cluster 2026-03-10T07:23:03.888257+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:05.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:04 vm00 bash[28005]: cluster 2026-03-10T07:23:03.888257+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:06.111 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T07:23:06.125 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph orch daemon add osd vm03:/dev/vde
2026-03-10T07:23:07.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:07 vm00 bash[28005]: cluster 2026-03-10T07:23:05.888568+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:07.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:07 vm00 bash[28005]: cephadm 2026-03-10T07:23:06.226573+0000 mgr.y (mgr.14150) 141 : cephadm [INF] Detected new or changed devices on vm00
2026-03-10T07:23:07.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:07 vm00 bash[28005]: audit 2026-03-10T07:23:06.283574+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:07.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:07 vm00 bash[28005]: audit 2026-03-10T07:23:06.288373+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:07.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:07 vm00 bash[28005]: audit 2026-03-10T07:23:06.290057+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:23:07.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:07 vm00 bash[28005]: audit 2026-03-10T07:23:06.290716+0000 mon.a (mon.0) 446 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:23:07.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:07 vm00 bash[28005]: audit 2026-03-10T07:23:06.291167+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:23:07.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:07 vm00 bash[28005]: audit 2026-03-10T07:23:06.294979+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:07.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:07 vm00 bash[20701]: cluster 2026-03-10T07:23:05.888568+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:07.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:07 vm00 bash[20701]: cephadm 2026-03-10T07:23:06.226573+0000 mgr.y (mgr.14150) 141 : cephadm [INF] Detected new or changed devices on vm00
2026-03-10T07:23:07.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:07 vm00 bash[20701]: audit 2026-03-10T07:23:06.283574+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:07.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:07 vm00 bash[20701]: audit 2026-03-10T07:23:06.288373+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:07.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:07 vm00 bash[20701]: audit 2026-03-10T07:23:06.290057+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:23:07.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:07 vm00 bash[20701]: audit 2026-03-10T07:23:06.290716+0000 mon.a (mon.0) 446 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:23:07.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:07 vm00 bash[20701]: audit 2026-03-10T07:23:06.291167+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:23:07.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:07 vm00 bash[20701]: audit 2026-03-10T07:23:06.294979+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:07.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:07 vm03 bash[23382]: cluster 2026-03-10T07:23:05.888568+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:07.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:07 vm03 bash[23382]: cephadm 2026-03-10T07:23:06.226573+0000 mgr.y (mgr.14150) 141 : cephadm [INF] Detected new or changed devices on vm00
2026-03-10T07:23:07.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:07 vm03 bash[23382]: audit 2026-03-10T07:23:06.283574+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:07.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:07 vm03 bash[23382]: audit 2026-03-10T07:23:06.288373+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:07.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:07 vm03 bash[23382]: audit 2026-03-10T07:23:06.290057+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:23:07.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:07 vm03 bash[23382]: audit 2026-03-10T07:23:06.290716+0000 mon.a (mon.0) 446 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:23:07.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:07 vm03 bash[23382]: audit 2026-03-10T07:23:06.291167+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:23:07.774 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:07 vm03 bash[23382]: audit 2026-03-10T07:23:06.294979+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:09.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:09 vm00 bash[28005]: cluster 2026-03-10T07:23:07.888809+0000 mgr.y (mgr.14150) 142 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
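Note: the 'ceph orch daemon add osd vm03:/dev/vde' invocation above is the generic orchestrator entry point for one-off OSD creation; the mgr's cephadm module accepts it (audit entry 144 below) and drives ceph-volume on the target host. A minimal operator-side sketch of the same flow from any node with an admin keyring, reusing the host and device from this run:

  ceph orch device ls vm03                 # confirm /dev/vde is visible and available
  ceph orch daemon add osd vm03:/dev/vde   # create one OSD on that device
  ceph osd tree                            # the new id appears once the mon allocates it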
2026-03-10T07:23:09.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:09 vm00 bash[20701]: cluster 2026-03-10T07:23:07.888809+0000 mgr.y (mgr.14150) 142 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:09.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:09 vm03 bash[23382]: cluster 2026-03-10T07:23:07.888809+0000 mgr.y (mgr.14150) 142 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:10.746 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.b/config
2026-03-10T07:23:10.899 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206c3e7640 1 -- 192.168.123.103:0/3041143986 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f2064069a50 msgr2=0x7f2064101e90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:23:10.899 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206c3e7640 1 --2- 192.168.123.103:0/3041143986 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f2064069a50 0x7f2064101e90 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f205400ade0 tx=0x7f2054030860 comp rx=0 tx=0).stop
2026-03-10T07:23:10.899 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206c3e7640 1 -- 192.168.123.103:0/3041143986 shutdown_connections
2026-03-10T07:23:10.899 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206c3e7640 1 --2- 192.168.123.103:0/3041143986 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f20641070d0 0x7f20641094c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:23:10.899 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206c3e7640 1 --2- 192.168.123.103:0/3041143986 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f20641023d0 0x7f2064102850 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:23:10.899 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206c3e7640 1 --2- 192.168.123.103:0/3041143986 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f2064069a50 0x7f2064101e90 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:23:10.899 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206c3e7640 1 -- 192.168.123.103:0/3041143986 >> 192.168.123.103:0/3041143986 conn(0x7f20640fc420 msgr2=0x7f20640fe860 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:23:10.899 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206c3e7640 1 -- 192.168.123.103:0/3041143986 shutdown_connections
2026-03-10T07:23:10.899 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206c3e7640 1 -- 192.168.123.103:0/3041143986 wait complete.
2026-03-10T07:23:10.899 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206c3e7640 1 Processor -- start
2026-03-10T07:23:10.900 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206c3e7640 1 -- start start
2026-03-10T07:23:10.900 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206c3e7640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f2064069a50 0x7f206419a320 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:23:10.900 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206c3e7640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f20641023d0 0x7f206419a860 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:23:10.900 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206c3e7640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f20641070d0 0x7f20641a18e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:23:10.900 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206c3e7640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f206410b800 con 0x7f20641023d0
2026-03-10T07:23:10.900 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206c3e7640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f206410b680 con 0x7f20641070d0
2026-03-10T07:23:10.900 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206c3e7640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f206410b980 con 0x7f2064069a50
2026-03-10T07:23:10.900 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206a15c640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f2064069a50 0x7f206419a320 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:23:10.900 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206a15c640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f2064069a50 0x7f206419a320 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.103:42090/0 (socket says 192.168.123.103:42090)
2026-03-10T07:23:10.900 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206a15c640 1 -- 192.168.123.103:0/726225374 learned_addr learned my addr 192.168.123.103:0/726225374 (peer_addr_for_me v2:192.168.123.103:0/0)
2026-03-10T07:23:10.900 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206a95d640 1 --2- 192.168.123.103:0/726225374 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f20641070d0 0x7f20641a18e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:23:10.900 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206995b640 1 --2- 192.168.123.103:0/726225374 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f20641023d0 0x7f206419a860 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:23:10.901 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206a95d640 1 -- 192.168.123.103:0/726225374 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f2064069a50 msgr2=0x7f206419a320 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:23:10.901 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206a95d640 1 --2- 192.168.123.103:0/726225374 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f2064069a50 0x7f206419a320 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:23:10.901 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206a95d640 1 -- 192.168.123.103:0/726225374 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f20641023d0 msgr2=0x7f206419a860 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:23:10.901 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206a95d640 1 --2- 192.168.123.103:0/726225374 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f20641023d0 0x7f206419a860 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:23:10.901 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206a95d640 1 -- 192.168.123.103:0/726225374 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f20641a1fe0 con 0x7f20641070d0
2026-03-10T07:23:10.901 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206995b640 1 --2- 192.168.123.103:0/726225374 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f20641023d0 0x7f206419a860 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed!
2026-03-10T07:23:10.901 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206a15c640 1 --2- 192.168.123.103:0/726225374 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f2064069a50 0x7f206419a320 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-10T07:23:10.901 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206a95d640 1 --2- 192.168.123.103:0/726225374 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f20641070d0 0x7f20641a18e0 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f206000bdf0 tx=0x7f206000bef0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:23:10.902 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f204b7fe640 1 -- 192.168.123.103:0/726225374 <== mon.1 v2:192.168.123.103:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f206000ca60 con 0x7f20641070d0
2026-03-10T07:23:10.902 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f204b7fe640 1 -- 192.168.123.103:0/726225374 <== mon.1 v2:192.168.123.103:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f2060010070 con 0x7f20641070d0
2026-03-10T07:23:10.902 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f204b7fe640 1 -- 192.168.123.103:0/726225374 <== mon.1 v2:192.168.123.103:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f2060015470 con 0x7f20641070d0
2026-03-10T07:23:10.902 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206c3e7640 1 -- 192.168.123.103:0/726225374 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f20640fc860 con 0x7f20641070d0
2026-03-10T07:23:10.903 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.891+0000 7f206c3e7640 1 -- 192.168.123.103:0/726225374 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f20640fcda0 con 0x7f20641070d0
2026-03-10T07:23:10.906 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.895+0000 7f206c3e7640 1 -- 192.168.123.103:0/726225374 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f202c005180 con 0x7f20641070d0
2026-03-10T07:23:10.906 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.895+0000 7f204b7fe640 1 -- 192.168.123.103:0/726225374 <== mon.1 v2:192.168.123.103:3300/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7f20600040a0 con 0x7f20641070d0
2026-03-10T07:23:10.907 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.899+0000 7f204b7fe640 1 --2- 192.168.123.103:0/726225374 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f20400775d0 0x7f2040079a90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:23:10.907 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.899+0000 7f204b7fe640 1 -- 192.168.123.103:0/726225374 <== mon.1 v2:192.168.123.103:3300/0 5 ==== osd_map(27..27 src has 1..27) ==== 3439+0+0 (secure 0 0 0) 0x7f206009d050 con 0x7f20641070d0
2026-03-10T07:23:10.907 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.899+0000 7f206a15c640 1 --2- 192.168.123.103:0/726225374 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f20400775d0 0x7f2040079a90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:23:10.907 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.899+0000 7f204b7fe640 1 -- 192.168.123.103:0/726225374 <== mon.1 v2:192.168.123.103:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f20600a1450 con 0x7f20641070d0
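Note: the burst of '--' / '--2-' stderr lines above is one complete msgr2 client bootstrap, visible because these clients run with 'debug ms = 1': tear down the previous session, reconnect to the mons, learn the local address from the peer banner, authenticate, then mon_subscribe to monmap/config/mgrmap/osdmap and fetch get_command_descriptions before the real command goes out. The same trace can be reproduced for a single CLI call by passing the debug setting through as an option, e.g.:

  ceph -s --debug-ms 1   # one-off messenger trace, printed to stderr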
2026-03-10T07:23:10.907 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.899+0000 7f206a15c640 1 --2- 192.168.123.103:0/726225374 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f20400775d0 0x7f2040079a90 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f2054002410 tx=0x7f205400a7b0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:23:11.010 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:10.999+0000 7f206c3e7640 1 -- 192.168.123.103:0/726225374 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}) -- 0x7f202c002bf0 con 0x7f20400775d0
2026-03-10T07:23:11.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:11 vm00 bash[28005]: cluster 2026-03-10T07:23:09.889123+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:11.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:11 vm00 bash[28005]: audit 2026-03-10T07:23:11.007583+0000 mon.a (mon.0) 449 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:23:11.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:11 vm00 bash[28005]: audit 2026-03-10T07:23:11.009438+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T07:23:11.642 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:11 vm00 bash[28005]: audit 2026-03-10T07:23:11.010372+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:23:11.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:11 vm00 bash[20701]: cluster 2026-03-10T07:23:09.889123+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:11.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:11 vm00 bash[20701]: audit 2026-03-10T07:23:11.007583+0000 mon.a (mon.0) 449 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:23:11.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:11 vm00 bash[20701]: audit 2026-03-10T07:23:11.009438+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T07:23:11.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:11 vm00 bash[20701]: audit 2026-03-10T07:23:11.010372+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:23:11.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:11 vm03 bash[23382]: cluster 2026-03-10T07:23:09.889123+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:11.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:11 vm03 bash[23382]: audit 2026-03-10T07:23:11.007583+0000 mon.a (mon.0) 449 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:23:11.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:11 vm03 bash[23382]: audit 2026-03-10T07:23:11.009438+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T07:23:11.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:11 vm03 bash[23382]: audit 2026-03-10T07:23:11.010372+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:23:12.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:12 vm00 bash[28005]: audit 2026-03-10T07:23:11.005840+0000 mgr.y (mgr.14150) 144 : audit [DBG] from='client.24166 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:23:12.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:12 vm00 bash[20701]: audit 2026-03-10T07:23:11.005840+0000 mgr.y (mgr.14150) 144 : audit [DBG] from='client.24166 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:23:12.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:12 vm03 bash[23382]: audit 2026-03-10T07:23:11.005840+0000 mgr.y (mgr.14150) 144 : audit [DBG] from='client.24166 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:23:13.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:13 vm00 bash[28005]: cluster 2026-03-10T07:23:11.889395+0000 mgr.y (mgr.14150) 145 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:13.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:13 vm00 bash[20701]: cluster 2026-03-10T07:23:11.889395+0000 mgr.y (mgr.14150) 145 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
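Note: the same audit/cluster records scroll past three times only because teuthology tails each mon's journald unit separately (journalctl@ceph.mon.{a,b,c}); the payload is the single shared cluster log. When watching interactively, the standard CLI gives one deduplicated stream, for example:

  ceph -w           # follow the cluster log (the 'cluster'/'audit' lines above)
  ceph -W cephadm   # follow the cephadm module channel ('Detected new or changed devices ...')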
2026-03-10T07:23:13.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:13 vm03 bash[23382]: cluster 2026-03-10T07:23:11.889395+0000 mgr.y (mgr.14150) 145 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:15.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:15 vm00 bash[28005]: cluster 2026-03-10T07:23:13.889682+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:15.641 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:15 vm00 bash[20701]: cluster 2026-03-10T07:23:13.889682+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:15.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:15 vm03 bash[23382]: cluster 2026-03-10T07:23:13.889682+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:17.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:17 vm00 bash[28005]: cluster 2026-03-10T07:23:15.890035+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:17.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:17 vm00 bash[28005]: audit 2026-03-10T07:23:16.403629+0000 mon.a (mon.0) 452 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f7c9bda9-fb82-468f-b7f9-e588fcc193bf"}]: dispatch
2026-03-10T07:23:17.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:17 vm00 bash[28005]: audit 2026-03-10T07:23:16.404413+0000 mon.b (mon.1) 5 : audit [INF] from='client.? 192.168.123.103:0/1316933097' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f7c9bda9-fb82-468f-b7f9-e588fcc193bf"}]: dispatch
2026-03-10T07:23:17.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:17 vm00 bash[28005]: audit 2026-03-10T07:23:16.490786+0000 mon.a (mon.0) 453 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f7c9bda9-fb82-468f-b7f9-e588fcc193bf"}]': finished
2026-03-10T07:23:17.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:17 vm00 bash[28005]: cluster 2026-03-10T07:23:16.498024+0000 mon.a (mon.0) 454 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in
2026-03-10T07:23:17.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:17 vm00 bash[28005]: audit 2026-03-10T07:23:16.498312+0000 mon.a (mon.0) 455 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:23:17.641 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:17 vm00 bash[28005]: audit 2026-03-10T07:23:17.098280+0000 mon.c (mon.2) 17 : audit [DBG] from='client.? 192.168.123.103:0/2087807596' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:23:17.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:17 vm00 bash[20701]: cluster 2026-03-10T07:23:15.890035+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:17.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:17 vm00 bash[20701]: audit 2026-03-10T07:23:16.403629+0000 mon.a (mon.0) 452 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f7c9bda9-fb82-468f-b7f9-e588fcc193bf"}]: dispatch
2026-03-10T07:23:17.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:17 vm00 bash[20701]: audit 2026-03-10T07:23:16.404413+0000 mon.b (mon.1) 5 : audit [INF] from='client.? 192.168.123.103:0/1316933097' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f7c9bda9-fb82-468f-b7f9-e588fcc193bf"}]: dispatch
2026-03-10T07:23:17.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:17 vm00 bash[20701]: audit 2026-03-10T07:23:16.490786+0000 mon.a (mon.0) 453 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f7c9bda9-fb82-468f-b7f9-e588fcc193bf"}]': finished
2026-03-10T07:23:17.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:17 vm00 bash[20701]: cluster 2026-03-10T07:23:16.498024+0000 mon.a (mon.0) 454 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in
2026-03-10T07:23:17.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:17 vm00 bash[20701]: audit 2026-03-10T07:23:16.498312+0000 mon.a (mon.0) 455 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:23:17.642 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:17 vm00 bash[20701]: audit 2026-03-10T07:23:17.098280+0000 mon.c (mon.2) 17 : audit [DBG] from='client.? 192.168.123.103:0/2087807596' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:23:17.772 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:17 vm03 bash[23382]: cluster 2026-03-10T07:23:15.890035+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:17.772 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:17 vm03 bash[23382]: audit 2026-03-10T07:23:16.403629+0000 mon.a (mon.0) 452 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f7c9bda9-fb82-468f-b7f9-e588fcc193bf"}]: dispatch
2026-03-10T07:23:17.772 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:17 vm03 bash[23382]: audit 2026-03-10T07:23:16.404413+0000 mon.b (mon.1) 5 : audit [INF] from='client.? 192.168.123.103:0/1316933097' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f7c9bda9-fb82-468f-b7f9-e588fcc193bf"}]: dispatch
2026-03-10T07:23:17.772 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:17 vm03 bash[23382]: audit 2026-03-10T07:23:16.490786+0000 mon.a (mon.0) 453 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f7c9bda9-fb82-468f-b7f9-e588fcc193bf"}]': finished
2026-03-10T07:23:17.772 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:17 vm03 bash[23382]: cluster 2026-03-10T07:23:16.498024+0000 mon.a (mon.0) 454 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in
2026-03-10T07:23:17.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:17 vm03 bash[23382]: audit 2026-03-10T07:23:16.498312+0000 mon.a (mon.0) 455 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:23:17.773 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:17 vm03 bash[23382]: audit 2026-03-10T07:23:17.098280+0000 mon.c (mon.2) 17 : audit [DBG] from='client.? 192.168.123.103:0/2087807596' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:23:19.772 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:19 vm03 bash[23382]: cluster 2026-03-10T07:23:17.890298+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:19.890 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:19 vm00 bash[28005]: cluster 2026-03-10T07:23:17.890298+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:19.890 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:19 vm00 bash[20701]: cluster 2026-03-10T07:23:17.890298+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:21.772 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:21 vm03 bash[23382]: cluster 2026-03-10T07:23:19.890574+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
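The "osd new" entries are ceph-volume, authenticated as client.bootstrap-osd, registering the new OSD's fsid (the uuid above) with the monitors; mon.b (mon.1) logs the command with the client's address, while mon.a (mon.0) executes it and logs both the dispatch and the "finished", after which osdmap e28 shows 5 total, 4 up, 5 in: osd.4 now exists but has not booted. A sketch of the equivalent CLI step; the keyring path is the conventional bootstrap-osd location and is an assumption here:

    import subprocess

    # Sketch: the "osd new" step ceph-volume performs with the bootstrap-osd
    # identity. The uuid is the OSD fsid from the audit entries; the keyring
    # path is an assumed conventional location, not taken from this log.
    subprocess.run(
        ["ceph", "-n", "client.bootstrap-osd",
         "-k", "/var/lib/ceph/bootstrap-osd/ceph.keyring",
         "osd", "new", "f7c9bda9-fb82-468f-b7f9-e588fcc193bf"],
        check=True,
    )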
2026-03-10T07:23:21.890 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:21 vm00 bash[28005]: cluster 2026-03-10T07:23:19.890574+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:21.890 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:21 vm00 bash[20701]: cluster 2026-03-10T07:23:19.890574+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:23.771 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:23 vm03 bash[23382]: cluster 2026-03-10T07:23:21.890944+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:23.890 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:23 vm00 bash[28005]: cluster 2026-03-10T07:23:21.890944+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:23.890 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:23 vm00 bash[20701]: cluster 2026-03-10T07:23:21.890944+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:25.686 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:25 vm03 bash[23382]: cluster 2026-03-10T07:23:23.891247+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:25.686 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:25 vm03 bash[23382]: audit 2026-03-10T07:23:25.419466+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T07:23:25.686 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:25 vm03 bash[23382]: audit 2026-03-10T07:23:25.419986+0000 mon.a (mon.0) 457 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:23:25.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:25 vm00 bash[28005]: cluster 2026-03-10T07:23:23.891247+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:25.890 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:25 vm00 bash[28005]: audit 2026-03-10T07:23:25.419466+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T07:23:25.890 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:25 vm00 bash[28005]: audit 2026-03-10T07:23:25.419986+0000 mon.a (mon.0) 457 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:23:25.890 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:25 vm00 bash[20701]: cluster 2026-03-10T07:23:23.891247+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:25.890 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:25 vm00 bash[20701]: audit 2026-03-10T07:23:25.419466+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T07:23:25.890 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:25 vm00 bash[20701]: audit 2026-03-10T07:23:25.419986+0000 mon.a (mon.0) 457 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:23:26.521 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:26 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:23:26.521 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:26 vm03 bash[23382]: cephadm 2026-03-10T07:23:25.420385+0000 mgr.y (mgr.14150) 152 : cephadm [INF] Deploying daemon osd.4 on vm03
2026-03-10T07:23:26.521 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:23:26 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
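The systemd warnings above fire because the cephadm-generated unit template for this cluster fsid still uses KillMode=none, which systemd deprecates; in this run they are noise rather than failures. For a unit one actually owns, the usual remedy is a drop-in that selects a supported mode, sketched below with an illustrative unit name; cephadm-managed templates are normally left for cephadm itself to regenerate rather than patched by hand:

    from pathlib import Path
    import subprocess

    # Sketch: a systemd drop-in replacing KillMode=none with KillMode=mixed.
    # "example@.service" is an illustrative unit name, not from this log.
    unit = "example@.service"
    dropin = Path(f"/etc/systemd/system/{unit}.d")
    dropin.mkdir(parents=True, exist_ok=True)
    (dropin / "10-killmode.conf").write_text("[Service]\nKillMode=mixed\n")
    subprocess.run(["systemctl", "daemon-reload"], check=True)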
2026-03-10T07:23:26.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:26 vm00 bash[28005]: cephadm 2026-03-10T07:23:25.420385+0000 mgr.y (mgr.14150) 152 : cephadm [INF] Deploying daemon osd.4 on vm03
2026-03-10T07:23:26.890 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:26 vm00 bash[20701]: cephadm 2026-03-10T07:23:25.420385+0000 mgr.y (mgr.14150) 152 : cephadm [INF] Deploying daemon osd.4 on vm03
2026-03-10T07:23:27.735 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:27 vm03 bash[23382]: cluster 2026-03-10T07:23:25.891676+0000 mgr.y (mgr.14150) 153 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:27.735 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:27 vm03 bash[23382]: audit 2026-03-10T07:23:26.563154+0000 mon.a (mon.0) 458 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:23:27.735 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:27 vm03 bash[23382]: audit 2026-03-10T07:23:26.567934+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:27.735 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:27 vm03 bash[23382]: audit 2026-03-10T07:23:26.573150+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:27.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:27 vm00 bash[28005]: cluster 2026-03-10T07:23:25.891676+0000 mgr.y (mgr.14150) 153 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:27.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:27 vm00 bash[28005]: audit 2026-03-10T07:23:26.563154+0000 mon.a (mon.0) 458 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:23:27.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:27 vm00 bash[28005]: audit 2026-03-10T07:23:26.567934+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:27.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:27 vm00 bash[28005]: audit 2026-03-10T07:23:26.573150+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:27.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:27 vm00 bash[20701]: cluster 2026-03-10T07:23:25.891676+0000 mgr.y (mgr.14150) 153 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:27.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:27 vm00 bash[20701]: audit 2026-03-10T07:23:26.563154+0000 mon.a (mon.0) 458 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:23:27.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:27 vm00 bash[20701]: audit 2026-03-10T07:23:26.567934+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:27.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:27 vm00 bash[20701]: audit 2026-03-10T07:23:26.573150+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:29.771 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:29 vm03 bash[23382]: cluster 2026-03-10T07:23:27.891921+0000 mgr.y (mgr.14150) 154 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
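Around each deployment the mgr dispatches "config dump" and "config generate-minimal-conf"; the latter renders the minimal ceph.conf that cephadm writes out for the daemons it deploys. Both are ordinary mon commands, so they can be reproduced through the librados binding; a sketch, assuming python3-rados and a readable default conf and admin keyring:

    import json
    import rados

    # Sketch: dispatch the same mon command seen in the audit entries and
    # print the generated minimal conf. Paths assume a default admin setup.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "config generate-minimal-conf"}), b"")
        if ret == 0:
            print(outbuf.decode())
    finally:
        cluster.shutdown()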
2026-03-10T07:23:29.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:29 vm00 bash[28005]: cluster 2026-03-10T07:23:27.891921+0000 mgr.y (mgr.14150) 154 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:29.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:29 vm00 bash[20701]: cluster 2026-03-10T07:23:27.891921+0000 mgr.y (mgr.14150) 154 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:30.771 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:30 vm03 bash[23382]: audit 2026-03-10T07:23:30.211573+0000 mon.a (mon.0) 461 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T07:23:30.771 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:30 vm03 bash[23382]: audit 2026-03-10T07:23:30.212083+0000 mon.b (mon.1) 6 : audit [INF] from='osd.4 [v2:192.168.123.103:6800/2627693272,v1:192.168.123.103:6801/2627693272]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T07:23:30.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:30 vm00 bash[28005]: audit 2026-03-10T07:23:30.211573+0000 mon.a (mon.0) 461 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T07:23:30.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:30 vm00 bash[28005]: audit 2026-03-10T07:23:30.212083+0000 mon.b (mon.1) 6 : audit [INF] from='osd.4 [v2:192.168.123.103:6800/2627693272,v1:192.168.123.103:6801/2627693272]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T07:23:30.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:30 vm00 bash[20701]: audit 2026-03-10T07:23:30.211573+0000 mon.a (mon.0) 461 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T07:23:30.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:30 vm00 bash[20701]: audit 2026-03-10T07:23:30.212083+0000 mon.b (mon.1) 6 : audit [INF] from='osd.4 [v2:192.168.123.103:6800/2627693272,v1:192.168.123.103:6801/2627693272]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T07:23:31.771 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:31 vm03 bash[23382]: cluster 2026-03-10T07:23:29.892174+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:31.771 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:31 vm03 bash[23382]: audit 2026-03-10T07:23:30.493393+0000 mon.a (mon.0) 462 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-10T07:23:31.771 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:31 vm03 bash[23382]: cluster 2026-03-10T07:23:30.496271+0000 mon.a (mon.0) 463 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in
2026-03-10T07:23:31.771 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:31 vm03 bash[23382]: audit 2026-03-10T07:23:30.496393+0000 mon.a (mon.0) 464 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
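The "osd crush set-device-class" entries show the newly started osd.4 tagging its own id as hdd before it takes a CRUSH weight; again the command is visible both at mon.b, which received it, and at mon.a, which dispatched it. The CLI equivalent, as a sketch (a booting OSD normally does this for itself):

    import subprocess

    # Sketch: the CLI form of the set-device-class command osd.4 issues
    # for itself during startup.
    subprocess.run(
        ["ceph", "osd", "crush", "set-device-class", "hdd", "osd.4"],
        check=True,
    )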
2026-03-10T07:23:31.771 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:31 vm03 bash[23382]: audit 2026-03-10T07:23:30.498185+0000 mon.a (mon.0) 465 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T07:23:31.771 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:31 vm03 bash[23382]: audit 2026-03-10T07:23:30.498773+0000 mon.b (mon.1) 7 : audit [INF] from='osd.4 [v2:192.168.123.103:6800/2627693272,v1:192.168.123.103:6801/2627693272]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T07:23:31.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:31 vm00 bash[28005]: cluster 2026-03-10T07:23:29.892174+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:31.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:31 vm00 bash[28005]: audit 2026-03-10T07:23:30.493393+0000 mon.a (mon.0) 462 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-10T07:23:31.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:31 vm00 bash[28005]: cluster 2026-03-10T07:23:30.496271+0000 mon.a (mon.0) 463 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in
2026-03-10T07:23:31.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:31 vm00 bash[28005]: audit 2026-03-10T07:23:30.496393+0000 mon.a (mon.0) 464 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:23:31.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:31 vm00 bash[28005]: audit 2026-03-10T07:23:30.498185+0000 mon.a (mon.0) 465 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T07:23:31.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:31 vm00 bash[28005]: audit 2026-03-10T07:23:30.498773+0000 mon.b (mon.1) 7 : audit [INF] from='osd.4 [v2:192.168.123.103:6800/2627693272,v1:192.168.123.103:6801/2627693272]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T07:23:31.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:31 vm00 bash[20701]: cluster 2026-03-10T07:23:29.892174+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:31.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:31 vm00 bash[20701]: audit 2026-03-10T07:23:30.493393+0000 mon.a (mon.0) 462 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-10T07:23:31.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:31 vm00 bash[20701]: cluster 2026-03-10T07:23:30.496271+0000 mon.a (mon.0) 463 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in
2026-03-10T07:23:31.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:31 vm00 bash[20701]: audit 2026-03-10T07:23:30.496393+0000 mon.a (mon.0) 464 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:23:31.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:31 vm00 bash[20701]: audit 2026-03-10T07:23:30.498185+0000 mon.a (mon.0) 465 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T07:23:31.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:31 vm00 bash[20701]: audit 2026-03-10T07:23:30.498773+0000 mon.b (mon.1) 7 : audit [INF] from='osd.4 [v2:192.168.123.103:6800/2627693272,v1:192.168.123.103:6801/2627693272]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T07:23:32.782 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:32 vm03 bash[23382]: audit 2026-03-10T07:23:31.496179+0000 mon.a (mon.0) 466 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T07:23:32.782 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:32 vm03 bash[23382]: cluster 2026-03-10T07:23:31.502819+0000 mon.a (mon.0) 467 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in
2026-03-10T07:23:32.782 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:32 vm03 bash[23382]: audit 2026-03-10T07:23:31.503650+0000 mon.a (mon.0) 468 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:23:32.782 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:32 vm03 bash[23382]: cluster 2026-03-10T07:23:32.506968+0000 mon.a (mon.0) 469 : cluster [INF] osd.4 [v2:192.168.123.103:6800/2627693272,v1:192.168.123.103:6801/2627693272] boot
2026-03-10T07:23:32.782 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:32 vm03 bash[23382]: cluster 2026-03-10T07:23:32.507214+0000 mon.a (mon.0) 470 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in
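The "osd crush create-or-move" entries then place osd.4 under host=vm03 with weight 0.0195. CRUSH weights are conventionally the device capacity in TiB, so this matches a roughly 20 GiB device, consistent with the 80 GiB the pgmap reports across the four OSDs that were already up; the sketch below just shows the arithmetic, with the 20 GiB figure inferred rather than logged:

    # Sketch: how the 0.0195 in the create-or-move entry falls out of the
    # capacity-in-TiB convention; the 20 GiB figure is inferred, not logged.
    size_gib = 20
    weight_tib = size_gib / 1024
    print(f"{weight_tib:.4f}")  # -> 0.0195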
2026-03-10T07:23:32.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:32 vm00 bash[28005]: audit 2026-03-10T07:23:31.496179+0000 mon.a (mon.0) 466 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T07:23:32.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:32 vm00 bash[28005]: cluster 2026-03-10T07:23:31.502819+0000 mon.a (mon.0) 467 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in
2026-03-10T07:23:32.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:32 vm00 bash[28005]: audit 2026-03-10T07:23:31.503650+0000 mon.a (mon.0) 468 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:23:32.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:32 vm00 bash[28005]: cluster 2026-03-10T07:23:32.506968+0000 mon.a (mon.0) 469 : cluster [INF] osd.4 [v2:192.168.123.103:6800/2627693272,v1:192.168.123.103:6801/2627693272] boot
2026-03-10T07:23:32.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:32 vm00 bash[28005]: cluster 2026-03-10T07:23:32.507214+0000 mon.a (mon.0) 470 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in
2026-03-10T07:23:32.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:32 vm00 bash[20701]: audit 2026-03-10T07:23:31.496179+0000 mon.a (mon.0) 466 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T07:23:32.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:32 vm00 bash[20701]: cluster 2026-03-10T07:23:31.502819+0000 mon.a (mon.0) 467 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in
2026-03-10T07:23:32.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:32 vm00 bash[20701]: audit 2026-03-10T07:23:31.503650+0000 mon.a (mon.0) 468 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:23:32.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:32 vm00 bash[20701]: cluster 2026-03-10T07:23:32.506968+0000 mon.a (mon.0) 469 : cluster [INF] osd.4 [v2:192.168.123.103:6800/2627693272,v1:192.168.123.103:6801/2627693272] boot
2026-03-10T07:23:32.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:32 vm00 bash[20701]: cluster 2026-03-10T07:23:32.507214+0000 mon.a (mon.0) 470 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in
2026-03-10T07:23:33.863 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:33 vm03 bash[23382]: cluster 2026-03-10T07:23:31.220081+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:23:33.863 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:33 vm03 bash[23382]: cluster 2026-03-10T07:23:31.220144+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:23:33.863 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:33 vm03 bash[23382]: cluster 2026-03-10T07:23:31.892416+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:33.863 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:33 vm03 bash[23382]: audit 2026-03-10T07:23:32.507281+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
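With osdmap e31 the count reaches 5 total, 5 up, 5 in, so the "orch daemon add osd" issued at the start of this sequence has fully landed. A harness that needs to block on that condition can poll "ceph osd stat"; a sketch using its JSON output, with an arbitrary timeout and assuming the ceph CLI can reach the cluster:

    import json
    import subprocess
    import time

    # Sketch: poll until every OSD is both up and in, the state osdmap e31
    # reports above. The 300 s timeout is arbitrary.
    deadline = time.time() + 300
    while True:
        stat = json.loads(
            subprocess.check_output(["ceph", "osd", "stat", "-f", "json"]))
        if stat["num_osds"] == stat["num_up_osds"] == stat["num_in_osds"]:
            break
        if time.time() >= deadline:
            raise TimeoutError("not all OSDs came up and in")
        time.sleep(5)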
2026-03-10T07:23:33.863 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:33 vm03 bash[23382]: audit 2026-03-10T07:23:32.831454+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:33.863 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:33 vm03 bash[23382]: audit 2026-03-10T07:23:32.835463+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:33.863 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:33 vm03 bash[23382]: audit 2026-03-10T07:23:33.254730+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:23:33.863 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:33 vm03 bash[23382]: audit 2026-03-10T07:23:33.255367+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:23:33.863 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:33 vm03 bash[23382]: audit 2026-03-10T07:23:33.260662+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:33.863 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:33 vm03 bash[23382]: cluster 2026-03-10T07:23:33.510760+0000 mon.a (mon.0) 477 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in
2026-03-10T07:23:33.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:33 vm00 bash[28005]: cluster 2026-03-10T07:23:31.220081+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:23:33.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:33 vm00 bash[28005]: cluster 2026-03-10T07:23:31.220144+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:23:33.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:33 vm00 bash[28005]: cluster 2026-03-10T07:23:31.892416+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:33.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:33 vm00 bash[28005]: audit 2026-03-10T07:23:32.507281+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:23:33.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:33 vm00 bash[28005]: audit 2026-03-10T07:23:32.831454+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:33.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:33 vm00 bash[28005]: audit 2026-03-10T07:23:32.835463+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:33.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:33 vm00 bash[28005]: audit 2026-03-10T07:23:33.254730+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:23:33.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:33 vm00 bash[28005]: audit 2026-03-10T07:23:33.255367+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:23:33.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:33 vm00 bash[28005]: audit 2026-03-10T07:23:33.260662+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:33.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:33 vm00 bash[28005]: cluster 2026-03-10T07:23:33.510760+0000 mon.a (mon.0) 477 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in
2026-03-10T07:23:33.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:33 vm00 bash[20701]: cluster 2026-03-10T07:23:31.220081+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:23:33.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:33 vm00 bash[20701]: cluster 2026-03-10T07:23:31.220144+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:23:33.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:33 vm00 bash[20701]: cluster 2026-03-10T07:23:31.892416+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T07:23:33.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:33 vm00 bash[20701]: audit 2026-03-10T07:23:32.507281+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:23:33.890 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:33 vm00 bash[20701]: audit 2026-03-10T07:23:32.831454+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:33.890 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:33 vm00 bash[20701]: audit 2026-03-10T07:23:32.835463+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:33.890 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:33 vm00 bash[20701]: audit
2026-03-10T07:23:33.254730+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:23:33.890 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:33 vm00 bash[20701]: audit 2026-03-10T07:23:33.254730+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:23:33.890 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:33 vm00 bash[20701]: audit 2026-03-10T07:23:33.255367+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:23:33.890 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:33 vm00 bash[20701]: audit 2026-03-10T07:23:33.255367+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:23:33.890 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:33 vm00 bash[20701]: audit 2026-03-10T07:23:33.260662+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:23:33.890 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:33 vm00 bash[20701]: audit 2026-03-10T07:23:33.260662+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:23:33.890 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:33 vm00 bash[20701]: cluster 2026-03-10T07:23:33.510760+0000 mon.a (mon.0) 477 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-10T07:23:33.890 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:33 vm00 bash[20701]: cluster 2026-03-10T07:23:33.510760+0000 mon.a (mon.0) 477 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-10T07:23:33.934 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 4 on host 'vm03' 2026-03-10T07:23:33.934 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:33.926+0000 7f204b7fe640 1 -- 192.168.123.103:0/726225374 <== mgr.14150 v2:192.168.123.100:6800/2669938860 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f202c002bf0 con 0x7f20400775d0 2026-03-10T07:23:33.934 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:33.930+0000 7f206c3e7640 1 -- 192.168.123.103:0/726225374 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f20400775d0 msgr2=0x7f2040079a90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:23:33.934 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:33.930+0000 7f206c3e7640 1 --2- 192.168.123.103:0/726225374 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f20400775d0 0x7f2040079a90 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f2054002410 tx=0x7f205400a7b0 comp rx=0 tx=0).stop 2026-03-10T07:23:33.934 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:33.930+0000 7f206c3e7640 1 -- 192.168.123.103:0/726225374 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f20641070d0 msgr2=0x7f20641a18e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:23:33.934 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:33.930+0000 7f206c3e7640 1 --2- 192.168.123.103:0/726225374 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f20641070d0 0x7f20641a18e0 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto 
rx=0x7f206000bdf0 tx=0x7f206000bef0 comp rx=0 tx=0).stop 2026-03-10T07:23:33.934 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:33.930+0000 7f206c3e7640 1 -- 192.168.123.103:0/726225374 shutdown_connections 2026-03-10T07:23:33.934 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:33.930+0000 7f206c3e7640 1 --2- 192.168.123.103:0/726225374 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f20400775d0 0x7f2040079a90 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:23:33.934 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:33.930+0000 7f206c3e7640 1 --2- 192.168.123.103:0/726225374 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f20641070d0 0x7f20641a18e0 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:23:33.934 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:33.930+0000 7f206c3e7640 1 --2- 192.168.123.103:0/726225374 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f20641023d0 0x7f206419a860 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:23:33.934 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:33.930+0000 7f206c3e7640 1 --2- 192.168.123.103:0/726225374 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f2064069a50 0x7f206419a320 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:23:33.934 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:33.930+0000 7f206c3e7640 1 -- 192.168.123.103:0/726225374 >> 192.168.123.103:0/726225374 conn(0x7f20640fc420 msgr2=0x7f2064108990 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:23:33.934 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:33.930+0000 7f206c3e7640 1 -- 192.168.123.103:0/726225374 shutdown_connections 2026-03-10T07:23:33.934 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:33.930+0000 7f206c3e7640 1 -- 192.168.123.103:0/726225374 wait complete. 2026-03-10T07:23:34.070 DEBUG:teuthology.orchestra.run.vm03:osd.4> sudo journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.4.service 2026-03-10T07:23:34.071 INFO:tasks.cephadm:Deploying osd.5 on vm03 with /dev/vdd... 
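Each journalctl-forwarded record above carries two layers of prefix before the Ceph record itself: teuthology's timestamp and channel (INFO:journalctl@ceph.mon.a.vm00.stdout) and the journald header (Mar 10 07:23:32 vm00 bash[20701]:). For triage it helps to strip the wrappers; a minimal Python sketch, with the field layout inferred from the records above (the regex and the inner_record name are illustrative, not part of teuthology):

    import re

    # Layout inferred from the journalctl@... lines above (illustrative):
    #   <teuthology ts> INFO:journalctl@<unit>.stdout:<Mon DD HH:MM:SS> <host> bash[<pid>]: <record>
    WRAPPED = re.compile(
        r"^(?P<ts>\S+) INFO:journalctl@(?P<unit>\S+?)\.stdout:"
        r"\w{3} \d+ [\d:]+ (?P<host>\S+) bash\[\d+\]: (?P<record>.*)$"
    )

    def inner_record(line):
        """Return (teuthology_ts, unit, host, cluster-log record), or None for other lines."""
        m = WRAPPED.match(line)
        return (m["ts"], m["unit"], m["host"], m["record"]) if m else None

Applied to the osd.4 boot line above, this yields the bare record: cluster 2026-03-10T07:23:32.506968+0000 mon.a (mon.0) 469 : cluster [INF] osd.4 [...] boot.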
2026-03-10T07:23:34.071 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- lvm zap /dev/vdd
2026-03-10T07:23:34.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:34 vm00 bash[28005]: audit 2026-03-10T07:23:33.915958+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:23:34.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:34 vm00 bash[28005]: audit 2026-03-10T07:23:33.920991+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:34.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:34 vm00 bash[28005]: audit 2026-03-10T07:23:33.926607+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:34.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:34 vm00 bash[20701]: audit 2026-03-10T07:23:33.915958+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:23:34.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:34 vm00 bash[20701]: audit 2026-03-10T07:23:33.920991+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:34.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:34 vm00 bash[20701]: audit 2026-03-10T07:23:33.926607+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:35.020 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:34 vm03 bash[23382]: audit 2026-03-10T07:23:33.915958+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:23:35.021 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:34 vm03 bash[23382]: audit 2026-03-10T07:23:33.920991+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:35.021 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:34 vm03 bash[23382]: audit 2026-03-10T07:23:33.926607+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:35.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:35 vm00 bash[28005]: cluster 2026-03-10T07:23:33.892703+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v133: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T07:23:35.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:35 vm00 bash[28005]: cluster 2026-03-10T07:23:34.524092+0000 mon.a (mon.0) 481 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in
2026-03-10T07:23:35.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:35 vm00 bash[20701]: cluster 2026-03-10T07:23:33.892703+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v133: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T07:23:35.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:35 vm00 bash[20701]: cluster 2026-03-10T07:23:34.524092+0000 mon.a (mon.0) 481 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in
2026-03-10T07:23:36.020 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:35 vm03 bash[23382]: cluster 2026-03-10T07:23:33.892703+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v133: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T07:23:36.020 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:35 vm03 bash[23382]: cluster 2026-03-10T07:23:34.524092+0000 mon.a (mon.0) 481 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in
2026-03-10T07:23:37.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:37 vm00 bash[28005]: cluster 2026-03-10T07:23:35.892993+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v135: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T07:23:37.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:37 vm00 bash[20701]: cluster 2026-03-10T07:23:35.892993+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v135: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T07:23:38.020 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:37 vm03 bash[23382]: cluster 2026-03-10T07:23:35.892993+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v135: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T07:23:38.726 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.b/config
2026-03-10T07:23:39.652 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T07:23:39.672 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph orch daemon add osd vm03:/dev/vdd
2026-03-10T07:23:39.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:39 vm00 bash[28005]: cluster 2026-03-10T07:23:37.893234+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v136: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T07:23:39.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:39 vm00 bash[20701]: cluster 2026-03-10T07:23:37.893234+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v136: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
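The two teuthology.orchestra.run.vm03 commands above (lvm zap at 07:23:34.071 and orch daemon add osd at 07:23:39.672) are the whole per-device flow the cephadm task uses to bring up osd.5: wipe the device, then ask the orchestrator to create an OSD on it. A minimal sketch of that sequence as a standalone script, reusing the image, fsid and paths exactly as they appear in the log (the deploy_osd helper name is illustrative, not teuthology code):

    import subprocess

    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "534d9c8a-1c51-11f1-ac87-d1fb9a119953"

    def deploy_osd(host: str, device: str) -> None:
        """Zap a device, then ask the cephadm orchestrator to deploy an OSD on it."""
        base = ["sudo", "cephadm", "--image", IMAGE]
        conf = ["-c", "/etc/ceph/ceph.conf",
                "-k", "/etc/ceph/ceph.client.admin.keyring",
                "--fsid", FSID]
        # Wipe any previous LVM/partition state on the device.
        subprocess.run(base + ["ceph-volume"] + conf + ["--", "lvm", "zap", device],
                       check=True)
        # Create the OSD through the orchestrator, as the log does for vm03:/dev/vdd.
        subprocess.run(base + ["shell"] + conf + ["--", "ceph", "orch", "daemon", "add",
                                                  "osd", f"{host}:{device}"],
                       check=True)

    deploy_osd("vm03", "/dev/vdd")

The task then tails the new daemon's systemd unit with journalctl -f, as it did for osd.4 at 07:23:34.070.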
2026-03-10T07:23:40.020 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:39 vm03 bash[23382]: cluster 2026-03-10T07:23:37.893234+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v136: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T07:23:41.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:41 vm00 bash[28005]: cluster 2026-03-10T07:23:39.893555+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 61 KiB/s, 0 objects/s recovering
2026-03-10T07:23:41.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:41 vm00 bash[28005]: cephadm 2026-03-10T07:23:40.566830+0000 mgr.y (mgr.14150) 161 : cephadm [INF] Detected new or changed devices on vm03
2026-03-10T07:23:41.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:41 vm00 bash[28005]: audit 2026-03-10T07:23:40.571403+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:41.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:41 vm00 bash[28005]: audit 2026-03-10T07:23:40.575633+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:41.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:41 vm00 bash[28005]: audit 2026-03-10T07:23:40.576745+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:23:41.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:41 vm00 bash[28005]: cephadm 2026-03-10T07:23:40.577197+0000 mgr.y (mgr.14150) 162 : cephadm [INF] Adjusting osd_memory_target on vm03 to 455.7M
2026-03-10T07:23:41.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:41 vm00 bash[28005]: cephadm 2026-03-10T07:23:40.577986+0000 mgr.y (mgr.14150) 163 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 477915955: error parsing value: Value '477915955' is below minimum 939524096
2026-03-10T07:23:41.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:41 vm00 bash[28005]: audit 2026-03-10T07:23:40.578553+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:23:41.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:41 vm00 bash[28005]: audit 2026-03-10T07:23:40.579067+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:23:41.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:41 vm00 bash[28005]: audit 2026-03-10T07:23:40.583559+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:41.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:41 vm00 bash[20701]: cluster 2026-03-10T07:23:39.893555+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 61 KiB/s, 0 objects/s recovering
2026-03-10T07:23:41.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:41 vm00 bash[20701]: cephadm 2026-03-10T07:23:40.566830+0000 mgr.y (mgr.14150) 161 : cephadm [INF] Detected new or changed devices on vm03
2026-03-10T07:23:41.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:41 vm00 bash[20701]: audit 2026-03-10T07:23:40.571403+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:41.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:41 vm00 bash[20701]: audit 2026-03-10T07:23:40.575633+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:41.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:41 vm00 bash[20701]: audit 2026-03-10T07:23:40.576745+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:23:41.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:41 vm00 bash[20701]: cephadm 2026-03-10T07:23:40.577197+0000 mgr.y (mgr.14150) 162 : cephadm [INF] Adjusting osd_memory_target on vm03 to 455.7M
2026-03-10T07:23:41.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:41 vm00 bash[20701]: cephadm 2026-03-10T07:23:40.577986+0000 mgr.y (mgr.14150) 163 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 477915955: error parsing value: Value '477915955' is below minimum 939524096
2026-03-10T07:23:41.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:41 vm00 bash[20701]: audit 2026-03-10T07:23:40.578553+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:23:41.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:41 vm00 bash[20701]: audit 2026-03-10T07:23:40.579067+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:23:41.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:41 vm00 bash[20701]: audit 2026-03-10T07:23:40.583559+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:42.020 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:41 vm03 bash[23382]: cluster 2026-03-10T07:23:39.893555+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 61 KiB/s, 0 objects/s recovering
2026-03-10T07:23:42.020 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:41 vm03 bash[23382]: cephadm 2026-03-10T07:23:40.566830+0000 mgr.y (mgr.14150) 161 : cephadm [INF] Detected new or changed devices on vm03
2026-03-10T07:23:42.020 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:41 vm03 bash[23382]: audit 2026-03-10T07:23:40.571403+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:42.020 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:41 vm03 bash[23382]: audit 2026-03-10T07:23:40.575633+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:42.020 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:41 vm03 bash[23382]: audit 2026-03-10T07:23:40.576745+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:23:42.020 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:41 vm03 bash[23382]: cephadm 2026-03-10T07:23:40.577197+0000 mgr.y (mgr.14150) 162 : cephadm [INF] Adjusting osd_memory_target on vm03 to 455.7M
2026-03-10T07:23:42.020 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:41 vm03 bash[23382]: cephadm 2026-03-10T07:23:40.577986+0000 mgr.y (mgr.14150) 163 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 477915955: error parsing value: Value '477915955' is below minimum 939524096
2026-03-10T07:23:42.020 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:41 vm03 bash[23382]: audit 2026-03-10T07:23:40.578553+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:23:42.020 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:41 vm03 bash[23382]: audit 2026-03-10T07:23:40.579067+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:23:42.020 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:41 vm03 bash[23382]: audit 2026-03-10T07:23:40.583559+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:23:42.887 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:42 vm00 bash[28005]: cluster 2026-03-10T07:23:41.893831+0000 mgr.y (mgr.14150) 164 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 54 KiB/s, 0 objects/s recovering
2026-03-10T07:23:42.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:42 vm00 bash[20701]: cluster 2026-03-10T07:23:41.893831+0000 mgr.y (mgr.14150) 164 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 54 KiB/s, 0 objects/s recovering
2026-03-10T07:23:43.020 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:42 vm03 bash[23382]: cluster 2026-03-10T07:23:41.893831+0000 mgr.y (mgr.14150) 164 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 54 KiB/s, 0 objects/s recovering
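The cephadm [WRN] repeated above is plain unit arithmetic: the memory autotuner computed 477915955 bytes for this small VPS host, which is the "455.7M" printed in the preceding [INF] line (477915955 / 2^20 is roughly 455.8 MiB, truncated to one decimal in the message), but osd_memory_target rejects anything below the minimum stated in the warning, 939524096 bytes, i.e. exactly 896 MiB, so the per-host override is left unset. A quick check of the numbers taken from the log:

    MIB = 2 ** 20
    computed = 477_915_955     # bytes the autotuner tried to set (from the WRN record)
    minimum = 939_524_096      # lower bound quoted in the same record
    print(f"{computed / MIB:.2f} MiB")  # 455.78 MiB -> logged as "455.7M" (truncated)
    print(f"{minimum / MIB:.0f} MiB")   # 896 MiB exactly
    print(computed < minimum)           # True, so the config set is refused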
2026-03-10T07:23:44.326 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.b/config
2026-03-10T07:23:44.482 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.475+0000 7f19a2c16640 1 -- 192.168.123.103:0/3546684108 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f199c06bcd0 msgr2=0x7f199c107ab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:23:44.482 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.475+0000 7f19a2c16640 1 --2- 192.168.123.103:0/3546684108 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f199c06bcd0 0x7f199c107ab0 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f1984009a30 tx=0x7f198402f220 comp rx=0 tx=0).stop
2026-03-10T07:23:44.482 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.475+0000 7f19a2c16640 1 -- 192.168.123.103:0/3546684108 shutdown_connections
2026-03-10T07:23:44.482 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.475+0000 7f19a2c16640 1 --2- 192.168.123.103:0/3546684108 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f199c10aa00 0x7f199c10ce90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:23:44.482 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.475+0000 7f19a2c16640 1 --2- 192.168.123.103:0/3546684108 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f199c107ff0 0x7f199c10a3e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:23:44.482 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.475+0000 7f19a2c16640 1 --2- 192.168.123.103:0/3546684108 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f199c06bcd0 0x7f199c107ab0 unknown :-1 s=CLOSED pgs=121 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:23:44.482 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.475+0000 7f19a2c16640 1 -- 192.168.123.103:0/3546684108 >> 192.168.123.103:0/3546684108 conn(0x7f199c0fd120 msgr2=0x7f199c0ff560 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:23:44.482 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.475+0000 7f19a2c16640 1 -- 192.168.123.103:0/3546684108 shutdown_connections
2026-03-10T07:23:44.482 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.475+0000 7f19a2c16640 1 -- 192.168.123.103:0/3546684108 wait complete.
2026-03-10T07:23:44.483 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a2c16640 1 Processor -- start
2026-03-10T07:23:44.483 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a2c16640 1 -- start start
2026-03-10T07:23:44.483 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a2c16640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f199c06bcd0 0x7f199c19c390 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:23:44.483 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a2c16640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f199c107ff0 0x7f199c19c8d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:23:44.483 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a2c16640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f199c10aa00 0x7f199c1a3950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:23:44.483 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a2c16640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f199c10fbb0 con 0x7f199c06bcd0
2026-03-10T07:23:44.483 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a2c16640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f199c10fa30 con 0x7f199c10aa00
2026-03-10T07:23:44.483 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a2c16640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f199c10fd30 con 0x7f199c107ff0
2026-03-10T07:23:44.484 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a098b640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f199c06bcd0 0x7f199c19c390 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:23:44.484 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a098b640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f199c06bcd0 0x7f199c19c390 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.103:51188/0 (socket says 192.168.123.103:51188)
2026-03-10T07:23:44.484 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a098b640 1 -- 192.168.123.103:0/1750766523 learned_addr learned my addr 192.168.123.103:0/1750766523 (peer_addr_for_me v2:192.168.123.103:0/0)
2026-03-10T07:23:44.484 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a098b640 1 -- 192.168.123.103:0/1750766523 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f199c107ff0 msgr2=0x7f199c19c8d0 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T07:23:44.484 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a118c640 1 --2- 192.168.123.103:0/1750766523 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f199c10aa00 0x7f199c1a3950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:23:44.484 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f1993fff640 1 --2- 192.168.123.103:0/1750766523 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f199c107ff0 0x7f199c19c8d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:23:44.484 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a098b640 1 --2- 192.168.123.103:0/1750766523 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f199c107ff0 0x7f199c19c8d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:23:44.484 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a098b640 1 -- 192.168.123.103:0/1750766523 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f199c10aa00 msgr2=0x7f199c1a3950 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:23:44.484 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a098b640 1 --2- 192.168.123.103:0/1750766523 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f199c10aa00 0x7f199c1a3950 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:23:44.484 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a098b640 1 -- 192.168.123.103:0/1750766523 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f199c1a4050 con 0x7f199c06bcd0
2026-03-10T07:23:44.484 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a118c640 1 --2- 192.168.123.103:0/1750766523 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f199c10aa00 0x7f199c1a3950 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T07:23:44.484 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a098b640 1 --2- 192.168.123.103:0/1750766523 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f199c06bcd0 0x7f199c19c390 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7f1984009a00 tx=0x7f1984002f60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:23:44.484 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f1993fff640 1 --2- 192.168.123.103:0/1750766523 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f199c107ff0 0x7f199c19c8d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T07:23:44.484 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f1991ffb640 1 -- 192.168.123.103:0/1750766523 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f1984004240 con 0x7f199c06bcd0
2026-03-10T07:23:44.484 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f1991ffb640 1 -- 192.168.123.103:0/1750766523 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f19840043e0 con 0x7f199c06bcd0
2026-03-10T07:23:44.485 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a2c16640 1 -- 192.168.123.103:0/1750766523 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f199c1035a0 con 0x7f199c06bcd0
2026-03-10T07:23:44.485 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f1991ffb640 1 -- 192.168.123.103:0/1750766523 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f19840365d0 con 0x7f199c06bcd0
2026-03-10T07:23:44.485 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a2c16640 1 -- 192.168.123.103:0/1750766523 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f199c103ad0 con 0x7f199c06bcd0
2026-03-10T07:23:44.486 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f1991ffb640 1 -- 192.168.123.103:0/1750766523 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7f198404a420 con 0x7f199c06bcd0
2026-03-10T07:23:44.486 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f19a2c16640 1 -- 192.168.123.103:0/1750766523 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f199c0fe680 con 0x7f199c06bcd0
2026-03-10T07:23:44.486 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f1991ffb640 1 --2- 192.168.123.103:0/1750766523 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f1978077580 0x7f1978079a40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:23:44.486 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.479+0000 7f1991ffb640 1 -- 192.168.123.103:0/1750766523 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(33..33 src has 1..33) ==== 3971+0+0 (secure 0 0 0) 0x7f19840c5a80 con 0x7f199c06bcd0
2026-03-10T07:23:44.489 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.483+0000 7f1993fff640 1 --2- 192.168.123.103:0/1750766523 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f1978077580 0x7f1978079a40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:23:44.489 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.483+0000 7f1991ffb640 1 -- 192.168.123.103:0/1750766523 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f198408f450 con 0x7f199c06bcd0
2026-03-10T07:23:44.489 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.483+0000 7f1993fff640 1 --2- 192.168.123.103:0/1750766523 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f1978077580 0x7f1978079a40 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f199c19d8b0 tx=0x7f198c009290 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:23:44.590 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:23:44.583+0000 7f19a2c16640 1 -- 192.168.123.103:0/1750766523 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdd", "target": ["mon-mgr", ""]}) -- 0x7f199c0630c0 con 0x7f1978077580
2026-03-10T07:23:45.270 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:44 vm03 bash[23382]: cluster 2026-03-10T07:23:43.894154+0000 mgr.y (mgr.14150) 165 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 45 KiB/s, 0 objects/s recovering
2026-03-10T07:23:45.270 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:44 vm03 bash[23382]: audit 2026-03-10T07:23:44.590982+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:23:45.270 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:44 vm03 bash[23382]: audit 2026-03-10T07:23:44.592275+0000 mon.a (mon.0) 489 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T07:23:45.270 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:44 vm03 bash[23382]: audit 2026-03-10T07:23:44.592679+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:23:45.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:44 vm00 bash[28005]: cluster 2026-03-10T07:23:43.894154+0000 mgr.y (mgr.14150) 165 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 45 KiB/s, 0 objects/s recovering
2026-03-10T07:23:45.388 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:44 vm00 bash[28005]: audit 2026-03-10T07:23:44.590982+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
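The mgr_command(tid 0: ...) line above shows what ceph orch daemon add osd vm03:/dev/vdd looks like on the wire: the CLI encodes the call as a JSON command routed to the active mgr over the messenger session set up in the preceding lines, and the same payload resurfaces in the mgr audit record (from='client.14304 -' ... prefix "orch daemon add osd") a few lines below. The payload, reconstructed from the log for reference (a sketch, not teuthology code):

    import json

    # Command body as it appears in the mgr_command(...) line above.
    payload = {
        "prefix": "orch daemon add osd",  # command name as typed on the CLI
        "svc_arg": "vm03:/dev/vdd",       # host:device positional argument
        "target": ["mon-mgr", ""],        # route the command to the active mgr
    }
    print(json.dumps(payload))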
2026-03-10T07:23:44.590982+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:23:45.388 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:44 vm00 bash[28005]: audit 2026-03-10T07:23:44.590982+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:23:45.388 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:44 vm00 bash[28005]: audit 2026-03-10T07:23:44.592275+0000 mon.a (mon.0) 489 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:23:45.388 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:44 vm00 bash[28005]: audit 2026-03-10T07:23:44.592275+0000 mon.a (mon.0) 489 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:23:45.388 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:44 vm00 bash[28005]: audit 2026-03-10T07:23:44.592679+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:23:45.388 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:44 vm00 bash[28005]: audit 2026-03-10T07:23:44.592679+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:23:45.388 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:44 vm00 bash[20701]: cluster 2026-03-10T07:23:43.894154+0000 mgr.y (mgr.14150) 165 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T07:23:45.388 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:44 vm00 bash[20701]: cluster 2026-03-10T07:23:43.894154+0000 mgr.y (mgr.14150) 165 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T07:23:45.388 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:44 vm00 bash[20701]: audit 2026-03-10T07:23:44.590982+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:23:45.388 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:44 vm00 bash[20701]: audit 2026-03-10T07:23:44.590982+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:23:45.388 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:44 vm00 bash[20701]: audit 2026-03-10T07:23:44.592275+0000 mon.a (mon.0) 489 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:23:45.388 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:44 vm00 bash[20701]: audit 2026-03-10T07:23:44.592275+0000 mon.a (mon.0) 489 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:23:45.388 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:44 vm00 bash[20701]: audit 2026-03-10T07:23:44.592679+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:23:45.388 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:44 vm00 bash[20701]: audit 2026-03-10T07:23:44.592679+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:23:46.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:45 vm03 bash[23382]: audit 2026-03-10T07:23:44.589551+0000 mgr.y (mgr.14150) 166 : audit [DBG] from='client.14304 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:46.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:45 vm03 bash[23382]: audit 2026-03-10T07:23:44.589551+0000 mgr.y (mgr.14150) 166 : audit [DBG] from='client.14304 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:46.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:45 vm00 bash[28005]: audit 2026-03-10T07:23:44.589551+0000 mgr.y (mgr.14150) 166 : audit [DBG] from='client.14304 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:46.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:45 vm00 bash[28005]: audit 2026-03-10T07:23:44.589551+0000 mgr.y (mgr.14150) 166 : audit [DBG] from='client.14304 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:46.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:45 vm00 bash[20701]: audit 2026-03-10T07:23:44.589551+0000 mgr.y (mgr.14150) 166 : audit [DBG] from='client.14304 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:46.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:45 vm00 bash[20701]: audit 2026-03-10T07:23:44.589551+0000 mgr.y (mgr.14150) 166 : audit [DBG] from='client.14304 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:47.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:46 vm03 bash[23382]: cluster 2026-03-10T07:23:45.894444+0000 mgr.y (mgr.14150) 167 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-10T07:23:47.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:46 vm03 bash[23382]: cluster 2026-03-10T07:23:45.894444+0000 mgr.y (mgr.14150) 167 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-10T07:23:47.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:46 vm00 bash[28005]: cluster 2026-03-10T07:23:45.894444+0000 mgr.y (mgr.14150) 167 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-10T07:23:47.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:46 vm00 bash[28005]: cluster 2026-03-10T07:23:45.894444+0000 mgr.y (mgr.14150) 167 : cluster [DBG] pgmap v140: 1 pgs: 1 
active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-10T07:23:47.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:46 vm00 bash[20701]: cluster 2026-03-10T07:23:45.894444+0000 mgr.y (mgr.14150) 167 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-10T07:23:47.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:46 vm00 bash[20701]: cluster 2026-03-10T07:23:45.894444+0000 mgr.y (mgr.14150) 167 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-10T07:23:49.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:49 vm03 bash[23382]: cluster 2026-03-10T07:23:47.894689+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T07:23:49.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:49 vm03 bash[23382]: cluster 2026-03-10T07:23:47.894689+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T07:23:49.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:49 vm00 bash[28005]: cluster 2026-03-10T07:23:47.894689+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T07:23:49.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:49 vm00 bash[28005]: cluster 2026-03-10T07:23:47.894689+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T07:23:49.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:49 vm00 bash[20701]: cluster 2026-03-10T07:23:47.894689+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T07:23:49.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:49 vm00 bash[20701]: cluster 2026-03-10T07:23:47.894689+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T07:23:51.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:51 vm03 bash[23382]: cluster 2026-03-10T07:23:49.894964+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T07:23:51.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:51 vm03 bash[23382]: cluster 2026-03-10T07:23:49.894964+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T07:23:51.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:51 vm03 bash[23382]: audit 2026-03-10T07:23:50.028847+0000 mon.b (mon.1) 8 : audit [INF] from='client.? 
192.168.123.103:0/1759829607' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "361df97b-1006-4ba7-a86f-36dc13915955"}]: dispatch 2026-03-10T07:23:51.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:51 vm03 bash[23382]: audit 2026-03-10T07:23:50.028847+0000 mon.b (mon.1) 8 : audit [INF] from='client.? 192.168.123.103:0/1759829607' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "361df97b-1006-4ba7-a86f-36dc13915955"}]: dispatch 2026-03-10T07:23:51.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:51 vm03 bash[23382]: audit 2026-03-10T07:23:50.028857+0000 mon.a (mon.0) 491 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "361df97b-1006-4ba7-a86f-36dc13915955"}]: dispatch 2026-03-10T07:23:51.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:51 vm03 bash[23382]: audit 2026-03-10T07:23:50.028857+0000 mon.a (mon.0) 491 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "361df97b-1006-4ba7-a86f-36dc13915955"}]: dispatch 2026-03-10T07:23:51.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:51 vm03 bash[23382]: audit 2026-03-10T07:23:50.031626+0000 mon.a (mon.0) 492 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "361df97b-1006-4ba7-a86f-36dc13915955"}]': finished 2026-03-10T07:23:51.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:51 vm03 bash[23382]: audit 2026-03-10T07:23:50.031626+0000 mon.a (mon.0) 492 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "361df97b-1006-4ba7-a86f-36dc13915955"}]': finished 2026-03-10T07:23:51.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:51 vm03 bash[23382]: cluster 2026-03-10T07:23:50.036458+0000 mon.a (mon.0) 493 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-10T07:23:51.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:51 vm03 bash[23382]: cluster 2026-03-10T07:23:50.036458+0000 mon.a (mon.0) 493 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-10T07:23:51.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:51 vm03 bash[23382]: audit 2026-03-10T07:23:50.036827+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T07:23:51.270 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:51 vm03 bash[23382]: audit 2026-03-10T07:23:50.036827+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T07:23:51.270 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:51 vm03 bash[23382]: audit 2026-03-10T07:23:50.644910+0000 mon.c (mon.2) 18 : audit [DBG] from='client.? 192.168.123.103:0/3898676585' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:23:51.270 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:51 vm03 bash[23382]: audit 2026-03-10T07:23:50.644910+0000 mon.c (mon.2) 18 : audit [DBG] from='client.? 
192.168.123.103:0/3898676585' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:51 vm00 bash[28005]: cluster 2026-03-10T07:23:49.894964+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:51 vm00 bash[28005]: cluster 2026-03-10T07:23:49.894964+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:51 vm00 bash[28005]: audit 2026-03-10T07:23:50.028847+0000 mon.b (mon.1) 8 : audit [INF] from='client.? 192.168.123.103:0/1759829607' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "361df97b-1006-4ba7-a86f-36dc13915955"}]: dispatch 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:51 vm00 bash[28005]: audit 2026-03-10T07:23:50.028847+0000 mon.b (mon.1) 8 : audit [INF] from='client.? 192.168.123.103:0/1759829607' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "361df97b-1006-4ba7-a86f-36dc13915955"}]: dispatch 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:51 vm00 bash[28005]: audit 2026-03-10T07:23:50.028857+0000 mon.a (mon.0) 491 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "361df97b-1006-4ba7-a86f-36dc13915955"}]: dispatch 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:51 vm00 bash[28005]: audit 2026-03-10T07:23:50.028857+0000 mon.a (mon.0) 491 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "361df97b-1006-4ba7-a86f-36dc13915955"}]: dispatch 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:51 vm00 bash[28005]: audit 2026-03-10T07:23:50.031626+0000 mon.a (mon.0) 492 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "361df97b-1006-4ba7-a86f-36dc13915955"}]': finished 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:51 vm00 bash[28005]: audit 2026-03-10T07:23:50.031626+0000 mon.a (mon.0) 492 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "361df97b-1006-4ba7-a86f-36dc13915955"}]': finished 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:51 vm00 bash[28005]: cluster 2026-03-10T07:23:50.036458+0000 mon.a (mon.0) 493 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:51 vm00 bash[28005]: cluster 2026-03-10T07:23:50.036458+0000 mon.a (mon.0) 493 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:51 vm00 bash[28005]: audit 2026-03-10T07:23:50.036827+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:51 vm00 bash[28005]: audit 2026-03-10T07:23:50.036827+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:51 vm00 bash[28005]: audit 2026-03-10T07:23:50.644910+0000 mon.c (mon.2) 18 : audit [DBG] from='client.? 192.168.123.103:0/3898676585' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:51 vm00 bash[28005]: audit 2026-03-10T07:23:50.644910+0000 mon.c (mon.2) 18 : audit [DBG] from='client.? 192.168.123.103:0/3898676585' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:51 vm00 bash[20701]: cluster 2026-03-10T07:23:49.894964+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:51 vm00 bash[20701]: cluster 2026-03-10T07:23:49.894964+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:51 vm00 bash[20701]: audit 2026-03-10T07:23:50.028847+0000 mon.b (mon.1) 8 : audit [INF] from='client.? 192.168.123.103:0/1759829607' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "361df97b-1006-4ba7-a86f-36dc13915955"}]: dispatch 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:51 vm00 bash[20701]: audit 2026-03-10T07:23:50.028847+0000 mon.b (mon.1) 8 : audit [INF] from='client.? 192.168.123.103:0/1759829607' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "361df97b-1006-4ba7-a86f-36dc13915955"}]: dispatch 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:51 vm00 bash[20701]: audit 2026-03-10T07:23:50.028857+0000 mon.a (mon.0) 491 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "361df97b-1006-4ba7-a86f-36dc13915955"}]: dispatch 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:51 vm00 bash[20701]: audit 2026-03-10T07:23:50.028857+0000 mon.a (mon.0) 491 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "361df97b-1006-4ba7-a86f-36dc13915955"}]: dispatch 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:51 vm00 bash[20701]: audit 2026-03-10T07:23:50.031626+0000 mon.a (mon.0) 492 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "361df97b-1006-4ba7-a86f-36dc13915955"}]': finished 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:51 vm00 bash[20701]: audit 2026-03-10T07:23:50.031626+0000 mon.a (mon.0) 492 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "361df97b-1006-4ba7-a86f-36dc13915955"}]': finished 2026-03-10T07:23:51.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:51 vm00 bash[20701]: cluster 2026-03-10T07:23:50.036458+0000 mon.a (mon.0) 493 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-10T07:23:51.388 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:51 vm00 bash[20701]: cluster 2026-03-10T07:23:50.036458+0000 mon.a (mon.0) 493 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-10T07:23:51.388 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:51 vm00 bash[20701]: audit 2026-03-10T07:23:50.036827+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T07:23:51.388 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:51 vm00 bash[20701]: audit 2026-03-10T07:23:50.036827+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T07:23:51.388 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:51 vm00 bash[20701]: audit 2026-03-10T07:23:50.644910+0000 mon.c (mon.2) 18 : audit [DBG] from='client.? 192.168.123.103:0/3898676585' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:23:51.388 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:51 vm00 bash[20701]: audit 2026-03-10T07:23:50.644910+0000 mon.c (mon.2) 18 : audit [DBG] from='client.? 
192.168.123.103:0/3898676585' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:23:53.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:53 vm00 bash[28005]: cluster 2026-03-10T07:23:51.895183+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:53.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:53 vm00 bash[28005]: cluster 2026-03-10T07:23:51.895183+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:53.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:53 vm00 bash[20701]: cluster 2026-03-10T07:23:51.895183+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:53.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:53 vm00 bash[20701]: cluster 2026-03-10T07:23:51.895183+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:53.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:53 vm03 bash[23382]: cluster 2026-03-10T07:23:51.895183+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:53.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:53 vm03 bash[23382]: cluster 2026-03-10T07:23:51.895183+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:55.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:55 vm00 bash[28005]: cluster 2026-03-10T07:23:53.895501+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:55.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:55 vm00 bash[28005]: cluster 2026-03-10T07:23:53.895501+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:55.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:55 vm00 bash[20701]: cluster 2026-03-10T07:23:53.895501+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:55.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:55 vm00 bash[20701]: cluster 2026-03-10T07:23:53.895501+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:55.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:55 vm03 bash[23382]: cluster 2026-03-10T07:23:53.895501+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:55.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:55 vm03 bash[23382]: cluster 2026-03-10T07:23:53.895501+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:57.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:57 vm00 bash[28005]: cluster 2026-03-10T07:23:55.895865+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:57.387 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:57 vm00 bash[28005]: cluster 2026-03-10T07:23:55.895865+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:57.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:57 vm00 bash[20701]: cluster 2026-03-10T07:23:55.895865+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:57.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:57 vm00 bash[20701]: cluster 2026-03-10T07:23:55.895865+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:57.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:57 vm03 bash[23382]: cluster 2026-03-10T07:23:55.895865+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:57.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:57 vm03 bash[23382]: cluster 2026-03-10T07:23:55.895865+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:59.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:59 vm00 bash[28005]: cluster 2026-03-10T07:23:57.896131+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:59.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:59 vm00 bash[28005]: cluster 2026-03-10T07:23:57.896131+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:59.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:59 vm00 bash[28005]: audit 2026-03-10T07:23:58.887878+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T07:23:59.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:59 vm00 bash[28005]: audit 2026-03-10T07:23:58.887878+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T07:23:59.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:59 vm00 bash[28005]: audit 2026-03-10T07:23:58.888403+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:23:59.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:23:59 vm00 bash[28005]: audit 2026-03-10T07:23:58.888403+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:23:59.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:59 vm00 bash[20701]: cluster 2026-03-10T07:23:57.896131+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:59.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:59 vm00 bash[20701]: cluster 2026-03-10T07:23:57.896131+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:59.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:59 
vm00 bash[20701]: audit 2026-03-10T07:23:58.887878+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T07:23:59.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:59 vm00 bash[20701]: audit 2026-03-10T07:23:58.887878+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T07:23:59.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:59 vm00 bash[20701]: audit 2026-03-10T07:23:58.888403+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:23:59.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:23:59 vm00 bash[20701]: audit 2026-03-10T07:23:58.888403+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:23:59.407 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:59 vm03 bash[23382]: cluster 2026-03-10T07:23:57.896131+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:59.407 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:59 vm03 bash[23382]: cluster 2026-03-10T07:23:57.896131+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:23:59.407 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:59 vm03 bash[23382]: audit 2026-03-10T07:23:58.887878+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T07:23:59.407 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:59 vm03 bash[23382]: audit 2026-03-10T07:23:58.887878+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T07:23:59.407 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:59 vm03 bash[23382]: audit 2026-03-10T07:23:58.888403+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:23:59.407 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:59 vm03 bash[23382]: audit 2026-03-10T07:23:58.888403+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:24:00.019 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:59 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:24:00.019 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:23:59 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:24:00.019 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:23:59 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:24:00.019 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:23:59 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:24:00.019 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 07:23:59 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:24:00.019 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 07:23:59 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
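The systemd complaint repeated above refers to the unit template that cephadm installs on each host for its containerized daemons (here instantiated for the osd.5 deployment under fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953). KillMode=none is presumably set so that stopping the unit does not have systemd kill the podman/docker-managed container processes directly, leaving shutdown to the unit's own stop logic; systemd deprecates that mode and suggests 'mixed' or 'control-group' instead. A reconstructed excerpt follows — only the unit path, the line-23 location, and KillMode=none are attested by the warning itself; the remaining directives are assumptions about cephadm's template:

    # /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service
    # (reconstructed excerpt; directives other than KillMode are assumed)
    [Service]
    # cephadm-generated wrapper that starts the daemon's container (assumed path)
    ExecStart=/bin/bash /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/%i/unit.run
    # line 23 per the warning: disables systemd's cgroup-wide kill on stop
    KillMode=none
    # systemd's suggested safer setting would be, e.g.:
    # KillMode=mixed

The warning is informational here and does not block the run: the deployment proceeds, and cephadm reports "Deploying daemon osd.5 on vm03" immediately below.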
2026-03-10T07:24:00.327 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:00 vm03 bash[23382]: cephadm 2026-03-10T07:23:58.888850+0000 mgr.y (mgr.14150) 174 : cephadm [INF] Deploying daemon osd.5 on vm03 2026-03-10T07:24:00.327 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:00 vm03 bash[23382]: cephadm 2026-03-10T07:23:58.888850+0000 mgr.y (mgr.14150) 174 : cephadm [INF] Deploying daemon osd.5 on vm03 2026-03-10T07:24:00.327 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:00 vm03 bash[23382]: audit 2026-03-10T07:24:00.052938+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:24:00.327 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:00 vm03 bash[23382]: audit 2026-03-10T07:24:00.052938+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:24:00.327 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:00 vm03 bash[23382]: audit 2026-03-10T07:24:00.057537+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:00.327 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:00 vm03 bash[23382]: audit 2026-03-10T07:24:00.057537+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:00.327 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:00 vm03 bash[23382]: audit 2026-03-10T07:24:00.064056+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:00.327 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:00 vm03 bash[23382]: audit 2026-03-10T07:24:00.064056+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:00.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:00 vm00 bash[28005]: cephadm 2026-03-10T07:23:58.888850+0000 mgr.y (mgr.14150) 174 : cephadm [INF] Deploying daemon osd.5 on vm03 2026-03-10T07:24:00.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:00 vm00 bash[28005]: cephadm 2026-03-10T07:23:58.888850+0000 mgr.y (mgr.14150) 174 : cephadm [INF] Deploying daemon osd.5 on vm03 2026-03-10T07:24:00.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:00 vm00 bash[28005]: audit 2026-03-10T07:24:00.052938+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:24:00.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:00 vm00 bash[28005]: audit 2026-03-10T07:24:00.052938+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:24:00.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:00 vm00 bash[28005]: audit 2026-03-10T07:24:00.057537+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:00.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:00 vm00 bash[28005]: audit 2026-03-10T07:24:00.057537+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:00.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:00 vm00 bash[28005]: audit 2026-03-10T07:24:00.064056+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' 
entity='mgr.y' 2026-03-10T07:24:00.387 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:00 vm00 bash[28005]: audit 2026-03-10T07:24:00.064056+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:00.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:00 vm00 bash[20701]: cephadm 2026-03-10T07:23:58.888850+0000 mgr.y (mgr.14150) 174 : cephadm [INF] Deploying daemon osd.5 on vm03 2026-03-10T07:24:00.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:00 vm00 bash[20701]: cephadm 2026-03-10T07:23:58.888850+0000 mgr.y (mgr.14150) 174 : cephadm [INF] Deploying daemon osd.5 on vm03 2026-03-10T07:24:00.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:00 vm00 bash[20701]: audit 2026-03-10T07:24:00.052938+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:24:00.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:00 vm00 bash[20701]: audit 2026-03-10T07:24:00.052938+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:24:00.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:00 vm00 bash[20701]: audit 2026-03-10T07:24:00.057537+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:00.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:00 vm00 bash[20701]: audit 2026-03-10T07:24:00.057537+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:00.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:00 vm00 bash[20701]: audit 2026-03-10T07:24:00.064056+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:00.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:00 vm00 bash[20701]: audit 2026-03-10T07:24:00.064056+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:01.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:01 vm00 bash[28005]: cluster 2026-03-10T07:23:59.896451+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:24:01.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:01 vm00 bash[28005]: cluster 2026-03-10T07:23:59.896451+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:24:01.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:01 vm00 bash[20701]: cluster 2026-03-10T07:23:59.896451+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:24:01.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:01 vm00 bash[20701]: cluster 2026-03-10T07:23:59.896451+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:24:01.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:01 vm03 bash[23382]: cluster 2026-03-10T07:23:59.896451+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:24:01.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:01 vm03 bash[23382]: cluster 
2026-03-10T07:23:59.896451+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:24:03.383 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:03 vm03 bash[23382]: cluster 2026-03-10T07:24:01.896752+0000 mgr.y (mgr.14150) 176 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:24:03.383 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:03 vm03 bash[23382]: cluster 2026-03-10T07:24:01.896752+0000 mgr.y (mgr.14150) 176 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:24:03.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:03 vm00 bash[28005]: cluster 2026-03-10T07:24:01.896752+0000 mgr.y (mgr.14150) 176 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:24:03.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:03 vm00 bash[28005]: cluster 2026-03-10T07:24:01.896752+0000 mgr.y (mgr.14150) 176 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:24:03.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:03 vm00 bash[20701]: cluster 2026-03-10T07:24:01.896752+0000 mgr.y (mgr.14150) 176 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:24:03.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:03 vm00 bash[20701]: cluster 2026-03-10T07:24:01.896752+0000 mgr.y (mgr.14150) 176 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:24:04.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:04 vm00 bash[28005]: audit 2026-03-10T07:24:03.387151+0000 mon.b (mon.1) 9 : audit [INF] from='osd.5 [v2:192.168.123.103:6808/3238215945,v1:192.168.123.103:6809/3238215945]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T07:24:04.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:04 vm00 bash[28005]: audit 2026-03-10T07:24:03.387151+0000 mon.b (mon.1) 9 : audit [INF] from='osd.5 [v2:192.168.123.103:6808/3238215945,v1:192.168.123.103:6809/3238215945]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T07:24:04.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:04 vm00 bash[28005]: audit 2026-03-10T07:24:03.387430+0000 mon.a (mon.0) 500 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T07:24:04.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:04 vm00 bash[28005]: audit 2026-03-10T07:24:03.387430+0000 mon.a (mon.0) 500 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T07:24:04.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:04 vm00 bash[20701]: audit 2026-03-10T07:24:03.387151+0000 mon.b (mon.1) 9 : audit [INF] from='osd.5 [v2:192.168.123.103:6808/3238215945,v1:192.168.123.103:6809/3238215945]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T07:24:04.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:04 vm00 bash[20701]: audit 2026-03-10T07:24:03.387151+0000 mon.b (mon.1) 9 : audit [INF] from='osd.5 
[v2:192.168.123.103:6808/3238215945,v1:192.168.123.103:6809/3238215945]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T07:24:04.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:04 vm00 bash[20701]: audit 2026-03-10T07:24:03.387430+0000 mon.a (mon.0) 500 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T07:24:04.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:04 vm00 bash[20701]: audit 2026-03-10T07:24:03.387430+0000 mon.a (mon.0) 500 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T07:24:04.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:04 vm03 bash[23382]: audit 2026-03-10T07:24:03.387151+0000 mon.b (mon.1) 9 : audit [INF] from='osd.5 [v2:192.168.123.103:6808/3238215945,v1:192.168.123.103:6809/3238215945]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T07:24:04.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:04 vm03 bash[23382]: audit 2026-03-10T07:24:03.387151+0000 mon.b (mon.1) 9 : audit [INF] from='osd.5 [v2:192.168.123.103:6808/3238215945,v1:192.168.123.103:6809/3238215945]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T07:24:04.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:04 vm03 bash[23382]: audit 2026-03-10T07:24:03.387430+0000 mon.a (mon.0) 500 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T07:24:04.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:04 vm03 bash[23382]: audit 2026-03-10T07:24:03.387430+0000 mon.a (mon.0) 500 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T07:24:05.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:05 vm03 bash[23382]: cluster 2026-03-10T07:24:03.897013+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:24:05.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:05 vm03 bash[23382]: cluster 2026-03-10T07:24:03.897013+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:24:05.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:05 vm03 bash[23382]: audit 2026-03-10T07:24:04.132384+0000 mon.a (mon.0) 501 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T07:24:05.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:05 vm03 bash[23382]: audit 2026-03-10T07:24:04.132384+0000 mon.a (mon.0) 501 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T07:24:05.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:05 vm03 bash[23382]: audit 2026-03-10T07:24:04.137748+0000 mon.b (mon.1) 10 : audit [INF] from='osd.5 [v2:192.168.123.103:6808/3238215945,v1:192.168.123.103:6809/3238215945]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T07:24:05.519 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:05 vm03 bash[23382]: audit 2026-03-10T07:24:04.137748+0000 mon.b (mon.1) 10 : audit [INF] from='osd.5 [v2:192.168.123.103:6808/3238215945,v1:192.168.123.103:6809/3238215945]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T07:24:05.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:05 vm03 bash[23382]: cluster 2026-03-10T07:24:04.138023+0000 mon.a (mon.0) 502 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-10T07:24:05.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:05 vm03 bash[23382]: cluster 2026-03-10T07:24:04.138023+0000 mon.a (mon.0) 502 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-10T07:24:05.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:05 vm03 bash[23382]: audit 2026-03-10T07:24:04.138405+0000 mon.a (mon.0) 503 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T07:24:05.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:05 vm03 bash[23382]: audit 2026-03-10T07:24:04.138405+0000 mon.a (mon.0) 503 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T07:24:05.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:05 vm03 bash[23382]: audit 2026-03-10T07:24:04.138584+0000 mon.a (mon.0) 504 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T07:24:05.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:05 vm03 bash[23382]: audit 2026-03-10T07:24:04.138584+0000 mon.a (mon.0) 504 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T07:24:05.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:05 vm00 bash[28005]: cluster 2026-03-10T07:24:03.897013+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:24:05.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:05 vm00 bash[28005]: cluster 2026-03-10T07:24:03.897013+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T07:24:05.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:05 vm00 bash[28005]: audit 2026-03-10T07:24:04.132384+0000 mon.a (mon.0) 501 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T07:24:05.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:05 vm00 bash[28005]: audit 2026-03-10T07:24:04.132384+0000 mon.a (mon.0) 501 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T07:24:05.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:05 vm00 bash[28005]: audit 2026-03-10T07:24:04.137748+0000 mon.b (mon.1) 10 : audit [INF] from='osd.5 [v2:192.168.123.103:6808/3238215945,v1:192.168.123.103:6809/3238215945]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T07:24:05.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:05 vm00 bash[28005]: audit 
2026-03-10T07:24:04.137748+0000 mon.b (mon.1) 10 : audit [INF] from='osd.5 [v2:192.168.123.103:6808/3238215945,v1:192.168.123.103:6809/3238215945]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T07:24:05.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:05 vm00 bash[28005]: cluster 2026-03-10T07:24:04.138023+0000 mon.a (mon.0) 502 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in
2026-03-10T07:24:05.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:05 vm00 bash[28005]: audit 2026-03-10T07:24:04.138405+0000 mon.a (mon.0) 503 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:24:05.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:05 vm00 bash[28005]: audit 2026-03-10T07:24:04.138584+0000 mon.a (mon.0) 504 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T07:24:05.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:05 vm00 bash[20701]: cluster 2026-03-10T07:24:03.897013+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T07:24:05.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:05 vm00 bash[20701]: audit 2026-03-10T07:24:04.132384+0000 mon.a (mon.0) 501 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-10T07:24:05.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:05 vm00 bash[20701]: audit 2026-03-10T07:24:04.137748+0000 mon.b (mon.1) 10 : audit [INF] from='osd.5 [v2:192.168.123.103:6808/3238215945,v1:192.168.123.103:6809/3238215945]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T07:24:05.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:05 vm00 bash[20701]: cluster 2026-03-10T07:24:04.138023+0000 mon.a (mon.0) 502 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in
2026-03-10T07:24:05.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:05 vm00 bash[20701]: audit 2026-03-10T07:24:04.138405+0000 mon.a (mon.0) 503 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:24:05.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:05 vm00 bash[20701]: audit 2026-03-10T07:24:04.138584+0000 mon.a (mon.0) 504 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T07:24:06.354 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:06 vm03 bash[23382]: audit 2026-03-10T07:24:05.134739+0000 mon.a (mon.0) 505 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T07:24:06.355 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:06 vm03 bash[23382]: cluster 2026-03-10T07:24:05.141658+0000 mon.a (mon.0) 506 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in
2026-03-10T07:24:06.355 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:06 vm03 bash[23382]: audit 2026-03-10T07:24:05.142466+0000 mon.a (mon.0) 507 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:24:06.355 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:06 vm03 bash[23382]: audit 2026-03-10T07:24:05.145170+0000 mon.a (mon.0) 508 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:24:06.355 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:06 vm03 bash[23382]: cluster 2026-03-10T07:24:05.801245+0000 mon.a (mon.0) 509 : cluster [DBG] osdmap e37: 6 total, 5 up, 6 in
2026-03-10T07:24:06.355 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:06 vm03 bash[23382]: audit 2026-03-10T07:24:05.802026+0000 mon.a (mon.0) 510 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:24:06.355 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:06 vm03 bash[23382]: audit 2026-03-10T07:24:06.146470+0000 mon.a (mon.0) 511 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:24:06.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:06 vm00 bash[28005]: audit 2026-03-10T07:24:05.134739+0000 mon.a (mon.0) 505 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T07:24:06.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:06 vm00 bash[28005]: cluster 2026-03-10T07:24:05.141658+0000 mon.a (mon.0) 506 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in
2026-03-10T07:24:06.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:06 vm00 bash[28005]: audit 2026-03-10T07:24:05.142466+0000 mon.a (mon.0) 507 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:24:06.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:06 vm00 bash[28005]: audit 2026-03-10T07:24:05.145170+0000 mon.a (mon.0) 508 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:24:06.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:06 vm00 bash[28005]: cluster 2026-03-10T07:24:05.801245+0000 mon.a (mon.0) 509 : cluster [DBG] osdmap e37: 6 total, 5 up, 6 in
2026-03-10T07:24:06.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:06 vm00 bash[28005]: audit 2026-03-10T07:24:05.802026+0000 mon.a (mon.0) 510 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:24:06.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:06 vm00 bash[28005]: audit 2026-03-10T07:24:06.146470+0000 mon.a (mon.0) 511 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:24:06.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:06 vm00 bash[20701]: audit 2026-03-10T07:24:05.134739+0000 mon.a (mon.0) 505 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T07:24:06.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:06 vm00 bash[20701]: cluster 2026-03-10T07:24:05.141658+0000 mon.a (mon.0) 506 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in
2026-03-10T07:24:06.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:06 vm00 bash[20701]: audit 2026-03-10T07:24:05.142466+0000 mon.a (mon.0) 507 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:24:06.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:06 vm00 bash[20701]: audit 2026-03-10T07:24:05.145170+0000 mon.a (mon.0) 508 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:24:06.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:06 vm00 bash[20701]: cluster 2026-03-10T07:24:05.801245+0000 mon.a (mon.0) 509 : cluster [DBG] osdmap e37: 6 total, 5 up, 6 in
2026-03-10T07:24:06.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:06 vm00 bash[20701]: audit 2026-03-10T07:24:05.802026+0000 mon.a (mon.0) 510 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:24:06.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:06 vm00 bash[20701]: audit 2026-03-10T07:24:06.146470+0000 mon.a (mon.0) 511 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:24:07.395 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:07 vm03 bash[23382]: cluster 2026-03-10T07:24:04.406048+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:24:07.395 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:07 vm03 bash[23382]: cluster 2026-03-10T07:24:04.406096+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:24:07.395 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:07 vm03 bash[23382]: cluster 2026-03-10T07:24:05.897312+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
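The audit entries above show the new OSD registering itself: `osd crush set-device-class` tags osd.5 as hdd, and `osd crush create-or-move` places it under host=vm03/root=default with weight 0.0195. CRUSH weights are conventionally the device's capacity expressed in TiB, so 0.0195 is what a roughly 20 GiB test volume works out to. A minimal sketch of that conversion in Python (the helper name is hypothetical, not a Ceph or teuthology API):

    # Sketch: CRUSH weight is conventionally device capacity in TiB.
    # crush_weight_for() is a hypothetical helper, not a real API.
    def crush_weight_for(size_bytes: int) -> float:
        TIB = 1 << 40
        return round(size_bytes / TIB, 4)

    # A ~20 GiB virtual disk yields the weight seen in the audit log:
    assert crush_weight_for(20 * (1 << 30)) == 0.0195  # 20 GiB / 1 TiB ~= 0.0195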
2026-03-10T07:24:07.395 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:07 vm03 bash[23382]: audit 2026-03-10T07:24:06.240794+0000 mon.a (mon.0) 512 : audit [INF] from='osd.5 ' entity='osd.5'
2026-03-10T07:24:07.395 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:07 vm03 bash[23382]: audit 2026-03-10T07:24:06.311806+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:07.395 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:07 vm03 bash[23382]: audit 2026-03-10T07:24:06.316302+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:07.395 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:07 vm03 bash[23382]: audit 2026-03-10T07:24:06.317278+0000 mon.a (mon.0) 515 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:24:07.395 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:07 vm03 bash[23382]: audit 2026-03-10T07:24:06.318085+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:24:07.395 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:07 vm03 bash[23382]: audit 2026-03-10T07:24:06.323357+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:07.395 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:07 vm03 bash[23382]: cluster 2026-03-10T07:24:06.809864+0000 mon.a (mon.0) 518 : cluster [INF] osd.5 [v2:192.168.123.103:6808/3238215945,v1:192.168.123.103:6809/3238215945] boot
2026-03-10T07:24:07.395 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:07 vm03 bash[23382]: cluster 2026-03-10T07:24:06.810052+0000 mon.a (mon.0) 519 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in
2026-03-10T07:24:07.395 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:07 vm03 bash[23382]: audit 2026-03-10T07:24:06.810261+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:24:07.466 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 5 on host 'vm03'
2026-03-10T07:24:07.466 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:07.460+0000 7f1991ffb640 1 -- 192.168.123.103:0/1750766523 <== mgr.14150 v2:192.168.123.100:6800/2669938860 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f199c0630c0 con 0x7f1978077580
2026-03-10T07:24:07.467 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:07.464+0000 7f19a2c16640 1 -- 192.168.123.103:0/1750766523 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f1978077580 msgr2=0x7f1978079a40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:24:07.467 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:07.464+0000 7f19a2c16640 1 --2- 192.168.123.103:0/1750766523 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f1978077580 0x7f1978079a40 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f199c19d8b0 tx=0x7f198c009290 comp rx=0 tx=0).stop
2026-03-10T07:24:07.467 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:07.464+0000 7f19a2c16640 1 -- 192.168.123.103:0/1750766523 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f199c06bcd0 msgr2=0x7f199c19c390 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:24:07.467 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:07.464+0000 7f19a2c16640 1 --2- 192.168.123.103:0/1750766523 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f199c06bcd0 0x7f199c19c390 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7f1984009a00 tx=0x7f1984002f60 comp rx=0 tx=0).stop
2026-03-10T07:24:07.468 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:07.464+0000 7f19a2c16640 1 -- 192.168.123.103:0/1750766523 shutdown_connections
2026-03-10T07:24:07.468 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:07.464+0000 7f19a2c16640 1 --2- 192.168.123.103:0/1750766523 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f1978077580 0x7f1978079a40 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:24:07.468 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:07.464+0000 7f19a2c16640 1 --2- 192.168.123.103:0/1750766523 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f199c10aa00 0x7f199c1a3950 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:24:07.468 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:07.464+0000 7f19a2c16640 1 --2- 192.168.123.103:0/1750766523 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f199c107ff0 0x7f199c19c8d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:24:07.468 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:07.464+0000 7f19a2c16640 1 --2- 192.168.123.103:0/1750766523 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f199c06bcd0 0x7f199c19c390 unknown :-1 s=CLOSED pgs=122 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:24:07.468 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:07.464+0000 7f19a2c16640 1 -- 192.168.123.103:0/1750766523 >> 192.168.123.103:0/1750766523 conn(0x7f199c0fd120 msgr2=0x7f199c108bf0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:24:07.468 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:07.464+0000 7f19a2c16640 1 -- 192.168.123.103:0/1750766523 shutdown_connections
2026-03-10T07:24:07.468 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:07.464+0000 7f19a2c16640 1 -- 192.168.123.103:0/1750766523 wait complete.
2026-03-10T07:24:07.614 DEBUG:teuthology.orchestra.run.vm03:osd.5> sudo journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.5.service
2026-03-10T07:24:07.615 INFO:tasks.cephadm:Deploying osd.6 on vm03 with /dev/vdc...
2026-03-10T07:24:07.615 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- lvm zap /dev/vdc
2026-03-10T07:24:07.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:07 vm00 bash[28005]: cluster 2026-03-10T07:24:04.406048+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:24:07.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:07 vm00 bash[28005]: cluster 2026-03-10T07:24:04.406096+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:24:07.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:07 vm00 bash[28005]: cluster 2026-03-10T07:24:05.897312+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T07:24:07.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:07 vm00 bash[28005]: audit 2026-03-10T07:24:06.240794+0000 mon.a (mon.0) 512 : audit [INF] from='osd.5 ' entity='osd.5'
2026-03-10T07:24:07.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:07 vm00 bash[28005]: audit 2026-03-10T07:24:06.311806+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:07.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:07 vm00 bash[28005]: audit 2026-03-10T07:24:06.316302+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:07.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:07 vm00 bash[28005]: audit 2026-03-10T07:24:06.317278+0000 mon.a (mon.0) 515 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:24:07.637 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:07 vm00 bash[28005]: audit 2026-03-10T07:24:06.318085+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:24:07.637 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:07 vm00 bash[28005]: audit 2026-03-10T07:24:06.323357+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:07.637 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:07 vm00 bash[28005]: cluster 2026-03-10T07:24:06.809864+0000 mon.a (mon.0) 518 : cluster [INF] osd.5 [v2:192.168.123.103:6808/3238215945,v1:192.168.123.103:6809/3238215945] boot
2026-03-10T07:24:07.637 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:07 vm00 bash[28005]: cluster 2026-03-10T07:24:06.810052+0000 mon.a (mon.0) 519 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in
2026-03-10T07:24:07.637 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:07 vm00 bash[28005]: audit 2026-03-10T07:24:06.810261+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:24:07.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:07 vm00 bash[20701]: cluster 2026-03-10T07:24:04.406048+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:24:07.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:07 vm00 bash[20701]: cluster 2026-03-10T07:24:04.406096+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:24:07.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:07 vm00 bash[20701]: cluster 2026-03-10T07:24:05.897312+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T07:24:07.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:07 vm00 bash[20701]: audit 2026-03-10T07:24:06.240794+0000 mon.a (mon.0) 512 : audit [INF] from='osd.5 ' entity='osd.5'
2026-03-10T07:24:07.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:07 vm00 bash[20701]: audit 2026-03-10T07:24:06.311806+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:07.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:07 vm00 bash[20701]: audit 2026-03-10T07:24:06.316302+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:07.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:07 vm00 bash[20701]: audit 2026-03-10T07:24:06.317278+0000 mon.a (mon.0) 515 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:24:07.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:07 vm00 bash[20701]: audit 2026-03-10T07:24:06.318085+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:24:07.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:07 vm00 bash[20701]: audit 2026-03-10T07:24:06.323357+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:07.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:07 vm00 bash[20701]: cluster 2026-03-10T07:24:06.809864+0000 mon.a (mon.0) 518 : cluster [INF] osd.5 [v2:192.168.123.103:6808/3238215945,v1:192.168.123.103:6809/3238215945] boot
2026-03-10T07:24:07.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:07 vm00 bash[20701]: cluster 2026-03-10T07:24:06.810052+0000 mon.a (mon.0) 519 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in
2026-03-10T07:24:07.638 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:07 vm00 bash[20701]: audit 2026-03-10T07:24:06.810261+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:24:08.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:08 vm03 bash[23382]: audit 2026-03-10T07:24:07.453293+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:24:08.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:08 vm03 bash[23382]: audit 2026-03-10T07:24:07.458945+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:08.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:08 vm03 bash[23382]: audit 2026-03-10T07:24:07.462752+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:08.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:08 vm03 bash[23382]: cluster 2026-03-10T07:24:07.814308+0000 mon.a (mon.0) 524 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in
2026-03-10T07:24:08.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:08 vm00 bash[28005]: audit 2026-03-10T07:24:07.453293+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:24:08.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:08 vm00 bash[28005]: audit 2026-03-10T07:24:07.458945+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:08.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:08 vm00 bash[28005]: audit 2026-03-10T07:24:07.462752+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:08.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:08 vm00 bash[28005]: cluster 2026-03-10T07:24:07.814308+0000 mon.a (mon.0) 524 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in
2026-03-10T07:24:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:08 vm00 bash[20701]: audit 2026-03-10T07:24:07.453293+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:24:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:08 vm00 bash[20701]: audit 2026-03-10T07:24:07.458945+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:08 vm00 bash[20701]: audit 2026-03-10T07:24:07.462752+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:08 vm00 bash[20701]: cluster 2026-03-10T07:24:07.814308+0000 mon.a (mon.0) 524 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in
2026-03-10T07:24:09.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:09 vm03 bash[23382]: cluster 2026-03-10T07:24:07.897605+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T07:24:09.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:09 vm03 bash[23382]: cluster 2026-03-10T07:24:08.820699+0000 mon.a (mon.0) 525 : cluster [DBG] osdmap e40: 6 total, 6 up, 6 in
2026-03-10T07:24:09.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:09 vm00 bash[28005]: cluster 2026-03-10T07:24:07.897605+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T07:24:09.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:09 vm00 bash[28005]: cluster 2026-03-10T07:24:08.820699+0000 mon.a (mon.0) 525 : cluster [DBG] osdmap e40: 6 total, 6 up, 6 in
2026-03-10T07:24:09.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:09 vm00 bash[20701]: cluster 2026-03-10T07:24:07.897605+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T07:24:09.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:09 vm00 bash[20701]: cluster 2026-03-10T07:24:08.820699+0000 mon.a (mon.0) 525 : cluster [DBG] osdmap e40: 6 total, 6 up, 6 in
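The `pgmap` records repeated above (v150, v154, v157, ...) are the mgr's periodic usage digests, echoed into each mon's journal. When scraping a captured log like this one, the figures can be pulled out with a small regex; a sketch assuming exactly the message shape seen here (the pattern and helper are illustrative, not part of teuthology):

    import re

    # Matches e.g. "pgmap v157: 1 pgs: 1 active+clean; 449 KiB data,
    # 560 MiB used, 119 GiB / 120 GiB avail"
    PGMAP_RE = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, "
        r"(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail"
    )

    def parse_pgmap(line: str):
        """Return a dict of pgmap fields, or None for non-pgmap lines."""
        m = PGMAP_RE.search(line)
        return m.groupdict() if m else None

    sample = ("cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; "
              "449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail")
    print(parse_pgmap(sample))  # {'ver': '157', 'pgs': '1', 'data': '449 KiB', ...}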
2026-03-10T07:24:11.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:11 vm00 bash[28005]: cluster 2026-03-10T07:24:09.897894+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T07:24:11.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:11 vm00 bash[20701]: cluster 2026-03-10T07:24:09.897894+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T07:24:11.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:11 vm03 bash[23382]: cluster 2026-03-10T07:24:09.897894+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T07:24:12.283 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.b/config
2026-03-10T07:24:13.155 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T07:24:13.169 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph orch daemon add osd vm03:/dev/vdc
2026-03-10T07:24:13.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:13 vm03 bash[23382]: cluster 2026-03-10T07:24:11.898174+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T07:24:13.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:13 vm00 bash[28005]: cluster 2026-03-10T07:24:11.898174+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T07:24:13.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:13 vm00 bash[20701]: cluster 2026-03-10T07:24:11.898174+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
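`ceph orch daemon add osd vm03:/dev/vdc` returns once the daemon is created; the OSD only counts as usable when it boots into the osdmap (compare the `osd.5 ... boot` and `osdmap e38: 6 total, 6 up` records earlier for osd.5). A sketch of how a harness might wait for that, shelling out to the real `ceph osd tree` command; the polling helper itself is hypothetical:

    import json, subprocess, time

    def wait_for_osd_up(osd_name: str, timeout: float = 120.0) -> bool:
        """Poll `ceph osd tree` until the named OSD reports status 'up'."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.run(
                ["ceph", "osd", "tree", "--format", "json"],
                capture_output=True, text=True, check=True,
            ).stdout
            nodes = json.loads(out).get("nodes", [])
            if any(n.get("name") == osd_name and n.get("status") == "up"
                   for n in nodes):
                return True
            time.sleep(5)
        return False

    # e.g. wait_for_osd_up("osd.6") after `ceph orch daemon add osd vm03:/dev/vdc`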
2026-03-10T07:24:15.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:15 vm03 bash[23382]: cluster 2026-03-10T07:24:13.898466+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T07:24:15.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:15 vm03 bash[23382]: cephadm 2026-03-10T07:24:13.984810+0000 mgr.y (mgr.14150) 183 : cephadm [INF] Detected new or changed devices on vm03
2026-03-10T07:24:15.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:15 vm03 bash[23382]: audit 2026-03-10T07:24:13.992441+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:15.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:15 vm03 bash[23382]: audit 2026-03-10T07:24:13.996267+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:15.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:15 vm03 bash[23382]: audit 2026-03-10T07:24:13.997030+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:24:15.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:15 vm03 bash[23382]: audit 2026-03-10T07:24:13.999100+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:24:15.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:15 vm03 bash[23382]: cephadm 2026-03-10T07:24:13.999465+0000 mgr.y (mgr.14150) 184 : cephadm [INF] Adjusting osd_memory_target on vm03 to 227.8M
2026-03-10T07:24:15.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:15 vm03 bash[23382]: cephadm 2026-03-10T07:24:13.999861+0000 mgr.y (mgr.14150) 185 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 238957977: error parsing value: Value '238957977' is below minimum 939524096
2026-03-10T07:24:15.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:15 vm03 bash[23382]: audit 2026-03-10T07:24:14.000188+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:24:15.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:15 vm03 bash[23382]: audit 2026-03-10T07:24:14.000614+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:24:15.269 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:15 vm03 bash[23382]: audit 2026-03-10T07:24:14.004488+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:15 vm00 bash[28005]: cluster 2026-03-10T07:24:13.898466+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T07:24:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:15 vm00 bash[28005]: cephadm 2026-03-10T07:24:13.984810+0000 mgr.y (mgr.14150) 183 : cephadm [INF] Detected new or changed devices on vm03
2026-03-10T07:24:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:15 vm00 bash[28005]: audit 2026-03-10T07:24:13.992441+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:15 vm00 bash[28005]: audit 2026-03-10T07:24:13.996267+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:15 vm00 bash[28005]: audit 2026-03-10T07:24:13.997030+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:24:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:15 vm00 bash[28005]: audit 2026-03-10T07:24:13.999100+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:24:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:15 vm00 bash[28005]: cephadm 2026-03-10T07:24:13.999465+0000 mgr.y (mgr.14150) 184 : cephadm [INF] Adjusting osd_memory_target on vm03 to 227.8M
2026-03-10T07:24:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:15 vm00 bash[28005]: cephadm 2026-03-10T07:24:13.999861+0000 mgr.y (mgr.14150) 185 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 238957977: error parsing value: Value '238957977' is below minimum 939524096
2026-03-10T07:24:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:15 vm00 bash[28005]: audit 2026-03-10T07:24:14.000188+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:24:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:15 vm00 bash[28005]: audit 2026-03-10T07:24:14.000614+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:24:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:15 vm00 bash[28005]: audit 2026-03-10T07:24:14.004488+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:15.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:15 vm00 bash[20701]: cluster 2026-03-10T07:24:13.898466+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T07:24:15.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:15 vm00 bash[20701]: cephadm 2026-03-10T07:24:13.984810+0000 mgr.y (mgr.14150) 183 : cephadm [INF] Detected new or changed devices on vm03
2026-03-10T07:24:15.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:15 vm00 bash[20701]: audit 2026-03-10T07:24:13.992441+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:15.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:15 vm00 bash[20701]: audit 2026-03-10T07:24:13.996267+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:15.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:15 vm00 bash[20701]: audit 2026-03-10T07:24:13.997030+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:24:15.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:15 vm00 bash[20701]: audit 2026-03-10T07:24:13.999100+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:24:15.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:15 vm00 bash[20701]: cephadm 2026-03-10T07:24:13.999465+0000 mgr.y (mgr.14150) 184 : cephadm [INF] Adjusting osd_memory_target on vm03 to 227.8M
2026-03-10T07:24:15.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:15 vm00 bash[20701]: cephadm 2026-03-10T07:24:13.999861+0000 mgr.y (mgr.14150) 185 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 238957977: error parsing value: Value '238957977' is below minimum 939524096
2026-03-10T07:24:15.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:15 vm00 bash[20701]: audit 2026-03-10T07:24:14.000188+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:24:15.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:15 vm00 bash[20701]: audit 2026-03-10T07:24:14.000614+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:24:15.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:15 vm00 bash[20701]: audit 2026-03-10T07:24:14.004488+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:17.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:17 vm03 bash[23382]: cluster 2026-03-10T07:24:15.898772+0000 mgr.y (mgr.14150) 186 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
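The `cephadm [WRN]` records above are internally consistent: the memory autotuner computes a host-level osd_memory_target of 238957977 bytes (the "227.8M" it logs), but that is below the option's enforced minimum of 939524096 bytes, i.e. exactly 896 MiB, so the mon rejects the value. A quick check of the arithmetic:

    # Reproduce the numbers in the cephadm WRN message above.
    MIB = 1 << 20

    proposed = 238957977   # what the autotuner computed for vm03
    minimum  = 939524096   # osd_memory_target's enforced floor

    print(round(proposed / MIB, 1))  # 227.9 -- the log's "227.8M", give or take rounding
    print(minimum // MIB)            # 896   -- the floor is exactly 896 MiB
    assert proposed < minimum        # hence "below minimum": the set is refused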
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:17 vm03 bash[23382]: cluster 2026-03-10T07:24:15.898772+0000 mgr.y (mgr.14150) 186 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:17.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:17 vm00 bash[28005]: cluster 2026-03-10T07:24:15.898772+0000 mgr.y (mgr.14150) 186 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:17.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:17 vm00 bash[28005]: cluster 2026-03-10T07:24:15.898772+0000 mgr.y (mgr.14150) 186 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:17.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:17 vm00 bash[20701]: cluster 2026-03-10T07:24:15.898772+0000 mgr.y (mgr.14150) 186 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:17.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:17 vm00 bash[20701]: cluster 2026-03-10T07:24:15.898772+0000 mgr.y (mgr.14150) 186 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:17.831 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.b/config 2026-03-10T07:24:18.012 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.008+0000 7f3dfb237640 1 -- 192.168.123.103:0/2417480866 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3df406b860 msgr2=0x7f3df410a8b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:24:18.012 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.008+0000 7f3dfb237640 1 --2- 192.168.123.103:0/2417480866 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3df406b860 0x7f3df410a8b0 secure :-1 s=READY pgs=124 cs=0 l=1 rev1=1 crypto rx=0x7f3de000ab80 tx=0x7f3de00305a0 comp rx=0 tx=0).stop 2026-03-10T07:24:18.012 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.008+0000 7f3dfb237640 1 -- 192.168.123.103:0/2417480866 shutdown_connections 2026-03-10T07:24:18.012 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.008+0000 7f3dfb237640 1 --2- 192.168.123.103:0/2417480866 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3df410faf0 0x7f3df4111ee0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:24:18.013 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.008+0000 7f3dfb237640 1 --2- 192.168.123.103:0/2417480866 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3df410adf0 0x7f3df410b250 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:24:18.013 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.008+0000 7f3dfb237640 1 --2- 192.168.123.103:0/2417480866 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3df406b860 0x7f3df410a8b0 unknown :-1 s=CLOSED pgs=124 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:24:18.013 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.008+0000 7f3dfb237640 1 -- 192.168.123.103:0/2417480866 >> 192.168.123.103:0/2417480866 conn(0x7f3df406fc70 msgr2=0x7f3df4072090 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:24:18.013 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.008+0000 7f3dfb237640 1 
-- 192.168.123.103:0/2417480866 shutdown_connections 2026-03-10T07:24:18.013 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.008+0000 7f3dfb237640 1 -- 192.168.123.103:0/2417480866 wait complete. 2026-03-10T07:24:18.013 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.008+0000 7f3dfb237640 1 Processor -- start 2026-03-10T07:24:18.013 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.008+0000 7f3dfb237640 1 -- start start 2026-03-10T07:24:18.014 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3dfb237640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3df406b860 0x7f3df4111a40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:24:18.014 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3dfb237640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3df410adf0 0x7f3df4110090 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:24:18.014 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3dfb237640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3df410faf0 0x7f3df41105d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:24:18.014 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3dfb237640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f3df4114180 con 0x7f3df406b860 2026-03-10T07:24:18.014 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3dfb237640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f3df4114000 con 0x7f3df410adf0 2026-03-10T07:24:18.014 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3dfb237640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f3df4114300 con 0x7f3df410faf0 2026-03-10T07:24:18.015 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3df8fac640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3df406b860 0x7f3df4111a40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:24:18.015 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3df8fac640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3df406b860 0x7f3df4111a40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.103:54522/0 (socket says 192.168.123.103:54522) 2026-03-10T07:24:18.015 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3df8fac640 1 -- 192.168.123.103:0/2406526049 learned_addr learned my addr 192.168.123.103:0/2406526049 (peer_addr_for_me v2:192.168.123.103:0/0) 2026-03-10T07:24:18.015 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3df8fac640 1 -- 192.168.123.103:0/2406526049 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3df410faf0 msgr2=0x7f3df41105d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:24:18.015 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3df8fac640 1 --2- 192.168.123.103:0/2406526049 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3df410faf0 0x7f3df41105d0 
unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:24:18.015 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3df8fac640 1 -- 192.168.123.103:0/2406526049 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3df410adf0 msgr2=0x7f3df4110090 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:24:18.015 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3df3fff640 1 --2- 192.168.123.103:0/2406526049 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3df410adf0 0x7f3df4110090 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:24:18.015 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3df8fac640 1 --2- 192.168.123.103:0/2406526049 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3df410adf0 0x7f3df4110090 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:24:18.015 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3df8fac640 1 -- 192.168.123.103:0/2406526049 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3df4110e60 con 0x7f3df406b860 2026-03-10T07:24:18.015 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3df3fff640 1 --2- 192.168.123.103:0/2406526049 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3df410adf0 0x7f3df4110090 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T07:24:18.015 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3df8fac640 1 --2- 192.168.123.103:0/2406526049 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3df406b860 0x7f3df4111a40 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7f3de000ab50 tx=0x7f3de0044db0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:24:18.016 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3df1ffb640 1 -- 192.168.123.103:0/2406526049 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3de00029b0 con 0x7f3df406b860 2026-03-10T07:24:18.016 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3df1ffb640 1 -- 192.168.123.103:0/2406526049 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f3de00375e0 con 0x7f3df406b860 2026-03-10T07:24:18.016 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3dfb237640 1 -- 192.168.123.103:0/2406526049 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f3df41a2760 con 0x7f3df406b860 2026-03-10T07:24:18.017 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3df1ffb640 1 -- 192.168.123.103:0/2406526049 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3de0038690 con 0x7f3df406b860 2026-03-10T07:24:18.017 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.012+0000 7f3dfb237640 1 -- 192.168.123.103:0/2406526049 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f3df41a2cf0 con 0x7f3df406b860 2026-03-10T07:24:18.018 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.016+0000 7f3df1ffb640 1 
-- 192.168.123.103:0/2406526049 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7f3de0030ce0 con 0x7f3df406b860 2026-03-10T07:24:18.018 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.016+0000 7f3dfb237640 1 -- 192.168.123.103:0/2406526049 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3dbc005180 con 0x7f3df406b860 2026-03-10T07:24:18.019 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.016+0000 7f3df1ffb640 1 --2- 192.168.123.103:0/2406526049 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f3dd4077610 0x7f3dd4079ad0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:24:18.019 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.016+0000 7f3df1ffb640 1 -- 192.168.123.103:0/2406526049 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(40..40 src has 1..40) ==== 4403+0+0 (secure 0 0 0) 0x7f3de00c6a40 con 0x7f3df406b860 2026-03-10T07:24:18.020 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.016+0000 7f3df3fff640 1 --2- 192.168.123.103:0/2406526049 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f3dd4077610 0x7f3dd4079ad0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:24:18.020 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.016+0000 7f3df3fff640 1 --2- 192.168.123.103:0/2406526049 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f3dd4077610 0x7f3dd4079ad0 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f3de4005e00 tx=0x7f3de400a250 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:24:18.022 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.020+0000 7f3df1ffb640 1 -- 192.168.123.103:0/2406526049 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f3de0090260 con 0x7f3df406b860 2026-03-10T07:24:18.124 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:18.120+0000 7f3dfb237640 1 -- 192.168.123.103:0/2406526049 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdc", "target": ["mon-mgr", ""]}) -- 0x7f3dbc002bf0 con 0x7f3dd4077610 2026-03-10T07:24:19.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:19 vm03 bash[23382]: cluster 2026-03-10T07:24:17.899047+0000 mgr.y (mgr.14150) 187 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:19.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:19 vm03 bash[23382]: cluster 2026-03-10T07:24:17.899047+0000 mgr.y (mgr.14150) 187 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:19.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:19 vm03 bash[23382]: audit 2026-03-10T07:24:18.126101+0000 mgr.y (mgr.14150) 188 : audit [DBG] from='client.14319 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:24:19.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:19 vm03 bash[23382]: audit 2026-03-10T07:24:18.126101+0000 mgr.y 
(mgr.14150) 188 : audit [DBG] from='client.14319 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:24:19.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:19 vm03 bash[23382]: audit 2026-03-10T07:24:18.127574+0000 mon.a (mon.0) 533 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:24:19.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:19 vm03 bash[23382]: audit 2026-03-10T07:24:18.127574+0000 mon.a (mon.0) 533 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:24:19.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:19 vm03 bash[23382]: audit 2026-03-10T07:24:18.129185+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:24:19.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:19 vm03 bash[23382]: audit 2026-03-10T07:24:18.129185+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:24:19.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:19 vm03 bash[23382]: audit 2026-03-10T07:24:18.129580+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:24:19.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:19 vm03 bash[23382]: audit 2026-03-10T07:24:18.129580+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:24:19.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:19 vm00 bash[28005]: cluster 2026-03-10T07:24:17.899047+0000 mgr.y (mgr.14150) 187 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:19.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:19 vm00 bash[28005]: cluster 2026-03-10T07:24:17.899047+0000 mgr.y (mgr.14150) 187 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:19.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:19 vm00 bash[28005]: audit 2026-03-10T07:24:18.126101+0000 mgr.y (mgr.14150) 188 : audit [DBG] from='client.14319 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:24:19.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:19 vm00 bash[28005]: audit 2026-03-10T07:24:18.126101+0000 mgr.y (mgr.14150) 188 : audit [DBG] from='client.14319 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:24:19.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:19 vm00 bash[28005]: audit 2026-03-10T07:24:18.127574+0000 mon.a (mon.0) 533 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:24:19.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:19 vm00 bash[28005]: audit 
2026-03-10T07:24:18.127574+0000 mon.a (mon.0) 533 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:24:19.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:19 vm00 bash[28005]: audit 2026-03-10T07:24:18.129185+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:24:19.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:19 vm00 bash[28005]: audit 2026-03-10T07:24:18.129185+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:24:19.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:19 vm00 bash[28005]: audit 2026-03-10T07:24:18.129580+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:24:19.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:19 vm00 bash[28005]: audit 2026-03-10T07:24:18.129580+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:24:19.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:19 vm00 bash[20701]: cluster 2026-03-10T07:24:17.899047+0000 mgr.y (mgr.14150) 187 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:19.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:19 vm00 bash[20701]: cluster 2026-03-10T07:24:17.899047+0000 mgr.y (mgr.14150) 187 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:19.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:19 vm00 bash[20701]: audit 2026-03-10T07:24:18.126101+0000 mgr.y (mgr.14150) 188 : audit [DBG] from='client.14319 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:24:19.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:19 vm00 bash[20701]: audit 2026-03-10T07:24:18.126101+0000 mgr.y (mgr.14150) 188 : audit [DBG] from='client.14319 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:24:19.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:19 vm00 bash[20701]: audit 2026-03-10T07:24:18.127574+0000 mon.a (mon.0) 533 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:24:19.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:19 vm00 bash[20701]: audit 2026-03-10T07:24:18.127574+0000 mon.a (mon.0) 533 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:24:19.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:19 vm00 bash[20701]: audit 2026-03-10T07:24:18.129185+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:24:19.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:19 vm00 
bash[20701]: audit 2026-03-10T07:24:18.129185+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:24:19.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:19 vm00 bash[20701]: audit 2026-03-10T07:24:18.129580+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:24:19.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:19 vm00 bash[20701]: audit 2026-03-10T07:24:18.129580+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:24:21.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:21 vm03 bash[23382]: cluster 2026-03-10T07:24:19.899306+0000 mgr.y (mgr.14150) 189 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:21.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:21 vm03 bash[23382]: cluster 2026-03-10T07:24:19.899306+0000 mgr.y (mgr.14150) 189 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:21.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:21 vm00 bash[28005]: cluster 2026-03-10T07:24:19.899306+0000 mgr.y (mgr.14150) 189 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:21.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:21 vm00 bash[28005]: cluster 2026-03-10T07:24:19.899306+0000 mgr.y (mgr.14150) 189 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:21.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:21 vm00 bash[20701]: cluster 2026-03-10T07:24:19.899306+0000 mgr.y (mgr.14150) 189 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:21.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:21 vm00 bash[20701]: cluster 2026-03-10T07:24:19.899306+0000 mgr.y (mgr.14150) 189 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:23.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:23 vm03 bash[23382]: cluster 2026-03-10T07:24:21.899585+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:23.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:23 vm03 bash[23382]: cluster 2026-03-10T07:24:21.899585+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:23.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:23 vm00 bash[28005]: cluster 2026-03-10T07:24:21.899585+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:23.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:23 vm00 bash[28005]: cluster 2026-03-10T07:24:21.899585+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:23.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:23 vm00 bash[20701]: cluster 2026-03-10T07:24:21.899585+0000 mgr.y 
(mgr.14150) 190 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:23.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:23 vm00 bash[20701]: cluster 2026-03-10T07:24:21.899585+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:24.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:24 vm03 bash[23382]: audit 2026-03-10T07:24:23.552065+0000 mon.b (mon.1) 11 : audit [INF] from='client.? 192.168.123.103:0/268220065' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a6dfdf0a-06d2-49ea-8222-a0f8f776983e"}]: dispatch 2026-03-10T07:24:24.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:24 vm03 bash[23382]: audit 2026-03-10T07:24:23.552065+0000 mon.b (mon.1) 11 : audit [INF] from='client.? 192.168.123.103:0/268220065' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a6dfdf0a-06d2-49ea-8222-a0f8f776983e"}]: dispatch 2026-03-10T07:24:24.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:24 vm03 bash[23382]: audit 2026-03-10T07:24:23.552618+0000 mon.a (mon.0) 536 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a6dfdf0a-06d2-49ea-8222-a0f8f776983e"}]: dispatch 2026-03-10T07:24:24.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:24 vm03 bash[23382]: audit 2026-03-10T07:24:23.552618+0000 mon.a (mon.0) 536 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a6dfdf0a-06d2-49ea-8222-a0f8f776983e"}]: dispatch 2026-03-10T07:24:24.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:24 vm03 bash[23382]: audit 2026-03-10T07:24:23.555875+0000 mon.a (mon.0) 537 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a6dfdf0a-06d2-49ea-8222-a0f8f776983e"}]': finished 2026-03-10T07:24:24.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:24 vm03 bash[23382]: audit 2026-03-10T07:24:23.555875+0000 mon.a (mon.0) 537 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a6dfdf0a-06d2-49ea-8222-a0f8f776983e"}]': finished 2026-03-10T07:24:24.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:24 vm03 bash[23382]: cluster 2026-03-10T07:24:23.559855+0000 mon.a (mon.0) 538 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-10T07:24:24.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:24 vm03 bash[23382]: cluster 2026-03-10T07:24:23.559855+0000 mon.a (mon.0) 538 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-10T07:24:24.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:24 vm03 bash[23382]: audit 2026-03-10T07:24:23.560520+0000 mon.a (mon.0) 539 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:24.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:24 vm03 bash[23382]: audit 2026-03-10T07:24:23.560520+0000 mon.a (mon.0) 539 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:24 vm00 bash[28005]: audit 2026-03-10T07:24:23.552065+0000 mon.b (mon.1) 11 : audit [INF] from='client.? 
192.168.123.103:0/268220065' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a6dfdf0a-06d2-49ea-8222-a0f8f776983e"}]: dispatch 2026-03-10T07:24:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:24 vm00 bash[28005]: audit 2026-03-10T07:24:23.552065+0000 mon.b (mon.1) 11 : audit [INF] from='client.? 192.168.123.103:0/268220065' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a6dfdf0a-06d2-49ea-8222-a0f8f776983e"}]: dispatch 2026-03-10T07:24:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:24 vm00 bash[28005]: audit 2026-03-10T07:24:23.552618+0000 mon.a (mon.0) 536 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a6dfdf0a-06d2-49ea-8222-a0f8f776983e"}]: dispatch 2026-03-10T07:24:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:24 vm00 bash[28005]: audit 2026-03-10T07:24:23.552618+0000 mon.a (mon.0) 536 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a6dfdf0a-06d2-49ea-8222-a0f8f776983e"}]: dispatch 2026-03-10T07:24:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:24 vm00 bash[28005]: audit 2026-03-10T07:24:23.555875+0000 mon.a (mon.0) 537 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a6dfdf0a-06d2-49ea-8222-a0f8f776983e"}]': finished 2026-03-10T07:24:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:24 vm00 bash[28005]: audit 2026-03-10T07:24:23.555875+0000 mon.a (mon.0) 537 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a6dfdf0a-06d2-49ea-8222-a0f8f776983e"}]': finished 2026-03-10T07:24:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:24 vm00 bash[28005]: cluster 2026-03-10T07:24:23.559855+0000 mon.a (mon.0) 538 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-10T07:24:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:24 vm00 bash[28005]: cluster 2026-03-10T07:24:23.559855+0000 mon.a (mon.0) 538 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-10T07:24:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:24 vm00 bash[28005]: audit 2026-03-10T07:24:23.560520+0000 mon.a (mon.0) 539 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:24 vm00 bash[28005]: audit 2026-03-10T07:24:23.560520+0000 mon.a (mon.0) 539 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:24.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:24 vm00 bash[20701]: audit 2026-03-10T07:24:23.552065+0000 mon.b (mon.1) 11 : audit [INF] from='client.? 192.168.123.103:0/268220065' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a6dfdf0a-06d2-49ea-8222-a0f8f776983e"}]: dispatch 2026-03-10T07:24:24.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:24 vm00 bash[20701]: audit 2026-03-10T07:24:23.552065+0000 mon.b (mon.1) 11 : audit [INF] from='client.? 192.168.123.103:0/268220065' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a6dfdf0a-06d2-49ea-8222-a0f8f776983e"}]: dispatch 2026-03-10T07:24:24.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:24 vm00 bash[20701]: audit 2026-03-10T07:24:23.552618+0000 mon.a (mon.0) 536 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a6dfdf0a-06d2-49ea-8222-a0f8f776983e"}]: dispatch 2026-03-10T07:24:24.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:24 vm00 bash[20701]: audit 2026-03-10T07:24:23.552618+0000 mon.a (mon.0) 536 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a6dfdf0a-06d2-49ea-8222-a0f8f776983e"}]: dispatch 2026-03-10T07:24:24.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:24 vm00 bash[20701]: audit 2026-03-10T07:24:23.555875+0000 mon.a (mon.0) 537 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a6dfdf0a-06d2-49ea-8222-a0f8f776983e"}]': finished 2026-03-10T07:24:24.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:24 vm00 bash[20701]: audit 2026-03-10T07:24:23.555875+0000 mon.a (mon.0) 537 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a6dfdf0a-06d2-49ea-8222-a0f8f776983e"}]': finished 2026-03-10T07:24:24.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:24 vm00 bash[20701]: cluster 2026-03-10T07:24:23.559855+0000 mon.a (mon.0) 538 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-10T07:24:24.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:24 vm00 bash[20701]: cluster 2026-03-10T07:24:23.559855+0000 mon.a (mon.0) 538 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-10T07:24:24.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:24 vm00 bash[20701]: audit 2026-03-10T07:24:23.560520+0000 mon.a (mon.0) 539 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:24.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:24 vm00 bash[20701]: audit 2026-03-10T07:24:23.560520+0000 mon.a (mon.0) 539 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:25.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:25 vm00 bash[28005]: cluster 2026-03-10T07:24:23.899859+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:25.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:25 vm00 bash[28005]: cluster 2026-03-10T07:24:23.899859+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:25.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:25 vm00 bash[28005]: audit 2026-03-10T07:24:24.184498+0000 mon.a (mon.0) 540 : audit [DBG] from='client.? 192.168.123.103:0/4212102320' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:24:25.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:25 vm00 bash[28005]: audit 2026-03-10T07:24:24.184498+0000 mon.a (mon.0) 540 : audit [DBG] from='client.? 
192.168.123.103:0/4212102320' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:24:25.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:25 vm00 bash[20701]: cluster 2026-03-10T07:24:23.899859+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:25.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:25 vm00 bash[20701]: cluster 2026-03-10T07:24:23.899859+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:25.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:25 vm00 bash[20701]: audit 2026-03-10T07:24:24.184498+0000 mon.a (mon.0) 540 : audit [DBG] from='client.? 192.168.123.103:0/4212102320' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:24:25.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:25 vm00 bash[20701]: audit 2026-03-10T07:24:24.184498+0000 mon.a (mon.0) 540 : audit [DBG] from='client.? 192.168.123.103:0/4212102320' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:24:25.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:25 vm03 bash[23382]: cluster 2026-03-10T07:24:23.899859+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:25.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:25 vm03 bash[23382]: cluster 2026-03-10T07:24:23.899859+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:25.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:25 vm03 bash[23382]: audit 2026-03-10T07:24:24.184498+0000 mon.a (mon.0) 540 : audit [DBG] from='client.? 192.168.123.103:0/4212102320' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:24:25.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:25 vm03 bash[23382]: audit 2026-03-10T07:24:24.184498+0000 mon.a (mon.0) 540 : audit [DBG] from='client.? 
192.168.123.103:0/4212102320' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:24:27.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:27 vm00 bash[28005]: cluster 2026-03-10T07:24:25.900137+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:27.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:27 vm00 bash[28005]: cluster 2026-03-10T07:24:25.900137+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:27.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:27 vm00 bash[20701]: cluster 2026-03-10T07:24:25.900137+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:27.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:27 vm00 bash[20701]: cluster 2026-03-10T07:24:25.900137+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:27.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:27 vm03 bash[23382]: cluster 2026-03-10T07:24:25.900137+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:27.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:27 vm03 bash[23382]: cluster 2026-03-10T07:24:25.900137+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:29.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:29 vm03 bash[23382]: cluster 2026-03-10T07:24:27.900412+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:29.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:29 vm03 bash[23382]: cluster 2026-03-10T07:24:27.900412+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:29.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:29 vm00 bash[28005]: cluster 2026-03-10T07:24:27.900412+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:29.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:29 vm00 bash[28005]: cluster 2026-03-10T07:24:27.900412+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:29.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:29 vm00 bash[20701]: cluster 2026-03-10T07:24:27.900412+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:29.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:29 vm00 bash[20701]: cluster 2026-03-10T07:24:27.900412+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:31.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:31 vm03 bash[23382]: cluster 2026-03-10T07:24:29.900749+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:31.519 
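Stepping back, the sequence from 07:24:18 to 07:24:24 above is the whole OSD-creation handshake for this test: the client dispatches "orch daemon add osd vm03:/dev/vdc" as an mgr_command (the teuthology.orchestra stderr block is the ceph CLI's msgr2 client walking BANNER → HELLO → AUTH → READY against mon.a and the active mgr before it can deliver that command), cephadm then runs ceph-volume on vm03, and ceph-volume uses the client.bootstrap-osd identity to claim a fresh id ("osd new" with the new OSD's uuid, yielding osd.6 and osdmap e41) and to fetch the monmap ("mon getmap") for the initial mkfs. A hedged sketch of those two monitor calls with python-rados; ceph-volume really makes them with the bootstrap-osd keyring, and the exact return payloads may differ by release:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # admin creds assumed
    cluster.connect()
    try:
        # "osd new <uuid>": allocate (or re-resolve) an OSD id for this uuid;
        # the run above got osd.6 back.
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "osd new",
                        "uuid": "a6dfdf0a-06d2-49ea-8222-a0f8f776983e"}), b'')
        print("osd new ->", ret, out, errs)

        # "mon getmap": the monmap that gets baked into the new OSD's store.
        ret, monmap, errs = cluster.mon_command(
            json.dumps({"prefix": "mon getmap"}), b'')
        print("monmap:", len(monmap), "bytes")
    finally:
        cluster.shutdown()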
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:31 vm03 bash[23382]: cluster 2026-03-10T07:24:29.900749+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:31.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:31 vm00 bash[28005]: cluster 2026-03-10T07:24:29.900749+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:31.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:31 vm00 bash[28005]: cluster 2026-03-10T07:24:29.900749+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:31.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:31 vm00 bash[20701]: cluster 2026-03-10T07:24:29.900749+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:31.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:31 vm00 bash[20701]: cluster 2026-03-10T07:24:29.900749+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:33.488 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:33 vm03 bash[23382]: cluster 2026-03-10T07:24:31.901009+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:33.488 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:33 vm03 bash[23382]: cluster 2026-03-10T07:24:31.901009+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:33.488 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:33 vm03 bash[23382]: audit 2026-03-10T07:24:32.962659+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T07:24:33.488 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:33 vm03 bash[23382]: audit 2026-03-10T07:24:32.962659+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T07:24:33.488 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:33 vm03 bash[23382]: audit 2026-03-10T07:24:32.963556+0000 mon.a (mon.0) 542 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:24:33.488 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:33 vm03 bash[23382]: audit 2026-03-10T07:24:32.963556+0000 mon.a (mon.0) 542 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:24:33.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:33 vm00 bash[28005]: cluster 2026-03-10T07:24:31.901009+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:33.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:33 vm00 bash[28005]: cluster 2026-03-10T07:24:31.901009+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:33.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:33 
vm00 bash[28005]: audit 2026-03-10T07:24:32.962659+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T07:24:33.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:33 vm00 bash[28005]: audit 2026-03-10T07:24:32.962659+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T07:24:33.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:33 vm00 bash[28005]: audit 2026-03-10T07:24:32.963556+0000 mon.a (mon.0) 542 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:24:33.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:33 vm00 bash[28005]: audit 2026-03-10T07:24:32.963556+0000 mon.a (mon.0) 542 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:24:33.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:33 vm00 bash[20701]: cluster 2026-03-10T07:24:31.901009+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:33.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:33 vm00 bash[20701]: cluster 2026-03-10T07:24:31.901009+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:33.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:33 vm00 bash[20701]: audit 2026-03-10T07:24:32.962659+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T07:24:33.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:33 vm00 bash[20701]: audit 2026-03-10T07:24:32.962659+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T07:24:33.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:33 vm00 bash[20701]: audit 2026-03-10T07:24:32.963556+0000 mon.a (mon.0) 542 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:24:33.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:33 vm00 bash[20701]: audit 2026-03-10T07:24:32.963556+0000 mon.a (mon.0) 542 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:24:34.103 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:33 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:24:34.103 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:34 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:24:34.103 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:24:33 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:24:34.103 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:24:34 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:24:34.103 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 07:24:33 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:24:34.103 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 07:24:34 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:24:34.104 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 07:24:33 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:24:34.104 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 07:24:34 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
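The systemd complaints above come from cephadm's generated unit template (/etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service, line 23), which sets KillMode=none so that systemd leaves stopping the containerized daemons to podman; systemd deprecates that mode and re-parses the template for every daemon instance on vm03 while units are reloaded during the osd.6 deploy, hence one warning per running daemon's journal. A quick sketch for spotting units still carrying the deprecated setting on a host (nothing cephadm-specific):

    from pathlib import Path

    # Flag cephadm-era units that still use the deprecated KillMode=none.
    for unit in sorted(Path('/etc/systemd/system').glob('ceph-*.service')):
        if 'KillMode=none' in unit.read_text(errors='replace'):
            print(f"{unit.name}: KillMode=none (systemd suggests 'mixed' "
                  f"or 'control-group')")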
2026-03-10T07:24:34.474 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:34 vm03 bash[23382]: cephadm 2026-03-10T07:24:32.964342+0000 mgr.y (mgr.14150) 196 : cephadm [INF] Deploying daemon osd.6 on vm03 2026-03-10T07:24:34.474 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:34 vm03 bash[23382]: cephadm 2026-03-10T07:24:32.964342+0000 mgr.y (mgr.14150) 196 : cephadm [INF] Deploying daemon osd.6 on vm03 2026-03-10T07:24:34.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:34 vm00 bash[28005]: cephadm 2026-03-10T07:24:32.964342+0000 mgr.y (mgr.14150) 196 : cephadm [INF] Deploying daemon osd.6 on vm03 2026-03-10T07:24:34.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:34 vm00 bash[28005]: cephadm 2026-03-10T07:24:32.964342+0000 mgr.y (mgr.14150) 196 : cephadm [INF] Deploying daemon osd.6 on vm03 2026-03-10T07:24:34.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:34 vm00 bash[20701]: cephadm 2026-03-10T07:24:32.964342+0000 mgr.y (mgr.14150) 196 : cephadm [INF] Deploying daemon osd.6 on vm03 2026-03-10T07:24:34.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:34 vm00 bash[20701]: cephadm 2026-03-10T07:24:32.964342+0000 mgr.y (mgr.14150) 196 : cephadm [INF] Deploying daemon osd.6 on vm03 2026-03-10T07:24:35.468 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:35 vm03 bash[23382]: cluster 2026-03-10T07:24:33.901380+0000 mgr.y (mgr.14150) 197 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:35.468 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:35 vm03 bash[23382]: cluster 2026-03-10T07:24:33.901380+0000 mgr.y (mgr.14150) 197 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:35.468 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:35 vm03 bash[23382]: audit 2026-03-10T07:24:34.213555+0000 mon.a (mon.0) 543 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:24:35.468 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:35 vm03 bash[23382]: audit 2026-03-10T07:24:34.213555+0000 mon.a (mon.0) 543 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:24:35.468 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:35 vm03 bash[23382]: audit 2026-03-10T07:24:34.218316+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:35.468 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:35 vm03 bash[23382]: audit 2026-03-10T07:24:34.218316+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:35.468 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:35 vm03 bash[23382]: audit 2026-03-10T07:24:34.224494+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:35.468 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:35 vm03 bash[23382]: audit 2026-03-10T07:24:34.224494+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:35.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:35 vm00 bash[28005]: cluster 2026-03-10T07:24:33.901380+0000 mgr.y (mgr.14150) 197 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:35.635 
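Just before the "Deploying daemon osd.6 on vm03" line, the audit trail shows the mgr assembling the files the new daemon needs: "auth get osd.6" for its keyring and "config generate-minimal-conf" for the pared-down ceph.conf that cephadm drops under /var/lib/ceph/<fsid>/osd.6/ (the same per-daemon config dir the CLI inferred for mon.b earlier in this log). A sketch of that gathering step; the helper names are illustrative, not cephadm's:

    import json

    def mon(cluster, **cmd):
        # cluster: a connected rados.Rados handle, as in the sketch above
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
        assert ret == 0, errs
        return out.decode()

    def gather_daemon_payload(cluster, daemon='osd.6'):
        keyring = mon(cluster, prefix='auth get', entity=daemon)
        minimal_conf = mon(cluster, prefix='config generate-minimal-conf')
        return {'config': minimal_conf, 'keyring': keyring}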
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:35 vm00 bash[28005]: cluster 2026-03-10T07:24:33.901380+0000 mgr.y (mgr.14150) 197 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:35.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:35 vm00 bash[28005]: audit 2026-03-10T07:24:34.213555+0000 mon.a (mon.0) 543 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:24:35.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:35 vm00 bash[28005]: audit 2026-03-10T07:24:34.213555+0000 mon.a (mon.0) 543 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:24:35.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:35 vm00 bash[28005]: audit 2026-03-10T07:24:34.218316+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:35.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:35 vm00 bash[28005]: audit 2026-03-10T07:24:34.218316+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:35.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:35 vm00 bash[28005]: audit 2026-03-10T07:24:34.224494+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:35.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:35 vm00 bash[28005]: audit 2026-03-10T07:24:34.224494+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:35 vm00 bash[20701]: cluster 2026-03-10T07:24:33.901380+0000 mgr.y (mgr.14150) 197 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:35 vm00 bash[20701]: cluster 2026-03-10T07:24:33.901380+0000 mgr.y (mgr.14150) 197 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:35 vm00 bash[20701]: audit 2026-03-10T07:24:34.213555+0000 mon.a (mon.0) 543 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:24:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:35 vm00 bash[20701]: audit 2026-03-10T07:24:34.213555+0000 mon.a (mon.0) 543 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:24:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:35 vm00 bash[20701]: audit 2026-03-10T07:24:34.218316+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:35 vm00 bash[20701]: audit 2026-03-10T07:24:34.218316+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:35 vm00 bash[20701]: audit 2026-03-10T07:24:34.224494+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:35.635 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:35 vm00 bash[20701]: audit 2026-03-10T07:24:34.224494+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:37.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:37 vm03 bash[23382]: cluster 2026-03-10T07:24:35.901682+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:37.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:37 vm03 bash[23382]: cluster 2026-03-10T07:24:35.901682+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:37.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:37 vm00 bash[28005]: cluster 2026-03-10T07:24:35.901682+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:37.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:37 vm00 bash[28005]: cluster 2026-03-10T07:24:35.901682+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:37.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:37 vm00 bash[20701]: cluster 2026-03-10T07:24:35.901682+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:37.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:37 vm00 bash[20701]: cluster 2026-03-10T07:24:35.901682+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:38.248 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:38 vm03 bash[23382]: audit 2026-03-10T07:24:37.876233+0000 mon.b (mon.1) 12 : audit [INF] from='osd.6 [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T07:24:38.248 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:38 vm03 bash[23382]: audit 2026-03-10T07:24:37.876233+0000 mon.b (mon.1) 12 : audit [INF] from='osd.6 [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T07:24:38.248 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:38 vm03 bash[23382]: audit 2026-03-10T07:24:37.876708+0000 mon.a (mon.0) 546 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T07:24:38.248 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:38 vm03 bash[23382]: audit 2026-03-10T07:24:37.876708+0000 mon.a (mon.0) 546 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T07:24:38.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:38 vm00 bash[28005]: audit 2026-03-10T07:24:37.876233+0000 mon.b (mon.1) 12 : audit [INF] from='osd.6 [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T07:24:38.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:38 vm00 bash[28005]: audit 2026-03-10T07:24:37.876233+0000 mon.b (mon.1) 12 : audit 
[INF] from='osd.6 [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T07:24:38.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:38 vm00 bash[28005]: audit 2026-03-10T07:24:37.876708+0000 mon.a (mon.0) 546 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T07:24:38.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:38 vm00 bash[28005]: audit 2026-03-10T07:24:37.876708+0000 mon.a (mon.0) 546 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T07:24:38.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:38 vm00 bash[20701]: audit 2026-03-10T07:24:37.876233+0000 mon.b (mon.1) 12 : audit [INF] from='osd.6 [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T07:24:38.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:38 vm00 bash[20701]: audit 2026-03-10T07:24:37.876233+0000 mon.b (mon.1) 12 : audit [INF] from='osd.6 [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T07:24:38.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:38 vm00 bash[20701]: audit 2026-03-10T07:24:37.876708+0000 mon.a (mon.0) 546 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T07:24:38.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:38 vm00 bash[20701]: audit 2026-03-10T07:24:37.876708+0000 mon.a (mon.0) 546 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T07:24:39.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:39 vm03 bash[23382]: cluster 2026-03-10T07:24:37.901983+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:39.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:39 vm03 bash[23382]: cluster 2026-03-10T07:24:37.901983+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:39.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:39 vm03 bash[23382]: audit 2026-03-10T07:24:38.246304+0000 mon.a (mon.0) 547 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T07:24:39.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:39 vm03 bash[23382]: audit 2026-03-10T07:24:38.246304+0000 mon.a (mon.0) 547 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T07:24:39.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:39 vm03 bash[23382]: audit 2026-03-10T07:24:38.252320+0000 mon.b (mon.1) 13 : audit [INF] from='osd.6 [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T07:24:39.518 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:39 vm03 bash[23382]: audit 2026-03-10T07:24:38.252320+0000 mon.b (mon.1) 13 : audit [INF] from='osd.6 [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T07:24:39.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:39 vm03 bash[23382]: cluster 2026-03-10T07:24:38.277583+0000 mon.a (mon.0) 548 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-10T07:24:39.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:39 vm03 bash[23382]: cluster 2026-03-10T07:24:38.277583+0000 mon.a (mon.0) 548 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-10T07:24:39.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:39 vm03 bash[23382]: audit 2026-03-10T07:24:38.277891+0000 mon.a (mon.0) 549 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:39.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:39 vm03 bash[23382]: audit 2026-03-10T07:24:38.277891+0000 mon.a (mon.0) 549 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:39.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:39 vm03 bash[23382]: audit 2026-03-10T07:24:38.278002+0000 mon.a (mon.0) 550 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T07:24:39.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:39 vm03 bash[23382]: audit 2026-03-10T07:24:38.278002+0000 mon.a (mon.0) 550 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T07:24:39.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:39 vm00 bash[28005]: cluster 2026-03-10T07:24:37.901983+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:39.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:39 vm00 bash[28005]: cluster 2026-03-10T07:24:37.901983+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:39.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:39 vm00 bash[28005]: audit 2026-03-10T07:24:38.246304+0000 mon.a (mon.0) 547 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T07:24:39.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:39 vm00 bash[28005]: audit 2026-03-10T07:24:38.246304+0000 mon.a (mon.0) 547 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T07:24:39.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:39 vm00 bash[28005]: audit 2026-03-10T07:24:38.252320+0000 mon.b (mon.1) 13 : audit [INF] from='osd.6 [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T07:24:39.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:39 vm00 bash[28005]: audit 
2026-03-10T07:24:38.252320+0000 mon.b (mon.1) 13 : audit [INF] from='osd.6 [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T07:24:39.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:39 vm00 bash[28005]: cluster 2026-03-10T07:24:38.277583+0000 mon.a (mon.0) 548 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-10T07:24:39.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:39 vm00 bash[28005]: cluster 2026-03-10T07:24:38.277583+0000 mon.a (mon.0) 548 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-10T07:24:39.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:39 vm00 bash[28005]: audit 2026-03-10T07:24:38.277891+0000 mon.a (mon.0) 549 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:39.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:39 vm00 bash[28005]: audit 2026-03-10T07:24:38.277891+0000 mon.a (mon.0) 549 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:39.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:39 vm00 bash[28005]: audit 2026-03-10T07:24:38.278002+0000 mon.a (mon.0) 550 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T07:24:39.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:39 vm00 bash[28005]: audit 2026-03-10T07:24:38.278002+0000 mon.a (mon.0) 550 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T07:24:39.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:39 vm00 bash[20701]: cluster 2026-03-10T07:24:37.901983+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:39.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:39 vm00 bash[20701]: cluster 2026-03-10T07:24:37.901983+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:39.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:39 vm00 bash[20701]: audit 2026-03-10T07:24:38.246304+0000 mon.a (mon.0) 547 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T07:24:39.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:39 vm00 bash[20701]: audit 2026-03-10T07:24:38.246304+0000 mon.a (mon.0) 547 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T07:24:39.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:39 vm00 bash[20701]: audit 2026-03-10T07:24:38.252320+0000 mon.b (mon.1) 13 : audit [INF] from='osd.6 [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T07:24:39.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:39 vm00 bash[20701]: audit 2026-03-10T07:24:38.252320+0000 mon.b (mon.1) 13 : audit [INF] from='osd.6 
[v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T07:24:39.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:39 vm00 bash[20701]: cluster 2026-03-10T07:24:38.277583+0000 mon.a (mon.0) 548 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-10T07:24:39.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:39 vm00 bash[20701]: cluster 2026-03-10T07:24:38.277583+0000 mon.a (mon.0) 548 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-10T07:24:39.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:39 vm00 bash[20701]: audit 2026-03-10T07:24:38.277891+0000 mon.a (mon.0) 549 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:39.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:39 vm00 bash[20701]: audit 2026-03-10T07:24:38.277891+0000 mon.a (mon.0) 549 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:39.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:39 vm00 bash[20701]: audit 2026-03-10T07:24:38.278002+0000 mon.a (mon.0) 550 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T07:24:39.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:39 vm00 bash[20701]: audit 2026-03-10T07:24:38.278002+0000 mon.a (mon.0) 550 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T07:24:40.481 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:40 vm03 bash[23382]: audit 2026-03-10T07:24:39.255366+0000 mon.a (mon.0) 551 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T07:24:40.481 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:40 vm03 bash[23382]: audit 2026-03-10T07:24:39.255366+0000 mon.a (mon.0) 551 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T07:24:40.481 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:40 vm03 bash[23382]: cluster 2026-03-10T07:24:39.260449+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-10T07:24:40.481 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:40 vm03 bash[23382]: cluster 2026-03-10T07:24:39.260449+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-10T07:24:40.481 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:40 vm03 bash[23382]: audit 2026-03-10T07:24:39.261818+0000 mon.a (mon.0) 553 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:40.481 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:40 vm03 bash[23382]: audit 2026-03-10T07:24:39.261818+0000 mon.a (mon.0) 553 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:40.481 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:40 vm03 bash[23382]: audit 2026-03-10T07:24:39.267299+0000 mon.a (mon.0) 554 
: audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:40.481 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:40 vm03 bash[23382]: audit 2026-03-10T07:24:39.267299+0000 mon.a (mon.0) 554 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:40.482 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:40 vm03 bash[23382]: audit 2026-03-10T07:24:40.263398+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:40.482 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:40 vm03 bash[23382]: audit 2026-03-10T07:24:40.263398+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:40.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:40 vm00 bash[28005]: audit 2026-03-10T07:24:39.255366+0000 mon.a (mon.0) 551 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T07:24:40.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:40 vm00 bash[28005]: audit 2026-03-10T07:24:39.255366+0000 mon.a (mon.0) 551 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T07:24:40.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:40 vm00 bash[28005]: cluster 2026-03-10T07:24:39.260449+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-10T07:24:40.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:40 vm00 bash[28005]: cluster 2026-03-10T07:24:39.260449+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-10T07:24:40.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:40 vm00 bash[28005]: audit 2026-03-10T07:24:39.261818+0000 mon.a (mon.0) 553 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:40.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:40 vm00 bash[28005]: audit 2026-03-10T07:24:39.261818+0000 mon.a (mon.0) 553 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:40.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:40 vm00 bash[28005]: audit 2026-03-10T07:24:39.267299+0000 mon.a (mon.0) 554 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:40.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:40 vm00 bash[28005]: audit 2026-03-10T07:24:39.267299+0000 mon.a (mon.0) 554 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:40.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:40 vm00 bash[28005]: audit 2026-03-10T07:24:40.263398+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:40.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:40 vm00 bash[28005]: audit 2026-03-10T07:24:40.263398+0000 mon.a (mon.0) 555 : 
audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:40.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:40 vm00 bash[20701]: audit 2026-03-10T07:24:39.255366+0000 mon.a (mon.0) 551 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T07:24:40.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:40 vm00 bash[20701]: audit 2026-03-10T07:24:39.255366+0000 mon.a (mon.0) 551 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T07:24:40.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:40 vm00 bash[20701]: cluster 2026-03-10T07:24:39.260449+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-10T07:24:40.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:40 vm00 bash[20701]: cluster 2026-03-10T07:24:39.260449+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-10T07:24:40.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:40 vm00 bash[20701]: audit 2026-03-10T07:24:39.261818+0000 mon.a (mon.0) 553 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:40.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:40 vm00 bash[20701]: audit 2026-03-10T07:24:39.261818+0000 mon.a (mon.0) 553 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:40.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:40 vm00 bash[20701]: audit 2026-03-10T07:24:39.267299+0000 mon.a (mon.0) 554 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:40.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:40 vm00 bash[20701]: audit 2026-03-10T07:24:39.267299+0000 mon.a (mon.0) 554 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:40.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:40 vm00 bash[20701]: audit 2026-03-10T07:24:40.263398+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:40.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:40 vm00 bash[20701]: audit 2026-03-10T07:24:40.263398+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:41.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:41 vm03 bash[23382]: cluster 2026-03-10T07:24:38.896061+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T07:24:41.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:41 vm03 bash[23382]: cluster 2026-03-10T07:24:38.896061+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T07:24:41.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:41 vm03 bash[23382]: cluster 2026-03-10T07:24:38.896131+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T07:24:41.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:41 vm03 bash[23382]: cluster 
2026-03-10T07:24:38.896131+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T07:24:41.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:41 vm03 bash[23382]: cluster 2026-03-10T07:24:39.902301+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v177: 1 pgs: 1 unknown; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:41.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:41 vm03 bash[23382]: cluster 2026-03-10T07:24:39.902301+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v177: 1 pgs: 1 unknown; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T07:24:41.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:41 vm03 bash[23382]: cluster 2026-03-10T07:24:40.275962+0000 mon.a (mon.0) 556 : cluster [INF] osd.6 [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252] boot 2026-03-10T07:24:41.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:41 vm03 bash[23382]: cluster 2026-03-10T07:24:40.275962+0000 mon.a (mon.0) 556 : cluster [INF] osd.6 [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252] boot 2026-03-10T07:24:41.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:41 vm03 bash[23382]: cluster 2026-03-10T07:24:40.276058+0000 mon.a (mon.0) 557 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-10T07:24:41.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:41 vm03 bash[23382]: cluster 2026-03-10T07:24:40.276058+0000 mon.a (mon.0) 557 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-10T07:24:41.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:41 vm03 bash[23382]: audit 2026-03-10T07:24:40.278444+0000 mon.a (mon.0) 558 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:41.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:41 vm03 bash[23382]: audit 2026-03-10T07:24:40.278444+0000 mon.a (mon.0) 558 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:24:41.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:41 vm03 bash[23382]: audit 2026-03-10T07:24:40.505887+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:41.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:41 vm03 bash[23382]: audit 2026-03-10T07:24:40.505887+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:41.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:41 vm03 bash[23382]: audit 2026-03-10T07:24:40.514929+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:41.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:41 vm03 bash[23382]: audit 2026-03-10T07:24:40.514929+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:41.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:41 vm03 bash[23382]: cluster 2026-03-10T07:24:40.799126+0000 mon.a (mon.0) 561 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-10T07:24:41.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:41 vm03 bash[23382]: cluster 2026-03-10T07:24:40.799126+0000 mon.a (mon.0) 561 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-10T07:24:41.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:41 vm03 bash[23382]: audit 2026-03-10T07:24:40.964700+0000 mon.a (mon.0) 562 : audit [DBG] 
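The burst above is the standard OSD start-up sequence: the new osd.6 tags itself with a device class, then registers its capacity under its host in the CRUSH map (the weight is conventionally the device size in TiB), after which the mon marks it up in osdmap e44. A minimal by-hand sketch of the same registration with the ceph CLI, for illustration only (the daemon issues these mon commands itself on boot; values taken from the audit lines above):

    # tag the OSD with a device class, then place it under its host in the default root
    ceph osd crush set-device-class hdd osd.6
    ceph osd crush create-or-move osd.6 0.0195 host=vm03 root=default
    ceph osd tree   # verify osd.6 now appears under host vm03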
2026-03-10T07:24:41.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:41 vm03 bash[23382]: audit 2026-03-10T07:24:40.965555+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:24:41.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:41 vm03 bash[23382]: audit 2026-03-10T07:24:40.971549+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:41.591 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 6 on host 'vm03'
2026-03-10T07:24:41.591 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:41.585+0000 7f3df1ffb640 1 -- 192.168.123.103:0/2406526049 <== mgr.14150 v2:192.168.123.100:6800/2669938860 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f3dbc002bf0 con 0x7f3dd4077610
2026-03-10T07:24:41.591 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:41.589+0000 7f3dfb237640 1 -- 192.168.123.103:0/2406526049 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f3dd4077610 msgr2=0x7f3dd4079ad0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:24:41.591 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:41.589+0000 7f3dfb237640 1 --2- 192.168.123.103:0/2406526049 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f3dd4077610 0x7f3dd4079ad0 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f3de4005e00 tx=0x7f3de400a250 comp rx=0 tx=0).stop
2026-03-10T07:24:41.591 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:41.589+0000 7f3dfb237640 1 -- 192.168.123.103:0/2406526049 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3df406b860 msgr2=0x7f3df4111a40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:24:41.591 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:41.589+0000 7f3dfb237640 1 --2- 192.168.123.103:0/2406526049 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3df406b860 0x7f3df4111a40 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7f3de000ab50 tx=0x7f3de0044db0 comp rx=0 tx=0).stop
2026-03-10T07:24:41.592 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:41.589+0000 7f3dfb237640 1 -- 192.168.123.103:0/2406526049 shutdown_connections
2026-03-10T07:24:41.592 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:41.589+0000 7f3dfb237640 1 --2- 192.168.123.103:0/2406526049 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f3dd4077610 0x7f3dd4079ad0 unknown :-1 s=CLOSED pgs=82 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:24:41.592 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:41.589+0000 7f3dfb237640 1 --2- 192.168.123.103:0/2406526049 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3df410faf0 0x7f3df41105d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:24:41.592 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:41.589+0000 7f3dfb237640 1 --2- 192.168.123.103:0/2406526049 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3df410adf0 0x7f3df4110090 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:24:41.592 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:41.589+0000 7f3dfb237640 1 --2- 192.168.123.103:0/2406526049 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3df406b860 0x7f3df4111a40 unknown :-1 s=CLOSED pgs=125 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:24:41.592 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:41.589+0000 7f3dfb237640 1 -- 192.168.123.103:0/2406526049 >> 192.168.123.103:0/2406526049 conn(0x7f3df406fc70 msgr2=0x7f3df410db50 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:24:41.592 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:41.589+0000 7f3dfb237640 1 -- 192.168.123.103:0/2406526049 shutdown_connections
2026-03-10T07:24:41.592 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:41.589+0000 7f3dfb237640 1 -- 192.168.123.103:0/2406526049 wait complete.
2026-03-10T07:24:41.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:41 vm00 bash[28005]: cluster 2026-03-10T07:24:38.896061+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:24:41.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:41 vm00 bash[28005]: cluster 2026-03-10T07:24:38.896131+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:24:41.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:41 vm00 bash[28005]: cluster 2026-03-10T07:24:39.902301+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v177: 1 pgs: 1 unknown; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T07:24:41.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:41 vm00 bash[28005]: cluster 2026-03-10T07:24:40.275962+0000 mon.a (mon.0) 556 : cluster [INF] osd.6 [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252] boot
2026-03-10T07:24:41.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:41 vm00 bash[28005]: cluster 2026-03-10T07:24:40.276058+0000 mon.a (mon.0) 557 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in
2026-03-10T07:24:41.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:41 vm00 bash[28005]: audit 2026-03-10T07:24:40.278444+0000 mon.a (mon.0) 558 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:24:41.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:41 vm00 bash[28005]: audit 2026-03-10T07:24:40.505887+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:41.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:41 vm00 bash[28005]: audit 2026-03-10T07:24:40.514929+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:41.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:41 vm00 bash[28005]: cluster 2026-03-10T07:24:40.799126+0000 mon.a (mon.0) 561 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in
2026-03-10T07:24:41.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:41 vm00 bash[28005]: audit 2026-03-10T07:24:40.964700+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:24:41.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:41 vm00 bash[28005]: audit 2026-03-10T07:24:40.965555+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:24:41.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:41 vm00 bash[28005]: audit 2026-03-10T07:24:40.971549+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:41.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:41 vm00 bash[20701]: cluster 2026-03-10T07:24:38.896061+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:24:41.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:41 vm00 bash[20701]: cluster 2026-03-10T07:24:38.896131+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:24:41.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:41 vm00 bash[20701]: cluster 2026-03-10T07:24:39.902301+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v177: 1 pgs: 1 unknown; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T07:24:41.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:41 vm00 bash[20701]: cluster 2026-03-10T07:24:40.275962+0000 mon.a (mon.0) 556 : cluster [INF] osd.6 [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252] boot
2026-03-10T07:24:41.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:41 vm00 bash[20701]: cluster 2026-03-10T07:24:40.276058+0000 mon.a (mon.0) 557 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in
2026-03-10T07:24:41.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:41 vm00 bash[20701]: audit 2026-03-10T07:24:40.278444+0000 mon.a (mon.0) 558 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:24:41.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:41 vm00 bash[20701]: audit 2026-03-10T07:24:40.505887+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:41.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:41 vm00 bash[20701]: audit 2026-03-10T07:24:40.514929+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:41.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:41 vm00 bash[20701]: cluster 2026-03-10T07:24:40.799126+0000 mon.a (mon.0) 561 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in
2026-03-10T07:24:41.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:41 vm00 bash[20701]: audit 2026-03-10T07:24:40.964700+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:24:41.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:41 vm00 bash[20701]: audit 2026-03-10T07:24:40.965555+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:24:41.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:41 vm00 bash[20701]: audit 2026-03-10T07:24:40.971549+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:41.688 DEBUG:teuthology.orchestra.run.vm03:osd.6> sudo journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.6.service
2026-03-10T07:24:41.689 INFO:tasks.cephadm:Deploying osd.7 on vm03 with /dev/vdb...
2026-03-10T07:24:41.689 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- lvm zap /dev/vdb
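Each device is prepared the same way: the cephadm task zaps any LVM and partition metadata left on the disk, asks the cluster to build an OSD on it, and tails the new daemon's systemd unit (the journalctl -f line above, for osd.6) to capture its log. A rough by-hand equivalent of the per-device step, assuming the same fsid and image; the ceph orch call is an assumption about the follow-up step, not a line from this log:

    # wipe the device, exactly as the task does above
    sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        ceph-volume --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- lvm zap /dev/vdb
    # then create an OSD on it through the orchestrator
    sudo ceph orch daemon add osd vm03:/dev/vdb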
2026-03-10T07:24:42.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:42 vm00 bash[28005]: audit 2026-03-10T07:24:41.574849+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:24:42.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:42 vm00 bash[28005]: audit 2026-03-10T07:24:41.581157+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:42.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:42 vm00 bash[28005]: audit 2026-03-10T07:24:41.586337+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:42.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:42 vm00 bash[28005]: cluster 2026-03-10T07:24:41.804414+0000 mon.a (mon.0) 568 : cluster [DBG] osdmap e46: 7 total, 7 up, 7 in
2026-03-10T07:24:42.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:42 vm00 bash[20701]: audit 2026-03-10T07:24:41.574849+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:24:42.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:42 vm00 bash[20701]: audit 2026-03-10T07:24:41.581157+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:42.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:42 vm00 bash[20701]: audit 2026-03-10T07:24:41.586337+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:42.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:42 vm00 bash[20701]: cluster 2026-03-10T07:24:41.804414+0000 mon.a (mon.0) 568 : cluster [DBG] osdmap e46: 7 total, 7 up, 7 in
2026-03-10T07:24:42.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:42 vm03 bash[23382]: audit 2026-03-10T07:24:41.574849+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:24:42.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:42 vm03 bash[23382]: audit 2026-03-10T07:24:41.581157+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:42.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:42 vm03 bash[23382]: audit 2026-03-10T07:24:41.586337+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:42.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:42 vm03 bash[23382]: cluster 2026-03-10T07:24:41.804414+0000 mon.a (mon.0) 568 : cluster [DBG] osdmap e46: 7 total, 7 up, 7 in
2026-03-10T07:24:43.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:43 vm00 bash[28005]: cluster 2026-03-10T07:24:41.902641+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v181: 1 pgs: 1 peering; 0 B data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:24:43.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:43 vm00 bash[20701]: cluster 2026-03-10T07:24:41.902641+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v181: 1 pgs: 1 peering; 0 B data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:24:43.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:43 vm03 bash[23382]: cluster 2026-03-10T07:24:41.902641+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v181: 1 pgs: 1 peering; 0 B data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:24:45.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:45 vm00 bash[28005]: cluster 2026-03-10T07:24:43.902965+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v182: 1 pgs: 1 peering; 0 B data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:24:45.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:45 vm00 bash[20701]: cluster 2026-03-10T07:24:43.902965+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v182: 1 pgs: 1 peering; 0 B data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:24:45.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:45 vm03 bash[23382]: cluster 2026-03-10T07:24:43.902965+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v182: 1 pgs: 1 peering; 0 B data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:24:46.372 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.b/config
2026-03-10T07:24:47.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:47 vm03 bash[23382]: cluster 2026-03-10T07:24:45.903345+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v183: 1 pgs: 1 peering; 0 B data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:24:47.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:47 vm03 bash[23382]: audit 2026-03-10T07:24:47.169537+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:47.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:47 vm03 bash[23382]: audit 2026-03-10T07:24:47.173364+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:47.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:47 vm03 bash[23382]: audit 2026-03-10T07:24:47.174107+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:24:47.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:47 vm03 bash[23382]: audit 2026-03-10T07:24:47.174611+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:24:47.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:47 vm03 bash[23382]: audit 2026-03-10T07:24:47.174952+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:24:47.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:47 vm03 bash[23382]: audit 2026-03-10T07:24:47.175949+0000 mon.a (mon.0) 574 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:24:47.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:47 vm03 bash[23382]: audit 2026-03-10T07:24:47.176318+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:24:47.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:47 vm03 bash[23382]: audit 2026-03-10T07:24:47.180561+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:47.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:47 vm00 bash[28005]: cluster 2026-03-10T07:24:45.903345+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v183: 1 pgs: 1 peering; 0 B data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:24:47.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:47 vm00 bash[28005]: audit 2026-03-10T07:24:47.169537+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:47.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:47 vm00 bash[28005]: audit 2026-03-10T07:24:47.173364+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:47.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:47 vm00 bash[28005]: audit 2026-03-10T07:24:47.174107+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:24:47.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:47 vm00 bash[28005]: audit 2026-03-10T07:24:47.174611+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:24:47.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:47 vm00 bash[28005]: audit 2026-03-10T07:24:47.174952+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:24:47.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:47 vm00 bash[28005]: audit 2026-03-10T07:24:47.175949+0000 mon.a (mon.0) 574 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:24:47.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:47 vm00 bash[28005]: audit 2026-03-10T07:24:47.176318+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:24:47.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:47 vm00 bash[28005]: audit 2026-03-10T07:24:47.180561+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:47.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:47 vm00 bash[20701]: cluster 2026-03-10T07:24:45.903345+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v183: 1 pgs: 1 peering; 0 B data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:24:47.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:47 vm00 bash[20701]: audit 2026-03-10T07:24:47.169537+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:47.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:47 vm00 bash[20701]: audit 2026-03-10T07:24:47.173364+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:24:47.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:47 vm00 bash[20701]: audit 2026-03-10T07:24:47.174107+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:24:47.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:47 vm00 bash[20701]: audit 2026-03-10T07:24:47.174611+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name":
"osd_memory_target"}]: dispatch 2026-03-10T07:24:47.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:47 vm00 bash[20701]: audit 2026-03-10T07:24:47.174611+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:24:47.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:47 vm00 bash[20701]: audit 2026-03-10T07:24:47.174952+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:24:47.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:47 vm00 bash[20701]: audit 2026-03-10T07:24:47.174952+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:24:47.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:47 vm00 bash[20701]: audit 2026-03-10T07:24:47.175949+0000 mon.a (mon.0) 574 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:24:47.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:47 vm00 bash[20701]: audit 2026-03-10T07:24:47.175949+0000 mon.a (mon.0) 574 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:24:47.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:47 vm00 bash[20701]: audit 2026-03-10T07:24:47.176318+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:24:47.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:47 vm00 bash[20701]: audit 2026-03-10T07:24:47.176318+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:24:47.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:47 vm00 bash[20701]: audit 2026-03-10T07:24:47.180561+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:47.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:47 vm00 bash[20701]: audit 2026-03-10T07:24:47.180561+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:24:47.893 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T07:24:47.905 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph orch daemon add osd vm03:/dev/vdb 2026-03-10T07:24:48.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:48 vm00 bash[28005]: cephadm 2026-03-10T07:24:47.163601+0000 mgr.y (mgr.14150) 204 : cephadm [INF] Detected new or changed devices on vm03 2026-03-10T07:24:48.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:48 vm00 bash[28005]: cephadm 2026-03-10T07:24:47.163601+0000 mgr.y (mgr.14150) 204 : cephadm [INF] Detected new or changed devices on vm03 2026-03-10T07:24:48.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:48 vm00 bash[28005]: cephadm 2026-03-10T07:24:47.175253+0000 mgr.y (mgr.14150) 205 : cephadm [INF] 
Adjusting osd_memory_target on vm03 to 151.9M 2026-03-10T07:24:48.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:48 vm00 bash[28005]: cephadm 2026-03-10T07:24:47.175253+0000 mgr.y (mgr.14150) 205 : cephadm [INF] Adjusting osd_memory_target on vm03 to 151.9M 2026-03-10T07:24:48.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:48 vm00 bash[28005]: cephadm 2026-03-10T07:24:47.175615+0000 mgr.y (mgr.14150) 206 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 159305318: error parsing value: Value '159305318' is below minimum 939524096 2026-03-10T07:24:48.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:48 vm00 bash[28005]: cephadm 2026-03-10T07:24:47.175615+0000 mgr.y (mgr.14150) 206 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 159305318: error parsing value: Value '159305318' is below minimum 939524096 2026-03-10T07:24:48.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:48 vm00 bash[20701]: cephadm 2026-03-10T07:24:47.163601+0000 mgr.y (mgr.14150) 204 : cephadm [INF] Detected new or changed devices on vm03 2026-03-10T07:24:48.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:48 vm00 bash[20701]: cephadm 2026-03-10T07:24:47.163601+0000 mgr.y (mgr.14150) 204 : cephadm [INF] Detected new or changed devices on vm03 2026-03-10T07:24:48.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:48 vm00 bash[20701]: cephadm 2026-03-10T07:24:47.175253+0000 mgr.y (mgr.14150) 205 : cephadm [INF] Adjusting osd_memory_target on vm03 to 151.9M 2026-03-10T07:24:48.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:48 vm00 bash[20701]: cephadm 2026-03-10T07:24:47.175253+0000 mgr.y (mgr.14150) 205 : cephadm [INF] Adjusting osd_memory_target on vm03 to 151.9M 2026-03-10T07:24:48.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:48 vm00 bash[20701]: cephadm 2026-03-10T07:24:47.175615+0000 mgr.y (mgr.14150) 206 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 159305318: error parsing value: Value '159305318' is below minimum 939524096 2026-03-10T07:24:48.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:48 vm00 bash[20701]: cephadm 2026-03-10T07:24:47.175615+0000 mgr.y (mgr.14150) 206 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 159305318: error parsing value: Value '159305318' is below minimum 939524096 2026-03-10T07:24:48.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:48 vm03 bash[23382]: cephadm 2026-03-10T07:24:47.163601+0000 mgr.y (mgr.14150) 204 : cephadm [INF] Detected new or changed devices on vm03 2026-03-10T07:24:48.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:48 vm03 bash[23382]: cephadm 2026-03-10T07:24:47.163601+0000 mgr.y (mgr.14150) 204 : cephadm [INF] Detected new or changed devices on vm03 2026-03-10T07:24:48.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:48 vm03 bash[23382]: cephadm 2026-03-10T07:24:47.175253+0000 mgr.y (mgr.14150) 205 : cephadm [INF] Adjusting osd_memory_target on vm03 to 151.9M 2026-03-10T07:24:48.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:48 vm03 bash[23382]: cephadm 2026-03-10T07:24:47.175253+0000 mgr.y (mgr.14150) 205 : cephadm [INF] Adjusting osd_memory_target on vm03 to 151.9M 2026-03-10T07:24:48.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:48 vm03 bash[23382]: cephadm 2026-03-10T07:24:47.175615+0000 mgr.y (mgr.14150) 206 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 159305318: error parsing value: Value '159305318' is below minimum 939524096 2026-03-10T07:24:48.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 
10 07:24:48 vm03 bash[23382]: cephadm 2026-03-10T07:24:47.175615+0000 mgr.y (mgr.14150) 206 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 159305318: error parsing value: Value '159305318' is below minimum 939524096 2026-03-10T07:24:49.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:49 vm00 bash[28005]: cluster 2026-03-10T07:24:47.903695+0000 mgr.y (mgr.14150) 207 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T07:24:49.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:49 vm00 bash[28005]: cluster 2026-03-10T07:24:47.903695+0000 mgr.y (mgr.14150) 207 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T07:24:49.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:49 vm00 bash[20701]: cluster 2026-03-10T07:24:47.903695+0000 mgr.y (mgr.14150) 207 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T07:24:49.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:49 vm00 bash[20701]: cluster 2026-03-10T07:24:47.903695+0000 mgr.y (mgr.14150) 207 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T07:24:49.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:49 vm03 bash[23382]: cluster 2026-03-10T07:24:47.903695+0000 mgr.y (mgr.14150) 207 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T07:24:49.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:49 vm03 bash[23382]: cluster 2026-03-10T07:24:47.903695+0000 mgr.y (mgr.14150) 207 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T07:24:51.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:51 vm00 bash[28005]: cluster 2026-03-10T07:24:49.903995+0000 mgr.y (mgr.14150) 208 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T07:24:51.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:51 vm00 bash[28005]: cluster 2026-03-10T07:24:49.903995+0000 mgr.y (mgr.14150) 208 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T07:24:51.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:51 vm00 bash[20701]: cluster 2026-03-10T07:24:49.903995+0000 mgr.y (mgr.14150) 208 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T07:24:51.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:51 vm00 bash[20701]: cluster 2026-03-10T07:24:49.903995+0000 mgr.y (mgr.14150) 208 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T07:24:51.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:51 vm03 bash[23382]: cluster 2026-03-10T07:24:49.903995+0000 mgr.y (mgr.14150) 208 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T07:24:51.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:51 vm03 bash[23382]: cluster 2026-03-10T07:24:49.903995+0000 mgr.y (mgr.14150) 208 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T07:24:52.526 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.b/config 2026-03-10T07:24:52.675 
2026-03-10T07:24:52.675 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f2191916640 1 -- 192.168.123.103:0/472558430 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f218c1035a0 msgr2=0x7f218c105990 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:24:52.675 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f2191916640 1 --2- 192.168.123.103:0/472558430 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f218c1035a0 0x7f218c105990 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f2174009a30 tx=0x7f217402f260 comp rx=0 tx=0).stop
2026-03-10T07:24:52.675 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f2191916640 1 -- 192.168.123.103:0/472558430 shutdown_connections
2026-03-10T07:24:52.675 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f2191916640 1 --2- 192.168.123.103:0/472558430 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f218c108800 0x7f218c10ac10 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:24:52.675 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f2191916640 1 --2- 192.168.123.103:0/472558430 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f218c105ed0 0x7f218c1082c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:24:52.675 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f2191916640 1 --2- 192.168.123.103:0/472558430 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f218c1035a0 0x7f218c105990 unknown :-1 s=CLOSED pgs=31 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:24:52.675 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f2191916640 1 -- 192.168.123.103:0/472558430 >> 192.168.123.103:0/472558430 conn(0x7f218c0fd120 msgr2=0x7f218c0ff560 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:24:52.675 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f2191916640 1 -- 192.168.123.103:0/472558430 shutdown_connections
2026-03-10T07:24:52.675 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f2191916640 1 -- 192.168.123.103:0/472558430 wait complete.
2026-03-10T07:24:52.675 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f2191916640 1 Processor -- start
2026-03-10T07:24:52.675 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f2191916640 1 -- start start
2026-03-10T07:24:52.676 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f2191916640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f218c1035a0 0x7f218c19a170 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:24:52.676 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f2191916640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f218c105ed0 0x7f218c19a6b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:24:52.676 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f2191916640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f218c108800 0x7f218c1a1730 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:24:52.676 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f2191916640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f218c10d9b0 con 0x7f218c108800
2026-03-10T07:24:52.676 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f2191916640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f218c10d830 con 0x7f218c105ed0
2026-03-10T07:24:52.676 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f2191916640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f218c10db30 con 0x7f218c1035a0
2026-03-10T07:24:52.676 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f218affd640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f218c1035a0 0x7f218c19a170 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:24:52.676 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f218affd640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f218c1035a0 0x7f218c19a170 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.103:52280/0 (socket says 192.168.123.103:52280)
2026-03-10T07:24:52.676 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f218affd640 1 -- 192.168.123.103:0/2879685094 learned_addr learned my addr 192.168.123.103:0/2879685094 (peer_addr_for_me v2:192.168.123.103:0/0)
2026-03-10T07:24:52.676 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f218b7fe640 1 --2- 192.168.123.103:0/2879685094 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f218c108800 0x7f218c1a1730 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:24:52.676 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f218a7fc640 1 --2- 192.168.123.103:0/2879685094 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f218c105ed0 0x7f218c19a6b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:24:52.676 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.673+0000 7f218affd640 1 -- 192.168.123.103:0/2879685094 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f218c105ed0 msgr2=0x7f218c19a6b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:24:52.677 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.677+0000 7f218affd640 1 --2- 192.168.123.103:0/2879685094 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f218c105ed0 0x7f218c19a6b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:24:52.677 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.677+0000 7f218affd640 1 -- 192.168.123.103:0/2879685094 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f218c108800 msgr2=0x7f218c1a1730 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:24:52.677 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.677+0000 7f218affd640 1 --2- 192.168.123.103:0/2879685094 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f218c108800 0x7f218c1a1730 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:24:52.677 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.677+0000 7f218affd640 1 -- 192.168.123.103:0/2879685094 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f218c1a1e30 con 0x7f218c1035a0
2026-03-10T07:24:52.677 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.677+0000 7f218a7fc640 1 --2- 192.168.123.103:0/2879685094 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f218c105ed0 0x7f218c19a6b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed!
2026-03-10T07:24:52.677 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.677+0000 7f218b7fe640 1 --2- 192.168.123.103:0/2879685094 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f218c108800 0x7f218c1a1730 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed!
2026-03-10T07:24:52.677 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.677+0000 7f218affd640 1 --2- 192.168.123.103:0/2879685094 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f218c1035a0 0x7f218c19a170 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f21740029e0 tx=0x7f217402fcb0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:24:52.678 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.677+0000 7f2190914640 1 -- 192.168.123.103:0/2879685094 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f2174002e20 con 0x7f218c1035a0
2026-03-10T07:24:52.678 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.677+0000 7f2191916640 1 -- 192.168.123.103:0/2879685094 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f218c1a20c0 con 0x7f218c1035a0
2026-03-10T07:24:52.678 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.677+0000 7f2191916640 1 -- 192.168.123.103:0/2879685094 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f218c1a25a0 con 0x7f218c1035a0
2026-03-10T07:24:52.678 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.677+0000 7f2190914640 1 -- 192.168.123.103:0/2879685094 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f21740388b0 con 0x7f218c1035a0
2026-03-10T07:24:52.678 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.677+0000 7f2190914640 1 -- 192.168.123.103:0/2879685094 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f2174042680 con 0x7f218c1035a0
2026-03-10T07:24:52.679 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.677+0000 7f2191916640 1 -- 192.168.123.103:0/2879685094 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f218c1062b0 con 0x7f218c1035a0
2026-03-10T07:24:52.680 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.677+0000 7f2190914640 1 -- 192.168.123.103:0/2879685094 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7f2174038470 con 0x7f218c1035a0
2026-03-10T07:24:52.680 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.677+0000 7f2190914640 1 --2- 192.168.123.103:0/2879685094 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2160077640 0x7f2160079b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:24:52.680 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.677+0000 7f2190914640 1 -- 192.168.123.103:0/2879685094 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(46..46 src has 1..46) ==== 4835+0+0 (secure 0 0 0) 0x7f21740be1d0 con 0x7f218c1035a0
2026-03-10T07:24:52.680 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.677+0000 7f218a7fc640 1 --2- 192.168.123.103:0/2879685094 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2160077640 0x7f2160079b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:24:52.681 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.681+0000 7f218a7fc640 1 --2- 192.168.123.103:0/2879685094 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2160077640 0x7f2160079b00 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f21800059c0 tx=0x7f2180005950 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:24:52.683 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.681+0000 7f2190914640 1 -- 192.168.123.103:0/2879685094 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f2174047050 con 0x7f218c1035a0
2026-03-10T07:24:52.786 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:24:52.785+0000 7f2191916640 1 -- 192.168.123.103:0/2879685094 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdb", "target": ["mon-mgr", ""]}) -- 0x7f218c0630c0 con 0x7f2160077640
2026-03-10T07:24:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:53 vm00 bash[20701]: cluster 2026-03-10T07:24:51.904292+0000 mgr.y (mgr.14150) 209 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:24:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:53 vm00 bash[20701]: audit 2026-03-10T07:24:52.790360+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:24:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:53 vm00 bash[20701]: audit 2026-03-10T07:24:52.791725+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T07:24:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:53 vm00 bash[20701]: audit 2026-03-10T07:24:52.792168+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:24:53.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:53 vm00 bash[28005]: cluster 2026-03-10T07:24:51.904292+0000 mgr.y (mgr.14150) 209 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:24:53.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:53 vm00 bash[28005]: audit 2026-03-10T07:24:52.790360+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:24:53.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:53 vm00 bash[28005]: audit 2026-03-10T07:24:52.791725+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T07:24:53.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:53 vm00 bash[28005]: audit 2026-03-10T07:24:52.792168+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:24:53.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:53 vm03 bash[23382]: cluster 2026-03-10T07:24:51.904292+0000 mgr.y (mgr.14150) 209 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:24:53.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:53 vm03 bash[23382]: audit 2026-03-10T07:24:52.790360+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:24:53.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:53 vm03 bash[23382]: audit 2026-03-10T07:24:52.791725+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T07:24:53.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:53 vm03 bash[23382]: audit 2026-03-10T07:24:52.792168+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:24:54.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:54 vm00 bash[20701]: audit 2026-03-10T07:24:52.788860+0000 mgr.y (mgr.14150) 210 : audit [DBG] from='client.24254 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:24:54.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:54 vm00 bash[28005]: audit 2026-03-10T07:24:52.788860+0000 mgr.y (mgr.14150) 210 : audit [DBG] from='client.24254 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:24:54.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:54 vm03 bash[23382]: audit 2026-03-10T07:24:52.788860+0000 mgr.y (mgr.14150) 210 : audit [DBG] from='client.24254 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:24:55.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:55 vm03 bash[23382]: cluster 2026-03-10T07:24:53.904800+0000 mgr.y (mgr.14150) 211 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:24:55.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:55 vm00 bash[28005]: cluster 2026-03-10T07:24:53.904800+0000 mgr.y (mgr.14150) 211 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:24:55.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:55 vm00 bash[20701]: cluster 2026-03-10T07:24:53.904800+0000 mgr.y (mgr.14150) 211 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:24:57.685 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:57 vm03 bash[23382]: cluster 2026-03-10T07:24:55.905091+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:24:57.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:57 vm00 bash[28005]: cluster 2026-03-10T07:24:55.905091+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:24:57.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:57 vm00 bash[20701]: cluster 2026-03-10T07:24:55.905091+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:24:58.607 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:58 vm03 bash[23382]: audit 2026-03-10T07:24:58.208492+0000 mon.b (mon.1) 14 : audit [INF] from='client.? 192.168.123.103:0/4272200534' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6b79230f-59b8-4c24-91c0-cf41cbad4dc5"}]: dispatch
2026-03-10T07:24:58.607 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:58 vm03 bash[23382]: audit 2026-03-10T07:24:58.209143+0000 mon.a (mon.0) 580 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6b79230f-59b8-4c24-91c0-cf41cbad4dc5"}]: dispatch
2026-03-10T07:24:58.607 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:58 vm03 bash[23382]: audit 2026-03-10T07:24:58.212729+0000 mon.a (mon.0) 581 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6b79230f-59b8-4c24-91c0-cf41cbad4dc5"}]': finished
2026-03-10T07:24:58.607 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:58 vm03 bash[23382]: cluster 2026-03-10T07:24:58.216081+0000 mon.a (mon.0) 582 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in
2026-03-10T07:24:58.607 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:58 vm03 bash[23382]: audit 2026-03-10T07:24:58.216256+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:24:58.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:58 vm00 bash[28005]: audit 2026-03-10T07:24:58.208492+0000 mon.b (mon.1) 14 : audit [INF] from='client.? 192.168.123.103:0/4272200534' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6b79230f-59b8-4c24-91c0-cf41cbad4dc5"}]: dispatch
2026-03-10T07:24:58.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:58 vm00 bash[28005]: audit 2026-03-10T07:24:58.209143+0000 mon.a (mon.0) 580 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6b79230f-59b8-4c24-91c0-cf41cbad4dc5"}]: dispatch
2026-03-10T07:24:58.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:58 vm00 bash[28005]: audit 2026-03-10T07:24:58.212729+0000 mon.a (mon.0) 581 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6b79230f-59b8-4c24-91c0-cf41cbad4dc5"}]': finished
2026-03-10T07:24:58.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:58 vm00 bash[28005]: cluster 2026-03-10T07:24:58.216081+0000 mon.a (mon.0) 582 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in
2026-03-10T07:24:58.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:58 vm00 bash[28005]: audit 2026-03-10T07:24:58.216256+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:24:58.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:58 vm00 bash[20701]: audit 2026-03-10T07:24:58.208492+0000 mon.b (mon.1) 14 : audit [INF] from='client.? 192.168.123.103:0/4272200534' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6b79230f-59b8-4c24-91c0-cf41cbad4dc5"}]: dispatch
2026-03-10T07:24:58.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:58 vm00 bash[20701]: audit 2026-03-10T07:24:58.209143+0000 mon.a (mon.0) 580 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6b79230f-59b8-4c24-91c0-cf41cbad4dc5"}]: dispatch
2026-03-10T07:24:58.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:58 vm00 bash[20701]: audit 2026-03-10T07:24:58.212729+0000 mon.a (mon.0) 581 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6b79230f-59b8-4c24-91c0-cf41cbad4dc5"}]': finished
2026-03-10T07:24:58.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:58 vm00 bash[20701]: cluster 2026-03-10T07:24:58.216081+0000 mon.a (mon.0) 582 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in
2026-03-10T07:24:58.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:58 vm00 bash[20701]: audit 2026-03-10T07:24:58.216256+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:24:59.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:59 vm03 bash[23382]: cluster 2026-03-10T07:24:57.905386+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:24:59.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:24:59 vm03 bash[23382]: audit 2026-03-10T07:24:58.892281+0000 mon.c (mon.2) 19 : audit [DBG] from='client.? 192.168.123.103:0/1062596486' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:24:59.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:59 vm00 bash[28005]: cluster 2026-03-10T07:24:57.905386+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:24:59.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:24:59 vm00 bash[28005]: audit 2026-03-10T07:24:58.892281+0000 mon.c (mon.2) 19 : audit [DBG] from='client.? 192.168.123.103:0/1062596486' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:24:59.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:59 vm00 bash[20701]: cluster 2026-03-10T07:24:57.905386+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:24:59.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:24:59 vm00 bash[20701]: audit 2026-03-10T07:24:58.892281+0000 mon.c (mon.2) 19 : audit [DBG] from='client.? 192.168.123.103:0/1062596486' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:25:01.728 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:01 vm00 bash[28005]: cluster 2026-03-10T07:24:59.905666+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:01.729 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:01 vm00 bash[20701]: cluster 2026-03-10T07:24:59.905666+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:01.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:01 vm03 bash[23382]: cluster 2026-03-10T07:24:59.905666+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:03.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:03 vm03 bash[23382]: cluster 2026-03-10T07:25:01.906005+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:03.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:03 vm00 bash[28005]: cluster 2026-03-10T07:25:01.906005+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:03.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:03 vm00 bash[28005]: cluster 2026-03-10T07:25:01.906005+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:03.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:03 vm00 bash[20701]: cluster 2026-03-10T07:25:01.906005+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:05.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:05 vm03 bash[23382]: cluster 2026-03-10T07:25:03.906310+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:05.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:05 vm00 bash[28005]: cluster 2026-03-10T07:25:03.906310+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:05.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:05 vm00 bash[20701]: cluster 2026-03-10T07:25:03.906310+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:07.755 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:07 vm03 bash[23382]: cluster 2026-03-10T07:25:05.906587+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:07.755 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:07 vm03 bash[23382]: audit 2026-03-10T07:25:07.213336+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-10T07:25:07.756 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:07 vm03 bash[23382]: audit 2026-03-10T07:25:07.213901+0000 mon.a (mon.0) 585 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:25:07.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:07 vm00 bash[28005]: cluster 2026-03-10T07:25:05.906587+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:07.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:07 vm00 bash[28005]: audit 2026-03-10T07:25:07.213336+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-10T07:25:07.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:07 vm00 bash[28005]: audit 2026-03-10T07:25:07.213901+0000 mon.a (mon.0) 585 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:25:07.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:07 vm00 bash[20701]: cluster 2026-03-10T07:25:05.906587+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:07.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:07 vm00 bash[20701]: audit 2026-03-10T07:25:07.213336+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-10T07:25:07.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:07 vm00 bash[20701]: audit 2026-03-10T07:25:07.213901+0000 mon.a (mon.0) 585 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
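[editor's note] The 'auth get' for osd.7 and the 'config generate-minimal-conf' audit entries above are the mgr's cephadm module collecting the keyring and a minimal ceph.conf for the daemon it is about to place on vm03. A rough sketch of the same two mon commands driven through the ceph CLI from Python; the function name and flow are illustrative assumptions, not teuthology or cephadm code:

    import subprocess

    def gather_daemon_config(entity="osd.7"):
        # 'ceph auth get <entity>' prints the entity's keyring; cephadm ships
        # this into the daemon's data dir on the target host.
        keyring = subprocess.check_output(
            ["ceph", "auth", "get", entity], text=True)
        # 'ceph config generate-minimal-conf' prints a minimal ceph.conf
        # (fsid and mon_host) sufficient for the daemon to find the monitors.
        conf = subprocess.check_output(
            ["ceph", "config", "generate-minimal-conf"], text=True)
        return keyring, conf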
2026-03-10T07:25:08.385 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:08 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:25:08.385 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:25:08 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:25:08.385 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 07:25:08 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:25:08.385 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 07:25:08 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:25:08.385 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 07:25:08 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:25:08.661 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:08 vm03 bash[23382]: cephadm 2026-03-10T07:25:07.214354+0000 mgr.y (mgr.14150) 218 : cephadm [INF] Deploying daemon osd.7 on vm03
2026-03-10T07:25:08.661 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:08 vm03 bash[23382]: audit 2026-03-10T07:25:08.422290+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:25:08.661 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:08 vm03 bash[23382]: audit 2026-03-10T07:25:08.430829+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:08.661 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:08 vm03 bash[23382]: audit 2026-03-10T07:25:08.442338+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
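[editor's note] The repeated systemd warnings all refer to line 23 of the unit template cephadm installed for this cluster (ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service), which sets KillMode=none so that systemd leaves the container runtime in charge of stopping daemon processes; current systemd deprecates that setting, hence one warning per daemon start. Purely to illustrate what the warning is asking for, a drop-in override would look like the following; this is not a recommended change, since cephadm owns and regenerates these unit files and a different KillMode lets systemd kill container processes on stop:

    # /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service.d/override.conf
    # (illustrative only; may be overwritten by cephadm)
    [Service]
    KillMode=mixed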
2026-03-10T07:25:08.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:08 vm00 bash[28005]: cephadm 2026-03-10T07:25:07.214354+0000 mgr.y (mgr.14150) 218 : cephadm [INF] Deploying daemon osd.7 on vm03
2026-03-10T07:25:08.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:08 vm00 bash[28005]: audit 2026-03-10T07:25:08.422290+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:25:08.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:08 vm00 bash[28005]: audit 2026-03-10T07:25:08.430829+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:08.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:08 vm00 bash[28005]: audit 2026-03-10T07:25:08.442338+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:08.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:08 vm00 bash[20701]: cephadm 2026-03-10T07:25:07.214354+0000 mgr.y (mgr.14150) 218 : cephadm [INF] Deploying daemon osd.7 on vm03
2026-03-10T07:25:08.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:08 vm00 bash[20701]: audit 2026-03-10T07:25:08.422290+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:25:08.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:08 vm00 bash[20701]: audit 2026-03-10T07:25:08.430829+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:08.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:08 vm00 bash[20701]: audit 2026-03-10T07:25:08.442338+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
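[editor's note] 'Deploying daemon osd.7 on vm03' is the cephadm mgr module placing the eighth and final OSD of this job. A hedged sketch of how such a deployment is requested through the orchestrator CLI; '/dev/vdd' is a hypothetical device name for illustration and not taken from this log, and the test may use a different OSD spec:

    import subprocess

    # Ask the orchestrator to create an OSD on a specific host:device pair.
    # On success the command prints a message of the form seen further down
    # in this log: "Created osd(s) 7 on host 'vm03'".
    subprocess.check_call(
        ["ceph", "orch", "daemon", "add", "osd", "vm03:/dev/vdd"])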
2026-03-10T07:25:09.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:09 vm03 bash[23382]: cluster 2026-03-10T07:25:07.906861+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:09.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:09 vm00 bash[28005]: cluster 2026-03-10T07:25:07.906861+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:09 vm00 bash[20701]: cluster 2026-03-10T07:25:07.906861+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:11.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:11 vm03 bash[23382]: cluster 2026-03-10T07:25:09.907122+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:11.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:11 vm00 bash[28005]: cluster 2026-03-10T07:25:09.907122+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:11.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:11 vm00 bash[20701]: cluster 2026-03-10T07:25:09.907122+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:12.492 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:12 vm03 bash[23382]: audit 2026-03-10T07:25:12.171354+0000 mon.b (mon.1) 15 : audit [INF] from='osd.7 [v2:192.168.123.103:6824/3078297940,v1:192.168.123.103:6825/3078297940]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T07:25:12.492 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:12 vm03 bash[23382]: audit 2026-03-10T07:25:12.172015+0000 mon.a (mon.0) 589 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T07:25:12.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:12 vm00 bash[28005]: audit 2026-03-10T07:25:12.171354+0000 mon.b (mon.1) 15 : audit [INF] from='osd.7 [v2:192.168.123.103:6824/3078297940,v1:192.168.123.103:6825/3078297940]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T07:25:12.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:12 vm00 bash[28005]: audit 2026-03-10T07:25:12.172015+0000 mon.a (mon.0) 589 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T07:25:12.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:12 vm00 bash[20701]: audit 2026-03-10T07:25:12.171354+0000 mon.b (mon.1) 15 : audit [INF] from='osd.7 [v2:192.168.123.103:6824/3078297940,v1:192.168.123.103:6825/3078297940]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T07:25:12.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:12 vm00 bash[20701]: audit 2026-03-10T07:25:12.172015+0000 mon.a (mon.0) 589 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T07:25:13.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:13 vm00 bash[28005]: cluster 2026-03-10T07:25:11.907465+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:13.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:13 vm00 bash[28005]: audit 2026-03-10T07:25:12.504880+0000 mon.a (mon.0) 590 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-10T07:25:13.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:13 vm00 bash[28005]: audit 2026-03-10T07:25:12.508914+0000 mon.b (mon.1) 16 : audit [INF] from='osd.7 [v2:192.168.123.103:6824/3078297940,v1:192.168.123.103:6825/3078297940]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T07:25:13.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:13 vm00 bash[28005]: cluster 2026-03-10T07:25:12.509261+0000 mon.a (mon.0) 591 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in
2026-03-10T07:25:13.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:13 vm00 bash[28005]: audit 2026-03-10T07:25:12.509544+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:25:13.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:13 vm00 bash[28005]: audit 2026-03-10T07:25:12.509927+0000 mon.a (mon.0) 593 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T07:25:13.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:13 vm00 bash[20701]: cluster 2026-03-10T07:25:11.907465+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
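[editor's note] The audit trail above is osd.7's own boot-time CRUSH registration: it first tags itself with the 'hdd' device class and then weights itself into the hierarchy under host=vm03/root=default (the 0.0195 weight is roughly the device's capacity in TiB). The same two mon commands, sketched through the CLI from Python for clarity; normally the OSD issues these itself and no operator intervention is needed:

    import subprocess

    # Mirror of the two commands dispatched by osd.7 in the audit log.
    subprocess.check_call(
        ["ceph", "osd", "crush", "set-device-class", "hdd", "osd.7"])
    subprocess.check_call(
        ["ceph", "osd", "crush", "create-or-move", "osd.7", "0.0195",
         "host=vm03", "root=default"])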
2026-03-10T07:25:13.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:13 vm00 bash[20701]: audit 2026-03-10T07:25:12.504880+0000 mon.a (mon.0) 590 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-10T07:25:13.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:13 vm00 bash[20701]: audit 2026-03-10T07:25:12.508914+0000 mon.b (mon.1) 16 : audit [INF] from='osd.7 [v2:192.168.123.103:6824/3078297940,v1:192.168.123.103:6825/3078297940]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T07:25:13.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:13 vm00 bash[20701]: cluster 2026-03-10T07:25:12.509261+0000 mon.a (mon.0) 591 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in
2026-03-10T07:25:13.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:13 vm00 bash[20701]: audit 2026-03-10T07:25:12.509544+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:25:13.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:13 vm00 bash[20701]: audit 2026-03-10T07:25:12.509927+0000 mon.a (mon.0) 593 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T07:25:14.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:13 vm03 bash[23382]: cluster 2026-03-10T07:25:11.907465+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:14.058 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:13 vm03 bash[23382]: audit 2026-03-10T07:25:12.504880+0000 mon.a (mon.0) 590 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-10T07:25:14.058 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:13 vm03 bash[23382]: audit 2026-03-10T07:25:12.508914+0000 mon.b (mon.1) 16 : audit [INF] from='osd.7 [v2:192.168.123.103:6824/3078297940,v1:192.168.123.103:6825/3078297940]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T07:25:14.058 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:13 vm03 bash[23382]: cluster 2026-03-10T07:25:12.509261+0000 mon.a (mon.0) 591 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in
2026-03-10T07:25:14.058 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:13 vm03 bash[23382]: audit 2026-03-10T07:25:12.509544+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:25:14.058 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:13 vm03 bash[23382]: audit 2026-03-10T07:25:12.509927+0000 mon.a (mon.0) 593 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T07:25:14.730 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:14 vm03 bash[23382]: audit 2026-03-10T07:25:13.508725+0000 mon.a (mon.0) 594 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T07:25:14.731 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:14 vm03 bash[23382]: cluster 2026-03-10T07:25:13.519149+0000 mon.a (mon.0) 595 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in
2026-03-10T07:25:14.731 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:14 vm03 bash[23382]: audit 2026-03-10T07:25:13.523977+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:25:14.731 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:14 vm03 bash[23382]: cluster 2026-03-10T07:25:14.514884+0000 mon.a (mon.0) 597 : cluster [DBG] osdmap e50: 8 total, 7 up, 8 in
2026-03-10T07:25:14.731 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:14 vm03 bash[23382]: audit 2026-03-10T07:25:14.514974+0000 mon.a (mon.0) 598 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:25:14.731 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:14 vm03 bash[23382]: audit 2026-03-10T07:25:14.518704+0000 mon.a (mon.0) 599 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:25:14.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:14 vm00 bash[28005]: audit 2026-03-10T07:25:13.508725+0000 mon.a (mon.0) 594 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T07:25:14.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:14 vm00 bash[28005]: cluster 2026-03-10T07:25:13.519149+0000 mon.a (mon.0) 595 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in
2026-03-10T07:25:14.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:14 vm00 bash[28005]: audit 2026-03-10T07:25:13.523977+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:25:14.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:14 vm00 bash[28005]: cluster 2026-03-10T07:25:14.514884+0000 mon.a (mon.0) 597 : cluster [DBG] osdmap e50: 8 total, 7 up, 8 in
2026-03-10T07:25:14.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:14 vm00 bash[28005]: audit 2026-03-10T07:25:14.514974+0000 mon.a (mon.0) 598 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:25:14.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:14 vm00 bash[28005]: audit 2026-03-10T07:25:14.518704+0000 mon.a (mon.0) 599 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:25:14.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:14 vm00 bash[20701]: audit 2026-03-10T07:25:13.508725+0000 mon.a (mon.0) 594 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T07:25:14.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:14 vm00 bash[20701]: cluster 2026-03-10T07:25:13.519149+0000 mon.a (mon.0) 595 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in
2026-03-10T07:25:14.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:14 vm00 bash[20701]: audit 2026-03-10T07:25:13.523977+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:25:14.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:14 vm00 bash[20701]: cluster 2026-03-10T07:25:14.514884+0000 mon.a (mon.0) 597 : cluster [DBG] osdmap e50: 8 total, 7 up, 8 in
2026-03-10T07:25:14.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:14 vm00 bash[20701]: audit 2026-03-10T07:25:14.514974+0000 mon.a (mon.0) 598 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:25:14.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:14 vm00 bash[20701]: audit 2026-03-10T07:25:14.518704+0000 mon.a (mon.0) 599 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:25:15.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:15 vm03 bash[23382]: cluster 2026-03-10T07:25:13.178450+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:25:15.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:15 vm03 bash[23382]: cluster 2026-03-10T07:25:13.178509+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:25:15.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:15 vm03 bash[23382]: cluster 2026-03-10T07:25:13.907747+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:15.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:15 vm03 bash[23382]: audit 2026-03-10T07:25:14.735091+0000 mon.a (mon.0) 600 : audit [INF] from='osd.7 ' entity='osd.7'
2026-03-10T07:25:15.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:15 vm03 bash[23382]: audit 2026-03-10T07:25:14.815561+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:15.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:15 vm03 bash[23382]: audit 2026-03-10T07:25:14.821984+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:15.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:15 vm03 bash[23382]: audit 2026-03-10T07:25:14.822882+0000 mon.a (mon.0) 603 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:25:15.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:15 vm03 bash[23382]: audit 2026-03-10T07:25:14.823430+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:25:15.769 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:15 vm03 bash[23382]: audit 2026-03-10T07:25:14.828652+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:15.769 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:15 vm03 bash[23382]: audit 2026-03-10T07:25:15.519161+0000 mon.a (mon.0) 606 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:25:15.832 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 7 on host 'vm03'
2026-03-10T07:25:15.833 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:15.829+0000 7f2190914640 1 -- 192.168.123.103:0/2879685094 <== mgr.14150 v2:192.168.123.100:6800/2669938860 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f218c0630c0 con 0x7f2160077640
2026-03-10T07:25:15.833 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:15.829+0000 7f2191916640 1 -- 192.168.123.103:0/2879685094 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2160077640 msgr2=0x7f2160079b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:25:15.833 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:15.829+0000 7f2191916640 1 --2- 192.168.123.103:0/2879685094 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2160077640 0x7f2160079b00 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f21800059c0 tx=0x7f2180005950 comp rx=0 tx=0).stop
2026-03-10T07:25:15.833 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:15.829+0000 7f2191916640 1 -- 192.168.123.103:0/2879685094 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f218c1035a0 msgr2=0x7f218c19a170 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:25:15.833 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:15.829+0000 7f2191916640 1 --2- 192.168.123.103:0/2879685094 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f218c1035a0 0x7f218c19a170 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f21740029e0 tx=0x7f217402fcb0 comp rx=0 tx=0).stop
2026-03-10T07:25:15.833 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:15.829+0000 7f2191916640 1 -- 192.168.123.103:0/2879685094 shutdown_connections
2026-03-10T07:25:15.833 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:15.829+0000 7f2191916640 1 --2- 192.168.123.103:0/2879685094 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2160077640 0x7f2160079b00 unknown :-1 s=CLOSED pgs=88 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:15.833 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:15.829+0000 7f2191916640 1 --2- 192.168.123.103:0/2879685094 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f218c108800 0x7f218c1a1730 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:15.833 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:15.829+0000 7f2191916640 1 --2- 192.168.123.103:0/2879685094 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f218c105ed0 0x7f218c19a6b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:15.833 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:15.829+0000 7f2191916640 1 --2- 192.168.123.103:0/2879685094 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f218c1035a0 0x7f218c19a170 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:15.833 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:15.829+0000 7f2191916640 1 -- 192.168.123.103:0/2879685094 >> 192.168.123.103:0/2879685094 conn(0x7f218c0fd120 msgr2=0x7f218c103e00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:25:15.833 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:15.829+0000 7f2191916640 1 -- 192.168.123.103:0/2879685094 shutdown_connections
2026-03-10T07:25:15.833 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:15.829+0000 7f2191916640 1 -- 192.168.123.103:0/2879685094 wait complete.
2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:15 vm00 bash[28005]: cluster 2026-03-10T07:25:13.178450+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:15 vm00 bash[28005]: cluster 2026-03-10T07:25:13.178450+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:15 vm00 bash[28005]: cluster 2026-03-10T07:25:13.178509+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:15 vm00 bash[28005]: cluster 2026-03-10T07:25:13.178509+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:15 vm00 bash[28005]: cluster 2026-03-10T07:25:13.907747+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:15 vm00 bash[28005]: cluster 2026-03-10T07:25:13.907747+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:15 vm00 bash[28005]: audit 2026-03-10T07:25:14.735091+0000 mon.a (mon.0) 600 : audit [INF] from='osd.7 ' entity='osd.7' 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:15 vm00 bash[28005]: audit 2026-03-10T07:25:14.735091+0000 mon.a (mon.0) 600 : audit [INF] from='osd.7 ' entity='osd.7' 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:15 vm00 bash[28005]: audit 2026-03-10T07:25:14.815561+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:15 vm00 bash[28005]: audit 2026-03-10T07:25:14.815561+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:15 vm00 bash[28005]: audit 2026-03-10T07:25:14.821984+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:15 vm00 bash[28005]: audit 2026-03-10T07:25:14.821984+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:15 vm00 bash[28005]: audit 2026-03-10T07:25:14.822882+0000 mon.a (mon.0) 603 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:15 vm00 bash[28005]: audit 2026-03-10T07:25:14.822882+0000 mon.a (mon.0) 603 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:15 vm00 bash[28005]: audit 2026-03-10T07:25:14.823430+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:15 vm00 
bash[28005]: audit 2026-03-10T07:25:14.823430+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:15 vm00 bash[28005]: audit 2026-03-10T07:25:14.828652+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:15 vm00 bash[28005]: audit 2026-03-10T07:25:14.828652+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:15 vm00 bash[28005]: audit 2026-03-10T07:25:15.519161+0000 mon.a (mon.0) 606 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:15 vm00 bash[28005]: audit 2026-03-10T07:25:15.519161+0000 mon.a (mon.0) 606 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:15 vm00 bash[20701]: cluster 2026-03-10T07:25:13.178450+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:15 vm00 bash[20701]: cluster 2026-03-10T07:25:13.178450+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:15 vm00 bash[20701]: cluster 2026-03-10T07:25:13.178509+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T07:25:15.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:15 vm00 bash[20701]: cluster 2026-03-10T07:25:13.178509+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T07:25:15.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:15 vm00 bash[20701]: cluster 2026-03-10T07:25:13.907747+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T07:25:15.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:15 vm00 bash[20701]: cluster 2026-03-10T07:25:13.907747+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T07:25:15.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:15 vm00 bash[20701]: audit 2026-03-10T07:25:14.735091+0000 mon.a (mon.0) 600 : audit [INF] from='osd.7 ' entity='osd.7' 2026-03-10T07:25:15.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:15 vm00 bash[20701]: audit 2026-03-10T07:25:14.735091+0000 mon.a (mon.0) 600 : audit [INF] from='osd.7 ' entity='osd.7' 2026-03-10T07:25:15.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:15 vm00 bash[20701]: audit 2026-03-10T07:25:14.815561+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:15.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:15 vm00 bash[20701]: audit 2026-03-10T07:25:14.815561+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:15.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:15 vm00 bash[20701]: audit 2026-03-10T07:25:14.821984+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.14150 
192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:15.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:15 vm00 bash[20701]: audit 2026-03-10T07:25:14.821984+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:15.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:15 vm00 bash[20701]: audit 2026-03-10T07:25:14.822882+0000 mon.a (mon.0) 603 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:15.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:15 vm00 bash[20701]: audit 2026-03-10T07:25:14.822882+0000 mon.a (mon.0) 603 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:15.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:15 vm00 bash[20701]: audit 2026-03-10T07:25:14.823430+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:25:15.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:15 vm00 bash[20701]: audit 2026-03-10T07:25:14.823430+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:25:15.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:15 vm00 bash[20701]: audit 2026-03-10T07:25:14.828652+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:15.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:15 vm00 bash[20701]: audit 2026-03-10T07:25:14.828652+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:15.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:15 vm00 bash[20701]: audit 2026-03-10T07:25:15.519161+0000 mon.a (mon.0) 606 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T07:25:15.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:15 vm00 bash[20701]: audit 2026-03-10T07:25:15.519161+0000 mon.a (mon.0) 606 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T07:25:15.903 DEBUG:teuthology.orchestra.run.vm03:osd.7> sudo journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.7.service 2026-03-10T07:25:15.905 INFO:tasks.cephadm:Waiting for 8 OSDs to come up... 
2026-03-10T07:25:15.905 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph osd stat -f json
2026-03-10T07:25:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:16 vm00 bash[28005]: cluster 2026-03-10T07:25:15.543573+0000 mon.a (mon.0) 607 : cluster [INF] osd.7 [v2:192.168.123.103:6824/3078297940,v1:192.168.123.103:6825/3078297940] boot
2026-03-10T07:25:16.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:16 vm00 bash[28005]: cluster 2026-03-10T07:25:15.543616+0000 mon.a (mon.0) 608 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in
2026-03-10T07:25:16.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:16 vm00 bash[28005]: audit 2026-03-10T07:25:15.544634+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:25:16.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:16 vm00 bash[28005]: cluster 2026-03-10T07:25:15.806106+0000 mon.a (mon.0) 610 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in
2026-03-10T07:25:16.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:16 vm00 bash[28005]: audit 2026-03-10T07:25:15.818633+0000 mon.a (mon.0) 611 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:25:16.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:16 vm00 bash[28005]: audit 2026-03-10T07:25:15.824153+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:16.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:16 vm00 bash[28005]: audit 2026-03-10T07:25:15.829150+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:16.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:16 vm00 bash[20701]: cluster 2026-03-10T07:25:15.543573+0000 mon.a (mon.0) 607 : cluster [INF] osd.7 [v2:192.168.123.103:6824/3078297940,v1:192.168.123.103:6825/3078297940] boot
2026-03-10T07:25:16.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:16 vm00 bash[20701]: cluster 2026-03-10T07:25:15.543616+0000 mon.a (mon.0) 608 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in
2026-03-10T07:25:16.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:16 vm00 bash[20701]: audit 2026-03-10T07:25:15.544634+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:25:16.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:16 vm00 bash[20701]: cluster 2026-03-10T07:25:15.806106+0000 mon.a (mon.0) 610 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in
2026-03-10T07:25:16.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:16 vm00 bash[20701]: audit 2026-03-10T07:25:15.818633+0000 mon.a (mon.0) 611 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:25:16.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:16 vm00 bash[20701]: audit 2026-03-10T07:25:15.824153+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:16.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:16 vm00 bash[20701]: audit 2026-03-10T07:25:15.829150+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:17.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:16 vm03 bash[23382]: cluster 2026-03-10T07:25:15.543573+0000 mon.a (mon.0) 607 : cluster [INF] osd.7 [v2:192.168.123.103:6824/3078297940,v1:192.168.123.103:6825/3078297940] boot
2026-03-10T07:25:17.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:16 vm03 bash[23382]: cluster 2026-03-10T07:25:15.543616+0000 mon.a (mon.0) 608 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in
2026-03-10T07:25:17.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:16 vm03 bash[23382]: audit 2026-03-10T07:25:15.544634+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:25:17.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:16 vm03 bash[23382]: cluster 2026-03-10T07:25:15.806106+0000 mon.a (mon.0) 610 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in
2026-03-10T07:25:17.019 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:16 vm03 bash[23382]: audit 2026-03-10T07:25:15.818633+0000 mon.a (mon.0) 611 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:25:17.019 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:16 vm03 bash[23382]: audit 2026-03-10T07:25:15.824153+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:17.019 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:16 vm03 bash[23382]: audit 2026-03-10T07:25:15.829150+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:17.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:17 vm00 bash[28005]: cluster 2026-03-10T07:25:15.908417+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v204: 1 pgs: 1 remapped+peering; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:17.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:17 vm00 bash[28005]: cluster 2026-03-10T07:25:16.827895+0000 mon.a (mon.0) 614 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)
2026-03-10T07:25:17.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:17 vm00 bash[28005]: cluster 2026-03-10T07:25:16.839172+0000 mon.a (mon.0) 615 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in
2026-03-10T07:25:17.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:17 vm00 bash[20701]: cluster 2026-03-10T07:25:15.908417+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v204: 1 pgs: 1 remapped+peering; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:17.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:17 vm00 bash[20701]: cluster 2026-03-10T07:25:16.827895+0000 mon.a (mon.0) 614 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)
2026-03-10T07:25:17.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:17 vm00 bash[20701]: cluster 2026-03-10T07:25:16.839172+0000 mon.a (mon.0) 615 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in
2026-03-10T07:25:18.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:17 vm03 bash[23382]: cluster 2026-03-10T07:25:15.908417+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v204: 1 pgs: 1 remapped+peering; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T07:25:18.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:17 vm03 bash[23382]: cluster 2026-03-10T07:25:16.827895+0000 mon.a (mon.0) 614 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)
2026-03-10T07:25:18.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:17 vm03 bash[23382]: cluster 2026-03-10T07:25:16.839172+0000 mon.a (mon.0) 615 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in
2026-03-10T07:25:19.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:19 vm00 bash[28005]: cluster 2026-03-10T07:25:17.908732+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v206: 1 pgs: 1 peering; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:25:19.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:19 vm00 bash[20701]: cluster 2026-03-10T07:25:17.908732+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v206: 1 pgs: 1 peering; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:25:20.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:19 vm03 bash[23382]: cluster 2026-03-10T07:25:17.908732+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v206: 1 pgs: 1 peering; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:25:20.533 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config
2026-03-10T07:25:20.700 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a3cde3640 1 -- 192.168.123.100:0/862368717 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2a38106930 msgr2=0x7f2a3810d1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:25:20.700 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a3cde3640 1 --2- 192.168.123.100:0/862368717 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2a38106930 0x7f2a3810d1c0 secure :-1 s=READY pgs=128 cs=0 l=1 rev1=1 crypto rx=0x7f2a2800b0a0 tx=0x7f2a2802f470 comp rx=0 tx=0).stop
2026-03-10T07:25:20.700 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a3cde3640 1 -- 192.168.123.100:0/862368717 shutdown_connections
2026-03-10T07:25:20.700 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a3cde3640 1 --2- 192.168.123.100:0/862368717 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2a38106930 0x7f2a3810d1c0 unknown :-1 s=CLOSED pgs=128 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:20.700 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a3cde3640 1 --2- 192.168.123.100:0/862368717 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f2a38105f70 0x7f2a381063f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
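
The messenger chatter above and below is the `ceph` CLI inside the shell container tearing down its initial mon probe, then opening a fresh session to mon.a, subscribing to maps, and dispatching the command. A sketch of the same round trip with the python-rados binding, assuming a reachable cluster and the admin keyring (illustrative, not what the CLI literally runs):

    # Sketch of the mon round trip behind `ceph osd stat -f json`.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          conf={'keyring': '/etc/ceph/ceph.client.admin.keyring'})
    cluster.connect()  # probe the mon addresses, authenticate, learn our addr
    cmd = json.dumps({'prefix': 'osd stat', 'format': 'json'})
    ret, outbuf, outs = cluster.mon_command(cmd, b'')  # mon.a dispatches this
    print(json.loads(outbuf))  # e.g. {"epoch": 53, "num_osds": 8, ...}
    cluster.shutdown()  # the mark_down/stop lines below are this teardown
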
2026-03-10T07:25:20.700 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a3cde3640 1 --2- 192.168.123.100:0/862368717 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f2a38104d70 0x7f2a38105170 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:20.700 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a3cde3640 1 -- 192.168.123.100:0/862368717 >> 192.168.123.100:0/862368717 conn(0x7f2a38100520 msgr2=0x7f2a38102940 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:25:20.700 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a3cde3640 1 -- 192.168.123.100:0/862368717 shutdown_connections
2026-03-10T07:25:20.700 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a3cde3640 1 -- 192.168.123.100:0/862368717 wait complete.
2026-03-10T07:25:20.700 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a3cde3640 1 Processor -- start
2026-03-10T07:25:20.701 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a3cde3640 1 -- start start
2026-03-10T07:25:20.701 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a3cde3640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2a38104d70 0x7f2a3819c350 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:25:20.701 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a3cde3640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f2a38105f70 0x7f2a3819c890 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:25:20.701 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a3cde3640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f2a38106930 0x7f2a381a3910 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:25:20.701 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a3cde3640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f2a3810fc20 con 0x7f2a38104d70
2026-03-10T07:25:20.701 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a3cde3640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f2a3810faa0 con 0x7f2a38105f70
2026-03-10T07:25:20.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a3cde3640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f2a3810fda0 con 0x7f2a38106930
2026-03-10T07:25:20.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a36575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2a38104d70 0x7f2a3819c350 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:25:20.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a36575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2a38104d70 0x7f2a3819c350 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:45994/0 (socket says 192.168.123.100:45994)
2026-03-10T07:25:20.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a36575640 1 -- 192.168.123.100:0/1559224520 learned_addr learned my addr 192.168.123.100:0/1559224520 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:25:20.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a35d74640 1 --2- 192.168.123.100:0/1559224520 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f2a38105f70 0x7f2a3819c890 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:25:20.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a36d76640 1 --2- 192.168.123.100:0/1559224520 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f2a38106930 0x7f2a381a3910 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:25:20.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a36575640 1 -- 192.168.123.100:0/1559224520 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f2a38106930 msgr2=0x7f2a381a3910 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:25:20.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a36575640 1 --2- 192.168.123.100:0/1559224520 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f2a38106930 0x7f2a381a3910 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:20.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a36575640 1 -- 192.168.123.100:0/1559224520 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f2a38105f70 msgr2=0x7f2a3819c890 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:25:20.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a36575640 1 --2- 192.168.123.100:0/1559224520 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f2a38105f70 0x7f2a3819c890 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:20.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a36575640 1 -- 192.168.123.100:0/1559224520 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2a381a4010 con 0x7f2a38104d70
2026-03-10T07:25:20.702 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a35d74640 1 --2- 192.168.123.100:0/1559224520 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f2a38105f70 0x7f2a3819c890 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T07:25:20.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a36d76640 1 --2- 192.168.123.100:0/1559224520 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f2a38106930 0x7f2a381a3910 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-10T07:25:20.703 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.701+0000 7f2a36575640 1 --2- 192.168.123.100:0/1559224520 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2a38104d70 0x7f2a3819c350 secure :-1 s=READY pgs=129 cs=0 l=1 rev1=1 crypto rx=0x7f2a2c00e9e0 tx=0x7f2a2c00eeb0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:25:20.704 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.705+0000 7f2a1f7fe640 1 -- 192.168.123.100:0/1559224520 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f2a2c00cde0 con 0x7f2a38104d70
2026-03-10T07:25:20.704 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.705+0000 7f2a1f7fe640 1 -- 192.168.123.100:0/1559224520 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f2a2c004510 con 0x7f2a38104d70
2026-03-10T07:25:20.704 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.705+0000 7f2a1f7fe640 1 -- 192.168.123.100:0/1559224520 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f2a2c010430 con 0x7f2a38104d70
2026-03-10T07:25:20.704 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.705+0000 7f2a3cde3640 1 -- 192.168.123.100:0/1559224520 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f2a381a4300 con 0x7f2a38104d70
2026-03-10T07:25:20.704 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.705+0000 7f2a3cde3640 1 -- 192.168.123.100:0/1559224520 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f2a381a47c0 con 0x7f2a38104d70
2026-03-10T07:25:20.705 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.705+0000 7f2a1f7fe640 1 -- 192.168.123.100:0/1559224520 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7f2a2c0040d0 con 0x7f2a38104d70
2026-03-10T07:25:20.705 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.705+0000 7f2a3cde3640 1 -- 192.168.123.100:0/1559224520 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f2a38110400 con 0x7f2a38104d70
2026-03-10T07:25:20.708 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.705+0000 7f2a1f7fe640 1 --2- 192.168.123.100:0/1559224520 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2a100775d0 0x7f2a10079a90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:25:20.708 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.705+0000 7f2a1f7fe640 1 -- 192.168.123.100:0/1559224520 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(53..53 src has 1..53) ==== 5267+0+0 (secure 0 0 0) 0x7f2a2c099550 con 0x7f2a38104d70
2026-03-10T07:25:20.708 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.709+0000 7f2a35d74640 1 --2- 192.168.123.100:0/1559224520 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2a100775d0 0x7f2a10079a90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:25:20.709 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.709+0000 7f2a1f7fe640 1 -- 192.168.123.100:0/1559224520 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f2a2c0046c0 con 0x7f2a38104d70
2026-03-10T07:25:20.709 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.709+0000 7f2a35d74640 1 --2- 192.168.123.100:0/1559224520 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2a100775d0 0x7f2a10079a90 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7f2a3819d870 tx=0x7f2a20009290 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:25:20.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.805+0000 7f2a3cde3640 1 -- 192.168.123.100:0/1559224520 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd stat", "format": "json"} v 0) -- 0x7f2a3810b7b0 con 0x7f2a38104d70
2026-03-10T07:25:20.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.809+0000 7f2a1f7fe640 1 -- 192.168.123.100:0/1559224520 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v53) ==== 74+0+130 (secure 0 0 0) 0x7f2a2c062a90 con 0x7f2a38104d70
2026-03-10T07:25:20.808 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T07:25:20.811 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.809+0000 7f2a3cde3640 1 -- 192.168.123.100:0/1559224520 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2a100775d0 msgr2=0x7f2a10079a90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:25:20.811 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.809+0000 7f2a3cde3640 1 --2- 192.168.123.100:0/1559224520 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2a100775d0 0x7f2a10079a90 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7f2a3819d870 tx=0x7f2a20009290 comp rx=0 tx=0).stop
2026-03-10T07:25:20.811 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.809+0000 7f2a3cde3640 1 -- 192.168.123.100:0/1559224520 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2a38104d70 msgr2=0x7f2a3819c350 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:25:20.811 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.809+0000 7f2a3cde3640 1 --2- 192.168.123.100:0/1559224520 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2a38104d70 0x7f2a3819c350 secure :-1 s=READY pgs=129 cs=0 l=1 rev1=1 crypto rx=0x7f2a2c00e9e0 tx=0x7f2a2c00eeb0 comp rx=0 tx=0).stop
2026-03-10T07:25:20.811 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.813+0000 7f2a3cde3640 1 -- 192.168.123.100:0/1559224520 shutdown_connections
2026-03-10T07:25:20.811 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.813+0000 7f2a3cde3640 1 --2- 192.168.123.100:0/1559224520 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2a100775d0 0x7f2a10079a90 unknown :-1 s=CLOSED pgs=95 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:20.811 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.813+0000 7f2a3cde3640 1 --2- 192.168.123.100:0/1559224520 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f2a38106930 0x7f2a381a3910 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:20.811 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.813+0000 7f2a3cde3640 1 --2- 192.168.123.100:0/1559224520 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f2a38105f70 0x7f2a3819c890 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:20.811 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.813+0000 7f2a3cde3640 1 --2- 192.168.123.100:0/1559224520 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2a38104d70 0x7f2a3819c350 unknown :-1 s=CLOSED pgs=129 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:20.811 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.813+0000 7f2a3cde3640 1 -- 192.168.123.100:0/1559224520 >> 192.168.123.100:0/1559224520 conn(0x7f2a38100520 msgr2=0x7f2a38101fe0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:25:20.812 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.813+0000 7f2a3cde3640 1 -- 192.168.123.100:0/1559224520 shutdown_connections
2026-03-10T07:25:20.812 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:20.813+0000 7f2a3cde3640 1 -- 192.168.123.100:0/1559224520 wait complete.
2026-03-10T07:25:20.866 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":53,"num_osds":8,"num_up_osds":8,"osd_up_since":1773127515,"num_in_osds":8,"osd_in_since":1773127498,"num_remapped_pgs":0}
2026-03-10T07:25:20.866 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph osd dump --format=json
2026-03-10T07:25:21.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:21 vm03 bash[23382]: cluster 2026-03-10T07:25:19.909026+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v207: 1 pgs: 1 peering; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:25:21.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:21 vm03 bash[23382]: audit 2026-03-10T07:25:20.809932+0000 mon.a (mon.0) 616 : audit [DBG] from='client.? 192.168.123.100:0/1559224520' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
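
With `osd stat` reporting 8/8 up, the task immediately fetches `ceph osd dump --format=json` (the DEBUG command above), whose output carries per-OSD state. A sketch of the kind of per-OSD check that output supports, using the real `osds[].up` / `osds[].in` flags from the osd dump JSON (the helper name and sample are illustrative, not teuthology's code):

    # Illustrative helper: verify every OSD in `ceph osd dump --format=json`
    # output is both up and in (the flags are 1/0 integers in that JSON).
    import json

    def all_osds_up_in(dump_json: str) -> bool:
        dump = json.loads(dump_json)
        return all(o['up'] == 1 and o['in'] == 1 for o in dump['osds'])

    sample = '{"epoch": 53, "osds": [{"osd": 0, "up": 1, "in": 1}]}'
    assert all_osds_up_in(sample)
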
2026-03-10T07:25:21.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:21 vm03 bash[23382]: audit 2026-03-10T07:25:21.461305+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:21.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:21 vm03 bash[23382]: audit 2026-03-10T07:25:21.466857+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:21.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:21 vm03 bash[23382]: audit 2026-03-10T07:25:21.468362+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:25:21.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:21 vm03 bash[23382]: audit 2026-03-10T07:25:21.469160+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:25:21.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:21 vm03 bash[23382]: audit 2026-03-10T07:25:21.469730+0000 mon.a (mon.0) 621 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:25:21.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:21 vm03 bash[23382]: audit 2026-03-10T07:25:21.470187+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:25:21.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:21 vm03 bash[23382]: audit 2026-03-10T07:25:21.471373+0000 mon.a (mon.0) 623 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:25:21.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:21 vm03 bash[23382]: audit 2026-03-10T07:25:21.471902+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:25:21.768 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:21 vm03 bash[23382]: audit 2026-03-10T07:25:21.476950+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:21.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:21 vm00 bash[28005]: cluster 2026-03-10T07:25:19.909026+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v207: 1 pgs: 1 peering; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:25:21.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:21 vm00 bash[28005]: audit 2026-03-10T07:25:20.809932+0000 mon.a (mon.0) 616 : audit [DBG] from='client.? 192.168.123.100:0/1559224520' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T07:25:21.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:21 vm00 bash[28005]: audit 2026-03-10T07:25:21.461305+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:21.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:21 vm00 bash[28005]: audit 2026-03-10T07:25:21.466857+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:21.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:21 vm00 bash[28005]: audit 2026-03-10T07:25:21.468362+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:25:21.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:21 vm00 bash[28005]: audit 2026-03-10T07:25:21.469160+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:25:21.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:21 vm00 bash[28005]: audit 2026-03-10T07:25:21.469730+0000 mon.a (mon.0) 621 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:25:21.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:21 vm00 bash[28005]: audit 2026-03-10T07:25:21.470187+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:25:21.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:21 vm00 bash[28005]: audit 2026-03-10T07:25:21.471373+0000 mon.a (mon.0) 623 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:25:21.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:21 vm00 bash[28005]: audit 2026-03-10T07:25:21.471902+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:25:21.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:21 vm00 bash[28005]: audit 2026-03-10T07:25:21.476950+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:21.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:21 vm00 bash[20701]: cluster 2026-03-10T07:25:19.909026+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v207: 1 pgs: 1 peering; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:25:21.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:21 vm00 bash[20701]: audit 2026-03-10T07:25:20.809932+0000 mon.a (mon.0) 616 : audit [DBG] from='client.? 192.168.123.100:0/1559224520' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T07:25:21.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:21 vm00 bash[20701]: audit 2026-03-10T07:25:21.461305+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:21.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:21 vm00 bash[20701]: audit 2026-03-10T07:25:21.466857+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:21.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:21 vm00 bash[20701]: audit 2026-03-10T07:25:21.468362+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:25:21.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:21 vm00 bash[20701]: audit 2026-03-10T07:25:21.469160+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:25:21.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:21 vm00 bash[20701]: audit 2026-03-10T07:25:21.469730+0000 mon.a (mon.0) 621 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:25:21.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:21 vm00 bash[20701]: audit 2026-03-10T07:25:21.470187+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:25:21.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:21 vm00 bash[20701]: audit 2026-03-10T07:25:21.471373+0000 mon.a (mon.0) 623 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:25:21.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:21 vm00 bash[20701]: audit 2026-03-10T07:25:21.471902+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:25:21.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:21 vm00 bash[20701]: audit 2026-03-10T07:25:21.476950+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:22.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:22 vm00 bash[28005]: cephadm 2026-03-10T07:25:21.454088+0000 mgr.y (mgr.14150) 226 : cephadm [INF] Detected new or changed devices on vm03
2026-03-10T07:25:22.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:22 vm00 bash[28005]: cephadm 2026-03-10T07:25:21.470563+0000 mgr.y (mgr.14150) 227 : cephadm [INF] Adjusting osd_memory_target on vm03 to 113.9M
2026-03-10T07:25:22.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:22 vm00 bash[28005]: cephadm 2026-03-10T07:25:21.470986+0000 mgr.y (mgr.14150) 228 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 119478988: error parsing value: Value '119478988' is below minimum 939524096
2026-03-10T07:25:22.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:22 vm00 bash[20701]: cephadm 2026-03-10T07:25:21.454088+0000 mgr.y (mgr.14150) 226 : cephadm [INF] Detected new or changed devices on vm03
2026-03-10T07:25:22.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:22 vm00 bash[20701]: cephadm 2026-03-10T07:25:21.470563+0000 mgr.y (mgr.14150) 227 : cephadm [INF] Adjusting osd_memory_target on vm03 to 113.9M
2026-03-10T07:25:22.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:22 vm00 bash[20701]: cephadm 2026-03-10T07:25:21.470986+0000 mgr.y (mgr.14150) 228 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 119478988: error parsing value: Value '119478988' is below minimum 939524096
2026-03-10T07:25:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:22 vm03 bash[23382]: cephadm 2026-03-10T07:25:21.454088+0000 mgr.y (mgr.14150) 226 : cephadm [INF] Detected new or changed devices on vm03
2026-03-10T07:25:23.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:22 vm03 bash[23382]: cephadm 2026-03-10T07:25:21.470563+0000 mgr.y (mgr.14150) 227 : cephadm [INF] Adjusting osd_memory_target on vm03 to 113.9M
2026-03-10T07:25:23.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:22 vm03 bash[23382]: cephadm 2026-03-10T07:25:21.470986+0000 mgr.y (mgr.14150) 228 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 119478988: error parsing value: Value '119478988' is below minimum 939524096
2026-03-10T07:25:23.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:23 vm00 bash[28005]: cluster 2026-03-10T07:25:21.909285+0000 mgr.y (mgr.14150) 229 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 70 KiB/s, 0 objects/s recovering
2026-03-10T07:25:23.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:23 vm00 bash[28005]: cluster 2026-03-10T07:25:22.568511+0000 mon.a (mon.0) 626 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering)
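
The repeated autotune warning above is the mgr trying to shrink `osd_memory_target` to fit these small VMs and the option's floor rejecting it: the "Adjusting ... to 113.9M" value is 119478988 bytes (119478988 / 2^20 is roughly 113.9 MiB), while the rejected minimum is 939524096 bytes (896 MiB). A re-statement of that failing check, with both constants taken from the log (the function itself is illustrative, not Ceph's code):

    # Illustrative re-statement of the failing osd_memory_target check.
    OSD_MEMORY_TARGET_MIN = 939524096   # 896 MiB, the minimum in the WRN line
    autotuned = 119478988               # 119478988 / 2**20 ~= 113.9 MiB

    def set_osd_memory_target(value: int) -> None:
        if value < OSD_MEMORY_TARGET_MIN:
            raise ValueError(
                f"Value '{value}' is below minimum {OSD_MEMORY_TARGET_MIN}")

    try:
        set_osd_memory_target(autotuned)
    except ValueError as e:
        print(f'Unable to set osd_memory_target: error parsing value: {e}')
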
peering) 2026-03-10T07:25:23.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:23 vm00 bash[28005]: cluster 2026-03-10T07:25:22.568568+0000 mon.a (mon.0) 627 : cluster [INF] Cluster is now healthy 2026-03-10T07:25:23.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:23 vm00 bash[28005]: cluster 2026-03-10T07:25:22.568568+0000 mon.a (mon.0) 627 : cluster [INF] Cluster is now healthy 2026-03-10T07:25:23.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:23 vm00 bash[20701]: cluster 2026-03-10T07:25:21.909285+0000 mgr.y (mgr.14150) 229 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 70 KiB/s, 0 objects/s recovering 2026-03-10T07:25:23.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:23 vm00 bash[20701]: cluster 2026-03-10T07:25:21.909285+0000 mgr.y (mgr.14150) 229 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 70 KiB/s, 0 objects/s recovering 2026-03-10T07:25:23.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:23 vm00 bash[20701]: cluster 2026-03-10T07:25:22.568511+0000 mon.a (mon.0) 626 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-10T07:25:23.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:23 vm00 bash[20701]: cluster 2026-03-10T07:25:22.568511+0000 mon.a (mon.0) 626 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-10T07:25:23.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:23 vm00 bash[20701]: cluster 2026-03-10T07:25:22.568568+0000 mon.a (mon.0) 627 : cluster [INF] Cluster is now healthy 2026-03-10T07:25:23.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:23 vm00 bash[20701]: cluster 2026-03-10T07:25:22.568568+0000 mon.a (mon.0) 627 : cluster [INF] Cluster is now healthy 2026-03-10T07:25:24.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:23 vm03 bash[23382]: cluster 2026-03-10T07:25:21.909285+0000 mgr.y (mgr.14150) 229 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 70 KiB/s, 0 objects/s recovering 2026-03-10T07:25:24.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:23 vm03 bash[23382]: cluster 2026-03-10T07:25:21.909285+0000 mgr.y (mgr.14150) 229 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 70 KiB/s, 0 objects/s recovering 2026-03-10T07:25:24.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:23 vm03 bash[23382]: cluster 2026-03-10T07:25:22.568511+0000 mon.a (mon.0) 626 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-10T07:25:24.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:23 vm03 bash[23382]: cluster 2026-03-10T07:25:22.568511+0000 mon.a (mon.0) 626 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-10T07:25:24.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:23 vm03 bash[23382]: cluster 2026-03-10T07:25:22.568568+0000 mon.a (mon.0) 627 : cluster [INF] Cluster is now healthy 2026-03-10T07:25:24.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:23 vm03 bash[23382]: cluster 2026-03-10T07:25:22.568568+0000 mon.a (mon.0) 627 : cluster [INF] Cluster is now healthy 2026-03-10T07:25:24.556 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config 
/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config 2026-03-10T07:25:24.715 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.713+0000 7fe11d2b5640 1 -- 192.168.123.100:0/1888731355 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe1181009c0 msgr2=0x7fe118100e40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:25:24.715 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.713+0000 7fe11d2b5640 1 --2- 192.168.123.100:0/1888731355 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe1181009c0 0x7fe118100e40 secure :-1 s=READY pgs=130 cs=0 l=1 rev1=1 crypto rx=0x7fe104009960 tx=0x7fe10402f160 comp rx=0 tx=0).stop 2026-03-10T07:25:24.715 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.713+0000 7fe11d2b5640 1 -- 192.168.123.100:0/1888731355 shutdown_connections 2026-03-10T07:25:24.715 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.713+0000 7fe11d2b5640 1 --2- 192.168.123.100:0/1888731355 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe118101380 0x7fe11810f810 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:24.715 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.713+0000 7fe11d2b5640 1 --2- 192.168.123.100:0/1888731355 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe1181009c0 0x7fe118100e40 unknown :-1 s=CLOSED pgs=130 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:24.715 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.713+0000 7fe11d2b5640 1 --2- 192.168.123.100:0/1888731355 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fe118108b30 0x7fe118108f10 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:24.715 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.713+0000 7fe11d2b5640 1 -- 192.168.123.100:0/1888731355 >> 192.168.123.100:0/1888731355 conn(0x7fe1180fc810 msgr2=0x7fe1180fec30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:25:24.715 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe11d2b5640 1 -- 192.168.123.100:0/1888731355 shutdown_connections 2026-03-10T07:25:24.715 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe11d2b5640 1 -- 192.168.123.100:0/1888731355 wait complete. 
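The repeated cephadm [WRN] above comes from the osd_memory_target auto-tuner: the per-OSD share it computes for vm03 (119478988 bytes, the "113.9M" in the preceding [INF]) falls below the hard minimum the mon enforces for osd_memory_target, so the set fails, and it recurs alongside each "Detected new or changed devices" pass. A minimal Python sketch of the failing check, using only numbers taken from the log (the 896 MiB reading of the minimum is inferred from 939524096 = 896 * 2**20):

    # Numbers copied from the cephadm [WRN] above.
    proposed = 119_478_988   # auto-tuned per-OSD target for vm03
    minimum  = 939_524_096   # osd_memory_target lower bound (896 MiB)

    print(f"proposed = {proposed / 2**20:.1f}M")  # -> 113.9M, as logged
    print(f"minimum  = {minimum  / 2**20:.0f}M")  # -> 896M
    assert proposed < minimum                     # hence "Unable to set ..."
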
2026-03-10T07:25:24.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe11d2b5640 1 Processor -- start 2026-03-10T07:25:24.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe11d2b5640 1 -- start start 2026-03-10T07:25:24.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe11d2b5640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fe1181009c0 0x7fe118102490 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:25:24.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe11d2b5640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe118101380 0x7fe1181029d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:25:24.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe11d2b5640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe118108b30 0x7fe118102f10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:25:24.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe11d2b5640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fe118112030 con 0x7fe118108b30 2026-03-10T07:25:24.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe11d2b5640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7fe118111eb0 con 0x7fe1181009c0 2026-03-10T07:25:24.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe11d2b5640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fe1181121b0 con 0x7fe118101380 2026-03-10T07:25:24.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe117fff640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fe1181009c0 0x7fe118102490 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:25:24.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe1177fe640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe118101380 0x7fe1181029d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:25:24.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe1177fe640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe118101380 0x7fe1181029d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:48200/0 (socket says 192.168.123.100:48200) 2026-03-10T07:25:24.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe1177fe640 1 -- 192.168.123.100:0/1124048865 learned_addr learned my addr 192.168.123.100:0/1124048865 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:25:24.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe11cab4640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe118108b30 0x7fe118102f10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:25:24.716 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe11cab4640 1 -- 192.168.123.100:0/1124048865 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe118101380 msgr2=0x7fe1181029d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:25:24.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe11cab4640 1 --2- 192.168.123.100:0/1124048865 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe118101380 0x7fe1181029d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:24.717 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe11cab4640 1 -- 192.168.123.100:0/1124048865 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fe1181009c0 msgr2=0x7fe118102490 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:25:24.717 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe11cab4640 1 --2- 192.168.123.100:0/1124048865 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fe1181009c0 0x7fe118102490 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:24.717 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe11cab4640 1 -- 192.168.123.100:0/1124048865 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe1181a8aa0 con 0x7fe118108b30 2026-03-10T07:25:24.717 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe11cab4640 1 --2- 192.168.123.100:0/1124048865 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe118108b30 0x7fe118102f10 secure :-1 s=READY pgs=131 cs=0 l=1 rev1=1 crypto rx=0x7fe10c007fb0 tx=0x7fe10c00d570 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:25:24.717 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe1157fa640 1 -- 192.168.123.100:0/1124048865 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fe10c018070 con 0x7fe118108b30 2026-03-10T07:25:24.717 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe1157fa640 1 -- 192.168.123.100:0/1124048865 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fe10c0040d0 con 0x7fe118108b30 2026-03-10T07:25:24.717 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe11d2b5640 1 -- 192.168.123.100:0/1124048865 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fe1181a8d90 con 0x7fe118108b30 2026-03-10T07:25:24.718 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe11d2b5640 1 -- 192.168.123.100:0/1124048865 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fe1181a92d0 con 0x7fe118108b30 2026-03-10T07:25:24.718 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.717+0000 7fe11d2b5640 1 -- 192.168.123.100:0/1124048865 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fe0e4005180 con 0x7fe118108b30 2026-03-10T07:25:24.722 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.721+0000 7fe1157fa640 1 -- 192.168.123.100:0/1124048865 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fe10c013650 con 0x7fe118108b30 2026-03-10T07:25:24.722 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.721+0000 7fe1157fa640 1 -- 192.168.123.100:0/1124048865 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7fe10c012070 con 0x7fe118108b30 2026-03-10T07:25:24.723 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.721+0000 7fe1157fa640 1 --2- 192.168.123.100:0/1124048865 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fe0f8077660 0x7fe0f8079b20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:25:24.723 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.721+0000 7fe117fff640 1 --2- 192.168.123.100:0/1124048865 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fe0f8077660 0x7fe0f8079b20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:25:24.723 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.721+0000 7fe1157fa640 1 -- 192.168.123.100:0/1124048865 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(53..53 src has 1..53) ==== 5267+0+0 (secure 0 0 0) 0x7fe10c0991e0 con 0x7fe118108b30 2026-03-10T07:25:24.723 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.725+0000 7fe1157fa640 1 -- 192.168.123.100:0/1124048865 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fe10c010070 con 0x7fe118108b30 2026-03-10T07:25:24.723 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.725+0000 7fe117fff640 1 --2- 192.168.123.100:0/1124048865 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fe0f8077660 0x7fe0f8079b20 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7fe108004480 tx=0x7fe108009290 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:25:24.821 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.821+0000 7fe11d2b5640 1 -- 192.168.123.100:0/1124048865 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7fe0e4005740 con 0x7fe118108b30 2026-03-10T07:25:24.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.821+0000 7fe1157fa640 1 -- 192.168.123.100:0/1124048865 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v53) ==== 74+0+13846 (secure 0 0 0) 0x7fe10c0663d0 con 0x7fe118108b30 2026-03-10T07:25:24.822 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T07:25:24.822 
INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":53,"fsid":"534d9c8a-1c51-11f1-ac87-d1fb9a119953","created":"2026-03-10T07:19:29.470223+0000","modified":"2026-03-10T07:25:16.828584+0000","last_up_change":"2026-03-10T07:25:15.530223+0000","last_in_change":"2026-03-10T07:24:58.209477+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T07:22:27.928486+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"103cba6f-bd9d-4169-adab-61ce873b1107","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":51,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":944390886},{"type":"v1","addr":"192.168.123.100:6803","nonce":944390886}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":944390886},{"type":"v1","addr":"192.168.123.100:6805","nonce":944390886}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":944390886},{"type":"v1","addr":"192.168.123.100:6809","nonce":944390886}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":944390886},{"type":"v1","addr":"192.168.123.100:6807","nonce":944390886}]},"public_addr":"192.168.123.100:6803/944390886","cluster_addr":"192.168.123.100:6805/944390886","heartbeat_back_addr":"192.168.123.100:6809/944390886","heartbeat_front_addr":"192.168.123.100:6807/944390886","state":["exists","up"]},{"osd":1,"uuid":"99ca2b37-ae0a-4199-ac17-e89aa50eb255","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":32,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":1715502331},{"type":"v1","addr":"192.168.123.100:6811","nonce":1715502331}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":1715502331},{"type":"v1","addr":"192.168.123.100:6813","nonce":1715502331}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":1715502331},{"type":"v1","addr":"192.168.123.100:6817","nonce":1715502331}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":1715502331},{"type":"v1","addr":"192.168.123.100:6815","nonce":1715502331}]},"public_addr":"192.168.123.100:6811/1715502331","cluster_addr":"192.168.123.100:6813/1715502331","heartbeat_back_addr":"192.168.123.100:6817/1715502331","heartbeat_front_addr":"192.168.123.100:6815/1715502331","state":["exists","up"]},{"osd":2,"uuid":"7d09342f-42e2-41fc-9c97-fa4b821fa628","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":3026087437},{"type":"v1","addr":"192.168.123.100:6819","nonce":3026087437}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":3026087437},{"type":"v1","addr":"192.168.123.100:6821","nonce":3026087437}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6824","nonce":3026087437},{"type":"v1","addr":"192.168.123.100:6825","nonce":3026087437}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":3026087437},{"type":"v1","addr":"192.168.123.100:6823","nonce":3026087437}]},"public_addr":"192.168.123.100:6819/3026087437","cluster_addr":"192.168.123.100:6821/3026087437","heartbeat_back_addr":"192.168.123.100:6825/3026087437","heartbeat_front_addr":"192.168.123.100:6823/3026087437","state":["exists","up"]},{"osd":3,"uuid":"76d2f5e3-81b1-4e08-917a-1bb3561d67e1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_cl
ean_begin":0,"last_clean_end":0,"up_from":26,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6826","nonce":2171328275},{"type":"v1","addr":"192.168.123.100:6827","nonce":2171328275}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6828","nonce":2171328275},{"type":"v1","addr":"192.168.123.100:6829","nonce":2171328275}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6832","nonce":2171328275},{"type":"v1","addr":"192.168.123.100:6833","nonce":2171328275}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6830","nonce":2171328275},{"type":"v1","addr":"192.168.123.100:6831","nonce":2171328275}]},"public_addr":"192.168.123.100:6827/2171328275","cluster_addr":"192.168.123.100:6829/2171328275","heartbeat_back_addr":"192.168.123.100:6833/2171328275","heartbeat_front_addr":"192.168.123.100:6831/2171328275","state":["exists","up"]},{"osd":4,"uuid":"f7c9bda9-fb82-468f-b7f9-e588fcc193bf","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":31,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6800","nonce":2627693272},{"type":"v1","addr":"192.168.123.103:6801","nonce":2627693272}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":2627693272},{"type":"v1","addr":"192.168.123.103:6803","nonce":2627693272}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":2627693272},{"type":"v1","addr":"192.168.123.103:6807","nonce":2627693272}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":2627693272},{"type":"v1","addr":"192.168.123.103:6805","nonce":2627693272}]},"public_addr":"192.168.123.103:6801/2627693272","cluster_addr":"192.168.123.103:6803/2627693272","heartbeat_back_addr":"192.168.123.103:6807/2627693272","heartbeat_front_addr":"192.168.123.103:6805/2627693272","state":["exists","up"]},{"osd":5,"uuid":"361df97b-1006-4ba7-a86f-36dc13915955","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":38,"up_thru":39,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":3238215945},{"type":"v1","addr":"192.168.123.103:6809","nonce":3238215945}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6810","nonce":3238215945},{"type":"v1","addr":"192.168.123.103:6811","nonce":3238215945}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6814","nonce":3238215945},{"type":"v1","addr":"192.168.123.103:6815","nonce":3238215945}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6812","nonce":3238215945},{"type":"v1","addr":"192.168.123.103:6813","nonce":3238215945}]},"public_addr":"192.168.123.103:6809/3238215945","cluster_addr":"192.168.123.103:6811/3238215945","heartbeat_back_addr":"192.168.123.103:6815/3238215945","heartbeat_front_addr":"192.168.123.103:6813/3238215945","state":["exists","up"]},{"osd":6,"uuid":"a6dfdf0a-06d2-49ea-8222-a0f8f776983e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":44,"up_thru":45,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6816","nonce":665664252},{"type":"v1","addr":"192.168.123.103:6817","nonce":665664252}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6818","nonce":665664252},{"type":"v1","addr":"192.168.123.103:681
9","nonce":665664252}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6822","nonce":665664252},{"type":"v1","addr":"192.168.123.103:6823","nonce":665664252}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6820","nonce":665664252},{"type":"v1","addr":"192.168.123.103:6821","nonce":665664252}]},"public_addr":"192.168.123.103:6817/665664252","cluster_addr":"192.168.123.103:6819/665664252","heartbeat_back_addr":"192.168.123.103:6823/665664252","heartbeat_front_addr":"192.168.123.103:6821/665664252","state":["exists","up"]},{"osd":7,"uuid":"6b79230f-59b8-4c24-91c0-cf41cbad4dc5","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":51,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6824","nonce":3078297940},{"type":"v1","addr":"192.168.123.103:6825","nonce":3078297940}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6826","nonce":3078297940},{"type":"v1","addr":"192.168.123.103:6827","nonce":3078297940}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6830","nonce":3078297940},{"type":"v1","addr":"192.168.123.103:6831","nonce":3078297940}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6828","nonce":3078297940},{"type":"v1","addr":"192.168.123.103:6829","nonce":3078297940}]},"public_addr":"192.168.123.103:6825/3078297940","cluster_addr":"192.168.123.103:6827/3078297940","heartbeat_back_addr":"192.168.123.103:6831/3078297940","heartbeat_front_addr":"192.168.123.103:6829/3078297940","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:21:17.397942+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:21:51.044130+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:22:23.888280+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:22:57.672323+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:23:31.220146+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:24:04.406099+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:24:38.896132+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:25:13.178511+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:6800/1944661180":"2026-03-11T07:19:51.853862+0000","192.168.123.100:0/532732704":"2026-03-11T07:19:51.853862+0000","192.168.123.100:0/709545184":"2026-03-11T07:19:40.638072+0000","192.168.123.100:0/1289482675":"
2026-03-11T07:19:51.853862+0000","192.168.123.100:0/57166232":"2026-03-11T07:19:40.638072+0000","192.168.123.100:6801/1944661180":"2026-03-11T07:19:51.853862+0000","192.168.123.100:0/2799046240":"2026-03-11T07:19:51.853862+0000","192.168.123.100:0/1054483043":"2026-03-11T07:19:40.638072+0000","192.168.123.100:6801/2344477988":"2026-03-11T07:19:40.638072+0000","192.168.123.100:6800/2344477988":"2026-03-11T07:19:40.638072+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T07:25:24.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.825+0000 7fe11d2b5640 1 -- 192.168.123.100:0/1124048865 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fe0f8077660 msgr2=0x7fe0f8079b20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:25:24.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.825+0000 7fe11d2b5640 1 --2- 192.168.123.100:0/1124048865 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fe0f8077660 0x7fe0f8079b20 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7fe108004480 tx=0x7fe108009290 comp rx=0 tx=0).stop 2026-03-10T07:25:24.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.825+0000 7fe11d2b5640 1 -- 192.168.123.100:0/1124048865 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe118108b30 msgr2=0x7fe118102f10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:25:24.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.825+0000 7fe11d2b5640 1 --2- 192.168.123.100:0/1124048865 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe118108b30 0x7fe118102f10 secure :-1 s=READY pgs=131 cs=0 l=1 rev1=1 crypto rx=0x7fe10c007fb0 tx=0x7fe10c00d570 comp rx=0 tx=0).stop 2026-03-10T07:25:24.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.825+0000 7fe11d2b5640 1 -- 192.168.123.100:0/1124048865 shutdown_connections 2026-03-10T07:25:24.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.825+0000 7fe11d2b5640 1 --2- 192.168.123.100:0/1124048865 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fe0f8077660 0x7fe0f8079b20 unknown :-1 s=CLOSED pgs=96 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:24.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.825+0000 7fe11d2b5640 1 --2- 192.168.123.100:0/1124048865 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe118108b30 0x7fe118102f10 unknown :-1 s=CLOSED pgs=131 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:24.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.825+0000 7fe11d2b5640 1 --2- 192.168.123.100:0/1124048865 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe118101380 0x7fe1181029d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:24.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.825+0000 7fe11d2b5640 1 --2- 192.168.123.100:0/1124048865 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fe1181009c0 0x7fe118102490 unknown :-1 s=CLOSED 
pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:24.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.825+0000 7fe11d2b5640 1 -- 192.168.123.100:0/1124048865 >> 192.168.123.100:0/1124048865 conn(0x7fe1180fc810 msgr2=0x7fe1180fe0b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:25:24.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.825+0000 7fe11d2b5640 1 -- 192.168.123.100:0/1124048865 shutdown_connections 2026-03-10T07:25:24.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:24.825+0000 7fe11d2b5640 1 -- 192.168.123.100:0/1124048865 wait complete. 2026-03-10T07:25:24.884 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-10T07:22:27.928486+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '22', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 7.889999866485596, 'score_stable': 7.889999866485596, 'optimal_score': 0.3799999952316284, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-10T07:25:24.884 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph osd pool get .mgr pg_num 2026-03-10T07:25:25.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:25 vm00 bash[28005]: cluster 2026-03-10T07:25:23.909666+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 55 KiB/s, 0 objects/s recovering 2026-03-10T07:25:25.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:25 vm00 bash[28005]: cluster 2026-03-10T07:25:23.909666+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 55 KiB/s, 0 objects/s recovering 
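The `ceph osd pool get .mgr pg_num` just dispatched can be cross-checked against the `osd dump` JSON captured above, where the lone `.mgr` pool carries "pg_num":1 with "pg_autoscale_mode":"off". A short sketch pulling the same fields, assuming a host with a working `ceph` CLI and admin keyring (the test itself reaches the cluster through `cephadm shell` instead):

    import json
    import subprocess

    # Assumes `ceph` and the admin keyring are directly available; the run
    # above wraps the same command in `cephadm shell --fsid ...`.
    dump = json.loads(subprocess.check_output(
        ["ceph", "osd", "dump", "--format", "json"]))
    for pool in dump["pools"]:
        print(pool["pool_name"], pool["pg_num"], pool["pg_autoscale_mode"])
    # Expected for this cluster: `.mgr 1 off`, matching the pg_num check below.
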
2026-03-10T07:25:25.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:25 vm00 bash[28005]: audit 2026-03-10T07:25:24.823639+0000 mon.a (mon.0) 628 : audit [DBG] from='client.? 192.168.123.100:0/1124048865' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:25:25.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:25 vm00 bash[28005]: audit 2026-03-10T07:25:24.823639+0000 mon.a (mon.0) 628 : audit [DBG] from='client.? 192.168.123.100:0/1124048865' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:25:25.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:25 vm00 bash[20701]: cluster 2026-03-10T07:25:23.909666+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 55 KiB/s, 0 objects/s recovering 2026-03-10T07:25:25.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:25 vm00 bash[20701]: cluster 2026-03-10T07:25:23.909666+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 55 KiB/s, 0 objects/s recovering 2026-03-10T07:25:25.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:25 vm00 bash[20701]: audit 2026-03-10T07:25:24.823639+0000 mon.a (mon.0) 628 : audit [DBG] from='client.? 192.168.123.100:0/1124048865' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:25:25.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:25 vm00 bash[20701]: audit 2026-03-10T07:25:24.823639+0000 mon.a (mon.0) 628 : audit [DBG] from='client.? 192.168.123.100:0/1124048865' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:25:26.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:25 vm03 bash[23382]: cluster 2026-03-10T07:25:23.909666+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 55 KiB/s, 0 objects/s recovering 2026-03-10T07:25:26.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:25 vm03 bash[23382]: cluster 2026-03-10T07:25:23.909666+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 55 KiB/s, 0 objects/s recovering 2026-03-10T07:25:26.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:25 vm03 bash[23382]: audit 2026-03-10T07:25:24.823639+0000 mon.a (mon.0) 628 : audit [DBG] from='client.? 192.168.123.100:0/1124048865' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:25:26.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:25 vm03 bash[23382]: audit 2026-03-10T07:25:24.823639+0000 mon.a (mon.0) 628 : audit [DBG] from='client.? 
192.168.123.100:0/1124048865' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:25:27.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:27 vm00 bash[28005]: cluster 2026-03-10T07:25:25.909973+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T07:25:27.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:27 vm00 bash[28005]: cluster 2026-03-10T07:25:25.909973+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T07:25:27.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:27 vm00 bash[20701]: cluster 2026-03-10T07:25:25.909973+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T07:25:27.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:27 vm00 bash[20701]: cluster 2026-03-10T07:25:25.909973+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T07:25:28.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:27 vm03 bash[23382]: cluster 2026-03-10T07:25:25.909973+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T07:25:28.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:27 vm03 bash[23382]: cluster 2026-03-10T07:25:25.909973+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T07:25:28.580 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config 2026-03-10T07:25:28.747 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.745+0000 7f41002c8640 1 -- 192.168.123.100:0/1327744054 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f40f8102110 msgr2=0x7f40f810efe0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:25:28.747 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.745+0000 7f41002c8640 1 --2- 192.168.123.100:0/1327744054 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f40f8102110 0x7f40f810efe0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f40e8009960 tx=0x7f40e802f140 comp rx=0 tx=0).stop 2026-03-10T07:25:28.747 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.745+0000 7f41002c8640 1 -- 192.168.123.100:0/1327744054 shutdown_connections 2026-03-10T07:25:28.747 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.745+0000 7f41002c8640 1 --2- 192.168.123.100:0/1327744054 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f40f810f520 0x7f40f8111910 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:28.747 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.745+0000 7f41002c8640 1 --2- 192.168.123.100:0/1327744054 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f40f8102110 0x7f40f810efe0 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:28.747 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.745+0000 7f41002c8640 1 --2- 192.168.123.100:0/1327744054 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f40f81017f0 0x7f40f8101bd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:28.747 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.745+0000 7f41002c8640 1 -- 192.168.123.100:0/1327744054 >> 192.168.123.100:0/1327744054 conn(0x7f40f80fd6e0 msgr2=0x7f40f80ffb00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:25:28.747 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f41002c8640 1 -- 192.168.123.100:0/1327744054 shutdown_connections 2026-03-10T07:25:28.747 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f41002c8640 1 -- 192.168.123.100:0/1327744054 wait complete. 2026-03-10T07:25:28.748 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f41002c8640 1 Processor -- start 2026-03-10T07:25:28.748 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f41002c8640 1 -- start start 2026-03-10T07:25:28.748 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f41002c8640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f40f81017f0 0x7f40f81a26b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:25:28.748 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f41002c8640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f40f8102110 0x7f40f81a2bf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:25:28.748 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f41002c8640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f40f810f520 0x7f40f819c870 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:25:28.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f41002c8640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f40f81143f0 con 0x7f40f81017f0 2026-03-10T07:25:28.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f41002c8640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f40f8114270 con 0x7f40f810f520 2026-03-10T07:25:28.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f41002c8640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f40f8114570 con 0x7f40f8102110 2026-03-10T07:25:28.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f40fe83e640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f40f810f520 0x7f40f819c870 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:25:28.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f40fe83e640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f40f810f520 0x7f40f819c870 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.103:3300/0 says I am v2:192.168.123.100:52378/0 (socket says 192.168.123.100:52378) 2026-03-10T07:25:28.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f40fe03d640 1 --2- 
192.168.123.100:0/3014102947 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f40f81017f0 0x7f40f81a26b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:25:28.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f40fe83e640 1 -- 192.168.123.100:0/3014102947 learned_addr learned my addr 192.168.123.100:0/3014102947 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:25:28.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f40fd83c640 1 --2- 192.168.123.100:0/3014102947 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f40f8102110 0x7f40f81a2bf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:25:28.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f40fe83e640 1 -- 192.168.123.100:0/3014102947 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f40f8102110 msgr2=0x7f40f81a2bf0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:25:28.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f40fe83e640 1 --2- 192.168.123.100:0/3014102947 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f40f8102110 0x7f40f81a2bf0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:28.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f40fe83e640 1 -- 192.168.123.100:0/3014102947 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f40f81017f0 msgr2=0x7f40f81a26b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:25:28.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f40fe83e640 1 --2- 192.168.123.100:0/3014102947 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f40f81017f0 0x7f40f81a26b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:28.750 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f40fe83e640 1 -- 192.168.123.100:0/3014102947 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f40f819d040 con 0x7f40f810f520 2026-03-10T07:25:28.750 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f40fe03d640 1 --2- 192.168.123.100:0/3014102947 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f40f81017f0 0x7f40f81a26b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-10T07:25:28.750 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f40fd83c640 1 --2- 192.168.123.100:0/3014102947 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f40f8102110 0x7f40f81a2bf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
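The messenger chatter above is the usual client bootstrap: the shell's CLI connects to every mon in the monmap at once, keeps the first session to finish the msgr2 banner/hello/auth exchange (mon.b here), marks down the losers, then subscribes to config and monmap before issuing its one command. A sketch of the same round trip through the python-rados binding — paths are hypothetical defaults, and it assumes python3-rados plus an admin keyring on the host:

    import json
    import rados

    # Connect (librados performs the multi-mon dance logged above), run the
    # same mon command the shell is about to send, then tear down.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "osd pool get", "pool": ".mgr",
                        "var": "pg_num", "format": "json"}), b"")
        print(ret, out)  # expect ret == 0 and pg_num 1, as in the log below
    finally:
        cluster.shutdown()
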
2026-03-10T07:25:28.750 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f40fe83e640 1 --2- 192.168.123.100:0/3014102947 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f40f810f520 0x7f40f819c870 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f40f400e3c0 tx=0x7f40f400e890 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:25:28.750 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f40e77fe640 1 -- 192.168.123.100:0/3014102947 <== mon.1 v2:192.168.123.103:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f40f40040d0 con 0x7f40f810f520 2026-03-10T07:25:28.750 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f40e77fe640 1 -- 192.168.123.100:0/3014102947 <== mon.1 v2:192.168.123.103:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f40f4018070 con 0x7f40f810f520 2026-03-10T07:25:28.750 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f40e77fe640 1 -- 192.168.123.100:0/3014102947 <== mon.1 v2:192.168.123.103:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f40f40136d0 con 0x7f40f810f520 2026-03-10T07:25:28.750 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f41002c8640 1 -- 192.168.123.100:0/3014102947 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f40f819d2d0 con 0x7f40f810f520 2026-03-10T07:25:28.751 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.749+0000 7f41002c8640 1 -- 192.168.123.100:0/3014102947 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f40f81a9580 con 0x7f40f810f520 2026-03-10T07:25:28.751 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.753+0000 7f41002c8640 1 -- 192.168.123.100:0/3014102947 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f40c8005180 con 0x7f40f810f520 2026-03-10T07:25:28.752 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.753+0000 7f40e77fe640 1 -- 192.168.123.100:0/3014102947 <== mon.1 v2:192.168.123.103:3300/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7f40f4004b00 con 0x7f40f810f520 2026-03-10T07:25:28.752 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.753+0000 7f40e77fe640 1 --2- 192.168.123.100:0/3014102947 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f40cc077580 0x7f40cc079a40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:25:28.752 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.753+0000 7f40e77fe640 1 -- 192.168.123.100:0/3014102947 <== mon.1 v2:192.168.123.103:3300/0 5 ==== osd_map(53..53 src has 1..53) ==== 5267+0+0 (secure 0 0 0) 0x7f40f4098fe0 con 0x7f40f810f520 2026-03-10T07:25:28.752 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.753+0000 7f40fe03d640 1 --2- 192.168.123.100:0/3014102947 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f40cc077580 0x7f40cc079a40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:25:28.753 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.753+0000 7f40fe03d640 1 --2- 192.168.123.100:0/3014102947 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f40cc077580 0x7f40cc079a40 
secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7f40ec004750 tx=0x7f40ec009290 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:25:28.755 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.753+0000 7f40e77fe640 1 -- 192.168.123.100:0/3014102947 <== mon.1 v2:192.168.123.103:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f40f4010040 con 0x7f40f810f520 2026-03-10T07:25:28.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.857+0000 7f41002c8640 1 -- 192.168.123.100:0/3014102947 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_command({"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"} v 0) -- 0x7f40c8005740 con 0x7f40f810f520 2026-03-10T07:25:28.857 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.857+0000 7f40e77fe640 1 -- 192.168.123.100:0/3014102947 <== mon.1 v2:192.168.123.103:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]=0 v53) ==== 93+0+10 (secure 0 0 0) 0x7f40f40661d0 con 0x7f40f810f520 2026-03-10T07:25:28.857 INFO:teuthology.orchestra.run.vm00.stdout:pg_num: 1 2026-03-10T07:25:28.859 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.861+0000 7f41002c8640 1 -- 192.168.123.100:0/3014102947 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f40cc077580 msgr2=0x7f40cc079a40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:25:28.859 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.861+0000 7f41002c8640 1 --2- 192.168.123.100:0/3014102947 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f40cc077580 0x7f40cc079a40 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7f40ec004750 tx=0x7f40ec009290 comp rx=0 tx=0).stop 2026-03-10T07:25:28.859 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.861+0000 7f41002c8640 1 -- 192.168.123.100:0/3014102947 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f40f810f520 msgr2=0x7f40f819c870 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:25:28.859 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.861+0000 7f41002c8640 1 --2- 192.168.123.100:0/3014102947 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f40f810f520 0x7f40f819c870 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f40f400e3c0 tx=0x7f40f400e890 comp rx=0 tx=0).stop 2026-03-10T07:25:28.860 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.861+0000 7f41002c8640 1 -- 192.168.123.100:0/3014102947 shutdown_connections 2026-03-10T07:25:28.860 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.861+0000 7f41002c8640 1 --2- 192.168.123.100:0/3014102947 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f40cc077580 0x7f40cc079a40 unknown :-1 s=CLOSED pgs=97 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:28.860 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.861+0000 7f41002c8640 1 --2- 192.168.123.100:0/3014102947 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f40f810f520 0x7f40f819c870 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:28.860 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.861+0000 7f41002c8640 1 --2- 192.168.123.100:0/3014102947 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] 
conn(0x7f40f8102110 0x7f40f81a2bf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:28.860 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.861+0000 7f41002c8640 1 --2- 192.168.123.100:0/3014102947 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f40f81017f0 0x7f40f81a26b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:28.860 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.861+0000 7f41002c8640 1 -- 192.168.123.100:0/3014102947 >> 192.168.123.100:0/3014102947 conn(0x7f40f80fd6e0 msgr2=0x7f40f810fcc0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:25:28.860 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.861+0000 7f41002c8640 1 -- 192.168.123.100:0/3014102947 shutdown_connections
2026-03-10T07:25:28.860 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:25:28.861+0000 7f41002c8640 1 -- 192.168.123.100:0/3014102947 wait complete.
2026-03-10T07:25:28.912 INFO:tasks.cephadm:Adding ceph.rgw.foo.a on vm00
2026-03-10T07:25:28.912 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph orch apply rgw foo.a --placement '1;vm00=foo.a'
2026-03-10T07:25:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:29 vm00 bash[28005]: cluster 2026-03-10T07:25:27.910242+0000 mgr.y (mgr.14150) 232 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 41 KiB/s, 0 objects/s recovering
2026-03-10T07:25:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:29 vm00 bash[28005]: audit 2026-03-10T07:25:28.858689+0000 mon.b (mon.1) 17 : audit [DBG] from='client.? 192.168.123.100:0/3014102947' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch
2026-03-10T07:25:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:29 vm00 bash[20701]: cluster 2026-03-10T07:25:27.910242+0000 mgr.y (mgr.14150) 232 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 41 KiB/s, 0 objects/s recovering
2026-03-10T07:25:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:29 vm00 bash[20701]: audit 2026-03-10T07:25:28.858689+0000 mon.b (mon.1) 17 : audit [DBG] from='client.? 192.168.123.100:0/3014102947' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch
2026-03-10T07:25:30.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:29 vm03 bash[23382]: cluster 2026-03-10T07:25:27.910242+0000 mgr.y (mgr.14150) 232 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 41 KiB/s, 0 objects/s recovering
2026-03-10T07:25:30.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:29 vm03 bash[23382]: audit 2026-03-10T07:25:28.858689+0000 mon.b (mon.1) 17 : audit [DBG] from='client.? 192.168.123.100:0/3014102947' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch
2026-03-10T07:25:30.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:30 vm00 bash[28005]: cluster 2026-03-10T07:25:29.910508+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-10T07:25:30.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:30 vm00 bash[20701]: cluster 2026-03-10T07:25:29.910508+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-10T07:25:31.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:30 vm03 bash[23382]: cluster 2026-03-10T07:25:29.910508+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-10T07:25:33.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:32 vm03 bash[23382]: cluster 2026-03-10T07:25:31.910823+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-10T07:25:33.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:32 vm00 bash[28005]: cluster 2026-03-10T07:25:31.910823+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-10T07:25:33.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:32 vm00 bash[20701]: cluster 2026-03-10T07:25:31.910823+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-10T07:25:33.548 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.b/config
2026-03-10T07:25:33.715 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.713+0000 7fae734c2640 1 -- 192.168.123.103:0/2593021119 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fae6c102ce0 msgr2=0x7fae6c109570 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:25:33.715 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.713+0000 7fae734c2640 1 --2- 192.168.123.103:0/2593021119 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fae6c102ce0 0x7fae6c109570 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7fae6800c210 tx=0x7fae680305b0 comp rx=0 tx=0).stop
2026-03-10T07:25:33.715 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.713+0000 7fae734c2640 1 -- 192.168.123.103:0/2593021119 shutdown_connections
2026-03-10T07:25:33.715 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.713+0000 7fae734c2640 1 --2- 192.168.123.103:0/2593021119 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fae6c102ce0 0x7fae6c109570 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:33.715 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.713+0000 7fae734c2640 1 --2- 192.168.123.103:0/2593021119 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae6c102320 0x7fae6c1027a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:33.715 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.713+0000 7fae734c2640 1 --2- 192.168.123.103:0/2593021119 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fae6c101120 0x7fae6c101520 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0
comp rx=0 tx=0).stop 2026-03-10T07:25:33.715 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.713+0000 7fae734c2640 1 -- 192.168.123.103:0/2593021119 >> 192.168.123.103:0/2593021119 conn(0x7fae6c0fc8d0 msgr2=0x7fae6c0fecf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:25:33.715 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.713+0000 7fae734c2640 1 -- 192.168.123.103:0/2593021119 shutdown_connections 2026-03-10T07:25:33.715 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.713+0000 7fae734c2640 1 -- 192.168.123.103:0/2593021119 wait complete. 2026-03-10T07:25:33.716 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.713+0000 7fae734c2640 1 Processor -- start 2026-03-10T07:25:33.716 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae734c2640 1 -- start start 2026-03-10T07:25:33.717 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae734c2640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fae6c101120 0x7fae6c111320 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:25:33.717 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae734c2640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae6c102320 0x7fae6c111860 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:25:33.717 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae734c2640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fae6c102ce0 0x7fae6c1188e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:25:33.717 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae734c2640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fae6c10b760 con 0x7fae6c102320 2026-03-10T07:25:33.717 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae734c2640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7fae6c10b5e0 con 0x7fae6c102ce0 2026-03-10T07:25:33.717 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae734c2640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fae6c10b8e0 con 0x7fae6c101120 2026-03-10T07:25:33.717 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae70a36640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae6c102320 0x7fae6c111860 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:25:33.717 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae71a38640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fae6c102ce0 0x7fae6c1188e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:25:33.718 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae71a38640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fae6c102ce0 0x7fae6c1188e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.103:3300/0 says I am v2:192.168.123.103:56648/0 (socket says 192.168.123.103:56648) 2026-03-10T07:25:33.718 
INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae71a38640 1 -- 192.168.123.103:0/3273768765 learned_addr learned my addr 192.168.123.103:0/3273768765 (peer_addr_for_me v2:192.168.123.103:0/0) 2026-03-10T07:25:33.718 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae71a38640 1 -- 192.168.123.103:0/3273768765 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fae6c101120 msgr2=0x7fae6c111320 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-10T07:25:33.718 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae71a38640 1 --2- 192.168.123.103:0/3273768765 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fae6c101120 0x7fae6c111320 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:33.718 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae71a38640 1 -- 192.168.123.103:0/3273768765 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae6c102320 msgr2=0x7fae6c111860 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:25:33.718 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae71a38640 1 --2- 192.168.123.103:0/3273768765 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae6c102320 0x7fae6c111860 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:33.718 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae71a38640 1 -- 192.168.123.103:0/3273768765 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fae6c118fe0 con 0x7fae6c102ce0 2026-03-10T07:25:33.718 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae71a38640 1 --2- 192.168.123.103:0/3273768765 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fae6c102ce0 0x7fae6c1188e0 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7fae6800a310 tx=0x7fae6800a340 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:25:33.718 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae5a7fc640 1 -- 192.168.123.103:0/3273768765 <== mon.1 v2:192.168.123.103:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fae68008040 con 0x7fae6c102ce0 2026-03-10T07:25:33.718 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae734c2640 1 -- 192.168.123.103:0/3273768765 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fae6c06c890 con 0x7fae6c102ce0 2026-03-10T07:25:33.718 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae734c2640 1 -- 192.168.123.103:0/3273768765 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fae6c06cd80 con 0x7fae6c102ce0 2026-03-10T07:25:33.718 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae5a7fc640 1 -- 192.168.123.103:0/3273768765 <== mon.1 v2:192.168.123.103:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fae68004030 con 0x7fae6c102ce0 2026-03-10T07:25:33.718 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae5a7fc640 1 -- 192.168.123.103:0/3273768765 <== mon.1 v2:192.168.123.103:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fae68007720 con 0x7fae6c102ce0 2026-03-10T07:25:33.719 
INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae734c2640 1 -- 192.168.123.103:0/3273768765 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fae6c10bf40 con 0x7fae6c102ce0 2026-03-10T07:25:33.720 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.717+0000 7fae5a7fc640 1 -- 192.168.123.103:0/3273768765 <== mon.1 v2:192.168.123.103:3300/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7fae680078c0 con 0x7fae6c102ce0 2026-03-10T07:25:33.723 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.721+0000 7fae5a7fc640 1 --2- 192.168.123.103:0/3273768765 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fae40077660 0x7fae40079b20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:25:33.723 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.721+0000 7fae71237640 1 --2- 192.168.123.103:0/3273768765 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fae40077660 0x7fae40079b20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:25:33.723 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.721+0000 7fae5a7fc640 1 -- 192.168.123.103:0/3273768765 <== mon.1 v2:192.168.123.103:3300/0 5 ==== osd_map(53..53 src has 1..53) ==== 5267+0+0 (secure 0 0 0) 0x7fae680be770 con 0x7fae6c102ce0 2026-03-10T07:25:33.723 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.721+0000 7fae71237640 1 --2- 192.168.123.103:0/3273768765 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fae40077660 0x7fae40079b20 secure :-1 s=READY pgs=98 cs=0 l=1 rev1=1 crypto rx=0x7fae6c102180 tx=0x7fae60006cd0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:25:33.723 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.721+0000 7fae5a7fc640 1 -- 192.168.123.103:0/3273768765 <== mon.1 v2:192.168.123.103:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fae68087cb0 con 0x7fae6c102ce0 2026-03-10T07:25:33.823 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.821+0000 7fae734c2640 1 -- 192.168.123.103:0/3273768765 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- mgr_command(tid 0: {"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm00=foo.a", "target": ["mon-mgr", ""]}) -- 0x7fae6c107c50 con 0x7fae40077660 2026-03-10T07:25:33.871 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.869+0000 7fae5a7fc640 1 -- 192.168.123.103:0/3273768765 <== mgr.14150 v2:192.168.123.100:6800/2669938860 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+30 (secure 0 0 0) 0x7fae6c107c50 con 0x7fae40077660 2026-03-10T07:25:33.871 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled rgw.foo.a update... 
2026-03-10T07:25:33.875 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.873+0000 7fae734c2640 1 -- 192.168.123.103:0/3273768765 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fae40077660 msgr2=0x7fae40079b20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:25:33.875 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.873+0000 7fae734c2640 1 --2- 192.168.123.103:0/3273768765 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fae40077660 0x7fae40079b20 secure :-1 s=READY pgs=98 cs=0 l=1 rev1=1 crypto rx=0x7fae6c102180 tx=0x7fae60006cd0 comp rx=0 tx=0).stop 2026-03-10T07:25:33.875 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.873+0000 7fae734c2640 1 -- 192.168.123.103:0/3273768765 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fae6c102ce0 msgr2=0x7fae6c1188e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:25:33.875 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.873+0000 7fae734c2640 1 --2- 192.168.123.103:0/3273768765 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fae6c102ce0 0x7fae6c1188e0 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7fae6800a310 tx=0x7fae6800a340 comp rx=0 tx=0).stop 2026-03-10T07:25:33.875 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.873+0000 7fae734c2640 1 -- 192.168.123.103:0/3273768765 shutdown_connections 2026-03-10T07:25:33.875 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.873+0000 7fae734c2640 1 --2- 192.168.123.103:0/3273768765 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fae40077660 0x7fae40079b20 unknown :-1 s=CLOSED pgs=98 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:33.875 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.873+0000 7fae734c2640 1 --2- 192.168.123.103:0/3273768765 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fae6c102ce0 0x7fae6c1188e0 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:33.875 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.873+0000 7fae734c2640 1 --2- 192.168.123.103:0/3273768765 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fae6c102320 0x7fae6c111860 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:33.875 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.873+0000 7fae734c2640 1 --2- 192.168.123.103:0/3273768765 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fae6c101120 0x7fae6c111320 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:33.875 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.873+0000 7fae734c2640 1 -- 192.168.123.103:0/3273768765 >> 192.168.123.103:0/3273768765 conn(0x7fae6c0fc8d0 msgr2=0x7fae6c0fe470 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:25:33.875 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.873+0000 7fae734c2640 1 -- 192.168.123.103:0/3273768765 shutdown_connections 2026-03-10T07:25:33.875 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:33.873+0000 7fae734c2640 1 -- 192.168.123.103:0/3273768765 wait complete. 
2026-03-10T07:25:33.938 DEBUG:teuthology.orchestra.run.vm00:rgw.foo.a> sudo journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@rgw.foo.a.service 2026-03-10T07:25:33.939 INFO:tasks.cephadm:Adding ceph.iscsi.iscsi.a on vm03 2026-03-10T07:25:33.939 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph osd pool create datapool 3 3 replicated 2026-03-10T07:25:34.702 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:25:34 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:25:34.703 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:25:34 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:25:34.703 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:25:34.703 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:25:34.703 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 07:25:34 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:25:34.703 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 07:25:34 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:25:34.703 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 07:25:34 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:25:34.703 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 07:25:34 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:25:34.703 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 07:25:34 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:25:34.703 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 07:25:34 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:25:34.703 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 07:25:34 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:25:34.703 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 07:25:34 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:25:34.703 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 10 07:25:34 vm00 systemd[1]: Started Ceph rgw.foo.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953. 2026-03-10T07:25:34.703 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:25:34.703 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:25:35.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:33.826343+0000 mgr.y (mgr.14150) 235 : audit [DBG] from='client.24274 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm00=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:33.826343+0000 mgr.y (mgr.14150) 235 : audit [DBG] from='client.24274 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm00=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: cephadm 2026-03-10T07:25:33.827335+0000 mgr.y (mgr.14150) 236 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: cephadm 2026-03-10T07:25:33.827335+0000 mgr.y (mgr.14150) 236 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:33.872524+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:33.872524+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:33.873991+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:33.873991+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:33.875916+0000 mon.a (mon.0) 631 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:33.875916+0000 mon.a (mon.0) 631 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:33.876649+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:33.876649+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:33.897988+0000 mon.a (mon.0) 
633 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:33.897988+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:33.899982+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:33.899982+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:33.902345+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:33.826343+0000 mgr.y (mgr.14150) 235 : audit [DBG] from='client.24274 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm00=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:33.826343+0000 mgr.y (mgr.14150) 235 : audit [DBG] from='client.24274 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm00=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: cephadm 2026-03-10T07:25:33.827335+0000 mgr.y (mgr.14150) 236 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: cephadm 2026-03-10T07:25:33.827335+0000 mgr.y (mgr.14150) 236 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:33.872524+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:33.872524+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:33.873991+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:33.873991+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 
192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:33.875916+0000 mon.a (mon.0) 631 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:33.875916+0000 mon.a (mon.0) 631 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:33.876649+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:33.876649+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:33.897988+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:33.897988+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:33.899982+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:33.899982+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:33.902345+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:33.902345+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:33.906858+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 
07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:33.906858+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:33.908218+0000 mon.a (mon.0) 637 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:33.908218+0000 mon.a (mon.0) 637 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: cephadm 2026-03-10T07:25:33.908835+0000 mgr.y (mgr.14150) 237 : cephadm [INF] Deploying daemon rgw.foo.a on vm00 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: cephadm 2026-03-10T07:25:33.908835+0000 mgr.y (mgr.14150) 237 : cephadm [INF] Deploying daemon rgw.foo.a on vm00 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: cluster 2026-03-10T07:25:33.911491+0000 mgr.y (mgr.14150) 238 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: cluster 2026-03-10T07:25:33.911491+0000 mgr.y (mgr.14150) 238 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:34.736903+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:34.736903+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:34.743920+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:34.743920+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:34.753078+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:34.753078+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:34.761818+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:34.761818+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.135 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:34.768396+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:34.768396+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:34.781417+0000 mon.a (mon.0) 643 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:35.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:34 vm00 bash[28005]: audit 2026-03-10T07:25:34.781417+0000 mon.a (mon.0) 643 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:35.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:33.902345+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T07:25:35.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:33.906858+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:33.906858+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:33.908218+0000 mon.a (mon.0) 637 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:35.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:33.908218+0000 mon.a (mon.0) 637 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:35.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: cephadm 2026-03-10T07:25:33.908835+0000 mgr.y (mgr.14150) 237 : cephadm [INF] Deploying daemon rgw.foo.a on vm00 2026-03-10T07:25:35.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: cephadm 2026-03-10T07:25:33.908835+0000 mgr.y (mgr.14150) 237 : cephadm [INF] Deploying daemon rgw.foo.a on vm00 2026-03-10T07:25:35.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: cluster 2026-03-10T07:25:33.911491+0000 mgr.y (mgr.14150) 238 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:25:35.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: cluster 2026-03-10T07:25:33.911491+0000 mgr.y (mgr.14150) 238 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:25:35.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:34.736903+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 
192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:34.736903+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:34.743920+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:34.743920+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:34.753078+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:34.753078+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:34.761818+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:34.761818+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:34.768396+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:34.768396+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:34.781417+0000 mon.a (mon.0) 643 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:35.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:34 vm00 bash[20701]: audit 2026-03-10T07:25:34.781417+0000 mon.a (mon.0) 643 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:35.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:33.826343+0000 mgr.y (mgr.14150) 235 : audit [DBG] from='client.24274 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm00=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:25:35.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:33.826343+0000 mgr.y (mgr.14150) 235 : audit [DBG] from='client.24274 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm00=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:25:35.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: cephadm 2026-03-10T07:25:33.827335+0000 mgr.y (mgr.14150) 236 : cephadm [INF] Saving service rgw.foo.a spec with placement 
vm00=foo.a;count:1 2026-03-10T07:25:35.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: cephadm 2026-03-10T07:25:33.827335+0000 mgr.y (mgr.14150) 236 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1 2026-03-10T07:25:35.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:33.872524+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:33.872524+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:33.873991+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:35.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:33.873991+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:35.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:33.875916+0000 mon.a (mon.0) 631 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:35.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:33.875916+0000 mon.a (mon.0) 631 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:35.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:33.876649+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:33.876649+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:33.897988+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:33.897988+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:33.899982+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:33.899982+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", 
"entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:33.902345+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:33.902345+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:33.906858+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:33.906858+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:33.908218+0000 mon.a (mon.0) 637 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:33.908218+0000 mon.a (mon.0) 637 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: cephadm 2026-03-10T07:25:33.908835+0000 mgr.y (mgr.14150) 237 : cephadm [INF] Deploying daemon rgw.foo.a on vm00 2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: cephadm 2026-03-10T07:25:33.908835+0000 mgr.y (mgr.14150) 237 : cephadm [INF] Deploying daemon rgw.foo.a on vm00 2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: cluster 2026-03-10T07:25:33.911491+0000 mgr.y (mgr.14150) 238 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: cluster 2026-03-10T07:25:33.911491+0000 mgr.y (mgr.14150) 238 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:34.736903+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:34.736903+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:34.743920+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 
2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:34.753078+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:34.761818+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:34.768396+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:35.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:34 vm03 bash[23382]: audit 2026-03-10T07:25:34.781417+0000 mon.a (mon.0) 643 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:25:36.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:36 vm00 bash[28005]: cephadm 2026-03-10T07:25:34.753952+0000 mgr.y (mgr.14150) 239 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1
2026-03-10T07:25:36.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:36 vm00 bash[28005]: audit 2026-03-10T07:25:35.077197+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:36.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:36 vm00 bash[28005]: cluster 2026-03-10T07:25:35.925106+0000 mon.a (mon.0) 645 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in
2026-03-10T07:25:36.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:36 vm00 bash[28005]: audit 2026-03-10T07:25:35.928757+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.100:0/2639354310' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
2026-03-10T07:25:36.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:36 vm00 bash[28005]: audit 2026-03-10T07:25:35.932567+0000 mon.a (mon.0) 646 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
2026-03-10T07:25:36.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:36 vm00 bash[20701]: cephadm 2026-03-10T07:25:34.753952+0000 mgr.y (mgr.14150) 239 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1
2026-03-10T07:25:36.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:36 vm00 bash[20701]: audit 2026-03-10T07:25:35.077197+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:36.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:36 vm00 bash[20701]: cluster 2026-03-10T07:25:35.925106+0000 mon.a (mon.0) 645 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in
2026-03-10T07:25:36.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:36 vm00 bash[20701]: audit 2026-03-10T07:25:35.928757+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.100:0/2639354310' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
2026-03-10T07:25:36.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:36 vm00 bash[20701]: audit 2026-03-10T07:25:35.932567+0000 mon.a (mon.0) 646 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
2026-03-10T07:25:36.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:36 vm03 bash[23382]: cephadm 2026-03-10T07:25:34.753952+0000 mgr.y (mgr.14150) 239 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1
2026-03-10T07:25:36.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:36 vm03 bash[23382]: audit 2026-03-10T07:25:35.077197+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:36.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:36 vm03 bash[23382]: cluster 2026-03-10T07:25:35.925106+0000 mon.a (mon.0) 645 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in
2026-03-10T07:25:36.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:36 vm03 bash[23382]: audit 2026-03-10T07:25:35.928757+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.100:0/2639354310' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
2026-03-10T07:25:36.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:36 vm03 bash[23382]: audit 2026-03-10T07:25:35.932567+0000 mon.a (mon.0) 646 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T07:25:37.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:37 vm00 bash[28005]: cluster 2026-03-10T07:25:35.911818+0000 mgr.y (mgr.14150) 240 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:25:37.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:37 vm00 bash[28005]: cluster 2026-03-10T07:25:35.911818+0000 mgr.y (mgr.14150) 240 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:25:37.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:37 vm00 bash[28005]: audit 2026-03-10T07:25:36.916323+0000 mon.a (mon.0) 647 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T07:25:37.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:37 vm00 bash[28005]: audit 2026-03-10T07:25:36.916323+0000 mon.a (mon.0) 647 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T07:25:37.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:37 vm00 bash[28005]: cluster 2026-03-10T07:25:36.925086+0000 mon.a (mon.0) 648 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-10T07:25:37.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:37 vm00 bash[28005]: cluster 2026-03-10T07:25:36.925086+0000 mon.a (mon.0) 648 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-10T07:25:37.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:37 vm00 bash[20701]: cluster 2026-03-10T07:25:35.911818+0000 mgr.y (mgr.14150) 240 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:25:37.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:37 vm00 bash[20701]: cluster 2026-03-10T07:25:35.911818+0000 mgr.y (mgr.14150) 240 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:25:37.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:37 vm00 bash[20701]: audit 2026-03-10T07:25:36.916323+0000 mon.a (mon.0) 647 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T07:25:37.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:37 vm00 bash[20701]: audit 2026-03-10T07:25:36.916323+0000 mon.a (mon.0) 647 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T07:25:37.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:37 vm00 bash[20701]: cluster 2026-03-10T07:25:36.925086+0000 mon.a (mon.0) 648 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-10T07:25:37.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:37 vm00 bash[20701]: cluster 2026-03-10T07:25:36.925086+0000 mon.a (mon.0) 648 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-10T07:25:37.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:37 vm03 bash[23382]: cluster 2026-03-10T07:25:35.911818+0000 mgr.y (mgr.14150) 240 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:25:37.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:37 vm03 bash[23382]: cluster 2026-03-10T07:25:35.911818+0000 mgr.y (mgr.14150) 240 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:25:37.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:37 vm03 bash[23382]: audit 2026-03-10T07:25:36.916323+0000 mon.a (mon.0) 647 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T07:25:37.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:37 vm03 bash[23382]: audit 2026-03-10T07:25:36.916323+0000 mon.a (mon.0) 647 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T07:25:37.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:37 vm03 bash[23382]: cluster 2026-03-10T07:25:36.925086+0000 mon.a (mon.0) 648 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-10T07:25:37.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:37 vm03 bash[23382]: cluster 2026-03-10T07:25:36.925086+0000 mon.a (mon.0) 648 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-10T07:25:38.570 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.b/config 2026-03-10T07:25:38.740 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.737+0000 7f219147b640 1 -- 192.168.123.103:0/3885496118 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f218c105f70 msgr2=0x7f218c1063f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:25:38.741 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.737+0000 7f219147b640 1 --2- 192.168.123.103:0/3885496118 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f218c105f70 0x7f218c1063f0 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f2174009a30 tx=0x7f217402f240 comp rx=0 tx=0).stop 2026-03-10T07:25:38.741 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f219147b640 1 -- 192.168.123.103:0/3885496118 shutdown_connections 2026-03-10T07:25:38.741 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f219147b640 1 --2- 192.168.123.103:0/3885496118 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f218c106930 0x7f218c10d1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:38.741 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f219147b640 1 --2- 192.168.123.103:0/3885496118 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f218c105f70 
2026-03-10T07:25:38.741 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f219147b640 1 --2- 192.168.123.103:0/3885496118 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f218c104d70 0x7f218c105170 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:38.741 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f219147b640 1 -- 192.168.123.103:0/3885496118 >> 192.168.123.103:0/3885496118 conn(0x7f218c100520 msgr2=0x7f218c102940 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:25:38.741 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f219147b640 1 -- 192.168.123.103:0/3885496118 shutdown_connections
2026-03-10T07:25:38.741 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f219147b640 1 -- 192.168.123.103:0/3885496118 wait complete.
2026-03-10T07:25:38.742 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f219147b640 1 Processor -- start
2026-03-10T07:25:38.742 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f219147b640 1 -- start start
2026-03-10T07:25:38.742 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f219147b640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f218c104d70 0x7f218c07aea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:25:38.742 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f219147b640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f218c105f70 0x7f218c07b3e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:25:38.743 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f218affd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f218c104d70 0x7f218c07aea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:25:38.743 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f218affd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f218c104d70 0x7f218c07aea0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.103:35262/0 (socket says 192.168.123.103:35262)
2026-03-10T07:25:38.743 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f218a7fc640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f218c105f70 0x7f218c07b3e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:25:38.743 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f219147b640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f218c106930 0x7f218c0779a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:25:38.743 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f219147b640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f218c10fc50 con 0x7f218c104d70
2026-03-10T07:25:38.743 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f219147b640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f218c10fad0 con 0x7f218c105f70
2026-03-10T07:25:38.743 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f218affd640 1 -- 192.168.123.103:0/4126329354 learned_addr learned my addr 192.168.123.103:0/4126329354 (peer_addr_for_me v2:192.168.123.103:0/0)
2026-03-10T07:25:38.743 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f219147b640 1 -- 192.168.123.103:0/4126329354 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f218c10fdd0 con 0x7f218c106930
2026-03-10T07:25:38.743 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f218b7fe640 1 --2- 192.168.123.103:0/4126329354 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f218c106930 0x7f218c0779a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:25:38.743 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f218a7fc640 1 -- 192.168.123.103:0/4126329354 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f218c106930 msgr2=0x7f218c0779a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:25:38.744 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f218a7fc640 1 --2- 192.168.123.103:0/4126329354 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f218c106930 0x7f218c0779a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:38.745 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f218a7fc640 1 -- 192.168.123.103:0/4126329354 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f218c104d70 msgr2=0x7f218c07aea0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:25:38.745 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f218a7fc640 1 --2- 192.168.123.103:0/4126329354 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f218c104d70 0x7f218c07aea0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:38.746 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f218a7fc640 1 -- 192.168.123.103:0/4126329354 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f218c078260 con 0x7f218c105f70
2026-03-10T07:25:38.746 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f218affd640 1 --2- 192.168.123.103:0/4126329354 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f218c104d70 0x7f218c07aea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-10T07:25:38.746 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f218a7fc640 1 --2- 192.168.123.103:0/4126329354 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f218c105f70 0x7f218c07b3e0 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f2174009a00 tx=0x7f2174004290 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:25:38.746 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f216bfff640 1 -- 192.168.123.103:0/4126329354 <== mon.1 v2:192.168.123.103:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f2174004400 con 0x7f218c105f70
2026-03-10T07:25:38.746 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f219147b640 1 -- 192.168.123.103:0/4126329354 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f218c0784f0 con 0x7f218c105f70
2026-03-10T07:25:38.746 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f219147b640 1 -- 192.168.123.103:0/4126329354 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f218c1a8d50 con 0x7f218c105f70
2026-03-10T07:25:38.746 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f216bfff640 1 -- 192.168.123.103:0/4126329354 <== mon.1 v2:192.168.123.103:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f2174033070 con 0x7f218c105f70
2026-03-10T07:25:38.746 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.741+0000 7f216bfff640 1 -- 192.168.123.103:0/4126329354 <== mon.1 v2:192.168.123.103:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f21740413c0 con 0x7f218c105f70
2026-03-10T07:25:38.747 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.745+0000 7f218b7fe640 1 --2- 192.168.123.103:0/4126329354 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f218c106930 0x7f218c0779a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T07:25:38.747 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.745+0000 7f216bfff640 1 -- 192.168.123.103:0/4126329354 <== mon.1 v2:192.168.123.103:3300/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7f2174041560 con 0x7f218c105f70
2026-03-10T07:25:38.747 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.745+0000 7f216bfff640 1 --2- 192.168.123.103:0/4126329354 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2158077610 0x7f2158079ad0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:25:38.750 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.745+0000 7f216bfff640 1 -- 192.168.123.103:0/4126329354 <== mon.1 v2:192.168.123.103:3300/0 5 ==== osd_map(56..56 src has 1..56) ==== 5980+0+0 (secure 0 0 0) 0x7f21740bdbf0 con 0x7f218c105f70
2026-03-10T07:25:38.750 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.745+0000 7f218affd640 1 --2- 192.168.123.103:0/4126329354 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2158077610 0x7f2158079ad0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:25:38.750 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.745+0000 7f218affd640 1 --2- 192.168.123.103:0/4126329354 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2158077610 0x7f2158079ad0 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7f2180005fd0 tx=0x7f2180007480 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:25:38.750 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.745+0000 7f219147b640 1 -- 192.168.123.103:0/4126329354 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f2154005180 con 0x7f218c105f70
2026-03-10T07:25:38.750 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.749+0000 7f216bfff640 1 -- 192.168.123.103:0/4126329354 <== mon.1 v2:192.168.123.103:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f2174047050 con 0x7f218c105f70
2026-03-10T07:25:38.845 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.845+0000 7f219147b640 1 -- 192.168.123.103:0/4126329354 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_command({"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"} v 0) -- 0x7f2154005470 con 0x7f218c105f70
2026-03-10T07:25:38.929 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.929+0000 7f216bfff640 1 -- 192.168.123.103:0/4126329354 <== mon.1 v2:192.168.123.103:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]=0 pool 'datapool' created v57) ==== 160+0+0 (secure 0 0 0) 0x7f2174086df0 con 0x7f218c105f70
2026-03-10T07:25:38.929 INFO:teuthology.orchestra.run.vm03.stderr:pool 'datapool' created
2026-03-10T07:25:38.933 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.933+0000 7f219147b640 1 -- 192.168.123.103:0/4126329354 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2158077610 msgr2=0x7f2158079ad0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:25:38.933 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.933+0000 7f219147b640 1 --2- 192.168.123.103:0/4126329354 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2158077610 0x7f2158079ad0 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7f2180005fd0 tx=0x7f2180007480 comp rx=0 tx=0).stop
2026-03-10T07:25:38.933 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.933+0000 7f219147b640 1 -- 192.168.123.103:0/4126329354 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f218c105f70 msgr2=0x7f218c07b3e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:25:38.933 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.933+0000 7f219147b640 1 --2- 192.168.123.103:0/4126329354 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f218c105f70 0x7f218c07b3e0 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f2174009a00 tx=0x7f2174004290 comp rx=0 tx=0).stop
2026-03-10T07:25:38.934 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.933+0000 7f219147b640 1 -- 192.168.123.103:0/4126329354 shutdown_connections
2026-03-10T07:25:38.934 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.933+0000 7f219147b640 1 --2- 192.168.123.103:0/4126329354 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f2158077610 0x7f2158079ad0 unknown :-1 s=CLOSED pgs=102 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:38.934 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.933+0000 7f219147b640 1 --2- 192.168.123.103:0/4126329354 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f218c106930 0x7f218c0779a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:38.934 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.933+0000 7f219147b640 1 --2- 192.168.123.103:0/4126329354 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f218c105f70 0x7f218c07b3e0 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:38.934 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.933+0000 7f219147b640 1 --2- 192.168.123.103:0/4126329354 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f218c104d70 0x7f218c07aea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:38.934 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.933+0000 7f219147b640 1 -- 192.168.123.103:0/4126329354 >> 192.168.123.103:0/4126329354 conn(0x7f218c100520 msgr2=0x7f218c102080 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:25:38.935 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.933+0000 7f219147b640 1 -- 192.168.123.103:0/4126329354 shutdown_connections
2026-03-10T07:25:38.935 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:38.933+0000 7f219147b640 1 -- 192.168.123.103:0/4126329354 wait complete.
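The stderr capture just above records a complete librados client session: the messenger connects to mon.1, subscribes to config/monmap/mgrmap/osdmap, fetches command descriptions, sends mon_command({"prefix": "osd pool create", ...}), and receives mon_command_ack with "pool 'datapool' created" before tearing the connections down. A minimal sketch of the same round trip through the Python rados bindings; this assumes python3-rados is installed and the conf/keyring paths used by the test above are readable, and it is not part of the test itself:

```python
import json
import rados

# Connect the same way the test client does: cluster conf plus admin keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                      conf={'keyring': '/etc/ceph/ceph.client.admin.keyring'})
cluster.connect()
try:
    # The exact JSON payload the log shows inside mon_command(...).
    cmd = json.dumps({
        "prefix": "osd pool create",
        "pool": "datapool",
        "pg_num": 3,
        "pgp_num": 3,
        "pool_type": "replicated",
    })
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    print(ret, outs)  # 0 and "pool 'datapool' created" on success
finally:
    cluster.shutdown()
```

The `rbd pool init datapool` step that follows is the usual companion command: it tags the new pool with the rbd application, which is what clears the POOL_APP_NOT_ENABLED health warning that the monitors raise further down.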
2026-03-10T07:25:39.024 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- rbd pool init datapool
2026-03-10T07:25:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:38 vm03 bash[23382]: cluster 2026-03-10T07:25:37.912138+0000 mgr.y (mgr.14150) 241 : cluster [DBG] pgmap v218: 33 pgs: 11 creating+peering, 11 active+clean, 11 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 383 B/s rd, 383 B/s wr, 1 op/s
2026-03-10T07:25:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:38 vm03 bash[23382]: cluster 2026-03-10T07:25:37.932302+0000 mon.a (mon.0) 649 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in
2026-03-10T07:25:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:38 vm03 bash[23382]: audit 2026-03-10T07:25:37.932871+0000 mon.c (mon.2) 20 : audit [INF] from='client.? 192.168.123.100:0/693477052' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
2026-03-10T07:25:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:38 vm03 bash[23382]: audit 2026-03-10T07:25:37.936428+0000 mon.a (mon.0) 650 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
2026-03-10T07:25:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:38 vm03 bash[23382]: audit 2026-03-10T07:25:38.848009+0000 mon.b (mon.1) 19 : audit [INF] from='client.? 192.168.123.103:0/4126329354' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch
2026-03-10T07:25:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:38 vm03 bash[23382]: audit 2026-03-10T07:25:38.849045+0000 mon.a (mon.0) 651 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch
2026-03-10T07:25:39.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:38 vm00 bash[28005]: cluster 2026-03-10T07:25:37.912138+0000 mgr.y (mgr.14150) 241 : cluster [DBG] pgmap v218: 33 pgs: 11 creating+peering, 11 active+clean, 11 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 383 B/s rd, 383 B/s wr, 1 op/s
2026-03-10T07:25:39.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:38 vm00 bash[28005]: cluster 2026-03-10T07:25:37.932302+0000 mon.a (mon.0) 649 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in
2026-03-10T07:25:39.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:38 vm00 bash[28005]: audit 2026-03-10T07:25:37.932871+0000 mon.c (mon.2) 20 : audit [INF] from='client.? 192.168.123.100:0/693477052' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
2026-03-10T07:25:39.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:38 vm00 bash[28005]: audit 2026-03-10T07:25:37.936428+0000 mon.a (mon.0) 650 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
2026-03-10T07:25:39.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:38 vm00 bash[28005]: audit 2026-03-10T07:25:38.848009+0000 mon.b (mon.1) 19 : audit [INF] from='client.? 192.168.123.103:0/4126329354' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch
2026-03-10T07:25:39.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:38 vm00 bash[28005]: audit 2026-03-10T07:25:38.849045+0000 mon.a (mon.0) 651 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch
2026-03-10T07:25:39.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:38 vm00 bash[20701]: cluster 2026-03-10T07:25:37.912138+0000 mgr.y (mgr.14150) 241 : cluster [DBG] pgmap v218: 33 pgs: 11 creating+peering, 11 active+clean, 11 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 383 B/s rd, 383 B/s wr, 1 op/s
2026-03-10T07:25:39.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:38 vm00 bash[20701]: cluster 2026-03-10T07:25:37.932302+0000 mon.a (mon.0) 649 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in
2026-03-10T07:25:39.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:38 vm00 bash[20701]: audit 2026-03-10T07:25:37.932871+0000 mon.c (mon.2) 20 : audit [INF] from='client.? 192.168.123.100:0/693477052' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
2026-03-10T07:25:39.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:38 vm00 bash[20701]: audit 2026-03-10T07:25:37.936428+0000 mon.a (mon.0) 650 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
2026-03-10T07:25:39.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:38 vm00 bash[20701]: audit 2026-03-10T07:25:38.848009+0000 mon.b (mon.1) 19 : audit [INF] from='client.? 192.168.123.103:0/4126329354' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch
2026-03-10T07:25:39.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:38 vm00 bash[20701]: audit 2026-03-10T07:25:38.849045+0000 mon.a (mon.0) 651 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch
2026-03-10T07:25:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:39 vm03 bash[23382]: audit 2026-03-10T07:25:38.922924+0000 mon.a (mon.0) 652 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
2026-03-10T07:25:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:39 vm03 bash[23382]: audit 2026-03-10T07:25:38.922984+0000 mon.a (mon.0) 653 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-10T07:25:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:39 vm03 bash[23382]: cluster 2026-03-10T07:25:38.936850+0000 mon.a (mon.0) 654 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-10T07:25:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:39 vm03 bash[23382]: cluster 2026-03-10T07:25:38.936850+0000 mon.a (mon.0) 654 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-10T07:25:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:39 vm03 bash[23382]: audit 2026-03-10T07:25:39.850606+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:39 vm03 bash[23382]: audit 2026-03-10T07:25:39.850606+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:39 vm03 bash[23382]: audit 2026-03-10T07:25:39.856812+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:39 vm03 bash[23382]: audit 2026-03-10T07:25:39.856812+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:39 vm03 bash[23382]: cluster 2026-03-10T07:25:39.938574+0000 mon.a (mon.0) 657 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-10T07:25:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:39 vm03 bash[23382]: cluster 2026-03-10T07:25:39.938574+0000 mon.a (mon.0) 657 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:39 vm00 bash[28005]: audit 2026-03-10T07:25:38.922924+0000 mon.a (mon.0) 652 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:39 vm00 bash[28005]: audit 2026-03-10T07:25:38.922924+0000 mon.a (mon.0) 652 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:39 vm00 bash[28005]: audit 2026-03-10T07:25:38.922984+0000 mon.a (mon.0) 653 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:39 vm00 bash[28005]: audit 2026-03-10T07:25:38.922984+0000 mon.a (mon.0) 653 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:39 vm00 bash[28005]: cluster 2026-03-10T07:25:38.936850+0000 mon.a (mon.0) 654 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:39 vm00 bash[28005]: cluster 2026-03-10T07:25:38.936850+0000 mon.a (mon.0) 654 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:39 vm00 bash[28005]: audit 2026-03-10T07:25:39.850606+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:39 vm00 bash[28005]: audit 2026-03-10T07:25:39.850606+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:39 vm00 bash[28005]: audit 2026-03-10T07:25:39.856812+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:39 vm00 bash[28005]: audit 2026-03-10T07:25:39.856812+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:39 vm00 bash[28005]: cluster 2026-03-10T07:25:39.938574+0000 mon.a (mon.0) 657 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:39 vm00 bash[28005]: cluster 2026-03-10T07:25:39.938574+0000 mon.a (mon.0) 657 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:39 vm00 bash[20701]: audit 2026-03-10T07:25:38.922924+0000 mon.a (mon.0) 652 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:39 vm00 bash[20701]: audit 2026-03-10T07:25:38.922924+0000 mon.a (mon.0) 652 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:39 vm00 bash[20701]: audit 2026-03-10T07:25:38.922984+0000 mon.a (mon.0) 653 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:39 vm00 bash[20701]: audit 2026-03-10T07:25:38.922984+0000 mon.a (mon.0) 653 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:39 vm00 bash[20701]: cluster 2026-03-10T07:25:38.936850+0000 mon.a (mon.0) 654 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:39 vm00 bash[20701]: cluster 2026-03-10T07:25:38.936850+0000 mon.a (mon.0) 654 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:39 vm00 bash[20701]: audit 2026-03-10T07:25:39.850606+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:39 vm00 bash[20701]: audit 2026-03-10T07:25:39.850606+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:39 vm00 bash[20701]: audit 2026-03-10T07:25:39.856812+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:40.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:39 vm00 bash[20701]: audit 2026-03-10T07:25:39.856812+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:40.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:39 vm00 bash[20701]: cluster 2026-03-10T07:25:39.938574+0000 mon.a (mon.0) 657 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-10T07:25:40.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:39 vm00 bash[20701]: cluster 2026-03-10T07:25:39.938574+0000 mon.a (mon.0) 657 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-10T07:25:41.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:40 vm03 bash[23382]: cluster 2026-03-10T07:25:39.912429+0000 mgr.y (mgr.14150) 242 : cluster [DBG] pgmap v221: 68 pgs: 14 creating+peering, 25 active+clean, 29 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T07:25:41.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:40 vm03 bash[23382]: cluster 2026-03-10T07:25:39.912429+0000 mgr.y (mgr.14150) 242 : cluster [DBG] pgmap v221: 68 pgs: 14 creating+peering, 25 active+clean, 29 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T07:25:41.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:40 vm03 bash[23382]: audit 2026-03-10T07:25:39.956249+0000 mon.c (mon.2) 21 : audit [INF] from='client.? 192.168.123.100:0/693477052' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T07:25:41.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:40 vm03 bash[23382]: audit 2026-03-10T07:25:39.956249+0000 mon.c (mon.2) 21 : audit [INF] from='client.? 192.168.123.100:0/693477052' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T07:25:41.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:40 vm03 bash[23382]: audit 2026-03-10T07:25:39.961043+0000 mon.a (mon.0) 658 : audit [INF] from='client.? 
2026-03-10T07:25:41.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:40 vm03 bash[23382]: audit 2026-03-10T07:25:40.186234+0000 mon.a (mon.0) 659 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:25:41.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:40 vm03 bash[23382]: audit 2026-03-10T07:25:40.186754+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:25:41.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:40 vm03 bash[23382]: cephadm 2026-03-10T07:25:40.189309+0000 mgr.y (mgr.14150) 243 : cephadm [INF] Checking dashboard <-> RGW credentials
2026-03-10T07:25:41.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:40 vm03 bash[23382]: cluster 2026-03-10T07:25:40.856088+0000 mon.a (mon.0) 661 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:25:41.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:40 vm03 bash[23382]: audit 2026-03-10T07:25:40.929196+0000 mon.a (mon.0) 662 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
2026-03-10T07:25:41.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:40 vm03 bash[23382]: cluster 2026-03-10T07:25:40.935801+0000 mon.a (mon.0) 663 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in
2026-03-10T07:25:41.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:40 vm00 bash[28005]: cluster 2026-03-10T07:25:39.912429+0000 mgr.y (mgr.14150) 242 : cluster [DBG] pgmap v221: 68 pgs: 14 creating+peering, 25 active+clean, 29 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 3 op/s
2026-03-10T07:25:41.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:40 vm00 bash[28005]: audit 2026-03-10T07:25:39.956249+0000 mon.c (mon.2) 21 : audit [INF] from='client.? 192.168.123.100:0/693477052' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
2026-03-10T07:25:41.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:40 vm00 bash[28005]: audit 2026-03-10T07:25:39.961043+0000 mon.a (mon.0) 658 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
2026-03-10T07:25:41.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:40 vm00 bash[28005]: audit 2026-03-10T07:25:40.186234+0000 mon.a (mon.0) 659 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:25:41.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:40 vm00 bash[28005]: audit 2026-03-10T07:25:40.186754+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:25:41.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:40 vm00 bash[28005]: cephadm 2026-03-10T07:25:40.189309+0000 mgr.y (mgr.14150) 243 : cephadm [INF] Checking dashboard <-> RGW credentials
2026-03-10T07:25:41.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:40 vm00 bash[28005]: cluster 2026-03-10T07:25:40.856088+0000 mon.a (mon.0) 661 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:25:41.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:40 vm00 bash[28005]: audit 2026-03-10T07:25:40.929196+0000 mon.a (mon.0) 662 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
2026-03-10T07:25:41.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:40 vm00 bash[28005]: cluster 2026-03-10T07:25:40.935801+0000 mon.a (mon.0) 663 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in
2026-03-10T07:25:41.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:40 vm00 bash[20701]: cluster 2026-03-10T07:25:39.912429+0000 mgr.y (mgr.14150) 242 : cluster [DBG] pgmap v221: 68 pgs: 14 creating+peering, 25 active+clean, 29 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 3 op/s
2026-03-10T07:25:41.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:40 vm00 bash[20701]: audit 2026-03-10T07:25:39.956249+0000 mon.c (mon.2) 21 : audit [INF] from='client.? 192.168.123.100:0/693477052' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
2026-03-10T07:25:41.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:40 vm00 bash[20701]: audit 2026-03-10T07:25:39.961043+0000 mon.a (mon.0) 658 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
2026-03-10T07:25:41.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:40 vm00 bash[20701]: audit 2026-03-10T07:25:40.186234+0000 mon.a (mon.0) 659 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:25:41.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:40 vm00 bash[20701]: audit 2026-03-10T07:25:40.186754+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:25:41.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:40 vm00 bash[20701]: cephadm 2026-03-10T07:25:40.189309+0000 mgr.y (mgr.14150) 243 : cephadm [INF] Checking dashboard <-> RGW credentials
2026-03-10T07:25:41.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:40 vm00 bash[20701]: cluster 2026-03-10T07:25:40.856088+0000 mon.a (mon.0) 661 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:25:41.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:40 vm00 bash[20701]: audit 2026-03-10T07:25:40.929196+0000 mon.a (mon.0) 662 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-10T07:25:41.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:40 vm00 bash[20701]: cluster 2026-03-10T07:25:40.935801+0000 mon.a (mon.0) 663 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-10T07:25:41.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:40 vm00 bash[20701]: cluster 2026-03-10T07:25:40.935801+0000 mon.a (mon.0) 663 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-10T07:25:43.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:42 vm03 bash[23382]: cluster 2026-03-10T07:25:41.912867+0000 mgr.y (mgr.14150) 244 : cluster [DBG] pgmap v224: 100 pgs: 7 creating+peering, 61 active+clean, 32 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 767 B/s wr, 1 op/s 2026-03-10T07:25:43.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:42 vm03 bash[23382]: cluster 2026-03-10T07:25:41.912867+0000 mgr.y (mgr.14150) 244 : cluster [DBG] pgmap v224: 100 pgs: 7 creating+peering, 61 active+clean, 32 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 767 B/s wr, 1 op/s 2026-03-10T07:25:43.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:42 vm03 bash[23382]: cluster 2026-03-10T07:25:41.936429+0000 mon.a (mon.0) 664 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-10T07:25:43.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:42 vm03 bash[23382]: cluster 2026-03-10T07:25:41.936429+0000 mon.a (mon.0) 664 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-10T07:25:43.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:42 vm03 bash[23382]: audit 2026-03-10T07:25:41.958093+0000 mon.c (mon.2) 22 : audit [INF] from='client.? 192.168.123.100:0/693477052' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:42 vm03 bash[23382]: audit 2026-03-10T07:25:41.958093+0000 mon.c (mon.2) 22 : audit [INF] from='client.? 192.168.123.100:0/693477052' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:42 vm03 bash[23382]: audit 2026-03-10T07:25:41.958528+0000 mon.c (mon.2) 23 : audit [INF] from='client.? 192.168.123.100:0/818341086' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:42 vm03 bash[23382]: audit 2026-03-10T07:25:41.958528+0000 mon.c (mon.2) 23 : audit [INF] from='client.? 192.168.123.100:0/818341086' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:42 vm03 bash[23382]: audit 2026-03-10T07:25:41.959434+0000 mon.a (mon.0) 665 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:42 vm03 bash[23382]: audit 2026-03-10T07:25:41.959434+0000 mon.a (mon.0) 665 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:42 vm03 bash[23382]: audit 2026-03-10T07:25:41.959679+0000 mon.a (mon.0) 666 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:42 vm03 bash[23382]: audit 2026-03-10T07:25:41.959679+0000 mon.a (mon.0) 666 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:42 vm00 bash[28005]: cluster 2026-03-10T07:25:41.912867+0000 mgr.y (mgr.14150) 244 : cluster [DBG] pgmap v224: 100 pgs: 7 creating+peering, 61 active+clean, 32 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 767 B/s wr, 1 op/s 2026-03-10T07:25:43.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:42 vm00 bash[28005]: cluster 2026-03-10T07:25:41.912867+0000 mgr.y (mgr.14150) 244 : cluster [DBG] pgmap v224: 100 pgs: 7 creating+peering, 61 active+clean, 32 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 767 B/s wr, 1 op/s 2026-03-10T07:25:43.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:42 vm00 bash[28005]: cluster 2026-03-10T07:25:41.936429+0000 mon.a (mon.0) 664 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-10T07:25:43.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:42 vm00 bash[28005]: cluster 2026-03-10T07:25:41.936429+0000 mon.a (mon.0) 664 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-10T07:25:43.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:42 vm00 bash[28005]: audit 2026-03-10T07:25:41.958093+0000 mon.c (mon.2) 22 : audit [INF] from='client.? 192.168.123.100:0/693477052' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:42 vm00 bash[28005]: audit 2026-03-10T07:25:41.958093+0000 mon.c (mon.2) 22 : audit [INF] from='client.? 192.168.123.100:0/693477052' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:42 vm00 bash[28005]: audit 2026-03-10T07:25:41.958528+0000 mon.c (mon.2) 23 : audit [INF] from='client.? 192.168.123.100:0/818341086' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:42 vm00 bash[28005]: audit 2026-03-10T07:25:41.958528+0000 mon.c (mon.2) 23 : audit [INF] from='client.? 192.168.123.100:0/818341086' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:42 vm00 bash[28005]: audit 2026-03-10T07:25:41.959434+0000 mon.a (mon.0) 665 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:42 vm00 bash[28005]: audit 2026-03-10T07:25:41.959434+0000 mon.a (mon.0) 665 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:42 vm00 bash[28005]: audit 2026-03-10T07:25:41.959679+0000 mon.a (mon.0) 666 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:42 vm00 bash[28005]: audit 2026-03-10T07:25:41.959679+0000 mon.a (mon.0) 666 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:42 vm00 bash[20701]: cluster 2026-03-10T07:25:41.912867+0000 mgr.y (mgr.14150) 244 : cluster [DBG] pgmap v224: 100 pgs: 7 creating+peering, 61 active+clean, 32 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 767 B/s wr, 1 op/s 2026-03-10T07:25:43.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:42 vm00 bash[20701]: cluster 2026-03-10T07:25:41.912867+0000 mgr.y (mgr.14150) 244 : cluster [DBG] pgmap v224: 100 pgs: 7 creating+peering, 61 active+clean, 32 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 767 B/s wr, 1 op/s 2026-03-10T07:25:43.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:42 vm00 bash[20701]: cluster 2026-03-10T07:25:41.936429+0000 mon.a (mon.0) 664 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-10T07:25:43.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:42 vm00 bash[20701]: cluster 2026-03-10T07:25:41.936429+0000 mon.a (mon.0) 664 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-10T07:25:43.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:42 vm00 bash[20701]: audit 2026-03-10T07:25:41.958093+0000 mon.c (mon.2) 22 : audit [INF] from='client.? 192.168.123.100:0/693477052' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:42 vm00 bash[20701]: audit 2026-03-10T07:25:41.958093+0000 mon.c (mon.2) 22 : audit [INF] from='client.? 192.168.123.100:0/693477052' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:42 vm00 bash[20701]: audit 2026-03-10T07:25:41.958528+0000 mon.c (mon.2) 23 : audit [INF] from='client.? 192.168.123.100:0/818341086' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:42 vm00 bash[20701]: audit 2026-03-10T07:25:41.958528+0000 mon.c (mon.2) 23 : audit [INF] from='client.? 
192.168.123.100:0/818341086' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:42 vm00 bash[20701]: audit 2026-03-10T07:25:41.959434+0000 mon.a (mon.0) 665 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:42 vm00 bash[20701]: audit 2026-03-10T07:25:41.959434+0000 mon.a (mon.0) 665 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:42 vm00 bash[20701]: audit 2026-03-10T07:25:41.959679+0000 mon.a (mon.0) 666 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:42 vm00 bash[20701]: audit 2026-03-10T07:25:41.959679+0000 mon.a (mon.0) 666 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T07:25:43.649 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.b/config 2026-03-10T07:25:44.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:43 vm03 bash[23382]: audit 2026-03-10T07:25:42.950533+0000 mon.a (mon.0) 667 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T07:25:44.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:43 vm03 bash[23382]: audit 2026-03-10T07:25:42.950533+0000 mon.a (mon.0) 667 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T07:25:44.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:43 vm03 bash[23382]: audit 2026-03-10T07:25:42.950676+0000 mon.a (mon.0) 668 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T07:25:44.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:43 vm03 bash[23382]: audit 2026-03-10T07:25:42.950676+0000 mon.a (mon.0) 668 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T07:25:44.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:43 vm03 bash[23382]: cluster 2026-03-10T07:25:42.960588+0000 mon.a (mon.0) 669 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-10T07:25:44.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:43 vm03 bash[23382]: cluster 2026-03-10T07:25:42.960588+0000 mon.a (mon.0) 669 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-10T07:25:44.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:43 vm03 bash[23382]: audit 2026-03-10T07:25:42.967373+0000 mon.c (mon.2) 24 : audit [INF] from='client.? 
192.168.123.100:0/693477052' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:43 vm03 bash[23382]: audit 2026-03-10T07:25:42.967373+0000 mon.c (mon.2) 24 : audit [INF] from='client.? 192.168.123.100:0/693477052' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:43 vm03 bash[23382]: audit 2026-03-10T07:25:42.970914+0000 mon.c (mon.2) 25 : audit [INF] from='client.? 192.168.123.100:0/818341086' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:43 vm03 bash[23382]: audit 2026-03-10T07:25:42.970914+0000 mon.c (mon.2) 25 : audit [INF] from='client.? 192.168.123.100:0/818341086' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:43 vm03 bash[23382]: audit 2026-03-10T07:25:42.979313+0000 mon.a (mon.0) 670 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:43 vm03 bash[23382]: audit 2026-03-10T07:25:42.979313+0000 mon.a (mon.0) 670 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:43 vm03 bash[23382]: audit 2026-03-10T07:25:42.979454+0000 mon.a (mon.0) 671 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:43 vm03 bash[23382]: audit 2026-03-10T07:25:42.979454+0000 mon.a (mon.0) 671 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:43 vm03 bash[23382]: audit 2026-03-10T07:25:43.808445+0000 mon.b (mon.1) 20 : audit [INF] from='client.? 192.168.123.103:0/4077277201' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T07:25:44.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:43 vm03 bash[23382]: audit 2026-03-10T07:25:43.808445+0000 mon.b (mon.1) 20 : audit [INF] from='client.? 192.168.123.103:0/4077277201' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T07:25:44.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:43 vm03 bash[23382]: audit 2026-03-10T07:25:43.809611+0000 mon.a (mon.0) 672 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T07:25:44.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:43 vm03 bash[23382]: audit 2026-03-10T07:25:43.809611+0000 mon.a (mon.0) 672 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T07:25:44.384 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 10 07:25:44 vm00 bash[53569]: debug 2026-03-10T07:25:44.053+0000 7fb647397980 -1 LDAP not started since no server URIs were provided in the configuration. 2026-03-10T07:25:44.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:43 vm00 bash[28005]: audit 2026-03-10T07:25:42.950533+0000 mon.a (mon.0) 667 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T07:25:44.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:43 vm00 bash[28005]: audit 2026-03-10T07:25:42.950533+0000 mon.a (mon.0) 667 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T07:25:44.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:43 vm00 bash[28005]: audit 2026-03-10T07:25:42.950676+0000 mon.a (mon.0) 668 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T07:25:44.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:43 vm00 bash[28005]: audit 2026-03-10T07:25:42.950676+0000 mon.a (mon.0) 668 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T07:25:44.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:43 vm00 bash[28005]: cluster 2026-03-10T07:25:42.960588+0000 mon.a (mon.0) 669 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-10T07:25:44.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:43 vm00 bash[28005]: cluster 2026-03-10T07:25:42.960588+0000 mon.a (mon.0) 669 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-10T07:25:44.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:43 vm00 bash[28005]: audit 2026-03-10T07:25:42.967373+0000 mon.c (mon.2) 24 : audit [INF] from='client.? 192.168.123.100:0/693477052' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:43 vm00 bash[28005]: audit 2026-03-10T07:25:42.967373+0000 mon.c (mon.2) 24 : audit [INF] from='client.? 192.168.123.100:0/693477052' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:43 vm00 bash[28005]: audit 2026-03-10T07:25:42.970914+0000 mon.c (mon.2) 25 : audit [INF] from='client.? 192.168.123.100:0/818341086' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:43 vm00 bash[28005]: audit 2026-03-10T07:25:42.970914+0000 mon.c (mon.2) 25 : audit [INF] from='client.? 
192.168.123.100:0/818341086' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:43 vm00 bash[28005]: audit 2026-03-10T07:25:42.979313+0000 mon.a (mon.0) 670 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:43 vm00 bash[28005]: audit 2026-03-10T07:25:42.979313+0000 mon.a (mon.0) 670 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:43 vm00 bash[28005]: audit 2026-03-10T07:25:42.979454+0000 mon.a (mon.0) 671 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:43 vm00 bash[28005]: audit 2026-03-10T07:25:42.979454+0000 mon.a (mon.0) 671 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:43 vm00 bash[28005]: audit 2026-03-10T07:25:43.808445+0000 mon.b (mon.1) 20 : audit [INF] from='client.? 192.168.123.103:0/4077277201' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T07:25:44.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:43 vm00 bash[28005]: audit 2026-03-10T07:25:43.808445+0000 mon.b (mon.1) 20 : audit [INF] from='client.? 192.168.123.103:0/4077277201' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T07:25:44.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:43 vm00 bash[28005]: audit 2026-03-10T07:25:43.809611+0000 mon.a (mon.0) 672 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T07:25:44.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:43 vm00 bash[28005]: audit 2026-03-10T07:25:43.809611+0000 mon.a (mon.0) 672 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T07:25:44.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:43 vm00 bash[20701]: audit 2026-03-10T07:25:42.950533+0000 mon.a (mon.0) 667 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T07:25:44.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:43 vm00 bash[20701]: audit 2026-03-10T07:25:42.950533+0000 mon.a (mon.0) 667 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T07:25:44.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:43 vm00 bash[20701]: audit 2026-03-10T07:25:42.950676+0000 mon.a (mon.0) 668 : audit [INF] from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T07:25:44.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:43 vm00 bash[20701]: audit 2026-03-10T07:25:42.950676+0000 mon.a (mon.0) 668 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T07:25:44.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:43 vm00 bash[20701]: cluster 2026-03-10T07:25:42.960588+0000 mon.a (mon.0) 669 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-10T07:25:44.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:43 vm00 bash[20701]: cluster 2026-03-10T07:25:42.960588+0000 mon.a (mon.0) 669 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-10T07:25:44.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:43 vm00 bash[20701]: audit 2026-03-10T07:25:42.967373+0000 mon.c (mon.2) 24 : audit [INF] from='client.? 192.168.123.100:0/693477052' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:43 vm00 bash[20701]: audit 2026-03-10T07:25:42.967373+0000 mon.c (mon.2) 24 : audit [INF] from='client.? 192.168.123.100:0/693477052' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:43 vm00 bash[20701]: audit 2026-03-10T07:25:42.970914+0000 mon.c (mon.2) 25 : audit [INF] from='client.? 192.168.123.100:0/818341086' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:43 vm00 bash[20701]: audit 2026-03-10T07:25:42.970914+0000 mon.c (mon.2) 25 : audit [INF] from='client.? 192.168.123.100:0/818341086' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:43 vm00 bash[20701]: audit 2026-03-10T07:25:42.979313+0000 mon.a (mon.0) 670 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:43 vm00 bash[20701]: audit 2026-03-10T07:25:42.979313+0000 mon.a (mon.0) 670 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:43 vm00 bash[20701]: audit 2026-03-10T07:25:42.979454+0000 mon.a (mon.0) 671 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:43 vm00 bash[20701]: audit 2026-03-10T07:25:42.979454+0000 mon.a (mon.0) 671 : audit [INF] from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T07:25:44.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:43 vm00 bash[20701]: audit 2026-03-10T07:25:43.808445+0000 mon.b (mon.1) 20 : audit [INF] from='client.? 192.168.123.103:0/4077277201' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T07:25:44.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:43 vm00 bash[20701]: audit 2026-03-10T07:25:43.808445+0000 mon.b (mon.1) 20 : audit [INF] from='client.? 192.168.123.103:0/4077277201' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T07:25:44.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:43 vm00 bash[20701]: audit 2026-03-10T07:25:43.809611+0000 mon.a (mon.0) 672 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T07:25:44.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:43 vm00 bash[20701]: audit 2026-03-10T07:25:43.809611+0000 mon.a (mon.0) 672 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T07:25:45.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: cluster 2026-03-10T07:25:43.913420+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v227: 132 pgs: 15 creating+peering, 96 active+clean, 21 unknown; 450 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 767 B/s wr, 5 op/s 2026-03-10T07:25:45.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: cluster 2026-03-10T07:25:43.913420+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v227: 132 pgs: 15 creating+peering, 96 active+clean, 21 unknown; 450 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 767 B/s wr, 5 op/s 2026-03-10T07:25:45.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: audit 2026-03-10T07:25:43.953396+0000 mon.a (mon.0) 673 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T07:25:45.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: audit 2026-03-10T07:25:43.953396+0000 mon.a (mon.0) 673 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T07:25:45.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: audit 2026-03-10T07:25:43.953494+0000 mon.a (mon.0) 674 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T07:25:45.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: audit 2026-03-10T07:25:43.953494+0000 mon.a (mon.0) 674 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T07:25:45.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: audit 2026-03-10T07:25:43.953521+0000 mon.a (mon.0) 675 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-10T07:25:45.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: audit 2026-03-10T07:25:43.953521+0000 mon.a (mon.0) 675 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-10T07:25:45.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: cluster 2026-03-10T07:25:43.969625+0000 mon.a (mon.0) 676 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T07:25:45.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: cluster 2026-03-10T07:25:43.969625+0000 mon.a (mon.0) 676 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T07:25:45.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: audit 2026-03-10T07:25:44.427576+0000 mon.a (mon.0) 677 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:45.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: audit 2026-03-10T07:25:44.427576+0000 mon.a (mon.0) 677 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:45.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: audit 2026-03-10T07:25:44.443929+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:45.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: audit 2026-03-10T07:25:44.443929+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:45.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: audit 2026-03-10T07:25:44.468647+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:45.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: audit 2026-03-10T07:25:44.468647+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:45.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: audit 2026-03-10T07:25:44.510191+0000 mon.a (mon.0) 680 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:45.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: audit 2026-03-10T07:25:44.510191+0000 mon.a (mon.0) 680 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:45.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: audit 2026-03-10T07:25:44.860368+0000 mon.a (mon.0) 681 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:45.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: audit 2026-03-10T07:25:44.860368+0000 mon.a (mon.0) 681 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:45.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: audit 2026-03-10T07:25:44.860898+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:25:45.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:44 vm03 bash[23382]: audit 2026-03-10T07:25:44.860898+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:25:45.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:44 vm00 bash[28005]: cluster 2026-03-10T07:25:43.913420+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v227: 132 pgs: 15 creating+peering, 96 active+clean, 21 unknown; 450 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 767 B/s wr, 5 op/s 2026-03-10T07:25:45.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:44 vm00 bash[28005]: cluster 2026-03-10T07:25:43.913420+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v227: 132 pgs: 15 creating+peering, 96 active+clean, 21 unknown; 450 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 767 B/s wr, 5 op/s 2026-03-10T07:25:45.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:44 vm00 bash[28005]: audit 2026-03-10T07:25:43.953396+0000 mon.a (mon.0) 673 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T07:25:45.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:44 vm00 bash[28005]: audit 2026-03-10T07:25:43.953396+0000 mon.a (mon.0) 673 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T07:25:45.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:44 vm00 bash[28005]: audit 2026-03-10T07:25:43.953494+0000 mon.a (mon.0) 674 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T07:25:45.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:44 vm00 bash[28005]: audit 2026-03-10T07:25:43.953494+0000 mon.a (mon.0) 674 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T07:25:45.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:44 vm00 bash[28005]: audit 2026-03-10T07:25:43.953521+0000 mon.a (mon.0) 675 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-10T07:25:45.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:44 vm00 bash[28005]: audit 2026-03-10T07:25:43.953521+0000 mon.a (mon.0) 675 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-10T07:25:45.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:44 vm00 bash[28005]: cluster 2026-03-10T07:25:43.969625+0000 mon.a (mon.0) 676 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T07:25:45.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:44 vm00 bash[28005]: cluster 2026-03-10T07:25:43.969625+0000 mon.a (mon.0) 676 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T07:25:45.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:45 vm00 bash[28005]: audit 2026-03-10T07:25:44.427576+0000 mon.a (mon.0) 677 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:45.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:45 vm00 bash[28005]: audit 2026-03-10T07:25:44.427576+0000 mon.a (mon.0) 677 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:45.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:45 vm00 bash[28005]: audit 2026-03-10T07:25:44.443929+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:45.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:45 vm00 bash[28005]: audit 2026-03-10T07:25:44.443929+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:45.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:45 vm00 bash[28005]: audit 2026-03-10T07:25:44.468647+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:45 vm00 bash[28005]: audit 2026-03-10T07:25:44.468647+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:45 vm00 bash[28005]: audit 2026-03-10T07:25:44.510191+0000 mon.a (mon.0) 680 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:45 vm00 bash[28005]: audit 2026-03-10T07:25:44.510191+0000 mon.a (mon.0) 680 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:45 vm00 bash[28005]: audit 2026-03-10T07:25:44.860368+0000 mon.a (mon.0) 681 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:45 vm00 bash[28005]: audit 2026-03-10T07:25:44.860368+0000 mon.a (mon.0) 681 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:45 vm00 bash[28005]: audit 2026-03-10T07:25:44.860898+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:45 vm00 bash[28005]: audit 2026-03-10T07:25:44.860898+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: cluster 2026-03-10T07:25:43.913420+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v227: 132 pgs: 15 creating+peering, 96 active+clean, 21 unknown; 450 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 767 B/s wr, 5 op/s 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: cluster 2026-03-10T07:25:43.913420+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v227: 132 pgs: 15 creating+peering, 96 active+clean, 21 unknown; 450 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 767 B/s wr, 5 op/s 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: audit 2026-03-10T07:25:43.953396+0000 mon.a (mon.0) 673 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: audit 2026-03-10T07:25:43.953396+0000 mon.a (mon.0) 673 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: audit 2026-03-10T07:25:43.953494+0000 mon.a (mon.0) 674 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: audit 2026-03-10T07:25:43.953494+0000 mon.a (mon.0) 674 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: audit 2026-03-10T07:25:43.953521+0000 mon.a (mon.0) 675 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: audit 2026-03-10T07:25:43.953521+0000 mon.a (mon.0) 675 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: cluster 2026-03-10T07:25:43.969625+0000 mon.a (mon.0) 676 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: cluster 2026-03-10T07:25:43.969625+0000 mon.a (mon.0) 676 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: audit 2026-03-10T07:25:44.427576+0000 mon.a (mon.0) 677 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: audit 2026-03-10T07:25:44.427576+0000 mon.a (mon.0) 677 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: audit 2026-03-10T07:25:44.443929+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: audit 2026-03-10T07:25:44.443929+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: audit 2026-03-10T07:25:44.468647+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: audit 2026-03-10T07:25:44.468647+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: audit 2026-03-10T07:25:44.510191+0000 mon.a (mon.0) 680 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: audit 2026-03-10T07:25:44.510191+0000 mon.a (mon.0) 680 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: audit 2026-03-10T07:25:44.860368+0000 mon.a (mon.0) 681 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: audit 2026-03-10T07:25:44.860368+0000 mon.a (mon.0) 681 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: audit 2026-03-10T07:25:44.860898+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:25:45.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: audit 2026-03-10T07:25:44.860898+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:25:46.131 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph orch apply iscsi datapool admin admin --trusted_ip_list 192.168.123.103 --placement '1;vm03=iscsi.a' 2026-03-10T07:25:46.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:45 vm03 bash[23382]: cephadm 2026-03-10T07:25:44.863322+0000 mgr.y (mgr.14150) 246 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T07:25:46.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:45 vm03 bash[23382]: cephadm 2026-03-10T07:25:44.863322+0000 mgr.y (mgr.14150) 246 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T07:25:46.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:45 vm03 bash[23382]: cluster 2026-03-10T07:25:44.990381+0000 mon.a (mon.0) 683 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-10T07:25:46.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:45 vm03 bash[23382]: cluster 2026-03-10T07:25:44.990381+0000 mon.a (mon.0) 683 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-10T07:25:46.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:45 vm03 bash[23382]: audit 2026-03-10T07:25:45.089110+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:46.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:45 vm03 bash[23382]: audit 2026-03-10T07:25:45.089110+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:46.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:45 vm00 bash[28005]: cephadm 2026-03-10T07:25:44.863322+0000 mgr.y (mgr.14150) 246 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T07:25:46.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:45 vm00 bash[28005]: cephadm 2026-03-10T07:25:44.863322+0000 mgr.y (mgr.14150) 246 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T07:25:46.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:45 vm00 bash[28005]: cluster 2026-03-10T07:25:44.990381+0000 mon.a (mon.0) 683 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-10T07:25:46.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:45 vm00 bash[28005]: cluster 2026-03-10T07:25:44.990381+0000 mon.a (mon.0) 683 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-10T07:25:46.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:45 vm00 bash[28005]: audit 2026-03-10T07:25:45.089110+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:46.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:45 vm00 bash[28005]: audit 2026-03-10T07:25:45.089110+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:46.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: cephadm 2026-03-10T07:25:44.863322+0000 mgr.y (mgr.14150) 246 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T07:25:46.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: cephadm 2026-03-10T07:25:44.863322+0000 mgr.y (mgr.14150) 246 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T07:25:46.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: cluster 
2026-03-10T07:25:44.990381+0000 mon.a (mon.0) 683 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in
2026-03-10T07:25:46.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:45 vm00 bash[20701]: audit 2026-03-10T07:25:45.089110+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:47.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:47 vm03 bash[23382]: cluster 2026-03-10T07:25:45.913882+0000 mgr.y (mgr.14150) 247 : cluster [DBG] pgmap v230: 132 pgs: 11 creating+peering, 115 active+clean, 6 unknown; 451 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 31 KiB/s rd, 2.7 KiB/s wr, 72 op/s
2026-03-10T07:25:47.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:47 vm03 bash[23382]: cluster 2026-03-10T07:25:46.008755+0000 mon.a (mon.0) 685 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-10T07:25:47.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:47 vm03 bash[23382]: cluster 2026-03-10T07:25:46.087361+0000 mon.a (mon.0) 686 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-10T07:25:47.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:47 vm03 bash[23382]: cluster 2026-03-10T07:25:46.087386+0000 mon.a (mon.0) 687 : cluster [INF] Cluster is now healthy
2026-03-10T07:25:47.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:47 vm00 bash[28005]: cluster 2026-03-10T07:25:45.913882+0000 mgr.y (mgr.14150) 247 : cluster [DBG] pgmap v230: 132 pgs: 11 creating+peering, 115 active+clean, 6 unknown; 451 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 31 KiB/s rd, 2.7 KiB/s wr, 72 op/s
2026-03-10T07:25:47.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:47 vm00 bash[28005]: cluster 2026-03-10T07:25:46.008755+0000 mon.a (mon.0) 685 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-10T07:25:47.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:47 vm00 bash[28005]: cluster 2026-03-10T07:25:46.087361+0000 mon.a (mon.0) 686 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-10T07:25:47.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:47 vm00 bash[28005]: cluster 2026-03-10T07:25:46.087386+0000 mon.a (mon.0) 687 : cluster [INF] Cluster is now healthy
2026-03-10T07:25:47.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:47 vm00 bash[20701]: cluster 2026-03-10T07:25:45.913882+0000 mgr.y (mgr.14150) 247 : cluster [DBG] pgmap v230: 132 pgs: 11 creating+peering, 115 active+clean, 6 unknown; 451 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 31 KiB/s rd, 2.7 KiB/s wr, 72 op/s
2026-03-10T07:25:47.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:47 vm00 bash[20701]: cluster 2026-03-10T07:25:46.008755+0000 mon.a (mon.0) 685 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-10T07:25:47.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:47 vm00 bash[20701]: cluster 2026-03-10T07:25:46.087361+0000 mon.a (mon.0) 686 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-10T07:25:47.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:47 vm00 bash[20701]: cluster 2026-03-10T07:25:46.087386+0000 mon.a (mon.0) 687 : cluster [INF] Cluster is now healthy
2026-03-10T07:25:49.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:49 vm03 bash[23382]: cluster 2026-03-10T07:25:47.914421+0000 mgr.y (mgr.14150) 248 : cluster [DBG] pgmap v232: 132 pgs: 132 active+clean; 453 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 75 KiB/s rd, 5.8 KiB/s wr, 178 op/s
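The mon journal echoes above are the same digests that `ceph status` summarizes. As a quick spot-check at this point in the run (standard CLI; the fsid is the one this run uses throughout), one could run:

  sudo cephadm shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph -s             # pgmap/osdmap summary
  sudo cephadm shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph health detail  # HEALTH_OK once the POOL_APP_NOT_ENABLED check clears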
2026-03-10T07:25:49.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:49 vm00 bash[20701]: cluster 2026-03-10T07:25:47.914421+0000 mgr.y (mgr.14150) 248 : cluster [DBG] pgmap v232: 132 pgs: 132 active+clean; 453 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 75 KiB/s rd, 5.8 KiB/s wr, 178 op/s
2026-03-10T07:25:49.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:49 vm00 bash[28005]: cluster 2026-03-10T07:25:47.914421+0000 mgr.y (mgr.14150) 248 : cluster [DBG] pgmap v232: 132 pgs: 132 active+clean; 453 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 75 KiB/s rd, 5.8 KiB/s wr, 178 op/s
2026-03-10T07:25:50.769 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.b/config
2026-03-10T07:25:50.908 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.906+0000 7fd8a1ea9640 1 -- 192.168.123.103:0/2362782501 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fd89c100fe0 msgr2=0x7fd89c108350 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:25:50.908 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.906+0000 7fd8a1ea9640 1 --2- 192.168.123.103:0/2362782501 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fd89c100fe0 0x7fd89c108350 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7fd88c009a80 tx=0x7fd88c02f290 comp rx=0 tx=0).stop
2026-03-10T07:25:50.908 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.906+0000 7fd8a1ea9640 1 -- 192.168.123.103:0/2362782501 shutdown_connections
2026-03-10T07:25:50.908 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.906+0000 7fd8a1ea9640 1 --2- 192.168.123.103:0/2362782501 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd89c108890 0x7fd89c10ac80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:50.908 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.906+0000 7fd8a1ea9640 1 --2- 192.168.123.103:0/2362782501 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fd89c100fe0 0x7fd89c108350 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:50.908 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.906+0000 7fd8a1ea9640 1 --2- 192.168.123.103:0/2362782501 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd89c1006a0 0x7fd89c100aa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:50.908 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.906+0000 7fd8a1ea9640 1 -- 192.168.123.103:0/2362782501 >> 192.168.123.103:0/2362782501 conn(0x7fd89c0fc410 msgr2=0x7fd89c0fe850 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:25:50.908 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd8a1ea9640 1 -- 192.168.123.103:0/2362782501 shutdown_connections
2026-03-10T07:25:50.908 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd8a1ea9640 1 -- 192.168.123.103:0/2362782501 wait complete.
2026-03-10T07:25:50.908 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd8a1ea9640 1 Processor -- start
2026-03-10T07:25:50.908 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd8a1ea9640 1 -- start start
2026-03-10T07:25:50.909 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd8a1ea9640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fd89c1006a0 0x7fd89c19c450 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:25:50.909 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd8a1ea9640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd89c100fe0 0x7fd89c19c990 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:25:50.909 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd8a1ea9640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd89c108890 0x7fd89c1a3a10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:25:50.909 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd89affd640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd89c100fe0 0x7fd89c19c990 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:25:50.909 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd89affd640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd89c100fe0 0x7fd89c19c990 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.103:60254/0 (socket says 192.168.123.103:60254)
2026-03-10T07:25:50.909 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd89b7fe640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fd89c1006a0 0x7fd89c19c450 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:25:50.909 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd89b7fe640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fd89c1006a0 0x7fd89c19c450 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.103:3300/0 says I am v2:192.168.123.103:45000/0 (socket says 192.168.123.103:45000)
2026-03-10T07:25:50.909 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd8a1ea9640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fd89c10d9c0 con 0x7fd89c108890
2026-03-10T07:25:50.909 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd8a1ea9640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7fd89c10d840 con 0x7fd89c1006a0
2026-03-10T07:25:50.909 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd8a1ea9640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fd89c10db40 con 0x7fd89c100fe0
2026-03-10T07:25:50.909 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd89affd640 1 -- 192.168.123.103:0/2934357967 learned_addr learned my addr 192.168.123.103:0/2934357967 (peer_addr_for_me v2:192.168.123.103:0/0)
2026-03-10T07:25:50.910 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd89bfff640 1 --2- 192.168.123.103:0/2934357967 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd89c108890 0x7fd89c1a3a10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:25:50.910 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd89affd640 1 -- 192.168.123.103:0/2934357967 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fd89c1006a0 msgr2=0x7fd89c19c450 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:25:50.910 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd89affd640 1 --2- 192.168.123.103:0/2934357967 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fd89c1006a0 0x7fd89c19c450 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:50.910 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd89affd640 1 -- 192.168.123.103:0/2934357967 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd89c108890 msgr2=0x7fd89c1a3a10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:25:50.910 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd89affd640 1 --2- 192.168.123.103:0/2934357967 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd89c108890 0x7fd89c1a3a10 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:50.910 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd89affd640 1 -- 192.168.123.103:0/2934357967 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd89c1a4110 con 0x7fd89c100fe0
2026-03-10T07:25:50.910 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd89b7fe640 1 --2- 192.168.123.103:0/2934357967 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fd89c1006a0 0x7fd89c19c450 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-10T07:25:50.910 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd89bfff640 1 --2- 192.168.123.103:0/2934357967 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd89c108890 0x7fd89c1a3a10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed!
2026-03-10T07:25:50.910 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd89affd640 1 --2- 192.168.123.103:0/2934357967 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd89c100fe0 0x7fd89c19c990 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7fd88c009a50 tx=0x7fd88c02fd90 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:25:50.910 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd898ff9640 1 -- 192.168.123.103:0/2934357967 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fd88c004440 con 0x7fd89c100fe0
2026-03-10T07:25:50.911 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd8a1ea9640 1 -- 192.168.123.103:0/2934357967 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fd89c1a43a0 con 0x7fd89c100fe0
2026-03-10T07:25:50.911 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd8a1ea9640 1 -- 192.168.123.103:0/2934357967 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7fd89c1a47d0 con 0x7fd89c100fe0
2026-03-10T07:25:50.911 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd898ff9640 1 -- 192.168.123.103:0/2934357967 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fd88c03e070 con 0x7fd89c100fe0
2026-03-10T07:25:50.911 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd898ff9640 1 -- 192.168.123.103:0/2934357967 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fd88c042630 con 0x7fd89c100fe0
2026-03-10T07:25:50.911 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.910+0000 7fd8a1ea9640 1 -- 192.168.123.103:0/2934357967 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd89c100aa0 con 0x7fd89c100fe0
2026-03-10T07:25:50.913 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.914+0000 7fd898ff9640 1 -- 192.168.123.103:0/2934357967 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 14) ==== 100035+0+0 (secure 0 0 0) 0x7fd88c038590 con 0x7fd89c100fe0
2026-03-10T07:25:50.913 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.914+0000 7fd898ff9640 1 --2- 192.168.123.103:0/2934357967 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fd874077640 0x7fd874079b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:25:50.913 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.914+0000 7fd898ff9640 1 -- 192.168.123.103:0/2934357967 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(64..64 src has 1..64) ==== 7156+0+0 (secure 0 0 0) 0x7fd88c0beef0 con 0x7fd89c100fe0
2026-03-10T07:25:50.915 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.914+0000 7fd89b7fe640 1 --2- 192.168.123.103:0/2934357967 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fd874077640 0x7fd874079b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:25:50.915 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.914+0000 7fd89b7fe640 1 --2- 192.168.123.103:0/2934357967 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fd874077640 0x7fd874079b00 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7fd888010420 tx=0x7fd8880073d0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
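The stderr trace above is the standard msgr2 client bootstrap that any `ceph` CLI invocation performs: probe the mons, authenticate, subscribe to monmap/config/mgrmap/osdmap, then dial the active mgr. A hedged way to inspect the maps this client just received (both standard commands):

  ceph mon dump   # the mon_map message delivered above
  ceph mgr dump   # the mgrmap (e 14) that directs clients to mgr.y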
2026-03-10T07:25:50.916 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:50.914+0000 7fd898ff9640 1 -- 192.168.123.103:0/2934357967 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fd88c087cd0 con 0x7fd89c100fe0
2026-03-10T07:25:51.025 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:51.026+0000 7fd8a1ea9640 1 -- 192.168.123.103:0/2934357967 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- mgr_command(tid 0: {"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.103", "placement": "1;vm03=iscsi.a", "target": ["mon-mgr", ""]}) -- 0x7fd89c103600 con 0x7fd874077640
2026-03-10T07:25:51.034 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:51.034+0000 7fd898ff9640 1 -- 192.168.123.103:0/2934357967 <== mgr.14150 v2:192.168.123.100:6800/2669938860 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+35 (secure 0 0 0) 0x7fd89c103600 con 0x7fd874077640
2026-03-10T07:25:51.034 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled iscsi.datapool update...
2026-03-10T07:25:51.037 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:51.038+0000 7fd8a1ea9640 1 -- 192.168.123.103:0/2934357967 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fd874077640 msgr2=0x7fd874079b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:25:51.037 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:51.038+0000 7fd8a1ea9640 1 --2- 192.168.123.103:0/2934357967 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fd874077640 0x7fd874079b00 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7fd888010420 tx=0x7fd8880073d0 comp rx=0 tx=0).stop
2026-03-10T07:25:51.038 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:51.038+0000 7fd8a1ea9640 1 -- 192.168.123.103:0/2934357967 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd89c100fe0 msgr2=0x7fd89c19c990 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:25:51.038 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:51.038+0000 7fd8a1ea9640 1 --2- 192.168.123.103:0/2934357967 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd89c100fe0 0x7fd89c19c990 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7fd88c009a50 tx=0x7fd88c02fd90 comp rx=0 tx=0).stop
2026-03-10T07:25:51.038 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:51.038+0000 7fd8a1ea9640 1 -- 192.168.123.103:0/2934357967 shutdown_connections
2026-03-10T07:25:51.038 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:51.038+0000 7fd8a1ea9640 1 --2- 192.168.123.103:0/2934357967 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7fd874077640 0x7fd874079b00 unknown :-1 s=CLOSED pgs=119 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:51.038 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:51.038+0000 7fd8a1ea9640 1 --2- 192.168.123.103:0/2934357967 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd89c108890 0x7fd89c1a3a10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:51.038 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:51.038+0000 7fd8a1ea9640 1 --2- 192.168.123.103:0/2934357967 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd89c100fe0 0x7fd89c19c990 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:51.038 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:51.038+0000 7fd8a1ea9640 1 --2- 192.168.123.103:0/2934357967 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fd89c1006a0 0x7fd89c19c450 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:51.038 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:51.038+0000 7fd8a1ea9640 1 -- 192.168.123.103:0/2934357967 >> 192.168.123.103:0/2934357967 conn(0x7fd89c0fc410 msgr2=0x7fd89c0fe800 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:25:51.038 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:51.038+0000 7fd8a1ea9640 1 -- 192.168.123.103:0/2934357967 shutdown_connections
2026-03-10T07:25:51.038 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:51.038+0000 7fd8a1ea9640 1 -- 192.168.123.103:0/2934357967 wait complete.
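The mgr_command above is the orchestrator call behind this step; as a sketch, its CLI equivalent (arguments taken verbatim from the logged JSON payload) would be roughly:

  ceph orch apply iscsi datapool admin admin 192.168.123.103 --placement '1;vm03=iscsi.a'

and `ceph orch ls iscsi --export` should print back the saved iscsi.datapool spec.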
2026-03-10T07:25:51.050 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:51 vm03 bash[23382]: cluster 2026-03-10T07:25:49.914932+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v233: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 79 KiB/s rd, 6.0 KiB/s wr, 188 op/s
2026-03-10T07:25:51.103 INFO:tasks.cephadm:Distributing iscsi-gateway.cfg...
2026-03-10T07:25:51.103 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T07:25:51.103 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/iscsi-gateway.cfg
2026-03-10T07:25:51.119 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T07:25:51.119 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/iscsi-gateway.cfg
2026-03-10T07:25:51.129 DEBUG:teuthology.orchestra.run.vm03:iscsi.iscsi.a> sudo journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@iscsi.iscsi.a.service
2026-03-10T07:25:51.173 INFO:tasks.cephadm:Adding prometheus.a on vm03
2026-03-10T07:25:51.173 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph orch apply prometheus '1;vm03=a'
2026-03-10T07:25:51.312 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:25:51 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
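`ceph orch apply prometheus '1;vm03=a'` schedules a single prometheus daemon pinned to vm03. Once the orchestrator converges, the placement can be checked with standard orch commands, e.g.:

  ceph orch ls prometheus         # service spec and running/expected counts
  ceph orch ps | grep prometheus  # the daemon and the host it landed on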
2026-03-10T07:25:51.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:51 vm00 bash[28005]: cluster 2026-03-10T07:25:49.914932+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v233: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 79 KiB/s rd, 6.0 KiB/s wr, 188 op/s
2026-03-10T07:25:51.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:51 vm00 bash[20701]: cluster 2026-03-10T07:25:49.914932+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v233: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 79 KiB/s rd, 6.0 KiB/s wr, 188 op/s
2026-03-10T07:25:51.851 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:51 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:25:51.851 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:25:51 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:25:51.851 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 07:25:51 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:25:51.851 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 07:25:51 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:25:51.851 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 07:25:51 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:25:51.851 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 07:25:51 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:25:51.851 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:25:51 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
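The repeated KillMode=none warning comes from the cephadm-generated unit template (ceph-<fsid>@.service), which is instantiated once per daemon on the host; cephadm uses KillMode=none so that systemd leaves the daemon's container processes to the container runtime. Purely as an illustration of what systemd is asking for (not something this run does, and it would work against cephadm's intent), the warning could be silenced with a drop-in:

  # hypothetical override, NOT applied by this run:
  sudo mkdir -p /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service.d
  printf '[Service]\nKillMode=mixed\n' | sudo tee /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service.d/override.conf
  sudo systemctl daemon-reload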
2026-03-10T07:25:52.155 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:52 vm03 bash[23382]: audit 2026-03-10T07:25:51.028294+0000 mgr.y (mgr.14150) 250 : audit [DBG] from='client.24368 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.103", "placement": "1;vm03=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:25:52.155 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:52 vm03 bash[23382]: cephadm 2026-03-10T07:25:51.029759+0000 mgr.y (mgr.14150) 251 : cephadm [INF] Saving service iscsi.datapool spec with placement vm03=iscsi.a;count:1
2026-03-10T07:25:52.155 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:52 vm03 bash[23382]: audit 2026-03-10T07:25:51.035371+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:52.155 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:52 vm03 bash[23382]: audit 2026-03-10T07:25:51.036662+0000 mon.a (mon.0) 689 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:25:52.155 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:52 vm03 bash[23382]: audit 2026-03-10T07:25:51.037879+0000 mon.a (mon.0) 690 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:25:52.155 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:52 vm03 bash[23382]: audit 2026-03-10T07:25:51.038329+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:25:52.156 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:52 vm03 bash[23382]: audit 2026-03-10T07:25:51.044013+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:52.156 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:52 vm03 bash[23382]: audit 2026-03-10T07:25:51.045931+0000 mon.a (mon.0) 693 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T07:25:52.156 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:52 vm03 bash[23382]: audit 2026-03-10T07:25:51.048587+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-10T07:25:52.156 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:52 vm03 bash[23382]: audit 2026-03-10T07:25:51.055619+0000 mon.a (mon.0) 695 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
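The caps in the auth get-or-create call above are what cephadm grants an iscsi gateway daemon; run by hand, the logged command corresponds to:

  ceph auth get-or-create client.iscsi.iscsi.a \
      mon 'profile rbd, allow command "osd blocklist", allow command "config-key get" with "key" prefix "iscsi/"' \
      mgr 'allow command "service status"' \
      osd 'allow rwx'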
2026-03-10T07:25:52.156 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:52 vm03 bash[23382]: cephadm 2026-03-10T07:25:51.056599+0000 mgr.y (mgr.14150) 252 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm03
2026-03-10T07:25:52.156 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:52 vm03 bash[23382]: audit 2026-03-10T07:25:51.880994+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]: dispatch
2026-03-10T07:25:52.156 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:52 vm03 bash[23382]: audit 2026-03-10T07:25:51.964873+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:52.156 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:52 vm03 bash[23382]: audit 2026-03-10T07:25:51.972679+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:52.156 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:52 vm03 bash[23382]: audit 2026-03-10T07:25:51.987768+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:52.156 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:52 vm03 bash[23382]: audit 2026-03-10T07:25:51.998774+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:52.156 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:52 vm03 bash[23382]: audit 2026-03-10T07:25:52.010419+0000 mon.a (mon.0) 701 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
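The osd pg-upmap-items call is the balancer module remapping pg 2.8 off osd.7 onto osd.2; the hand-run equivalent of the logged JSON takes <from> <to> OSD pairs:

  ceph osd pg-upmap-items 2.8 7 2   # move pg 2.8's mapping from osd.7 to osd.2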
"json"}]: dispatch 2026-03-10T07:25:52.156 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:25:51 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:25:52.156 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 07:25:51 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:25:52.156 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 07:25:51 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:25:52.156 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 07:25:51 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:25:52.156 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 07:25:51 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:25:52.156 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:25:51 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:25:52.156 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:25:51 vm03 systemd[1]: Started Ceph iscsi.iscsi.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953. 
2026-03-10T07:25:52.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.028294+0000 mgr.y (mgr.14150) 250 : audit [DBG] from='client.24368 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.103", "placement": "1;vm03=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:25:52.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.028294+0000 mgr.y (mgr.14150) 250 : audit [DBG] from='client.24368 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.103", "placement": "1;vm03=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:25:52.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: cephadm 2026-03-10T07:25:51.029759+0000 mgr.y (mgr.14150) 251 : cephadm [INF] Saving service iscsi.datapool spec with placement vm03=iscsi.a;count:1 2026-03-10T07:25:52.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: cephadm 2026-03-10T07:25:51.029759+0000 mgr.y (mgr.14150) 251 : cephadm [INF] Saving service iscsi.datapool spec with placement vm03=iscsi.a;count:1 2026-03-10T07:25:52.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.035371+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.035371+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.036662+0000 mon.a (mon.0) 689 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:52.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.036662+0000 mon.a (mon.0) 689 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:52.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.037879+0000 mon.a (mon.0) 690 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:52.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.037879+0000 mon.a (mon.0) 690 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:52.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.038329+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:25:52.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.038329+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:25:52.384 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.044013+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.044013+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.045931+0000 mon.a (mon.0) 693 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.045931+0000 mon.a (mon.0) 693 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.048587+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.048587+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.055619+0000 mon.a (mon.0) 695 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.055619+0000 mon.a (mon.0) 695 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: cephadm 2026-03-10T07:25:51.056599+0000 mgr.y (mgr.14150) 252 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm03 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: cephadm 2026-03-10T07:25:51.056599+0000 mgr.y (mgr.14150) 252 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm03 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 
2026-03-10T07:25:51.880994+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.880994+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.964873+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.964873+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.972679+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.972679+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.987768+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.987768+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.998774+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:51.998774+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:52.010419+0000 mon.a (mon.0) 701 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:52 vm00 bash[28005]: audit 2026-03-10T07:25:52.010419+0000 mon.a (mon.0) 701 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.028294+0000 mgr.y (mgr.14150) 250 : audit [DBG] from='client.24368 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.103", "placement": "1;vm03=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.028294+0000 mgr.y (mgr.14150) 250 : audit [DBG] from='client.24368 -' 
entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.103", "placement": "1;vm03=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: cephadm 2026-03-10T07:25:51.029759+0000 mgr.y (mgr.14150) 251 : cephadm [INF] Saving service iscsi.datapool spec with placement vm03=iscsi.a;count:1 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: cephadm 2026-03-10T07:25:51.029759+0000 mgr.y (mgr.14150) 251 : cephadm [INF] Saving service iscsi.datapool spec with placement vm03=iscsi.a;count:1 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.035371+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.035371+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.036662+0000 mon.a (mon.0) 689 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.036662+0000 mon.a (mon.0) 689 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.037879+0000 mon.a (mon.0) 690 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.037879+0000 mon.a (mon.0) 690 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.038329+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.038329+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.044013+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.044013+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.045931+0000 mon.a (mon.0) 693 : audit [INF] from='mgr.14150 
192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.045931+0000 mon.a (mon.0) 693 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.048587+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.048587+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.055619+0000 mon.a (mon.0) 695 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.055619+0000 mon.a (mon.0) 695 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: cephadm 2026-03-10T07:25:51.056599+0000 mgr.y (mgr.14150) 252 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm03 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: cephadm 2026-03-10T07:25:51.056599+0000 mgr.y (mgr.14150) 252 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm03 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.880994+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.880994+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]: dispatch 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.964873+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.964873+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.972679+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.972679+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.987768+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.987768+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.998774+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:51.998774+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:25:52.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:52.010419+0000 mon.a (mon.0) 701 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:52.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:52 vm00 bash[20701]: audit 2026-03-10T07:25:52.010419+0000 mon.a (mon.0) 701 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:25:53.078 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:25:52 vm03 bash[49156]: debug Started the configuration object watcher 2026-03-10T07:25:53.078 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:25:52 vm03 bash[49156]: debug Checking for config object changes every 1s 2026-03-10T07:25:53.078 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:25:52 vm03 bash[49156]: debug Processing osd blocklist entries for this node 2026-03-10T07:25:53.078 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:25:53 vm03 bash[49156]: debug Reading the configuration object to update local LIO configuration 2026-03-10T07:25:53.078 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:25:53 vm03 bash[49156]: debug Configuration does not have an entry for this host(vm03.local) - nothing to define to LIO 2026-03-10T07:25:53.079 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:25:53 vm03 bash[49156]: * Serving Flask app 'rbd-target-api' (lazy loading) 2026-03-10T07:25:53.079 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:25:53 vm03 bash[49156]: * Environment: production 2026-03-10T07:25:53.079 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:25:53 vm03 
bash[49156]: WARNING: This is a development server. Do not use it in a production deployment. 2026-03-10T07:25:53.079 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:25:53 vm03 bash[49156]: Use a production WSGI server instead. 2026-03-10T07:25:53.079 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:25:53 vm03 bash[49156]: * Debug mode: off 2026-03-10T07:25:53.079 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:25:53 vm03 bash[49156]: debug * Running on all addresses. 2026-03-10T07:25:53.079 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:25:53 vm03 bash[49156]: WARNING: This is a development server. Do not use it in a production deployment. 2026-03-10T07:25:53.079 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:25:53 vm03 bash[49156]: * Running on all addresses. 2026-03-10T07:25:53.079 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:25:53 vm03 bash[49156]: WARNING: This is a development server. Do not use it in a production deployment. 2026-03-10T07:25:53.079 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:25:53 vm03 bash[49156]: debug * Running on http://[::1]:5000/ (Press CTRL+C to quit) 2026-03-10T07:25:53.079 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:25:53 vm03 bash[49156]: * Running on http://[::1]:5000/ (Press CTRL+C to quit) 2026-03-10T07:25:53.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:53 vm00 bash[28005]: cluster 2026-03-10T07:25:51.915339+0000 mgr.y (mgr.14150) 253 : cluster [DBG] pgmap v234: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 60 KiB/s rd, 4.5 KiB/s wr, 142 op/s 2026-03-10T07:25:53.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:53 vm00 bash[28005]: cluster 2026-03-10T07:25:51.915339+0000 mgr.y (mgr.14150) 253 : cluster [DBG] pgmap v234: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 60 KiB/s rd, 4.5 KiB/s wr, 142 op/s 2026-03-10T07:25:53.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:53 vm00 bash[28005]: cephadm 2026-03-10T07:25:51.988568+0000 mgr.y (mgr.14150) 254 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-10T07:25:53.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:53 vm00 bash[28005]: cephadm 2026-03-10T07:25:51.988568+0000 mgr.y (mgr.14150) 254 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-10T07:25:53.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:53 vm00 bash[28005]: audit 2026-03-10T07:25:52.055196+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]': finished 2026-03-10T07:25:53.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:53 vm00 bash[28005]: audit 2026-03-10T07:25:52.055196+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]': finished 2026-03-10T07:25:53.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:53 vm00 bash[28005]: cluster 2026-03-10T07:25:52.062031+0000 mon.a (mon.0) 703 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-10T07:25:53.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:53 vm00 bash[28005]: cluster 2026-03-10T07:25:52.062031+0000 mon.a (mon.0) 703 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-10T07:25:53.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:53 vm00 
bash[28005]: audit 2026-03-10T07:25:53.033131+0000 mon.b (mon.1) 21 : audit [DBG] from='client.? 192.168.123.103:0/2184987629' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T07:25:53.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:53 vm00 bash[28005]: audit 2026-03-10T07:25:53.033131+0000 mon.b (mon.1) 21 : audit [DBG] from='client.? 192.168.123.103:0/2184987629' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T07:25:53.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:53 vm00 bash[20701]: cluster 2026-03-10T07:25:51.915339+0000 mgr.y (mgr.14150) 253 : cluster [DBG] pgmap v234: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 60 KiB/s rd, 4.5 KiB/s wr, 142 op/s 2026-03-10T07:25:53.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:53 vm00 bash[20701]: cluster 2026-03-10T07:25:51.915339+0000 mgr.y (mgr.14150) 253 : cluster [DBG] pgmap v234: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 60 KiB/s rd, 4.5 KiB/s wr, 142 op/s 2026-03-10T07:25:53.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:53 vm00 bash[20701]: cephadm 2026-03-10T07:25:51.988568+0000 mgr.y (mgr.14150) 254 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-10T07:25:53.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:53 vm00 bash[20701]: cephadm 2026-03-10T07:25:51.988568+0000 mgr.y (mgr.14150) 254 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-10T07:25:53.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:53 vm00 bash[20701]: audit 2026-03-10T07:25:52.055196+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]': finished 2026-03-10T07:25:53.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:53 vm00 bash[20701]: audit 2026-03-10T07:25:52.055196+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]': finished 2026-03-10T07:25:53.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:53 vm00 bash[20701]: cluster 2026-03-10T07:25:52.062031+0000 mon.a (mon.0) 703 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-10T07:25:53.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:53 vm00 bash[20701]: cluster 2026-03-10T07:25:52.062031+0000 mon.a (mon.0) 703 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-10T07:25:53.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:53 vm00 bash[20701]: audit 2026-03-10T07:25:53.033131+0000 mon.b (mon.1) 21 : audit [DBG] from='client.? 192.168.123.103:0/2184987629' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T07:25:53.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:53 vm00 bash[20701]: audit 2026-03-10T07:25:53.033131+0000 mon.b (mon.1) 21 : audit [DBG] from='client.? 
192.168.123.103:0/2184987629' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T07:25:53.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:53 vm03 bash[23382]: cluster 2026-03-10T07:25:51.915339+0000 mgr.y (mgr.14150) 253 : cluster [DBG] pgmap v234: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 60 KiB/s rd, 4.5 KiB/s wr, 142 op/s 2026-03-10T07:25:53.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:53 vm03 bash[23382]: cluster 2026-03-10T07:25:51.915339+0000 mgr.y (mgr.14150) 253 : cluster [DBG] pgmap v234: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 60 KiB/s rd, 4.5 KiB/s wr, 142 op/s 2026-03-10T07:25:53.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:53 vm03 bash[23382]: cephadm 2026-03-10T07:25:51.988568+0000 mgr.y (mgr.14150) 254 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-10T07:25:53.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:53 vm03 bash[23382]: cephadm 2026-03-10T07:25:51.988568+0000 mgr.y (mgr.14150) 254 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-10T07:25:53.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:53 vm03 bash[23382]: audit 2026-03-10T07:25:52.055196+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]': finished 2026-03-10T07:25:53.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:53 vm03 bash[23382]: audit 2026-03-10T07:25:52.055196+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]': finished 2026-03-10T07:25:53.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:53 vm03 bash[23382]: cluster 2026-03-10T07:25:52.062031+0000 mon.a (mon.0) 703 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-10T07:25:53.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:53 vm03 bash[23382]: cluster 2026-03-10T07:25:52.062031+0000 mon.a (mon.0) 703 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-10T07:25:53.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:53 vm03 bash[23382]: audit 2026-03-10T07:25:53.033131+0000 mon.b (mon.1) 21 : audit [DBG] from='client.? 192.168.123.103:0/2184987629' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T07:25:53.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:53 vm03 bash[23382]: audit 2026-03-10T07:25:53.033131+0000 mon.b (mon.1) 21 : audit [DBG] from='client.? 
192.168.123.103:0/2184987629' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T07:25:54.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:54 vm00 bash[28005]: cluster 2026-03-10T07:25:53.076098+0000 mon.a (mon.0) 704 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-10T07:25:54.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:54 vm00 bash[28005]: cluster 2026-03-10T07:25:53.076098+0000 mon.a (mon.0) 704 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-10T07:25:54.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:54 vm00 bash[20701]: cluster 2026-03-10T07:25:53.076098+0000 mon.a (mon.0) 704 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-10T07:25:54.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:54 vm00 bash[20701]: cluster 2026-03-10T07:25:53.076098+0000 mon.a (mon.0) 704 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-10T07:25:54.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:54 vm03 bash[23382]: cluster 2026-03-10T07:25:53.076098+0000 mon.a (mon.0) 704 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-10T07:25:54.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:54 vm03 bash[23382]: cluster 2026-03-10T07:25:53.076098+0000 mon.a (mon.0) 704 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-10T07:25:55.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:55 vm03 bash[23382]: cluster 2026-03-10T07:25:53.915732+0000 mgr.y (mgr.14150) 255 : cluster [DBG] pgmap v237: 132 pgs: 1 peering, 131 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 46 KiB/s rd, 3.4 KiB/s wr, 109 op/s 2026-03-10T07:25:55.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:55 vm03 bash[23382]: cluster 2026-03-10T07:25:53.915732+0000 mgr.y (mgr.14150) 255 : cluster [DBG] pgmap v237: 132 pgs: 1 peering, 131 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 46 KiB/s rd, 3.4 KiB/s wr, 109 op/s 2026-03-10T07:25:55.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:55 vm03 bash[23382]: cluster 2026-03-10T07:25:54.104457+0000 mon.a (mon.0) 705 : cluster [DBG] mgrmap e15: y(active, since 6m), standbys: x 2026-03-10T07:25:55.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:55 vm03 bash[23382]: cluster 2026-03-10T07:25:54.104457+0000 mon.a (mon.0) 705 : cluster [DBG] mgrmap e15: y(active, since 6m), standbys: x 2026-03-10T07:25:55.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:55 vm00 bash[28005]: cluster 2026-03-10T07:25:53.915732+0000 mgr.y (mgr.14150) 255 : cluster [DBG] pgmap v237: 132 pgs: 1 peering, 131 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 46 KiB/s rd, 3.4 KiB/s wr, 109 op/s 2026-03-10T07:25:55.644 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:55 vm00 bash[28005]: cluster 2026-03-10T07:25:53.915732+0000 mgr.y (mgr.14150) 255 : cluster [DBG] pgmap v237: 132 pgs: 1 peering, 131 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 46 KiB/s rd, 3.4 KiB/s wr, 109 op/s 2026-03-10T07:25:55.644 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:55 vm00 bash[28005]: cluster 2026-03-10T07:25:54.104457+0000 mon.a (mon.0) 705 : cluster [DBG] mgrmap e15: y(active, since 6m), standbys: x 2026-03-10T07:25:55.644 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:55 vm00 bash[28005]: cluster 2026-03-10T07:25:54.104457+0000 mon.a (mon.0) 705 : cluster [DBG] mgrmap e15: y(active, since 6m), standbys: x 2026-03-10T07:25:55.644 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:55 vm00 bash[20701]: cluster 
2026-03-10T07:25:53.915732+0000 mgr.y (mgr.14150) 255 : cluster [DBG] pgmap v237: 132 pgs: 1 peering, 131 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 46 KiB/s rd, 3.4 KiB/s wr, 109 op/s 2026-03-10T07:25:55.644 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:55 vm00 bash[20701]: cluster 2026-03-10T07:25:53.915732+0000 mgr.y (mgr.14150) 255 : cluster [DBG] pgmap v237: 132 pgs: 1 peering, 131 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 46 KiB/s rd, 3.4 KiB/s wr, 109 op/s 2026-03-10T07:25:55.644 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:55 vm00 bash[20701]: cluster 2026-03-10T07:25:54.104457+0000 mon.a (mon.0) 705 : cluster [DBG] mgrmap e15: y(active, since 6m), standbys: x 2026-03-10T07:25:55.644 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:55 vm00 bash[20701]: cluster 2026-03-10T07:25:54.104457+0000 mon.a (mon.0) 705 : cluster [DBG] mgrmap e15: y(active, since 6m), standbys: x 2026-03-10T07:25:55.860 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.b/config 2026-03-10T07:25:56.101 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50ad27d640 1 -- 192.168.123.103:0/3838701899 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f50a8108070 msgr2=0x7f50a810a460 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:25:56.101 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50ad27d640 1 --2- 192.168.123.103:0/3838701899 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f50a8108070 0x7f50a810a460 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f509c009a80 tx=0x7f509c02f290 comp rx=0 tx=0).stop 2026-03-10T07:25:56.101 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50ad27d640 1 -- 192.168.123.103:0/3838701899 shutdown_connections 2026-03-10T07:25:56.101 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50ad27d640 1 --2- 192.168.123.103:0/3838701899 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f50a810a9f0 0x7f50a810ce90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:56.101 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50ad27d640 1 --2- 192.168.123.103:0/3838701899 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f50a8108070 0x7f50a810a460 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:56.101 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50ad27d640 1 --2- 192.168.123.103:0/3838701899 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f50a806bc50 0x7f50a8107b30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:25:56.101 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50ad27d640 1 -- 192.168.123.103:0/3838701899 >> 192.168.123.103:0/3838701899 conn(0x7f50a80fd120 msgr2=0x7f50a80ff560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:25:56.101 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50ad27d640 1 -- 192.168.123.103:0/3838701899 shutdown_connections 2026-03-10T07:25:56.102 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50ad27d640 1 -- 192.168.123.103:0/3838701899 wait complete. 
2026-03-10T07:25:56.102 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50ad27d640 1 Processor -- start
2026-03-10T07:25:56.102 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50ad27d640 1 -- start start
2026-03-10T07:25:56.102 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50ad27d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f50a806bc50 0x7f50a819c3e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:25:56.103 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50ad27d640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f50a8108070 0x7f50a819c920 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:25:56.103 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50ad27d640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f50a810a9f0 0x7f50a81a39a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:25:56.103 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50ad27d640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f50a810fc10 con 0x7f50a806bc50
2026-03-10T07:25:56.103 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50ad27d640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f50a810fa90 con 0x7f50a8108070
2026-03-10T07:25:56.103 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50ad27d640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f50a810fd90 con 0x7f50a810a9f0
2026-03-10T07:25:56.103 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50a6575640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f50a8108070 0x7f50a819c920 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:25:56.103 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50a7577640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f50a810a9f0 0x7f50a81a39a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:25:56.103 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50a7577640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f50a810a9f0 0x7f50a81a39a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.103:60856/0 (socket says 192.168.123.103:60856)
2026-03-10T07:25:56.103 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50a7577640 1 -- 192.168.123.103:0/2683725720 learned_addr learned my addr 192.168.123.103:0/2683725720 (peer_addr_for_me v2:192.168.123.103:0/0)
2026-03-10T07:25:56.103 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50a6d76640 1 --2- 192.168.123.103:0/2683725720 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f50a806bc50 0x7f50a819c3e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:25:56.104 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50a6575640 1 -- 192.168.123.103:0/2683725720 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f50a810a9f0 msgr2=0x7f50a81a39a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:25:56.104 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50a6575640 1 --2- 192.168.123.103:0/2683725720 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f50a810a9f0 0x7f50a81a39a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:56.104 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50a6575640 1 -- 192.168.123.103:0/2683725720 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f50a806bc50 msgr2=0x7f50a819c3e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:25:56.104 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50a6575640 1 --2- 192.168.123.103:0/2683725720 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f50a806bc50 0x7f50a819c3e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:56.104 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50a6575640 1 -- 192.168.123.103:0/2683725720 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f50a81a40a0 con 0x7f50a8108070
2026-03-10T07:25:56.104 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.102+0000 7f50a6d76640 1 --2- 192.168.123.103:0/2683725720 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f50a806bc50 0x7f50a819c3e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed!
2026-03-10T07:25:56.104 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.106+0000 7f50a6575640 1 --2- 192.168.123.103:0/2683725720 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f50a8108070 0x7f50a819c920 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f509c009a50 tx=0x7f509c0043c0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:25:56.104 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.106+0000 7f507ffff640 1 -- 192.168.123.103:0/2683725720 <== mon.1 v2:192.168.123.103:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f509c002e40 con 0x7f50a8108070
2026-03-10T07:25:56.104 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.106+0000 7f50ad27d640 1 -- 192.168.123.103:0/2683725720 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f50a81a4330 con 0x7f50a8108070
2026-03-10T07:25:56.106 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.106+0000 7f50ad27d640 1 -- 192.168.123.103:0/2683725720 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f50a81a4760 con 0x7f50a8108070
2026-03-10T07:25:56.106 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.106+0000 7f507ffff640 1 -- 192.168.123.103:0/2683725720 <== mon.1 v2:192.168.123.103:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f509c03e070 con 0x7f50a8108070
2026-03-10T07:25:56.107 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.106+0000 7f507ffff640 1 -- 192.168.123.103:0/2683725720 <== mon.1 v2:192.168.123.103:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f509c0055f0 con 0x7f50a8108070
2026-03-10T07:25:56.107 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.106+0000 7f50ad27d640 1 -- 192.168.123.103:0/2683725720 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5068005180 con 0x7f50a8108070
2026-03-10T07:25:56.107 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.106+0000 7f507ffff640 1 -- 192.168.123.103:0/2683725720 <== mon.1 v2:192.168.123.103:3300/0 4 ==== mgrmap(e 15) ==== 100086+0+0 (secure 0 0 0) 0x7f509c0049d0 con 0x7f50a8108070
2026-03-10T07:25:56.111 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.106+0000 7f507ffff640 1 --2- 192.168.123.103:0/2683725720 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f5074077720 0x7f5074079be0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:25:56.111 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.110+0000 7f507ffff640 1 -- 192.168.123.103:0/2683725720 <== mon.1 v2:192.168.123.103:3300/0 5 ==== osd_map(67..67 src has 1..67) ==== 7157+0+0 (secure 0 0 0) 0x7f509c0be610 con 0x7f50a8108070
2026-03-10T07:25:56.111 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.110+0000 7f50a6d76640 1 --2- 192.168.123.103:0/2683725720 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f5074077720 0x7f5074079be0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:25:56.111 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.110+0000 7f50a6d76640 1 --2- 192.168.123.103:0/2683725720 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f5074077720 0x7f5074079be0 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7f5090002730 tx=0x7f5090009290 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:25:56.111 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.110+0000 7f507ffff640 1 -- 192.168.123.103:0/2683725720 <== mon.1 v2:192.168.123.103:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f509c035320 con 0x7f50a8108070
2026-03-10T07:25:56.215 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.214+0000 7f50ad27d640 1 -- 192.168.123.103:0/2683725720 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}) -- 0x7f5068002bf0 con 0x7f5074077720
2026-03-10T07:25:56.245 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.246+0000 7f507ffff640 1 -- 192.168.123.103:0/2683725720 <== mgr.14150 v2:192.168.123.100:6800/2669938860 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+31 (secure 0 0 0) 0x7f5068002bf0 con 0x7f5074077720
2026-03-10T07:25:56.245 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled prometheus update...
2026-03-10T07:25:56.249 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.250+0000 7f50ad27d640 1 -- 192.168.123.103:0/2683725720 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f5074077720 msgr2=0x7f5074079be0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:25:56.250 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.250+0000 7f50ad27d640 1 --2- 192.168.123.103:0/2683725720 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f5074077720 0x7f5074079be0 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7f5090002730 tx=0x7f5090009290 comp rx=0 tx=0).stop
2026-03-10T07:25:56.250 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.250+0000 7f50ad27d640 1 -- 192.168.123.103:0/2683725720 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f50a8108070 msgr2=0x7f50a819c920 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:25:56.250 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.250+0000 7f50ad27d640 1 --2- 192.168.123.103:0/2683725720 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f50a8108070 0x7f50a819c920 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f509c009a50 tx=0x7f509c0043c0 comp rx=0 tx=0).stop
2026-03-10T07:25:56.252 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.254+0000 7f50ad27d640 1 -- 192.168.123.103:0/2683725720 shutdown_connections
2026-03-10T07:25:56.253 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.254+0000 7f50ad27d640 1 --2- 192.168.123.103:0/2683725720 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f5074077720 0x7f5074079be0 unknown :-1 s=CLOSED pgs=125 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:56.253 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.254+0000 7f50ad27d640 1 --2- 192.168.123.103:0/2683725720 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f50a810a9f0 0x7f50a81a39a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:56.253 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.254+0000 7f50ad27d640 1 --2- 192.168.123.103:0/2683725720 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f50a8108070 0x7f50a819c920 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:56.253 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.254+0000 7f50ad27d640 1 --2- 192.168.123.103:0/2683725720 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f50a806bc50 0x7f50a819c3e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:25:56.253 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.254+0000 7f50ad27d640 1 -- 192.168.123.103:0/2683725720 >> 192.168.123.103:0/2683725720 conn(0x7f50a80fd120 msgr2=0x7f50a8108c70 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:25:56.253 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.254+0000 7f50ad27d640 1 -- 192.168.123.103:0/2683725720 shutdown_connections
2026-03-10T07:25:56.253 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:25:56.254+0000 7f50ad27d640 1 -- 192.168.123.103:0/2683725720 wait complete.
2026-03-10T07:25:56.310 DEBUG:teuthology.orchestra.run.vm03:prometheus.a> sudo journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@prometheus.a.service
2026-03-10T07:25:56.311 INFO:tasks.cephadm:Adding node-exporter.a on vm00
2026-03-10T07:25:56.311 INFO:tasks.cephadm:Adding node-exporter.b on vm03
2026-03-10T07:25:56.311 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph orch apply node-exporter '2;vm00=a;vm03=b'
2026-03-10T07:25:56.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:56 vm03 bash[23382]: audit 2026-03-10T07:25:55.224785+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:56.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:56 vm03 bash[23382]: cluster 2026-03-10T07:25:55.946746+0000 mon.a (mon.0) 707 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in
2026-03-10T07:25:56.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:56 vm00 bash[28005]: audit 2026-03-10T07:25:55.224785+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:56.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:56 vm00 bash[28005]: cluster 2026-03-10T07:25:55.946746+0000 mon.a (mon.0) 707 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in
2026-03-10T07:25:56.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:56 vm00 bash[20701]: audit 2026-03-10T07:25:55.224785+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:56.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:56 vm00 bash[20701]: cluster 2026-03-10T07:25:55.946746+0000 mon.a (mon.0) 707 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in
2026-03-10T07:25:57.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:57 vm03 bash[23382]: cluster 2026-03-10T07:25:55.916236+0000 mgr.y (mgr.14150) 256 : cluster [DBG] pgmap v238: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 14 KiB/s rd, 1.1 KiB/s wr, 33 op/s
2026-03-10T07:25:57.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:57 vm03 bash[23382]: audit 2026-03-10T07:25:56.218214+0000 mgr.y (mgr.14150) 257 : audit [DBG] from='client.24385 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:25:57.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:57 vm03 bash[23382]: cephadm 2026-03-10T07:25:56.219204+0000 mgr.y (mgr.14150) 258 : cephadm [INF] Saving service prometheus spec with placement vm03=a;count:1
2026-03-10T07:25:57.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:57 vm03 bash[23382]: audit 2026-03-10T07:25:56.246615+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:57.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:57 vm03 bash[23382]: audit 2026-03-10T07:25:57.183124+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:57.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:57 vm03 bash[23382]: audit 2026-03-10T07:25:57.187531+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:57.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:57 vm03 bash[23382]: audit 2026-03-10T07:25:57.188759+0000 mon.a (mon.0) 711 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:25:57.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:57 vm03 bash[23382]: audit 2026-03-10T07:25:57.189225+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:25:57.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:57 vm03 bash[23382]: audit 2026-03-10T07:25:57.192627+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:57.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:57 vm00 bash[28005]: cluster 2026-03-10T07:25:55.916236+0000 mgr.y (mgr.14150) 256 : cluster [DBG] pgmap v238: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 14 KiB/s rd, 1.1 KiB/s wr, 33 op/s
2026-03-10T07:25:57.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:57 vm00 bash[28005]: audit 2026-03-10T07:25:56.218214+0000 mgr.y (mgr.14150) 257 : audit [DBG] from='client.24385 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:25:57.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:57 vm00 bash[28005]: cephadm 2026-03-10T07:25:56.219204+0000 mgr.y (mgr.14150) 258 : cephadm [INF] Saving service prometheus spec with placement vm03=a;count:1
2026-03-10T07:25:57.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:57 vm00 bash[28005]: audit 2026-03-10T07:25:56.246615+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:57.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:57 vm00 bash[28005]: audit 2026-03-10T07:25:57.183124+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:57.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:57 vm00 bash[28005]: audit 2026-03-10T07:25:57.187531+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:57.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:57 vm00 bash[28005]: audit 2026-03-10T07:25:57.188759+0000 mon.a (mon.0) 711 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:25:57.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:57 vm00 bash[28005]: audit 2026-03-10T07:25:57.189225+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:25:57.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:57 vm00 bash[28005]: audit 2026-03-10T07:25:57.192627+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:57.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:57 vm00 bash[20701]: cluster 2026-03-10T07:25:55.916236+0000 mgr.y (mgr.14150) 256 : cluster [DBG] pgmap v238: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 14 KiB/s rd, 1.1 KiB/s wr, 33 op/s
2026-03-10T07:25:57.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:57 vm00 bash[20701]: audit 2026-03-10T07:25:56.218214+0000 mgr.y (mgr.14150) 257 : audit [DBG] from='client.24385 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:25:57.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:57 vm00 bash[20701]: cephadm 2026-03-10T07:25:56.219204+0000 mgr.y (mgr.14150) 258 : cephadm [INF] Saving service prometheus spec with placement vm03=a;count:1
2026-03-10T07:25:57.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:57 vm00 bash[20701]: audit 2026-03-10T07:25:56.246615+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:57.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:57 vm00 bash[20701]: audit 2026-03-10T07:25:57.183124+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:57.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:57 vm00 bash[20701]: audit 2026-03-10T07:25:57.187531+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:57.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:57 vm00 bash[20701]: audit 2026-03-10T07:25:57.188759+0000 mon.a (mon.0) 711 : audit [DBG] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:25:57.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:57 vm00 bash[20701]: audit 2026-03-10T07:25:57.189225+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:25:57.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:57 vm00 bash[20701]: audit 2026-03-10T07:25:57.192627+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y'
2026-03-10T07:25:58.017 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:25:57 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:25:58.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:58 vm03 bash[23382]: cephadm 2026-03-10T07:25:57.359106+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Deploying daemon prometheus.a on vm03
2026-03-10T07:25:58.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:58 vm00 bash[28005]: cephadm 2026-03-10T07:25:57.359106+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Deploying daemon prometheus.a on vm03
2026-03-10T07:25:58.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:58 vm00 bash[20701]: cephadm 2026-03-10T07:25:57.359106+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Deploying daemon prometheus.a on vm03
2026-03-10T07:25:59.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:25:59 vm03 bash[23382]: cluster 2026-03-10T07:25:57.916705+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v240: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 170 B/s wr, 1 op/s
2026-03-10T07:25:59.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:25:59 vm00 bash[28005]: cluster 2026-03-10T07:25:57.916705+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v240: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 170 B/s wr, 1 op/s
2026-03-10T07:25:59.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:25:59 vm00 bash[20701]: cluster 2026-03-10T07:25:57.916705+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v240: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 170 B/s wr, 1 op/s
2026-03-10T07:26:00.888 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.b/config
2026-03-10T07:26:01.131 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.130+0000 7f0f2ef82640 1 -- 192.168.123.103:0/1687018764 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f0f28078cf0 msgr2=0x7f0f28079150 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:01.131 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.130+0000 7f0f2ef82640 1 --2- 192.168.123.103:0/1687018764 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f0f28078cf0 0x7f0f28079150 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f0f18009a30 tx=0x7f0f1802f220 comp rx=0 tx=0).stop
2026-03-10T07:26:01.131 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.130+0000 7f0f2ef82640 1 -- 192.168.123.103:0/1687018764 shutdown_connections
2026-03-10T07:26:01.131 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.130+0000 7f0f2ef82640 1 --2- 192.168.123.103:0/1687018764 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0f28079690 0x7f0f28079f30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:01.131 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.130+0000 7f0f2ef82640 1 --2- 192.168.123.103:0/1687018764 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f0f28078cf0 0x7f0f28079150 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:01.131 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.130+0000 7f0f2ef82640 1 --2- 192.168.123.103:0/1687018764 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0f28077aa0 0x7f0f28077ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:01.131 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.130+0000 7f0f2ef82640 1 -- 192.168.123.103:0/1687018764 >> 192.168.123.103:0/1687018764 conn(0x7f0f281003b0 msgr2=0x7f0f281027f0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:01.131 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.130+0000 7f0f2ef82640 1 -- 192.168.123.103:0/1687018764 shutdown_connections
2026-03-10T07:26:01.131 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.130+0000 7f0f2ef82640 1 -- 192.168.123.103:0/1687018764 wait complete.
2026-03-10T07:26:01.131 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.130+0000 7f0f2ef82640 1 Processor -- start 2026-03-10T07:26:01.131 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.130+0000 7f0f2ef82640 1 -- start start 2026-03-10T07:26:01.132 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.130+0000 7f0f2ef82640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0f28077aa0 0x7f0f281a0740 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:01.132 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.130+0000 7f0f2ef82640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0f28078cf0 0x7f0f281a0c80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:01.132 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.130+0000 7f0f2ef82640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f0f28079690 0x7f0f281a7d00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:01.132 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.130+0000 7f0f2ef82640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f0f28114030 con 0x7f0f28077aa0 2026-03-10T07:26:01.132 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.130+0000 7f0f2ef82640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f0f28113eb0 con 0x7f0f28079690 2026-03-10T07:26:01.132 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.130+0000 7f0f2ef82640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f0f281141b0 con 0x7f0f28078cf0 2026-03-10T07:26:01.132 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.134+0000 7f0f2d77f640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0f28078cf0 0x7f0f281a0c80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:01.133 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.134+0000 7f0f2d77f640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0f28078cf0 0x7f0f281a0c80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.103:60872/0 (socket says 192.168.123.103:60872) 2026-03-10T07:26:01.133 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.134+0000 7f0f2d77f640 1 -- 192.168.123.103:0/927819178 learned_addr learned my addr 192.168.123.103:0/927819178 (peer_addr_for_me v2:192.168.123.103:0/0) 2026-03-10T07:26:01.133 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.134+0000 7f0f2df80640 1 --2- 192.168.123.103:0/927819178 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0f28077aa0 0x7f0f281a0740 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:01.133 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.134+0000 7f0f2e781640 1 --2- 192.168.123.103:0/927819178 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f0f28079690 0x7f0f281a7d00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-10T07:26:01.133 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.134+0000 7f0f2df80640 1 -- 192.168.123.103:0/927819178 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0f28078cf0 msgr2=0x7f0f281a0c80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:01.133 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.134+0000 7f0f2df80640 1 --2- 192.168.123.103:0/927819178 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0f28078cf0 0x7f0f281a0c80 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:01.133 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.134+0000 7f0f2df80640 1 -- 192.168.123.103:0/927819178 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f0f28079690 msgr2=0x7f0f281a7d00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:01.133 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.134+0000 7f0f2df80640 1 --2- 192.168.123.103:0/927819178 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f0f28079690 0x7f0f281a7d00 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:01.133 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.134+0000 7f0f2df80640 1 -- 192.168.123.103:0/927819178 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0f281a8400 con 0x7f0f28077aa0 2026-03-10T07:26:01.133 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.134+0000 7f0f2e781640 1 --2- 192.168.123.103:0/927819178 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f0f28079690 0x7f0f281a7d00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-10T07:26:01.134 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.134+0000 7f0f2df80640 1 --2- 192.168.123.103:0/927819178 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0f28077aa0 0x7f0f281a0740 secure :-1 s=READY pgs=151 cs=0 l=1 rev1=1 crypto rx=0x7f0f1000c970 tx=0x7f0f1000ce40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:01.134 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.134+0000 7f0f1effd640 1 -- 192.168.123.103:0/927819178 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0f10007bf0 con 0x7f0f28077aa0 2026-03-10T07:26:01.134 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.134+0000 7f0f2ef82640 1 -- 192.168.123.103:0/927819178 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f0f281a86f0 con 0x7f0f28077aa0 2026-03-10T07:26:01.134 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.134+0000 7f0f2ef82640 1 -- 192.168.123.103:0/927819178 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f0f281a8c30 con 0x7f0f28077aa0 2026-03-10T07:26:01.135 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.134+0000 7f0f1effd640 1 -- 192.168.123.103:0/927819178 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f0f10007d90 con 0x7f0f28077aa0 2026-03-10T07:26:01.135 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.134+0000 7f0f1effd640 1 -- 192.168.123.103:0/927819178 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0f100056b0 con 0x7f0f28077aa0 2026-03-10T07:26:01.136 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.138+0000 7f0f1effd640 1 -- 192.168.123.103:0/927819178 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 15) ==== 100086+0+0 (secure 0 0 0) 0x7f0f10020020 con 0x7f0f28077aa0 2026-03-10T07:26:01.137 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.138+0000 7f0f1effd640 1 --2- 192.168.123.103:0/927819178 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f0efc0777a0 0x7f0efc079c60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:01.138 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.138+0000 7f0f2d77f640 1 --2- 192.168.123.103:0/927819178 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f0efc0777a0 0x7f0efc079c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:01.138 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.138+0000 7f0f1effd640 1 -- 192.168.123.103:0/927819178 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(67..67 src has 1..67) ==== 7157+0+0 (secure 0 0 0) 0x7f0f1009a700 con 0x7f0f28077aa0 2026-03-10T07:26:01.139 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.138+0000 7f0f2ef82640 1 -- 192.168.123.103:0/927819178 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0f28079150 con 0x7f0f28077aa0 2026-03-10T07:26:01.142 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.142+0000 7f0f2d77f640 1 --2- 192.168.123.103:0/927819178 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f0efc0777a0 0x7f0efc079c60 secure :-1 
s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7f0f180097c0 tx=0x7f0f18005cb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:01.142 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.142+0000 7f0f1effd640 1 -- 192.168.123.103:0/927819178 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f0f10014030 con 0x7f0f28077aa0 2026-03-10T07:26:01.277 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.278+0000 7f0f2ef82640 1 -- 192.168.123.103:0/927819178 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm03=b", "target": ["mon-mgr", ""]}) -- 0x7f0f280630c0 con 0x7f0efc0777a0 2026-03-10T07:26:01.294 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.294+0000 7f0f1effd640 1 -- 192.168.123.103:0/927819178 <== mgr.14150 v2:192.168.123.100:6800/2669938860 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+34 (secure 0 0 0) 0x7f0f280630c0 con 0x7f0efc0777a0 2026-03-10T07:26:01.294 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled node-exporter update... 2026-03-10T07:26:01.302 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.302+0000 7f0f2ef82640 1 -- 192.168.123.103:0/927819178 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f0efc0777a0 msgr2=0x7f0efc079c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:01.302 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.302+0000 7f0f2ef82640 1 --2- 192.168.123.103:0/927819178 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f0efc0777a0 0x7f0efc079c60 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7f0f180097c0 tx=0x7f0f18005cb0 comp rx=0 tx=0).stop 2026-03-10T07:26:01.302 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.302+0000 7f0f2ef82640 1 -- 192.168.123.103:0/927819178 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0f28077aa0 msgr2=0x7f0f281a0740 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:01.302 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.302+0000 7f0f2ef82640 1 --2- 192.168.123.103:0/927819178 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0f28077aa0 0x7f0f281a0740 secure :-1 s=READY pgs=151 cs=0 l=1 rev1=1 crypto rx=0x7f0f1000c970 tx=0x7f0f1000ce40 comp rx=0 tx=0).stop 2026-03-10T07:26:01.302 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.302+0000 7f0f2ef82640 1 -- 192.168.123.103:0/927819178 shutdown_connections 2026-03-10T07:26:01.302 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.302+0000 7f0f2ef82640 1 --2- 192.168.123.103:0/927819178 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f0efc0777a0 0x7f0efc079c60 unknown :-1 s=CLOSED pgs=126 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:01.302 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.302+0000 7f0f2ef82640 1 --2- 192.168.123.103:0/927819178 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f0f28079690 0x7f0f281a7d00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:01.303 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.302+0000 7f0f2ef82640 1 --2- 192.168.123.103:0/927819178 >> 
[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0f28078cf0 0x7f0f281a0c80 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:01.303 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.302+0000 7f0f2ef82640 1 --2- 192.168.123.103:0/927819178 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0f28077aa0 0x7f0f281a0740 unknown :-1 s=CLOSED pgs=151 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:01.303 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.302+0000 7f0f2ef82640 1 -- 192.168.123.103:0/927819178 >> 192.168.123.103:0/927819178 conn(0x7f0f281003b0 msgr2=0x7f0f28102700 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:01.303 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.302+0000 7f0f2ef82640 1 -- 192.168.123.103:0/927819178 shutdown_connections 2026-03-10T07:26:01.303 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:01.302+0000 7f0f2ef82640 1 -- 192.168.123.103:0/927819178 wait complete. 2026-03-10T07:26:01.415 DEBUG:teuthology.orchestra.run.vm00:node-exporter.a> sudo journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@node-exporter.a.service 2026-03-10T07:26:01.417 DEBUG:teuthology.orchestra.run.vm03:node-exporter.b> sudo journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@node-exporter.b.service 2026-03-10T07:26:01.417 INFO:tasks.cephadm:Adding alertmanager.a on vm00 2026-03-10T07:26:01.418 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph orch apply alertmanager '1;vm00=a' 2026-03-10T07:26:01.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:01 vm03 bash[23382]: cluster 2026-03-10T07:25:59.917080+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 390 B/s rd, 130 B/s wr, 1 op/s 2026-03-10T07:26:01.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:01 vm03 bash[23382]: cluster 2026-03-10T07:25:59.917080+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 390 B/s rd, 130 B/s wr, 1 op/s 2026-03-10T07:26:01.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:01 vm03 bash[23382]: audit 2026-03-10T07:26:01.290396+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:01.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:01 vm03 bash[23382]: audit 2026-03-10T07:26:01.290396+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:01.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:01 vm00 bash[28005]: cluster 2026-03-10T07:25:59.917080+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 390 B/s rd, 130 B/s wr, 1 op/s 2026-03-10T07:26:01.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:01 vm00 bash[28005]: cluster 2026-03-10T07:25:59.917080+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 390 B/s rd, 130 B/s wr, 1 op/s 2026-03-10T07:26:01.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:01 vm00 bash[28005]: audit 
2026-03-10T07:26:01.290396+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:01.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:01 vm00 bash[28005]: audit 2026-03-10T07:26:01.290396+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:01 vm00 bash[20701]: cluster 2026-03-10T07:25:59.917080+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 390 B/s rd, 130 B/s wr, 1 op/s 2026-03-10T07:26:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:01 vm00 bash[20701]: cluster 2026-03-10T07:25:59.917080+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 390 B/s rd, 130 B/s wr, 1 op/s 2026-03-10T07:26:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:01 vm00 bash[20701]: audit 2026-03-10T07:26:01.290396+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:01 vm00 bash[20701]: audit 2026-03-10T07:26:01.290396+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:02.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:02 vm03 bash[23382]: audit 2026-03-10T07:26:01.280351+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.14493 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm03=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:26:02.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:02 vm03 bash[23382]: audit 2026-03-10T07:26:01.280351+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.14493 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm03=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:26:02.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:02 vm03 bash[23382]: cephadm 2026-03-10T07:26:01.281642+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service node-exporter spec with placement vm00=a;vm03=b;count:2 2026-03-10T07:26:02.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:02 vm03 bash[23382]: cephadm 2026-03-10T07:26:01.281642+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service node-exporter spec with placement vm00=a;vm03=b;count:2 2026-03-10T07:26:02.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:02 vm00 bash[28005]: audit 2026-03-10T07:26:01.280351+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.14493 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm03=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:26:02.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:02 vm00 bash[28005]: audit 2026-03-10T07:26:01.280351+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.14493 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm03=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:26:02.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:02 vm00 bash[28005]: cephadm 2026-03-10T07:26:01.281642+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service node-exporter spec with placement vm00=a;vm03=b;count:2 2026-03-10T07:26:02.884 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:02 vm00 bash[28005]: cephadm 2026-03-10T07:26:01.281642+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service node-exporter spec with placement vm00=a;vm03=b;count:2 2026-03-10T07:26:02.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:02 vm00 bash[20701]: audit 2026-03-10T07:26:01.280351+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.14493 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm03=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:26:02.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:02 vm00 bash[20701]: audit 2026-03-10T07:26:01.280351+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.14493 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm03=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:26:02.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:02 vm00 bash[20701]: cephadm 2026-03-10T07:26:01.281642+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service node-exporter spec with placement vm00=a;vm03=b;count:2 2026-03-10T07:26:02.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:02 vm00 bash[20701]: cephadm 2026-03-10T07:26:01.281642+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service node-exporter spec with placement vm00=a;vm03=b;count:2 2026-03-10T07:26:03.267 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:26:02 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:26:03.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:03 vm03 bash[23382]: cluster 2026-03-10T07:26:01.917535+0000 mgr.y (mgr.14150) 264 : cluster [DBG] pgmap v242: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 926 B/s rd, 115 B/s wr, 1 op/s 2026-03-10T07:26:03.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:03 vm03 bash[23382]: cluster 2026-03-10T07:26:01.917535+0000 mgr.y (mgr.14150) 264 : cluster [DBG] pgmap v242: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 926 B/s rd, 115 B/s wr, 1 op/s 2026-03-10T07:26:03.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:03 vm00 bash[28005]: cluster 2026-03-10T07:26:01.917535+0000 mgr.y (mgr.14150) 264 : cluster [DBG] pgmap v242: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 926 B/s rd, 115 B/s wr, 1 op/s 2026-03-10T07:26:03.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:03 vm00 bash[28005]: cluster 2026-03-10T07:26:01.917535+0000 mgr.y (mgr.14150) 264 : cluster [DBG] pgmap v242: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 926 B/s rd, 115 B/s wr, 1 op/s 2026-03-10T07:26:03.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:03 vm00 bash[20701]: cluster 2026-03-10T07:26:01.917535+0000 mgr.y (mgr.14150) 264 : cluster [DBG] pgmap v242: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 926 B/s rd, 115 B/s wr, 1 op/s 2026-03-10T07:26:03.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:03 vm00 bash[20701]: cluster 2026-03-10T07:26:01.917535+0000 mgr.y (mgr.14150) 264 : cluster [DBG] pgmap v242: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 926 B/s rd, 115 B/s wr, 1 op/s 2026-03-10T07:26:04.049 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:03 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use 
KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:04.049 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:04 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:04.049 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 07:26:03 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:04.049 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 07:26:04 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:04.049 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 07:26:03 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:04.049 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 07:26:04 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:04.049 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:26:03 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:04.049 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:26:04 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T07:26:04.049 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:03 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:04.049 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:04.049 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:03 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:04.049 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:04 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:04.049 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 07:26:03 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service.
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:04.049 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 07:26:04 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:04.049 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 07:26:03 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:04.050 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 07:26:04 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:04.454 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 systemd[1]: Started Ceph prometheus.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953. 2026-03-10T07:26:04.454 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 bash[50125]: ts=2026-03-10T07:26:04.266Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-10T07:26:04.454 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 bash[50125]: ts=2026-03-10T07:26:04.266Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-10T07:26:04.454 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 bash[50125]: ts=2026-03-10T07:26:04.266Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm03 (none))" 2026-03-10T07:26:04.454 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 bash[50125]: ts=2026-03-10T07:26:04.266Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-10T07:26:04.454 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 bash[50125]: ts=2026-03-10T07:26:04.266Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-10T07:26:04.454 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 bash[50125]: ts=2026-03-10T07:26:04.268Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-10T07:26:04.454 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 bash[50125]: ts=2026-03-10T07:26:04.268Z caller=main.go:1129 level=info msg="Starting TSDB ..." 
2026-03-10T07:26:04.454 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 bash[50125]: ts=2026-03-10T07:26:04.270Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-10T07:26:04.454 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 bash[50125]: ts=2026-03-10T07:26:04.270Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.182µs 2026-03-10T07:26:04.454 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 bash[50125]: ts=2026-03-10T07:26:04.270Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-10T07:26:04.454 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 bash[50125]: ts=2026-03-10T07:26:04.270Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 2026-03-10T07:26:04.454 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 bash[50125]: ts=2026-03-10T07:26:04.270Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=17.243µs wal_replay_duration=110.738µs wbl_replay_duration=160ns total_replay_duration=139.673µs 2026-03-10T07:26:04.454 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 bash[50125]: ts=2026-03-10T07:26:04.270Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-10T07:26:04.454 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 bash[50125]: ts=2026-03-10T07:26:04.270Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9095 2026-03-10T07:26:04.454 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 bash[50125]: ts=2026-03-10T07:26:04.272Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-10T07:26:04.454 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 bash[50125]: ts=2026-03-10T07:26:04.272Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-10T07:26:04.454 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 bash[50125]: ts=2026-03-10T07:26:04.272Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-10T07:26:04.454 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 bash[50125]: ts=2026-03-10T07:26:04.284Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=11.789622ms db_storage=621ns remote_storage=972ns web_handler=100ns query_engine=351ns scrape=1.698822ms scrape_sd=156.123µs notify=862ns notify_sd=792ns rules=9.738255ms tracing=5.441µs 2026-03-10T07:26:04.454 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 bash[50125]: ts=2026-03-10T07:26:04.284Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 2026-03-10T07:26:04.454 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:04 vm03 bash[50125]: ts=2026-03-10T07:26:04.284Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 
2026-03-10T07:26:04.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:04 vm03 bash[23382]: audit 2026-03-10T07:26:02.828082+0000 mgr.y (mgr.14150) 265 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:04.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:04 vm03 bash[23382]: audit 2026-03-10T07:26:02.828082+0000 mgr.y (mgr.14150) 265 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:04.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:04 vm03 bash[23382]: audit 2026-03-10T07:26:04.165133+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:04.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:04 vm03 bash[23382]: audit 2026-03-10T07:26:04.165133+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:04.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:04 vm03 bash[23382]: audit 2026-03-10T07:26:04.171567+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:04.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:04 vm03 bash[23382]: audit 2026-03-10T07:26:04.171567+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:04.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:04 vm03 bash[23382]: audit 2026-03-10T07:26:04.178109+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:04.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:04 vm03 bash[23382]: audit 2026-03-10T07:26:04.178109+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:04.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:04 vm03 bash[23382]: audit 2026-03-10T07:26:04.182844+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T07:26:04.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:04 vm03 bash[23382]: audit 2026-03-10T07:26:04.182844+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T07:26:04.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:04 vm00 bash[28005]: audit 2026-03-10T07:26:02.828082+0000 mgr.y (mgr.14150) 265 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:04.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:04 vm00 bash[28005]: audit 2026-03-10T07:26:02.828082+0000 mgr.y (mgr.14150) 265 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:04.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:04 vm00 bash[28005]: audit 2026-03-10T07:26:04.165133+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:04.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:04 vm00 bash[28005]: audit 2026-03-10T07:26:04.165133+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' 
entity='mgr.y' 2026-03-10T07:26:04.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:04 vm00 bash[28005]: audit 2026-03-10T07:26:04.171567+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:04.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:04 vm00 bash[28005]: audit 2026-03-10T07:26:04.171567+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:04.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:04 vm00 bash[28005]: audit 2026-03-10T07:26:04.178109+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:04.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:04 vm00 bash[28005]: audit 2026-03-10T07:26:04.178109+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:04.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:04 vm00 bash[28005]: audit 2026-03-10T07:26:04.182844+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T07:26:04.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:04 vm00 bash[28005]: audit 2026-03-10T07:26:04.182844+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T07:26:04.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:04 vm00 bash[20701]: audit 2026-03-10T07:26:02.828082+0000 mgr.y (mgr.14150) 265 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:04.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:04 vm00 bash[20701]: audit 2026-03-10T07:26:02.828082+0000 mgr.y (mgr.14150) 265 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:04.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:04 vm00 bash[20701]: audit 2026-03-10T07:26:04.165133+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:04.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:04 vm00 bash[20701]: audit 2026-03-10T07:26:04.165133+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:04.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:04 vm00 bash[20701]: audit 2026-03-10T07:26:04.171567+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:04.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:04 vm00 bash[20701]: audit 2026-03-10T07:26:04.171567+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:04.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:04 vm00 bash[20701]: audit 2026-03-10T07:26:04.178109+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:04.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:04 vm00 bash[20701]: audit 2026-03-10T07:26:04.178109+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' 2026-03-10T07:26:04.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:04 vm00 bash[20701]: 
audit 2026-03-10T07:26:04.182844+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T07:26:04.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:04 vm00 bash[20701]: audit 2026-03-10T07:26:04.182844+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:05 vm00 bash[20701]: cluster 2026-03-10T07:26:03.918027+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v243: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 819 B/s rd, 102 B/s wr, 1 op/s 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:05 vm00 bash[20701]: cluster 2026-03-10T07:26:03.918027+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v243: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 819 B/s rd, 102 B/s wr, 1 op/s 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:05 vm00 bash[20701]: audit 2026-03-10T07:26:05.181178+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:05 vm00 bash[20701]: audit 2026-03-10T07:26:05.181178+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:05 vm00 bash[20701]: cluster 2026-03-10T07:26:05.199951+0000 mon.a (mon.0) 720 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:05 vm00 bash[20701]: cluster 2026-03-10T07:26:05.199951+0000 mon.a (mon.0) 720 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:05 vm00 bash[20971]: ignoring --setuser ceph since I am not root 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:05 vm00 bash[20971]: ignoring --setgroup ceph since I am not root 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:05 vm00 bash[20971]: debug 2026-03-10T07:26:05.306+0000 7fecf27bc140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:05 vm00 bash[20971]: debug 2026-03-10T07:26:05.342+0000 7fecf27bc140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:05 vm00 bash[20971]: debug 2026-03-10T07:26:05.470+0000 7fecf27bc140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:05 vm00 bash[28005]: cluster 2026-03-10T07:26:03.918027+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v243: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 819 B/s rd, 102 B/s wr, 1 op/s 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:05 vm00 bash[28005]: cluster 2026-03-10T07:26:03.918027+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v243: 132 pgs: 132 active+clean; 455 KiB data, 217 
MiB used, 160 GiB / 160 GiB avail; 819 B/s rd, 102 B/s wr, 1 op/s 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:05 vm00 bash[28005]: audit 2026-03-10T07:26:05.181178+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:05 vm00 bash[28005]: audit 2026-03-10T07:26:05.181178+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:05 vm00 bash[28005]: cluster 2026-03-10T07:26:05.199951+0000 mon.a (mon.0) 720 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:05 vm00 bash[28005]: cluster 2026-03-10T07:26:05.199951+0000 mon.a (mon.0) 720 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:05 vm03 bash[23382]: cluster 2026-03-10T07:26:03.918027+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v243: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 819 B/s rd, 102 B/s wr, 1 op/s 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:05 vm03 bash[23382]: cluster 2026-03-10T07:26:03.918027+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v243: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 819 B/s rd, 102 B/s wr, 1 op/s 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:05 vm03 bash[23382]: audit 2026-03-10T07:26:05.181178+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:05 vm03 bash[23382]: audit 2026-03-10T07:26:05.181178+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.100:0/934853846' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:05 vm03 bash[23382]: cluster 2026-03-10T07:26:05.199951+0000 mon.a (mon.0) 720 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:05 vm03 bash[23382]: cluster 2026-03-10T07:26:05.199951+0000 mon.a (mon.0) 720 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:05 vm03 bash[24092]: ignoring --setuser ceph since I am not root 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:05 vm03 bash[24092]: ignoring --setgroup ceph since I am not root 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:05 vm03 bash[24092]: debug 2026-03-10T07:26:05.246+0000 7f4c4d55e640 1 -- 192.168.123.103:0/2948400116 <== mon.0 v2:192.168.123.100:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x563d18a232c0 con 0x563d18a25400 2026-03-10T07:26:05.474 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:05 vm03 bash[24092]: debug 2026-03-10T07:26:05.310+0000 7f4c4fdbb140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 
2026-03-10T07:26:05.474 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:05 vm03 bash[24092]: debug 2026-03-10T07:26:05.350+0000 7f4c4fdbb140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T07:26:05.767 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:05 vm03 bash[24092]: debug 2026-03-10T07:26:05.474+0000 7f4c4fdbb140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T07:26:06.116 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.b/config 2026-03-10T07:26:06.133 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:05 vm00 bash[20971]: debug 2026-03-10T07:26:05.766+0000 7fecf27bc140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T07:26:06.148 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:05 vm03 bash[24092]: debug 2026-03-10T07:26:05.790+0000 7f4c4fdbb140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T07:26:06.320 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.318+0000 7f06758b5640 1 -- 192.168.123.103:0/2707619282 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f067010a470 msgr2=0x7f067010a8d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:06.320 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.318+0000 7f06758b5640 1 --2- 192.168.123.103:0/2707619282 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f067010a470 0x7f067010a8d0 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f066800b0a0 tx=0x7f066802f450 comp rx=0 tx=0).stop 2026-03-10T07:26:06.320 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f06758b5640 1 -- 192.168.123.103:0/2707619282 shutdown_connections 2026-03-10T07:26:06.321 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f06758b5640 1 --2- 192.168.123.103:0/2707619282 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f067010ae10 0x7f06701116e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:06.321 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f06758b5640 1 --2- 192.168.123.103:0/2707619282 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f067010a470 0x7f067010a8d0 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:06.321 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f06758b5640 1 --2- 192.168.123.103:0/2707619282 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0670073f20 0x7f0670074320 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:06.321 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f06758b5640 1 -- 192.168.123.103:0/2707619282 >> 192.168.123.103:0/2707619282 conn(0x7f067006f820 msgr2=0x7f0670071c60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:06.321 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f06758b5640 1 -- 192.168.123.103:0/2707619282 shutdown_connections 2026-03-10T07:26:06.321 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f06758b5640 1 -- 192.168.123.103:0/2707619282 wait complete. 
2026-03-10T07:26:06.321 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f06758b5640 1 Processor -- start
2026-03-10T07:26:06.322 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f06758b5640 1 -- start start
2026-03-10T07:26:06.322 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f06758b5640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0670073f20 0x7f06701a9080 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:06.322 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f06758b5640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f067010a470 0x7f06701a95c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:06.323 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f06758b5640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f067010ae10 0x7f06701b0640 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:06.323 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f06758b5640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f067011ca10 con 0x7f0670073f20
2026-03-10T07:26:06.323 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f06758b5640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f067011c890 con 0x7f067010ae10
2026-03-10T07:26:06.324 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f06758b5640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f067011cb90 con 0x7f067010a470
2026-03-10T07:26:06.324 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f06748b3640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0670073f20 0x7f06701a9080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:06.324 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f066ffff640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f067010a470 0x7f06701a95c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:06.324 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f066ffff640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f067010a470 0x7f06701a95c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.103:57952/0 (socket says 192.168.123.103:57952)
2026-03-10T07:26:06.324 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f066ffff640 1 -- 192.168.123.103:0/1910896802 learned_addr learned my addr 192.168.123.103:0/1910896802 (peer_addr_for_me v2:192.168.123.103:0/0)
2026-03-10T07:26:06.325 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f066ffff640 1 -- 192.168.123.103:0/1910896802 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f067010ae10 msgr2=0x7f06701b0640 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T07:26:06.325 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f066ffff640 1 --2- 192.168.123.103:0/1910896802 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f067010ae10 0x7f06701b0640 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:06.325 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f066ffff640 1 -- 192.168.123.103:0/1910896802 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0670073f20 msgr2=0x7f06701a9080 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:06.325 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f066ffff640 1 --2- 192.168.123.103:0/1910896802 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0670073f20 0x7f06701a9080 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:06.325 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f066ffff640 1 -- 192.168.123.103:0/1910896802 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0670111e90 con 0x7f067010a470
2026-03-10T07:26:06.325 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.322+0000 7f066ffff640 1 --2- 192.168.123.103:0/1910896802 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f067010a470 0x7f06701a95c0 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f0668002790 tx=0x7f0668004640 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:06.326 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.326+0000 7f066dffb640 1 -- 192.168.123.103:0/1910896802 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0668004030 con 0x7f067010a470
2026-03-10T07:26:06.326 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.326+0000 7f06758b5640 1 -- 192.168.123.103:0/1910896802 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f0670112120 con 0x7f067010a470
2026-03-10T07:26:06.326 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.326+0000 7f06758b5640 1 -- 192.168.123.103:0/1910896802 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f06701126b0 con 0x7f067010a470
2026-03-10T07:26:06.326 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.326+0000 7f066dffb640 1 -- 192.168.123.103:0/1910896802 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f0668007a00 con 0x7f067010a470
2026-03-10T07:26:06.326 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.326+0000 7f066dffb640 1 -- 192.168.123.103:0/1910896802 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f06680436b0 con 0x7f067010a470
2026-03-10T07:26:06.326 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.326+0000 7f066dffb640 1 -- 192.168.123.103:0/1910896802 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 16) ==== 100100+0+0 (secure 0 0 0) 0x7f0668038420 con 0x7f067010a470
2026-03-10T07:26:06.328 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.330+0000 7f06758b5640 1 -- 192.168.123.103:0/1910896802 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0648005180 con 0x7f067010a470
2026-03-10T07:26:06.328 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.330+0000 7f066dffb640 1 --2- 192.168.123.103:0/1910896802 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f0650077690 0x7f0650079b50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:06.328 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.330+0000 7f06748b3640 1 -- 192.168.123.103:0/1910896802 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f0650077690 msgr2=0x7f0650079b50 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/2669938860
2026-03-10T07:26:06.329 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.330+0000 7f06748b3640 1 --2- 192.168.123.103:0/1910896802 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f0650077690 0x7f0650079b50 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000
2026-03-10T07:26:06.329 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.330+0000 7f066dffb640 1 -- 192.168.123.103:0/1910896802 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(67..67 src has 1..67) ==== 7157+0+0 (secure 0 0 0) 0x7f06680075c0 con 0x7f067010a470
2026-03-10T07:26:06.333 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.334+0000 7f066dffb640 1 -- 192.168.123.103:0/1910896802 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f0668048050 con 0x7f067010a470
2026-03-10T07:26:06.418 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:06 vm03 bash[24092]: debug 2026-03-10T07:26:06.322+0000 7f4c4fdbb140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T07:26:06.435 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.434+0000 7f06758b5640 1 -- 192.168.123.103:0/1910896802 --> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm00=a", "target": ["mon-mgr", ""]}) -- 0x7f0648002bf0 con 0x7f0650077690
2026-03-10T07:26:06.530 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.530+0000 7f06748b3640 1 -- 192.168.123.103:0/1910896802 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f0650077690 msgr2=0x7f0650079b50 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/2669938860
2026-03-10T07:26:06.530 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.530+0000 7f06748b3640 1 --2- 192.168.123.103:0/1910896802 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f0650077690 0x7f0650079b50 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.400000
2026-03-10T07:26:06.634 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:06 vm00 bash[20971]: debug 2026-03-10T07:26:06.246+0000 7fecf27bc140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T07:26:06.634 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:06 vm00 bash[20971]: debug 2026-03-10T07:26:06.342+0000 7fecf27bc140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T07:26:06.634 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:06 vm00 bash[20971]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T07:26:06.634 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:06 vm00 bash[20971]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T07:26:06.634 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:06 vm00 bash[20971]: from numpy import show_config as show_numpy_config
2026-03-10T07:26:06.634 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:06 vm00 bash[20971]: debug 2026-03-10T07:26:06.478+0000 7fecf27bc140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T07:26:06.701 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:06 vm03 bash[24092]: debug 2026-03-10T07:26:06.418+0000 7f4c4fdbb140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T07:26:06.701 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:06 vm03 bash[24092]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T07:26:06.701 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:06 vm03 bash[24092]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T07:26:06.701 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:06 vm03 bash[24092]: from numpy import show_config as show_numpy_config
2026-03-10T07:26:06.701 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:06 vm03 bash[24092]: debug 2026-03-10T07:26:06.550+0000 7f4c4fdbb140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T07:26:06.930 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.930+0000 7f06748b3640 1 -- 192.168.123.103:0/1910896802 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f0650077690 msgr2=0x7f0650079b50 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/2669938860
2026-03-10T07:26:06.931 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:06.930+0000 7f06748b3640 1 --2- 192.168.123.103:0/1910896802 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f0650077690 0x7f0650079b50 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.800000
2026-03-10T07:26:07.017 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:06 vm03 bash[24092]: debug 2026-03-10T07:26:06.698+0000 7f4c4fdbb140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T07:26:07.022 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:06 vm03 bash[24092]: debug 2026-03-10T07:26:06.742+0000 7f4c4fdbb140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T07:26:07.022 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:06 vm03 bash[24092]: debug 2026-03-10T07:26:06.786+0000 7f4c4fdbb140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T07:26:07.022 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:06 vm03 bash[24092]: debug 2026-03-10T07:26:06.830+0000 7f4c4fdbb140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T07:26:07.022 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:06 vm03 bash[24092]: debug 2026-03-10T07:26:06.882+0000 7f4c4fdbb140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T07:26:07.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:06 vm00 bash[20971]: debug 2026-03-10T07:26:06.638+0000 7fecf27bc140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T07:26:07.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:06 vm00 bash[20971]: debug 2026-03-10T07:26:06.674+0000 7fecf27bc140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T07:26:07.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:06 vm00 bash[20971]: debug 2026-03-10T07:26:06.710+0000 7fecf27bc140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T07:26:07.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:06 vm00 bash[20971]: debug 2026-03-10T07:26:06.758+0000 7fecf27bc140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T07:26:07.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:06 vm00 bash[20971]: debug 2026-03-10T07:26:06.814+0000 7fecf27bc140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T07:26:07.574 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:07 vm00 bash[20971]: debug 2026-03-10T07:26:07.286+0000 7fecf27bc140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T07:26:07.574 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:07 vm00 bash[20971]: debug 2026-03-10T07:26:07.334+0000 7fecf27bc140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T07:26:07.574 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:07 vm00 bash[20971]: debug 2026-03-10T07:26:07.374+0000 7fecf27bc140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T07:26:07.574 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:07 vm00 bash[20971]: debug 2026-03-10T07:26:07.526+0000 7fecf27bc140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T07:26:07.653 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:07 vm03 bash[24092]: debug 2026-03-10T07:26:07.366+0000 7f4c4fdbb140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T07:26:07.654 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:07 vm03 bash[24092]: debug 2026-03-10T07:26:07.406+0000 7f4c4fdbb140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T07:26:07.654 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:07 vm03 bash[24092]: debug 2026-03-10T07:26:07.450+0000 7f4c4fdbb140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T07:26:07.654 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:07 vm03 bash[24092]: debug 2026-03-10T07:26:07.606+0000 7f4c4fdbb140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T07:26:07.732 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:07.730+0000 7f06748b3640 1 -- 192.168.123.103:0/1910896802 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f0650077690 msgr2=0x7f0650079b50 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.100:6800/2669938860
2026-03-10T07:26:07.732 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:07.730+0000 7f06748b3640 1 --2- 192.168.123.103:0/1910896802 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f0650077690 0x7f0650079b50 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 1.600000
2026-03-10T07:26:07.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:07 vm00 bash[20971]: debug 2026-03-10T07:26:07.570+0000 7fecf27bc140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T07:26:07.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:07 vm00 bash[20971]: debug 2026-03-10T07:26:07.614+0000 7fecf27bc140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T07:26:07.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:07 vm00 bash[20971]: debug 2026-03-10T07:26:07.726+0000 7fecf27bc140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T07:26:07.984 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:07 vm03 bash[24092]: debug 2026-03-10T07:26:07.654+0000 7f4c4fdbb140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T07:26:07.984 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:07 vm03 bash[24092]: debug 2026-03-10T07:26:07.698+0000 7f4c4fdbb140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T07:26:07.984 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:07 vm03 bash[24092]: debug 2026-03-10T07:26:07.814+0000 7f4c4fdbb140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T07:26:08.136 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:07 vm00 bash[20971]: debug 2026-03-10T07:26:07.882+0000 7fecf27bc140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T07:26:08.136 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:08 vm00 bash[20971]: debug 2026-03-10T07:26:08.058+0000 7fecf27bc140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T07:26:08.136 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:08 vm00 bash[20971]: debug 2026-03-10T07:26:08.094+0000 7fecf27bc140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T07:26:08.267 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:07 vm03 bash[24092]: debug 2026-03-10T07:26:07.982+0000 7f4c4fdbb140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T07:26:08.267 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:08 vm03 bash[24092]: debug 2026-03-10T07:26:08.178+0000 7f4c4fdbb140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T07:26:08.267 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:08 vm03 bash[24092]: debug 2026-03-10T07:26:08.218+0000 7f4c4fdbb140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T07:26:08.518 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:08 vm00 bash[20971]: debug 2026-03-10T07:26:08.134+0000 7fecf27bc140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T07:26:08.518 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:08 vm00 bash[20971]: debug 2026-03-10T07:26:08.282+0000 7fecf27bc140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T07:26:08.541 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:08.542+0000 7f066dffb640 1 -- 192.168.123.103:0/1910896802 <== mon.2 v2:192.168.123.100:3301/0 7 ==== mgrmap(e 17) ==== 99714+0+0 (secure 0 0 0) 0x7f0668083a00 con 0x7f067010a470
2026-03-10T07:26:08.541 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:08.542+0000 7f066dffb640 1 -- 192.168.123.103:0/1910896802 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f0650077690 msgr2=0x7f0650079b50 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T07:26:08.541 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:08.542+0000 7f066dffb640 1 --2- 192.168.123.103:0/1910896802 >> [v2:192.168.123.100:6800/2669938860,v1:192.168.123.100:6801/2669938860] conn(0x7f0650077690 0x7f0650079b50 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:08.580 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:08 vm03 bash[24092]: debug 2026-03-10T07:26:08.266+0000 7f4c4fdbb140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T07:26:08.580 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:08 vm03 bash[24092]: debug 2026-03-10T07:26:08.426+0000 7f4c4fdbb140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T07:26:08.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:08 vm00 bash[28005]: cluster 2026-03-10T07:26:08.524725+0000 mon.a (mon.0) 721 : cluster [INF] Active manager daemon y restarted
2026-03-10T07:26:08.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:08 vm00 bash[28005]: cluster 2026-03-10T07:26:08.525014+0000 mon.a (mon.0) 722 : cluster [INF] Activating manager daemon y
2026-03-10T07:26:08.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:08 vm00 bash[28005]: cluster 2026-03-10T07:26:08.537946+0000 mon.a (mon.0) 723 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in
2026-03-10T07:26:08.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:08 vm00 bash[28005]: cluster 2026-03-10T07:26:08.538251+0000 mon.a (mon.0) 724 : cluster [DBG] mgrmap e17: y(active, starting, since 0.0133478s), standbys: x
2026-03-10T07:26:08.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:08 vm00 bash[28005]: audit 2026-03-10T07:26:08.547325+0000 mon.c (mon.2) 26 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T07:26:08.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:08 vm00 bash[28005]: audit 2026-03-10T07:26:08.547671+0000 mon.c (mon.2) 27 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:26:08.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:08 vm00 bash[28005]: audit 2026-03-10T07:26:08.548002+0000 mon.c (mon.2) 28 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:26:08.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:08 vm00 bash[28005]: audit 2026-03-10T07:26:08.549900+0000 mon.c (mon.2) 29 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T07:26:08.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:08 vm00 bash[28005]: audit 2026-03-10T07:26:08.550389+0000 mon.c (mon.2) 30 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T07:26:08.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:08 vm00 bash[28005]: audit 2026-03-10T07:26:08.551374+0000 mon.c (mon.2) 31 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:26:08.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:08 vm00 bash[28005]: audit 2026-03-10T07:26:08.551772+0000 mon.c (mon.2) 32 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:26:08.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:08 vm00 bash[28005]: audit 2026-03-10T07:26:08.552138+0000 mon.c (mon.2) 33 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:26:08.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:08 vm00 bash[28005]: audit 2026-03-10T07:26:08.552489+0000 mon.c (mon.2) 34 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:26:08.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:08 vm00 bash[28005]: audit 2026-03-10T07:26:08.552844+0000 mon.c (mon.2) 35 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:26:08.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:08 vm00 bash[28005]: audit 2026-03-10T07:26:08.553193+0000 mon.c (mon.2) 36 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:26:08.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:08 vm00 bash[28005]: audit 2026-03-10T07:26:08.554671+0000 mon.c (mon.2) 37 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:26:08.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:08 vm00 bash[28005]: audit 2026-03-10T07:26:08.555265+0000 mon.c (mon.2) 38 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:26:08.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:08 vm00 bash[28005]: audit 2026-03-10T07:26:08.555938+0000 mon.c (mon.2) 39 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T07:26:08.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:08 vm00 bash[28005]: audit 2026-03-10T07:26:08.556495+0000 mon.c (mon.2) 40 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T07:26:08.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:08 vm00 bash[28005]: audit 2026-03-10T07:26:08.557205+0000 mon.c (mon.2) 41 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T07:26:08.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:08 vm00 bash[28005]: cluster 2026-03-10T07:26:08.565273+0000 mon.a (mon.0) 725 : cluster [INF] Manager daemon y is now available
2026-03-10T07:26:08.885 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:08 vm00 bash[20971]: debug 2026-03-10T07:26:08.514+0000 7fecf27bc140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T07:26:08.885 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:08 vm00 bash[20971]: [10/Mar/2026:07:26:08] ENGINE Bus STARTING
2026-03-10T07:26:08.885 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:08 vm00 bash[20971]: CherryPy Checker:
2026-03-10T07:26:08.885 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:08 vm00 bash[20971]: The Application mounted at '' has an empty config.
2026-03-10T07:26:08.885 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:08 vm00 bash[20971]: [10/Mar/2026:07:26:08] ENGINE Serving on http://:::9283
2026-03-10T07:26:08.885 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:08 vm00 bash[20971]: [10/Mar/2026:07:26:08] ENGINE Bus STARTED
2026-03-10T07:26:08.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:08 vm00 bash[20701]: cluster 2026-03-10T07:26:08.524725+0000 mon.a (mon.0) 721 : cluster [INF] Active manager daemon y restarted
2026-03-10T07:26:08.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:08 vm00 bash[20701]: cluster 2026-03-10T07:26:08.525014+0000 mon.a (mon.0) 722 : cluster [INF] Activating manager daemon y
2026-03-10T07:26:08.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:08 vm00 bash[20701]: cluster 2026-03-10T07:26:08.537946+0000 mon.a (mon.0) 723 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in
2026-03-10T07:26:08.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:08 vm00 bash[20701]: cluster 2026-03-10T07:26:08.538251+0000 mon.a (mon.0) 724 : cluster [DBG] mgrmap e17: y(active, starting, since 0.0133478s), standbys: x
2026-03-10T07:26:08.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:08 vm00 bash[20701]: audit 2026-03-10T07:26:08.547325+0000 mon.c (mon.2) 26 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T07:26:08.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:08 vm00 bash[20701]: audit 2026-03-10T07:26:08.547671+0000 mon.c (mon.2) 27 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:26:08.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:08 vm00 bash[20701]: audit 2026-03-10T07:26:08.548002+0000 mon.c (mon.2) 28 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:26:08.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:08 vm00 bash[20701]: audit 2026-03-10T07:26:08.549900+0000 mon.c (mon.2) 29 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T07:26:08.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:08 vm00 bash[20701]: audit 2026-03-10T07:26:08.550389+0000 mon.c (mon.2) 30 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T07:26:08.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:08 vm00 bash[20701]: audit 2026-03-10T07:26:08.551374+0000 mon.c (mon.2) 31 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:26:08.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:08 vm00 bash[20701]: audit 2026-03-10T07:26:08.551772+0000 mon.c (mon.2) 32 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:26:08.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:08 vm00 bash[20701]: audit 2026-03-10T07:26:08.552138+0000 mon.c (mon.2) 33 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:26:08.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:08 vm00 bash[20701]: audit 2026-03-10T07:26:08.552489+0000 mon.c (mon.2) 34 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:26:08.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:08 vm00 bash[20701]: audit 2026-03-10T07:26:08.552844+0000 mon.c (mon.2) 35 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:26:08.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:08 vm00 bash[20701]: audit 2026-03-10T07:26:08.553193+0000 mon.c (mon.2) 36 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:26:08.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:08 vm00 bash[20701]: audit 2026-03-10T07:26:08.554671+0000 mon.c (mon.2) 37 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:26:08.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:08 vm00 bash[20701]: audit 2026-03-10T07:26:08.555265+0000 mon.c (mon.2) 38 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:26:08.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:08 vm00 bash[20701]: audit 2026-03-10T07:26:08.555938+0000 mon.c (mon.2) 39 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T07:26:08.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:08 vm00 bash[20701]: audit 2026-03-10T07:26:08.556495+0000 mon.c (mon.2) 40 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T07:26:08.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:08 vm00 bash[20701]: audit 2026-03-10T07:26:08.557205+0000 mon.c (mon.2) 41 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T07:26:08.887 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:08 vm00 bash[20701]: cluster 2026-03-10T07:26:08.565273+0000 mon.a (mon.0) 725 : cluster [INF] Manager daemon y is now available
2026-03-10T07:26:09.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:08 vm03 bash[23382]: cluster 2026-03-10T07:26:08.524725+0000 mon.a (mon.0) 721 : cluster [INF] Active manager daemon y restarted
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:08 vm03 bash[23382]: cluster 2026-03-10T07:26:08.525014+0000 mon.a (mon.0) 722 : cluster [INF] Activating manager daemon y
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:08 vm03 bash[23382]: cluster 2026-03-10T07:26:08.537946+0000 mon.a (mon.0) 723 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:08 vm03 bash[23382]: cluster 2026-03-10T07:26:08.538251+0000 mon.a (mon.0) 724 : cluster [DBG] mgrmap e17: y(active, starting, since 0.0133478s), standbys: x
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:08 vm03 bash[23382]: audit 2026-03-10T07:26:08.547325+0000 mon.c (mon.2) 26 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:08 vm03 bash[23382]: audit 2026-03-10T07:26:08.547671+0000 mon.c (mon.2) 27 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:08 vm03 bash[23382]: audit 2026-03-10T07:26:08.548002+0000 mon.c (mon.2) 28 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:08 vm03 bash[23382]: audit 2026-03-10T07:26:08.549900+0000 mon.c (mon.2) 29 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:08 vm03 bash[23382]: audit 2026-03-10T07:26:08.550389+0000 mon.c (mon.2) 30 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:08 vm03 bash[23382]: audit 2026-03-10T07:26:08.551374+0000 mon.c (mon.2) 31 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:08 vm03 bash[23382]: audit 2026-03-10T07:26:08.551772+0000 mon.c (mon.2) 32 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:08 vm03 bash[23382]: audit 2026-03-10T07:26:08.552138+0000 mon.c (mon.2) 33 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:08 vm03 bash[23382]: audit 2026-03-10T07:26:08.552489+0000 mon.c (mon.2) 34 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:08 vm03 bash[23382]: audit 2026-03-10T07:26:08.552844+0000 mon.c (mon.2) 35 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:08 vm03 bash[23382]: audit 2026-03-10T07:26:08.553193+0000 mon.c (mon.2) 36 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:08 vm03 bash[23382]: audit 2026-03-10T07:26:08.554671+0000 mon.c (mon.2) 37 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:08 vm03 bash[23382]: audit 2026-03-10T07:26:08.555265+0000 mon.c (mon.2) 38 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:08 vm03 bash[23382]: audit 2026-03-10T07:26:08.555938+0000 mon.c (mon.2) 39 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:08 vm03 bash[23382]: audit 2026-03-10T07:26:08.556495+0000 mon.c (mon.2) 40 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:08 vm03 bash[23382]: audit 2026-03-10T07:26:08.557205+0000 mon.c (mon.2) 41 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:08 vm03 bash[23382]: cluster 2026-03-10T07:26:08.565273+0000 mon.a (mon.0) 725 : cluster [INF] Manager daemon y is now available
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:08 vm03 bash[24092]: debug 2026-03-10T07:26:08.694+0000 7f4c4fdbb140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:08 vm03 bash[24092]: [10/Mar/2026:07:26:08] ENGINE Bus STARTING
2026-03-10T07:26:09.018 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:08 vm03 bash[24092]: CherryPy Checker:
2026-03-10T07:26:09.019 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:08 vm03 bash[24092]: The Application mounted at '' has an empty config.
2026-03-10T07:26:09.019 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:08 vm03 bash[24092]: [10/Mar/2026:07:26:08] ENGINE Serving on http://:::9283
2026-03-10T07:26:09.019 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:08 vm03 bash[24092]: [10/Mar/2026:07:26:08] ENGINE Bus STARTED
2026-03-10T07:26:09.561 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:09.558+0000 7f066dffb640 1 -- 192.168.123.103:0/1910896802 <== mon.2 v2:192.168.123.100:3301/0 8 ==== mgrmap(e 18) ==== 99841+0+0 (secure 0 0 0) 0x7f0668088cf0 con 0x7f067010a470
2026-03-10T07:26:09.561 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:09.558+0000 7f066dffb640 1 --2- 192.168.123.103:0/1910896802 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f0650080d80 0x7f0650083170 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:09.562 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:09.558+0000 7f066dffb640 1 -- 192.168.123.103:0/1910896802 --> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm00=a", "target": ["mon-mgr", ""]}) -- 0x7f0648002bf0 con 0x7f0650080d80
2026-03-10T07:26:09.566 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:09.566+0000 7f06748b3640 1 --2- 192.168.123.103:0/1910896802 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f0650080d80 0x7f0650083170 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:09.572 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:09.570+0000 7f06748b3640 1 --2- 192.168.123.103:0/1910896802 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f0650080d80 0x7f0650083170 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f06600096f0 tx=0x7f0660009290 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:09.584 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled alertmanager update...
2026-03-10T07:26:09.585 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:09.586+0000 7f066dffb640 1 -- 192.168.123.103:0/1910896802 <== mgr.24407 v2:192.168.123.100:6800/3339031114 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+33 (secure 0 0 0) 0x7f0648002bf0 con 0x7f0650080d80
2026-03-10T07:26:09.587 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:09.586+0000 7f06758b5640 1 -- 192.168.123.103:0/1910896802 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f0650080d80 msgr2=0x7f0650083170 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:09.587 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:09.586+0000 7f06758b5640 1 --2- 192.168.123.103:0/1910896802 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f0650080d80 0x7f0650083170 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f06600096f0 tx=0x7f0660009290 comp rx=0 tx=0).stop
2026-03-10T07:26:09.587 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:09.586+0000 7f06758b5640 1 -- 192.168.123.103:0/1910896802 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f067010a470 msgr2=0x7f06701a95c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:09.587 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:09.586+0000 7f06758b5640 1 --2- 192.168.123.103:0/1910896802 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f067010a470 0x7f06701a95c0 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f0668002790 tx=0x7f0668004640 comp rx=0 tx=0).stop
2026-03-10T07:26:09.587 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:09.586+0000 7f06750b4640 1 -- 192.168.123.103:0/1910896802 reap_dead start
2026-03-10T07:26:09.587 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:09.586+0000 7f06758b5640 1 -- 192.168.123.103:0/1910896802 shutdown_connections
2026-03-10T07:26:09.587 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:09.586+0000 7f06758b5640 1 -- 192.168.123.103:0/1910896802 >> 192.168.123.103:0/1910896802 conn(0x7f067006f820 msgr2=0x7f06700723c0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:09.587 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:09.586+0000 7f06758b5640 1 -- 192.168.123.103:0/1910896802 shutdown_connections
2026-03-10T07:26:09.587 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:09.586+0000 7f06758b5640 1 -- 192.168.123.103:0/1910896802 wait complete.
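The mgr_command payload above ({"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm00=a"}) is the wire form of the orchestrator CLI call the cephadm task issues through a cephadm shell; a minimal sketch of the equivalent manual invocation, reusing the image, fsid, and placement string from this run (the grafana variant is logged verbatim a few lines below):

    # Sketch only: apply the alertmanager service from a cephadm shell; the
    # placement '1;vm00=a' appears to mean count 1 on host vm00 with daemon id a.
    sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- \
        ceph orch apply alertmanager '1;vm00=a'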
2026-03-10T07:26:09.654 DEBUG:teuthology.orchestra.run.vm00:alertmanager.a> sudo journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@alertmanager.a.service
2026-03-10T07:26:09.656 INFO:tasks.cephadm:Adding grafana.a on vm03
2026-03-10T07:26:09.656 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph orch apply grafana '1;vm03=a'
2026-03-10T07:26:09.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:09 vm00 bash[28005]: audit 2026-03-10T07:26:08.607355+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:09.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:09 vm00 bash[28005]: audit 2026-03-10T07:26:08.612006+0000 mon.c (mon.2) 42 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:26:09.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:09 vm00 bash[28005]: audit 2026-03-10T07:26:08.634757+0000 mon.c (mon.2) 43 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:26:09.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:09 vm00 bash[28005]: audit 2026-03-10T07:26:08.635971+0000 mon.c (mon.2) 44 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T07:26:09.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:09 vm00 bash[28005]: audit 2026-03-10T07:26:08.636380+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T07:26:09.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:09 vm00 bash[28005]: audit 2026-03-10T07:26:08.681897+0000 mon.c (mon.2) 45 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T07:26:09.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:09 vm00 bash[28005]: audit 2026-03-10T07:26:08.682429+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T07:26:09.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:09 vm00 bash[28005]: cluster 2026-03-10T07:26:08.701118+0000 mon.a (mon.0) 729 : cluster [DBG] Standby manager daemon x restarted
2026-03-10T07:26:09.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:09 vm00 bash[28005]: cluster 2026-03-10T07:26:08.701222+0000 mon.a (mon.0) 730 : cluster [DBG] Standby manager daemon x started
2026-03-10T07:26:09.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:09 vm00 bash[28005]: audit 2026-03-10T07:26:08.701875+0000 mon.b (mon.1) 22 : audit [DBG] from='mgr.? 192.168.123.103:0/1261930238' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T07:26:09.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:09 vm00 bash[28005]: audit 2026-03-10T07:26:08.702495+0000 mon.b (mon.1) 23 : audit [DBG] from='mgr.? 192.168.123.103:0/1261930238' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T07:26:09.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:09 vm00 bash[28005]: audit 2026-03-10T07:26:08.705131+0000 mon.b (mon.1) 24 : audit [DBG] from='mgr.? 192.168.123.103:0/1261930238' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T07:26:09.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:09 vm00 bash[28005]: audit 2026-03-10T07:26:08.705728+0000 mon.b (mon.1) 25 : audit [DBG] from='mgr.? 192.168.123.103:0/1261930238' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T07:26:09.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:09 vm00 bash[28005]: cluster 2026-03-10T07:26:09.571952+0000 mon.a (mon.0) 731 : cluster [DBG] mgrmap e18: y(active, since 1.04704s), standbys: x
2026-03-10T07:26:09.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:09 vm00 bash[28005]: audit 2026-03-10T07:26:09.586348+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:09 vm00 bash[20701]: audit 2026-03-10T07:26:08.607355+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:09 vm00 bash[20701]: audit 2026-03-10T07:26:08.612006+0000 mon.c (mon.2) 42 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:26:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:09 vm00 bash[20701]: audit 2026-03-10T07:26:08.634757+0000 mon.c (mon.2) 43 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:26:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:09 vm00 bash[20701]: audit 2026-03-10T07:26:08.635971+0000 mon.c (mon.2) 44 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T07:26:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:09 vm00 bash[20701]: audit 2026-03-10T07:26:08.636380+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T07:26:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:09 vm00 bash[20701]: audit 2026-03-10T07:26:08.681897+0000 mon.c (mon.2) 45 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T07:26:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:09 vm00 bash[20701]: audit 2026-03-10T07:26:08.682429+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T07:26:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:09 vm00 bash[20701]: cluster 2026-03-10T07:26:08.701118+0000 mon.a (mon.0) 729 : cluster [DBG] Standby manager daemon x restarted
2026-03-10T07:26:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:09 vm00 bash[20701]: cluster 2026-03-10T07:26:08.701222+0000 mon.a (mon.0) 730 : cluster [DBG] Standby manager daemon x started
2026-03-10T07:26:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:09 vm00 bash[20701]: audit 2026-03-10T07:26:08.701875+0000 mon.b (mon.1) 22 : audit [DBG] from='mgr.? 192.168.123.103:0/1261930238' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T07:26:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:09 vm00 bash[20701]: audit 2026-03-10T07:26:08.702495+0000 mon.b (mon.1) 23 : audit [DBG] from='mgr.? 192.168.123.103:0/1261930238' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T07:26:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:09 vm00 bash[20701]: audit 2026-03-10T07:26:08.705131+0000 mon.b (mon.1) 24 : audit [DBG] from='mgr.? 192.168.123.103:0/1261930238' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T07:26:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:09 vm00 bash[20701]: audit 2026-03-10T07:26:08.705728+0000 mon.b (mon.1) 25 : audit [DBG] from='mgr.? 192.168.123.103:0/1261930238' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T07:26:09.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:09 vm00 bash[20701]: cluster 2026-03-10T07:26:09.571952+0000 mon.a (mon.0) 731 : cluster [DBG] mgrmap e18: y(active, since 1.04704s), standbys: x
2026-03-10T07:26:09.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:09 vm00 bash[20701]: audit 2026-03-10T07:26:09.586348+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:10.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:09 vm03 bash[23382]: audit 2026-03-10T07:26:08.607355+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:10.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:09 vm03 bash[23382]: audit 2026-03-10T07:26:08.612006+0000 mon.c (mon.2) 42 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:26:10.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:09 vm03 bash[23382]: audit 2026-03-10T07:26:08.634757+0000 mon.c (mon.2) 43 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:26:10.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:09 vm03 bash[23382]: audit 2026-03-10T07:26:08.635971+0000 mon.c (mon.2) 44 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T07:26:10.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:09 vm03 bash[23382]: audit 2026-03-10T07:26:08.636380+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T07:26:10.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:09 vm03 bash[23382]: audit 2026-03-10T07:26:08.681897+0000 mon.c (mon.2) 45 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T07:26:10.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:09 vm03 bash[23382]: audit 2026-03-10T07:26:08.682429+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T07:26:10.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:09 vm03 bash[23382]: cluster 2026-03-10T07:26:08.701118+0000 mon.a (mon.0) 729 : cluster [DBG] Standby manager daemon x restarted
2026-03-10T07:26:10.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:09 vm03 bash[23382]: cluster 2026-03-10T07:26:08.701222+0000 mon.a (mon.0) 730 : cluster [DBG] Standby manager daemon x started
2026-03-10T07:26:10.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:09 vm03 bash[23382]: audit 2026-03-10T07:26:08.701875+0000 mon.b (mon.1) 22 : audit [DBG] from='mgr.? 192.168.123.103:0/1261930238' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T07:26:10.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:09 vm03 bash[23382]: audit 2026-03-10T07:26:08.702495+0000 mon.b (mon.1) 23 : audit [DBG] from='mgr.? 192.168.123.103:0/1261930238' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T07:26:10.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:09 vm03 bash[23382]: audit 2026-03-10T07:26:08.705131+0000 mon.b (mon.1) 24 : audit [DBG] from='mgr.? 192.168.123.103:0/1261930238' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T07:26:10.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:09 vm03 bash[23382]: audit 2026-03-10T07:26:08.705728+0000 mon.b (mon.1) 25 : audit [DBG] from='mgr.? 192.168.123.103:0/1261930238' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T07:26:10.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:09 vm03 bash[23382]: cluster 2026-03-10T07:26:09.571952+0000 mon.a (mon.0) 731 : cluster [DBG] mgrmap e18: y(active, since 1.04704s), standbys: x
2026-03-10T07:26:10.018 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:09 vm03 bash[23382]: audit 2026-03-10T07:26:09.586348+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:10.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:10 vm00 bash[28005]: cephadm 2026-03-10T07:26:09.579046+0000 mgr.y (mgr.24407) 3 : cephadm [INF] Saving service alertmanager spec with placement vm00=a;count:1
2026-03-10T07:26:10.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:10 vm00 bash[28005]: cephadm 2026-03-10T07:26:09.798989+0000 mgr.y (mgr.24407) 4 : cephadm [INF] [10/Mar/2026:07:26:09] ENGINE Bus STARTING
2026-03-10T07:26:10.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:10 vm00 bash[28005]: cephadm 2026-03-10T07:26:09.907910+0000 mgr.y (mgr.24407) 5 : cephadm [INF] [10/Mar/2026:07:26:09] ENGINE Serving on https://192.168.123.100:7150
2026-03-10T07:26:10.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:10 vm00 bash[28005]: cephadm 2026-03-10T07:26:09.908657+0000 mgr.y (mgr.24407) 6 : cephadm [INF] [10/Mar/2026:07:26:09] ENGINE Client ('192.168.123.100', 55758) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T07:26:10.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:10 vm00 bash[28005]: cephadm 2026-03-10T07:26:10.009524+0000 mgr.y (mgr.24407) 7 : cephadm [INF] [10/Mar/2026:07:26:10] ENGINE Serving on http://192.168.123.100:8765
2026-03-10T07:26:10.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:10 vm00 bash[28005]: cephadm 2026-03-10T07:26:10.009569+0000 mgr.y (mgr.24407) 8 : cephadm [INF] [10/Mar/2026:07:26:10] ENGINE Bus STARTED
2026-03-10T07:26:10.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:10 vm00 bash[20701]: cephadm 2026-03-10T07:26:09.579046+0000 mgr.y (mgr.24407) 3 : cephadm [INF] Saving service alertmanager spec with placement vm00=a;count:1
2026-03-10T07:26:10.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:10 vm00 bash[20701]: cephadm 2026-03-10T07:26:09.798989+0000 mgr.y (mgr.24407) 4 : cephadm [INF] [10/Mar/2026:07:26:09] ENGINE Bus STARTING
2026-03-10T07:26:10.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:10 vm00 bash[20701]: cephadm 2026-03-10T07:26:09.907910+0000 mgr.y (mgr.24407) 5 : cephadm [INF] [10/Mar/2026:07:26:09] ENGINE Serving on https://192.168.123.100:7150
2026-03-10T07:26:10.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:10 vm00 bash[20701]: cephadm 2026-03-10T07:26:09.908657+0000 mgr.y (mgr.24407) 6 : cephadm [INF] [10/Mar/2026:07:26:09] ENGINE Client ('192.168.123.100', 55758) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T07:26:10.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:10 vm00 bash[20701]: cephadm 2026-03-10T07:26:10.009524+0000 mgr.y (mgr.24407) 7 : cephadm [INF] [10/Mar/2026:07:26:10] ENGINE Serving on http://192.168.123.100:8765
2026-03-10T07:26:10.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:10 vm00 bash[20701]: cephadm 2026-03-10T07:26:10.009569+0000 mgr.y (mgr.24407) 8 : cephadm [INF] [10/Mar/2026:07:26:10] ENGINE Bus STARTED
2026-03-10T07:26:11.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:10 vm03 bash[23382]: cephadm 2026-03-10T07:26:09.579046+0000 mgr.y (mgr.24407) 3 : cephadm [INF] Saving service alertmanager spec with placement vm00=a;count:1
2026-03-10T07:26:11.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:10 vm03 bash[23382]: cephadm 2026-03-10T07:26:09.798989+0000 mgr.y (mgr.24407) 4 : cephadm [INF] [10/Mar/2026:07:26:09] ENGINE Bus STARTING
2026-03-10T07:26:11.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:10 vm03 bash[23382]: cephadm 2026-03-10T07:26:09.907910+0000 mgr.y (mgr.24407) 5 : cephadm [INF] [10/Mar/2026:07:26:09] ENGINE Serving on https://192.168.123.100:7150
2026-03-10T07:26:11.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:10 vm03 bash[23382]: cephadm 2026-03-10T07:26:09.908657+0000 mgr.y (mgr.24407) 6 : cephadm [INF] [10/Mar/2026:07:26:09] ENGINE Client ('192.168.123.100', 55758) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T07:26:11.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:10 vm03 bash[23382]: cephadm 2026-03-10T07:26:10.009524+0000 mgr.y (mgr.24407) 7 : cephadm [INF] [10/Mar/2026:07:26:10] ENGINE Serving on http://192.168.123.100:8765
2026-03-10T07:26:11.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:10 vm03 bash[23382]: cephadm 2026-03-10T07:26:10.009569+0000 mgr.y (mgr.24407) 8 : cephadm [INF] [10/Mar/2026:07:26:10] ENGINE Bus STARTED
2026-03-10T07:26:11.384 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:26:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:26:11.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:11 vm00 bash[20701]: cluster 2026-03-10T07:26:10.550609+0000 mgr.y (mgr.24407) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:26:11.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:11 vm00 bash[20701]: cluster 2026-03-10T07:26:10.638214+0000 mon.a (mon.0) 733 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x
2026-03-10T07:26:11.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:11 vm00 bash[28005]: cluster 2026-03-10T07:26:10.550609+0000 mgr.y (mgr.24407) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:26:11.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:11 vm00 bash[28005]: cluster 2026-03-10T07:26:10.638214+0000 mon.a (mon.0) 733 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x
2026-03-10T07:26:12.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:11 vm03 bash[23382]: cluster 2026-03-10T07:26:10.550609+0000 mgr.y (mgr.24407) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:26:12.022 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:11 vm03 bash[23382]: cluster 2026-03-10T07:26:10.638214+0000 mon.a (mon.0) 733 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x
2026-03-10T07:26:13.267 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:26:12 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:26:13.817 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.b/config
2026-03-10T07:26:14.008 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.006+0000 7fea9245e640 1 -- 192.168.123.103:0/1911251545 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fea84006d70 msgr2=0x7fea8409a910 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:14.008 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.006+0000 7fea9245e640 1 --2- 192.168.123.103:0/1911251545 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fea84006d70 0x7fea8409a910 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7fea7c009a80 tx=0x7fea7c02f2f0 comp rx=0 tx=0).stop
2026-03-10T07:26:14.010 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.010+0000 7fea9245e640 1 -- 192.168.123.103:0/1911251545 shutdown_connections
2026-03-10T07:26:14.010 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.010+0000 7fea9245e640 1 --2- 192.168.123.103:0/1911251545 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fea8409d7d0 0x7fea8409fc90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:14.011 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.010+0000 7fea9245e640 1 --2- 192.168.123.103:0/1911251545 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fea8409ae50 0x7fea8409d240 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:14.011 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.010+0000 7fea9245e640 1 --2- 192.168.123.103:0/1911251545 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fea84006d70 0x7fea8409a910 unknown :-1 s=CLOSED pgs=63 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:14.011 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.010+0000 7fea9245e640 1 -- 192.168.123.103:0/1911251545 >> 192.168.123.103:0/1911251545 conn(0x7fea8408ff00 msgr2=0x7fea84092360 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:14.011 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.010+0000 7fea9245e640 1 -- 192.168.123.103:0/1911251545 shutdown_connections
2026-03-10T07:26:14.011 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.010+0000 7fea9245e640 1 -- 192.168.123.103:0/1911251545 wait complete.
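Each INFO:journalctl@... channel interleaved in this log is produced by a follower started the same way as the alertmanager.a one above: cephadm systemd units are named ceph-<fsid>@<daemon>.service, so any daemon of this cluster can be tailed by unit name. A minimal sketch, with the daemon name below chosen as an illustration rather than taken from a logged command:

    # Sketch only: follow a cephadm-managed daemon's journal; -n 0 skips the
    # backlog and -f streams new entries, matching the teuthology followers.
    sudo journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mon.b.service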
2026-03-10T07:26:14.012 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.010+0000 7fea9245e640 1 Processor -- start
2026-03-10T07:26:14.012 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea9245e640 1 -- start start
2026-03-10T07:26:14.012 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea9245e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fea84006d70 0x7fea84099d70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:14.013 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea9245e640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fea8409ae50 0x7fea8409a2b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:14.013 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea9245e640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fea8409d7d0 0x7fea84096870 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:14.013 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea9245e640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fea840a29f0 con 0x7fea84006d70
2026-03-10T07:26:14.013 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea9245e640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7fea840a2870 con 0x7fea8409ae50
2026-03-10T07:26:14.013 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea9245e640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fea840a2b70 con 0x7fea8409d7d0
2026-03-10T07:26:14.013 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea91c5d640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fea8409d7d0 0x7fea84096870 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:14.013 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea9145c640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fea84006d70 0x7fea84099d70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:14.014 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea91c5d640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fea8409d7d0 0x7fea84096870 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.103:57966/0 (socket says 192.168.123.103:57966)
2026-03-10T07:26:14.014 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea91c5d640 1 -- 192.168.123.103:0/66762082 learned_addr learned my addr 192.168.123.103:0/66762082 (peer_addr_for_me v2:192.168.123.103:0/0)
2026-03-10T07:26:14.014 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea90c5b640 1 --2- 192.168.123.103:0/66762082 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fea8409ae50 0x7fea8409a2b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:14.015 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea91c5d640 1 -- 192.168.123.103:0/66762082 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fea8409ae50 msgr2=0x7fea8409a2b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:14.015 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea91c5d640 1 --2- 192.168.123.103:0/66762082 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fea8409ae50 0x7fea8409a2b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:14.015 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea91c5d640 1 -- 192.168.123.103:0/66762082 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fea84006d70 msgr2=0x7fea84099d70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:14.015 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea91c5d640 1 --2- 192.168.123.103:0/66762082 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fea84006d70 0x7fea84099d70 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:14.015 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea91c5d640 1 -- 192.168.123.103:0/66762082 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fea84097130 con 0x7fea8409d7d0
2026-03-10T07:26:14.015 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea9145c640 1 --2- 192.168.123.103:0/66762082 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fea84006d70 0x7fea84099d70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-10T07:26:14.015 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea90c5b640 1 --2- 192.168.123.103:0/66762082 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fea8409ae50 0x7fea8409a2b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-10T07:26:14.015 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea91c5d640 1 --2- 192.168.123.103:0/66762082 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fea8409d7d0 0x7fea84096870 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7fea88004820 tx=0x7fea8800d4a0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:14.015 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea7a7fc640 1 -- 192.168.123.103:0/66762082 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fea880090d0 con 0x7fea8409d7d0
2026-03-10T07:26:14.016 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.014+0000 7fea7a7fc640 1 -- 192.168.123.103:0/66762082 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fea88009270 con 0x7fea8409d7d0
2026-03-10T07:26:14.016 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.018+0000 7fea9245e640 1 -- 192.168.123.103:0/66762082 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fea840973c0 con 0x7fea8409d7d0
2026-03-10T07:26:14.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:13 vm03 bash[23382]: cluster 2026-03-10T07:26:12.550922+0000 mgr.y (mgr.24407) 10 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:26:14.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:13 vm03 bash[23382]: cluster 2026-03-10T07:26:12.753969+0000 mon.a (mon.0) 734 : cluster [DBG] mgrmap e20: y(active, since 4s), standbys: x
2026-03-10T07:26:14.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:13 vm03 bash[23382]: audit 2026-03-10T07:26:12.838954+0000 mgr.y (mgr.24407) 11 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:26:14.017 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.018+0000 7fea9245e640 1 -- 192.168.123.103:0/66762082 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7fea8413bb80 con 0x7fea8409d7d0
2026-03-10T07:26:14.017 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.018+0000 7fea7a7fc640 1 -- 192.168.123.103:0/66762082 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fea88013650 con 0x7fea8409d7d0
2026-03-10T07:26:14.018 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.018+0000 7fea7a7fc640 1 -- 192.168.123.103:0/66762082 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7fea88012070 con 0x7fea8409d7d0
2026-03-10T07:26:14.018 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.018+0000 7fea7a7fc640 1 --2- 192.168.123.103:0/66762082 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fea60077670 0x7fea60079b30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:14.019 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.018+0000 7fea7a7fc640 1 -- 192.168.123.103:0/66762082 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7fea88099c50 con 0x7fea8409d7d0
2026-03-10T07:26:14.019 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.018+0000 7fea9145c640 1 --2- 192.168.123.103:0/66762082 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fea60077670 0x7fea60079b30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:14.019 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.018+0000 7fea9145c640 1 --2- 192.168.123.103:0/66762082 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fea60077670 0x7fea60079b30 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7fea7c02f9c0 tx=0x7fea7c031040 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:14.020 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.018+0000 7fea9245e640 1 -- 192.168.123.103:0/66762082 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fea54005180 con 0x7fea8409d7d0
2026-03-10T07:26:14.026 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.026+0000 7fea7a7fc640 1 -- 192.168.123.103:0/66762082 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fea88010070 con 0x7fea8409d7d0
2026-03-10T07:26:14.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:13 vm00 bash[20701]: cluster 2026-03-10T07:26:12.550922+0000 mgr.y (mgr.24407) 10 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:26:14.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:13 vm00 bash[20701]: cluster 2026-03-10T07:26:12.753969+0000 mon.a (mon.0) 734 : cluster [DBG] mgrmap e20: y(active, since 4s), standbys: x
2026-03-10T07:26:14.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:13 vm00 bash[20701]: audit 2026-03-10T07:26:12.838954+0000 mgr.y (mgr.24407) 11 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:26:14.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:13 vm00 bash[28005]: cluster 2026-03-10T07:26:12.550922+0000 mgr.y (mgr.24407) 10 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:26:14.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:13 vm00 bash[28005]: cluster 2026-03-10T07:26:12.753969+0000 mon.a (mon.0) 734 : cluster [DBG] mgrmap e20: y(active, since 4s), standbys: x
2026-03-10T07:26:14.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:13 vm00 bash[28005]: audit 2026-03-10T07:26:12.838954+0000 mgr.y (mgr.24407) 11 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:26:14.176 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.174+0000 7fea9245e640 1 -- 192.168.123.103:0/66762082 --> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}) -- 0x7fea54002bf0 con 0x7fea60077670
2026-03-10T07:26:14.186 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled grafana update...
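"Scheduled grafana update..." only records the spec with the orchestrator (compare the earlier "Saving service alertmanager spec" cephadm line); the daemon itself is created asynchronously on a later serve cycle. A hedged way to watch it converge, using standard orchestrator commands that are not taken from this log:

    # Sketch only: show the saved grafana spec and its running/expected counts,
    # then list the grafana daemons and the hosts they were placed on.
    sudo cephadm shell -- ceph orch ls grafana
    sudo cephadm shell -- ceph orch ps --daemon-type grafana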
2026-03-10T07:26:14.186 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.186+0000 7fea7a7fc640 1 -- 192.168.123.103:0/66762082 <== mgr.24407 v2:192.168.123.100:6800/3339031114 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+28 (secure 0 0 0) 0x7fea54002bf0 con 0x7fea60077670
2026-03-10T07:26:14.189 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.190+0000 7fea9245e640 1 -- 192.168.123.103:0/66762082 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fea60077670 msgr2=0x7fea60079b30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:14.189 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.190+0000 7fea9245e640 1 --2- 192.168.123.103:0/66762082 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fea60077670 0x7fea60079b30 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7fea7c02f9c0 tx=0x7fea7c031040 comp rx=0 tx=0).stop
2026-03-10T07:26:14.189 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.190+0000 7fea9245e640 1 -- 192.168.123.103:0/66762082 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fea8409d7d0 msgr2=0x7fea84096870 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:14.189 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.190+0000 7fea9245e640 1 --2- 192.168.123.103:0/66762082 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fea8409d7d0 0x7fea84096870 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7fea88004820 tx=0x7fea8800d4a0 comp rx=0 tx=0).stop
2026-03-10T07:26:14.189 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.190+0000 7fea9245e640 1 -- 192.168.123.103:0/66762082 shutdown_connections
2026-03-10T07:26:14.189 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.190+0000 7fea9245e640 1 --2- 192.168.123.103:0/66762082 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fea60077670 0x7fea60079b30 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:14.189 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.190+0000 7fea9245e640 1 --2- 192.168.123.103:0/66762082 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fea8409d7d0 0x7fea84096870 unknown :-1 s=CLOSED pgs=50 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:14.189 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.190+0000 7fea9245e640 1 --2- 192.168.123.103:0/66762082 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fea8409ae50 0x7fea8409a2b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:14.189 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.190+0000 7fea9245e640 1 --2- 192.168.123.103:0/66762082 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fea84006d70 0x7fea84099d70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:14.189 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.190+0000 7fea9245e640 1 -- 192.168.123.103:0/66762082 >> 192.168.123.103:0/66762082 conn(0x7fea8408ff00 msgr2=0x7fea8409ba50 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:14.190 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.190+0000 7fea9245e640 1 -- 192.168.123.103:0/66762082 shutdown_connections
2026-03-10T07:26:14.190 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:14.190+0000 7fea9245e640 1 -- 192.168.123.103:0/66762082 wait complete.
2026-03-10T07:26:14.265 DEBUG:teuthology.orchestra.run.vm03:grafana.a> sudo journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@grafana.a.service
2026-03-10T07:26:14.266 INFO:tasks.cephadm:Setting up client nodes...
2026-03-10T07:26:14.266 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
2026-03-10T07:26:15.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.057860+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:15.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.072848+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:15.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.178628+0000 mgr.y (mgr.24407) 12 : audit [DBG] from='client.24440 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:26:15.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: cephadm 2026-03-10T07:26:14.179855+0000 mgr.y (mgr.24407) 13 : cephadm [INF] Saving service grafana spec with placement vm03=a;count:1
2026-03-10T07:26:15.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.187356+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:15.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.324598+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:15.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.333847+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.24407 '
entity='mgr.y' 2026-03-10T07:26:15.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.333847+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.738096+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.738096+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.746100+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.746100+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.749955+0000 mon.c (mon.2) 46 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.749955+0000 mon.c (mon.2) 46 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.750233+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.750233+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.967678+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.967678+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.981082+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.981082+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.985142+0000 mon.c (mon.2) 47 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.985142+0000 mon.c (mon.2) 47 : audit [INF] from='mgr.24407 
192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.985392+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.985392+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.987190+0000 mon.c (mon.2) 48 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.987190+0000 mon.c (mon.2) 48 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.988027+0000 mon.c (mon.2) 49 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 bash[20701]: audit 2026-03-10T07:26:14.988027+0000 mon.c (mon.2) 49 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.057860+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.057860+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.072848+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.072848+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.178628+0000 mgr.y (mgr.24407) 12 : audit [DBG] from='client.24440 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.178628+0000 mgr.y (mgr.24407) 12 : audit [DBG] from='client.24440 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: cephadm 2026-03-10T07:26:14.179855+0000 mgr.y (mgr.24407) 13 : cephadm [INF] Saving service grafana spec with 
placement vm03=a;count:1 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: cephadm 2026-03-10T07:26:14.179855+0000 mgr.y (mgr.24407) 13 : cephadm [INF] Saving service grafana spec with placement vm03=a;count:1 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.187356+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.187356+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.324598+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.324598+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.333847+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.333847+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.738096+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.738096+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.746100+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.746100+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.749955+0000 mon.c (mon.2) 46 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.749955+0000 mon.c (mon.2) 46 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.750233+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.750233+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 
07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.967678+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.967678+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.981082+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.981082+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.985142+0000 mon.c (mon.2) 47 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.985142+0000 mon.c (mon.2) 47 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.985392+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.985392+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.987190+0000 mon.c (mon.2) 48 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:26:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.987190+0000 mon.c (mon.2) 48 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:26:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.988027+0000 mon.c (mon.2) 49 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:26:15.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 bash[28005]: audit 2026-03-10T07:26:14.988027+0000 mon.c (mon.2) 49 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:26:15.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.057860+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.057860+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.517 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.072848+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.072848+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.178628+0000 mgr.y (mgr.24407) 12 : audit [DBG] from='client.24440 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:26:15.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.178628+0000 mgr.y (mgr.24407) 12 : audit [DBG] from='client.24440 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:26:15.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: cephadm 2026-03-10T07:26:14.179855+0000 mgr.y (mgr.24407) 13 : cephadm [INF] Saving service grafana spec with placement vm03=a;count:1 2026-03-10T07:26:15.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: cephadm 2026-03-10T07:26:14.179855+0000 mgr.y (mgr.24407) 13 : cephadm [INF] Saving service grafana spec with placement vm03=a;count:1 2026-03-10T07:26:15.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.187356+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.187356+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.324598+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.324598+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.333847+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.333847+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.738096+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.738096+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.746100+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.746100+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.517 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.749955+0000 mon.c (mon.2) 46 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.749955+0000 mon.c (mon.2) 46 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.750233+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.750233+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.967678+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.967678+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.981082+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.981082+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:15.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.985142+0000 mon.c (mon.2) 47 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.985142+0000 mon.c (mon.2) 47 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.985392+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.985392+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:26:15.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.987190+0000 mon.c (mon.2) 48 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:26:15.518 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.987190+0000 mon.c (mon.2) 48 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:26:15.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.988027+0000 mon.c (mon.2) 49 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:26:15.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:15 vm03 bash[23382]: audit 2026-03-10T07:26:14.988027+0000 mon.c (mon.2) 49 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:26:15.791 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:15 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:15.791 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:15 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:15.791 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:15 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:15.792 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 07:26:15 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:15.792 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 07:26:15 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:15.792 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 07:26:15 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:26:15.792 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 07:26:15 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:26:15.792 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 10 07:26:15 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:26:15.792 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:15 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:26:16.051 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:16 vm00 systemd[1]: Started Ceph node-exporter.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953.
2026-03-10T07:26:16.051 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 07:26:15 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service.
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:16.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: cluster 2026-03-10T07:26:14.551193+0000 mgr.y (mgr.24407) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: cluster 2026-03-10T07:26:14.551193+0000 mgr.y (mgr.24407) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: cephadm 2026-03-10T07:26:14.989045+0000 mgr.y (mgr.24407) 15 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: cephadm 2026-03-10T07:26:14.989045+0000 mgr.y (mgr.24407) 15 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: cephadm 2026-03-10T07:26:14.989286+0000 mgr.y (mgr.24407) 16 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: cephadm 2026-03-10T07:26:14.989286+0000 mgr.y (mgr.24407) 16 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: cephadm 2026-03-10T07:26:15.025909+0000 mgr.y (mgr.24407) 17 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: cephadm 2026-03-10T07:26:15.025909+0000 mgr.y (mgr.24407) 17 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: cephadm 2026-03-10T07:26:15.027504+0000 mgr.y (mgr.24407) 18 : cephadm [INF] Updating vm00:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: cephadm 2026-03-10T07:26:15.027504+0000 mgr.y (mgr.24407) 18 : cephadm [INF] Updating vm00:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: cephadm 2026-03-10T07:26:15.068257+0000 mgr.y (mgr.24407) 19 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: cephadm 2026-03-10T07:26:15.068257+0000 mgr.y (mgr.24407) 19 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: cephadm 2026-03-10T07:26:15.073499+0000 mgr.y (mgr.24407) 20 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: cephadm 2026-03-10T07:26:15.073499+0000 mgr.y (mgr.24407) 20 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: cephadm 
2026-03-10T07:26:15.101443+0000 mgr.y (mgr.24407) 21 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.client.admin.keyring 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: cephadm 2026-03-10T07:26:15.101443+0000 mgr.y (mgr.24407) 21 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.client.admin.keyring 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: cephadm 2026-03-10T07:26:15.108321+0000 mgr.y (mgr.24407) 22 : cephadm [INF] Updating vm00:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.client.admin.keyring 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: cephadm 2026-03-10T07:26:15.108321+0000 mgr.y (mgr.24407) 22 : cephadm [INF] Updating vm00:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.client.admin.keyring 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: audit 2026-03-10T07:26:15.141363+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: audit 2026-03-10T07:26:15.141363+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: audit 2026-03-10T07:26:15.156829+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: audit 2026-03-10T07:26:15.156829+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: audit 2026-03-10T07:26:15.231348+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: audit 2026-03-10T07:26:15.231348+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: audit 2026-03-10T07:26:15.240023+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: audit 2026-03-10T07:26:15.240023+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: audit 2026-03-10T07:26:15.252666+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: audit 2026-03-10T07:26:15.252666+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: audit 2026-03-10T07:26:16.060606+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: audit 2026-03-10T07:26:16.060606+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: audit 2026-03-10T07:26:16.068333+0000 mon.a (mon.0) 752 : audit [INF] 
from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: audit 2026-03-10T07:26:16.068333+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: audit 2026-03-10T07:26:16.074869+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:16 vm00 bash[28005]: audit 2026-03-10T07:26:16.074869+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.384 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[55429]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: cluster 2026-03-10T07:26:14.551193+0000 mgr.y (mgr.24407) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: cluster 2026-03-10T07:26:14.551193+0000 mgr.y (mgr.24407) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: cephadm 2026-03-10T07:26:14.989045+0000 mgr.y (mgr.24407) 15 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: cephadm 2026-03-10T07:26:14.989045+0000 mgr.y (mgr.24407) 15 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: cephadm 2026-03-10T07:26:14.989286+0000 mgr.y (mgr.24407) 16 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T07:26:16.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: cephadm 2026-03-10T07:26:14.989286+0000 mgr.y (mgr.24407) 16 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: cephadm 2026-03-10T07:26:15.025909+0000 mgr.y (mgr.24407) 17 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: cephadm 2026-03-10T07:26:15.025909+0000 mgr.y (mgr.24407) 17 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: cephadm 2026-03-10T07:26:15.027504+0000 mgr.y (mgr.24407) 18 : cephadm [INF] Updating vm00:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: cephadm 2026-03-10T07:26:15.027504+0000 mgr.y (mgr.24407) 18 : cephadm [INF] Updating vm00:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: cephadm 2026-03-10T07:26:15.068257+0000 mgr.y (mgr.24407) 19 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: cephadm 2026-03-10T07:26:15.068257+0000 
mgr.y (mgr.24407) 19 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: cephadm 2026-03-10T07:26:15.073499+0000 mgr.y (mgr.24407) 20 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: cephadm 2026-03-10T07:26:15.073499+0000 mgr.y (mgr.24407) 20 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: cephadm 2026-03-10T07:26:15.101443+0000 mgr.y (mgr.24407) 21 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.client.admin.keyring 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: cephadm 2026-03-10T07:26:15.101443+0000 mgr.y (mgr.24407) 21 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.client.admin.keyring 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: cephadm 2026-03-10T07:26:15.108321+0000 mgr.y (mgr.24407) 22 : cephadm [INF] Updating vm00:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.client.admin.keyring 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: cephadm 2026-03-10T07:26:15.108321+0000 mgr.y (mgr.24407) 22 : cephadm [INF] Updating vm00:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.client.admin.keyring 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: audit 2026-03-10T07:26:15.141363+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: audit 2026-03-10T07:26:15.141363+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: audit 2026-03-10T07:26:15.156829+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: audit 2026-03-10T07:26:15.156829+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: audit 2026-03-10T07:26:15.231348+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: audit 2026-03-10T07:26:15.231348+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: audit 2026-03-10T07:26:15.240023+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: audit 2026-03-10T07:26:15.240023+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: audit 2026-03-10T07:26:15.252666+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: audit 
2026-03-10T07:26:15.252666+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: audit 2026-03-10T07:26:16.060606+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: audit 2026-03-10T07:26:16.060606+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: audit 2026-03-10T07:26:16.068333+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: audit 2026-03-10T07:26:16.068333+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: audit 2026-03-10T07:26:16.074869+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:16 vm00 bash[20701]: audit 2026-03-10T07:26:16.074869+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: cluster 2026-03-10T07:26:14.551193+0000 mgr.y (mgr.24407) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: cluster 2026-03-10T07:26:14.551193+0000 mgr.y (mgr.24407) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: cephadm 2026-03-10T07:26:14.989045+0000 mgr.y (mgr.24407) 15 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: cephadm 2026-03-10T07:26:14.989045+0000 mgr.y (mgr.24407) 15 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: cephadm 2026-03-10T07:26:14.989286+0000 mgr.y (mgr.24407) 16 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: cephadm 2026-03-10T07:26:14.989286+0000 mgr.y (mgr.24407) 16 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: cephadm 2026-03-10T07:26:15.025909+0000 mgr.y (mgr.24407) 17 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: cephadm 2026-03-10T07:26:15.025909+0000 mgr.y (mgr.24407) 17 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: cephadm 2026-03-10T07:26:15.027504+0000 mgr.y (mgr.24407) 18 : cephadm [INF] Updating vm00:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: cephadm 
2026-03-10T07:26:15.027504+0000 mgr.y (mgr.24407) 18 : cephadm [INF] Updating vm00:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.conf 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: cephadm 2026-03-10T07:26:15.068257+0000 mgr.y (mgr.24407) 19 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: cephadm 2026-03-10T07:26:15.068257+0000 mgr.y (mgr.24407) 19 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: cephadm 2026-03-10T07:26:15.073499+0000 mgr.y (mgr.24407) 20 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: cephadm 2026-03-10T07:26:15.073499+0000 mgr.y (mgr.24407) 20 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: cephadm 2026-03-10T07:26:15.101443+0000 mgr.y (mgr.24407) 21 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.client.admin.keyring 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: cephadm 2026-03-10T07:26:15.101443+0000 mgr.y (mgr.24407) 21 : cephadm [INF] Updating vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.client.admin.keyring 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: cephadm 2026-03-10T07:26:15.108321+0000 mgr.y (mgr.24407) 22 : cephadm [INF] Updating vm00:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.client.admin.keyring 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: cephadm 2026-03-10T07:26:15.108321+0000 mgr.y (mgr.24407) 22 : cephadm [INF] Updating vm00:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/config/ceph.client.admin.keyring 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: audit 2026-03-10T07:26:15.141363+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: audit 2026-03-10T07:26:15.141363+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: audit 2026-03-10T07:26:15.156829+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: audit 2026-03-10T07:26:15.156829+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: audit 2026-03-10T07:26:15.231348+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: audit 2026-03-10T07:26:15.231348+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: audit 2026-03-10T07:26:15.240023+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24407 ' entity='mgr.y' 
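The audit and cephadm entries mirrored across the three mons above are one cephadm reconciliation pass: the freshly promoted mgr.y drops the per-host osd_memory_target overrides (the osd/host:vm00 and osd/host:vm03 masks), regenerates a minimal ceph.conf, and pushes the conf and admin keyring to /etc/ceph and /var/lib/ceph/<fsid>/config on both hosts. The same config masks can be exercised by hand; a sketch (the 2 GiB value is illustrative, not from this run):

    # set, inspect, and remove a per-host OSD override, as mgr.y does above
    ceph config set osd/host:vm00 osd_memory_target 2147483648
    ceph config get osd.0 osd_memory_target
    ceph config rm osd/host:vm00 osd_memory_target

    # the stripped-down conf that cephadm distributes to managed hosts
    ceph config generate-minimal-conf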
2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: audit 2026-03-10T07:26:15.240023+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: audit 2026-03-10T07:26:15.252666+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: audit 2026-03-10T07:26:15.252666+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: audit 2026-03-10T07:26:16.060606+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: audit 2026-03-10T07:26:16.060606+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: audit 2026-03-10T07:26:16.068333+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: audit 2026-03-10T07:26:16.068333+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: audit 2026-03-10T07:26:16.074869+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[23382]: audit 2026-03-10T07:26:16.074869+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:16.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:16.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:16 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:16.767 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:16 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:16.767 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:16 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:16.767 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 07:26:16 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:16.767 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 07:26:16 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:16.767 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 07:26:16 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:16.767 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 07:26:16 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:16.767 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 07:26:16 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:16.767 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 07:26:16 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:16.767 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 07:26:16 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T07:26:16.767 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 07:26:16 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:16.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:26:16 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:16.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:26:16 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:16.767 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:16 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:16.767 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:16 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:17.210 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:16 vm03 systemd[1]: Started Ceph node-exporter.b for 534d9c8a-1c51-11f1-ac87-d1fb9a119953. 
2026-03-10T07:26:17.211 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:16 vm03 bash[50879]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally
2026-03-10T07:26:17.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:17 vm03 bash[23382]: cephadm 2026-03-10T07:26:15.256598+0000 mgr.y (mgr.24407) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm00
2026-03-10T07:26:17.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:17 vm03 bash[23382]: cephadm 2026-03-10T07:26:16.077776+0000 mgr.y (mgr.24407) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm03
2026-03-10T07:26:17.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:17 vm03 bash[23382]: audit 2026-03-10T07:26:16.857885+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:17.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:17 vm03 bash[23382]: audit 2026-03-10T07:26:16.865910+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:17.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:17 vm03 bash[23382]: audit 2026-03-10T07:26:16.872470+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:17.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:17 vm03 bash[23382]: audit 2026-03-10T07:26:16.876859+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:17.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:17 vm03 bash[23382]: audit 2026-03-10T07:26:16.883337+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:17.633 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:17 vm00 bash[55429]: v1.7.0: Pulling from prometheus/node-exporter
2026-03-10T07:26:17.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:17 vm00 bash[28005]: cephadm 2026-03-10T07:26:15.256598+0000 mgr.y (mgr.24407) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm00
2026-03-10T07:26:17.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:17 vm00 bash[28005]: cephadm 2026-03-10T07:26:16.077776+0000 mgr.y (mgr.24407) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm03
2026-03-10T07:26:17.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:17 vm00 bash[28005]: audit 2026-03-10T07:26:16.857885+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:17.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:17 vm00 bash[28005]: audit 2026-03-10T07:26:16.865910+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:17.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:17 vm00 bash[28005]: audit 2026-03-10T07:26:16.872470+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:17.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:17 vm00 bash[28005]: audit 2026-03-10T07:26:16.876859+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:17.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:17 vm00 bash[28005]: audit 2026-03-10T07:26:16.883337+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:17.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:17 vm00 bash[20701]: cephadm 2026-03-10T07:26:15.256598+0000 mgr.y (mgr.24407) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm00
2026-03-10T07:26:17.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:17 vm00 bash[20701]: cephadm 2026-03-10T07:26:16.077776+0000 mgr.y (mgr.24407) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm03
2026-03-10T07:26:17.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:17 vm00 bash[20701]: audit 2026-03-10T07:26:16.857885+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:17.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:17 vm00 bash[20701]: audit 2026-03-10T07:26:16.865910+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:17.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:17 vm00 bash[20701]: audit 2026-03-10T07:26:16.872470+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:17.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:17 vm00 bash[20701]: audit 2026-03-10T07:26:16.876859+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:17.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:17 vm00 bash[20701]: audit 2026-03-10T07:26:16.883337+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:17.634 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:17 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:26:18.133 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:17 vm00 bash[55429]: 2abcce694348: Pulling fs layer
2026-03-10T07:26:18.133 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:17 vm00 bash[55429]: 455fd88e5221: Pulling fs layer
2026-03-10T07:26:18.133 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:17 vm00 bash[55429]: 324153f2810a: Pulling fs layer
2026-03-10T07:26:18.495 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:18 vm00 bash[28005]: cluster 2026-03-10T07:26:16.551661+0000 mgr.y (mgr.24407) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-10T07:26:18.495 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:18 vm00 bash[28005]: cephadm 2026-03-10T07:26:16.888570+0000 mgr.y (mgr.24407) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm00
2026-03-10T07:26:18.496 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: 455fd88e5221: Verifying Checksum
2026-03-10T07:26:18.496 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: 455fd88e5221: Download complete
2026-03-10T07:26:18.496 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: 2abcce694348: Verifying Checksum
2026-03-10T07:26:18.496 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: 2abcce694348: Download complete
2026-03-10T07:26:18.496 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: 2abcce694348: Pull complete
2026-03-10T07:26:18.496 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: 324153f2810a: Verifying Checksum
2026-03-10T07:26:18.496 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: 324153f2810a: Download complete
2026-03-10T07:26:18.496 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[20701]: cluster 2026-03-10T07:26:16.551661+0000 mgr.y (mgr.24407) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-10T07:26:18.496 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[20701]: cephadm 2026-03-10T07:26:16.888570+0000 mgr.y (mgr.24407) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm00
2026-03-10T07:26:18.517 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:18 vm03 bash[50879]: v1.7.0: Pulling from prometheus/node-exporter
2026-03-10T07:26:18.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:18 vm03 bash[23382]: cluster 2026-03-10T07:26:16.551661+0000 mgr.y (mgr.24407) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-10T07:26:18.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:18 vm03 bash[23382]: cephadm 2026-03-10T07:26:16.888570+0000 mgr.y (mgr.24407) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm00
2026-03-10T07:26:18.749 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: 455fd88e5221: Pull complete
2026-03-10T07:26:18.749 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: 324153f2810a: Pull complete
2026-03-10T07:26:18.749 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80
2026-03-10T07:26:18.749 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0
2026-03-10T07:26:18.936 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config
2026-03-10T07:26:19.017 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:18 vm03 bash[50879]: 2abcce694348: Pulling fs layer
2026-03-10T07:26:19.017 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:18 vm03 bash[50879]: 455fd88e5221: Pulling fs layer
2026-03-10T07:26:19.017 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:18 vm03 bash[50879]: 324153f2810a: Pulling fs layer
2026-03-10T07:26:19.045 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.751Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.751Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.753Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.753Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.753Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.754Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.754Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.754Z caller=node_exporter.go:117 level=info collector=arp
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.754Z caller=node_exporter.go:117 level=info collector=bcache
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.754Z caller=node_exporter.go:117 level=info collector=bonding
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.754Z caller=node_exporter.go:117 level=info collector=btrfs
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.754Z caller=node_exporter.go:117 level=info collector=conntrack
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.754Z caller=node_exporter.go:117 level=info collector=cpu
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.755Z caller=node_exporter.go:117 level=info collector=cpufreq
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.755Z caller=node_exporter.go:117 level=info collector=diskstats
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.755Z caller=node_exporter.go:117 level=info collector=dmi
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.755Z caller=node_exporter.go:117 level=info collector=edac
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.755Z caller=node_exporter.go:117 level=info collector=entropy
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.755Z caller=node_exporter.go:117 level=info collector=fibrechannel
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.755Z caller=node_exporter.go:117 level=info collector=filefd
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.755Z caller=node_exporter.go:117 level=info collector=filesystem
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.755Z caller=node_exporter.go:117 level=info collector=hwmon
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.755Z caller=node_exporter.go:117 level=info collector=infiniband
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.756Z caller=node_exporter.go:117 level=info collector=ipvs
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.756Z caller=node_exporter.go:117 level=info collector=loadavg
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.756Z caller=node_exporter.go:117 level=info collector=mdadm
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.756Z caller=node_exporter.go:117 level=info collector=meminfo
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.756Z caller=node_exporter.go:117 level=info collector=netclass
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.756Z caller=node_exporter.go:117 level=info collector=netdev
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.756Z caller=node_exporter.go:117 level=info collector=netstat
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.756Z caller=node_exporter.go:117 level=info collector=nfs
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.756Z caller=node_exporter.go:117 level=info collector=nfsd
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.756Z caller=node_exporter.go:117 level=info collector=nvme
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.756Z caller=node_exporter.go:117 level=info collector=os
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.756Z caller=node_exporter.go:117 level=info collector=powersupplyclass
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.757Z caller=node_exporter.go:117 level=info collector=pressure
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.757Z caller=node_exporter.go:117 level=info collector=rapl
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.757Z caller=node_exporter.go:117 level=info collector=schedstat
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.757Z caller=node_exporter.go:117 level=info collector=selinux
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.757Z caller=node_exporter.go:117 level=info collector=sockstat
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.757Z caller=node_exporter.go:117 level=info collector=softnet
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.757Z caller=node_exporter.go:117 level=info collector=stat
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.758Z caller=node_exporter.go:117 level=info collector=tapestats
2026-03-10T07:26:19.046 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.758Z caller=node_exporter.go:117 level=info collector=textfile
2026-03-10T07:26:19.047 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.758Z caller=node_exporter.go:117 level=info collector=thermal_zone
2026-03-10T07:26:19.047 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.758Z caller=node_exporter.go:117 level=info collector=time
2026-03-10T07:26:19.047 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.758Z caller=node_exporter.go:117 level=info collector=udp_queues
2026-03-10T07:26:19.047 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.758Z caller=node_exporter.go:117 level=info collector=uname
2026-03-10T07:26:19.047 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.758Z caller=node_exporter.go:117 level=info collector=vmstat
2026-03-10T07:26:19.047 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.758Z caller=node_exporter.go:117 level=info collector=xfs
2026-03-10T07:26:19.047 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.758Z caller=node_exporter.go:117 level=info collector=zfs
2026-03-10T07:26:19.047 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.759Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
2026-03-10T07:26:19.047 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:18 vm00 bash[55429]: ts=2026-03-10T07:26:18.759Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
2026-03-10T07:26:19.160 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.158+0000 7f3d7c3ab640 1 -- 192.168.123.100:0/3992668777 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3d74077620 msgr2=0x7f3d74077a00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:19.160 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.158+0000 7f3d7c3ab640 1 --2- 192.168.123.100:0/3992668777 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3d74077620 0x7f3d74077a00 secure :-1 s=READY pgs=160 cs=0 l=1 rev1=1 crypto rx=0x7f3d64009a30 tx=0x7f3d6402f260 comp rx=0 tx=0).stop
2026-03-10T07:26:19.160 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.158+0000 7f3d7c3ab640 1 -- 192.168.123.100:0/3992668777 shutdown_connections
2026-03-10T07:26:19.160 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.158+0000 7f3d7c3ab640 1 --2- 192.168.123.100:0/3992668777 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3d74113b60 0x7f3d74115f50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:19.160 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.158+0000 7f3d7c3ab640 1 --2- 192.168.123.100:0/3992668777 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3d74077f40 0x7f3d74113620 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:19.160 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.158+0000 7f3d7c3ab640 1 --2- 192.168.123.100:0/3992668777 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3d74077620 0x7f3d74077a00 unknown :-1 s=CLOSED pgs=160 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:19.160 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.158+0000 7f3d7c3ab640 1 -- 192.168.123.100:0/3992668777 >> 192.168.123.100:0/3992668777 conn(0x7f3d74100980 msgr2=0x7f3d74102da0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:19.160 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.158+0000 7f3d7c3ab640 1 -- 192.168.123.100:0/3992668777 shutdown_connections
2026-03-10T07:26:19.160 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.158+0000 7f3d7c3ab640 1 -- 192.168.123.100:0/3992668777 wait complete.
2026-03-10T07:26:19.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.158+0000 7f3d7c3ab640 1 Processor -- start
2026-03-10T07:26:19.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.158+0000 7f3d7c3ab640 1 -- start start
2026-03-10T07:26:19.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.158+0000 7f3d7c3ab640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3d74077620 0x7f3d741a0e40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:19.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.158+0000 7f3d7a120640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3d74077620 0x7f3d741a0e40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:19.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.158+0000 7f3d7a120640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3d74077620 0x7f3d741a0e40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:37720/0 (socket says 192.168.123.100:37720)
2026-03-10T07:26:19.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.158+0000 7f3d7c3ab640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3d74077f40 0x7f3d741a1380 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:19.162 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.158+0000 7f3d7991f640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3d74077f40 0x7f3d741a1380 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:19.162 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.158+0000 7f3d7c3ab640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3d74113b60 0x7f3d741a5710 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:19.162 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.158+0000 7f3d7c3ab640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f3d741187d0 con 0x7f3d74077f40
2026-03-10T07:26:19.163 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.158+0000 7f3d7c3ab640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f3d74118650 con 0x7f3d74113b60
2026-03-10T07:26:19.163 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.158+0000 7f3d7c3ab640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f3d74118950 con 0x7f3d74077620
2026-03-10T07:26:19.163 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.158+0000 7f3d7a120640 1 -- 192.168.123.100:0/2998570730 learned_addr learned my addr 192.168.123.100:0/2998570730 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:26:19.163 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.162+0000 7f3d7a921640 1 --2- 192.168.123.100:0/2998570730 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3d74113b60 0x7f3d741a5710 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:19.163 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.162+0000 7f3d7a120640 1 -- 192.168.123.100:0/2998570730 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3d74113b60 msgr2=0x7f3d741a5710 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:19.163 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.162+0000 7f3d7a120640 1 --2- 192.168.123.100:0/2998570730 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3d74113b60 0x7f3d741a5710 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:19.163 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.162+0000 7f3d7a120640 1 -- 192.168.123.100:0/2998570730 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3d74077f40 msgr2=0x7f3d741a1380 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:19.164 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.162+0000 7f3d7a120640 1 --2- 192.168.123.100:0/2998570730 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3d74077f40 0x7f3d741a1380 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:19.164 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.162+0000 7f3d7a120640 1 -- 192.168.123.100:0/2998570730 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3d741a5df0 con 0x7f3d74077620
2026-03-10T07:26:19.164 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.162+0000 7f3d7a921640 1 --2- 192.168.123.100:0/2998570730 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3d74113b60 0x7f3d741a5710 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T07:26:19.164 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.162+0000 7f3d7991f640 1 --2- 192.168.123.100:0/2998570730 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3d74077f40 0x7f3d741a1380 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-10T07:26:19.164 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.162+0000 7f3d7a120640 1 --2- 192.168.123.100:0/2998570730 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3d74077620 0x7f3d741a0e40 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f3d6402f770 tx=0x7f3d64002a50 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:19.164 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.162+0000 7f3d637fe640 1 -- 192.168.123.100:0/2998570730 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3d64002bc0 con 0x7f3d74077620
2026-03-10T07:26:19.165 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.162+0000 7f3d637fe640 1 -- 192.168.123.100:0/2998570730 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f3d64038990 con 0x7f3d74077620
2026-03-10T07:26:19.165 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.162+0000 7f3d7c3ab640 1 -- 192.168.123.100:0/2998570730 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f3d741a6080 con 0x7f3d74077620
2026-03-10T07:26:19.166 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.162+0000 7f3d637fe640 1 -- 192.168.123.100:0/2998570730 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3d64036840 con 0x7f3d74077620
2026-03-10T07:26:19.167 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.166+0000 7f3d7c3ab640 1 -- 192.168.123.100:0/2998570730 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f3d741a6530 con 0x7f3d74077620
2026-03-10T07:26:19.172 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.166+0000 7f3d637fe640 1 -- 192.168.123.100:0/2998570730 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f3d64038550 con 0x7f3d74077620
2026-03-10T07:26:19.172 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.166+0000 7f3d7c3ab640 1 -- 192.168.123.100:0/2998570730 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3d74078d60 con 0x7f3d74077620
2026-03-10T07:26:19.172 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.170+0000 7f3d637fe640 1 --2- 192.168.123.100:0/2998570730 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f3d480776c0 0x7f3d48079b80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:19.172 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.170+0000 7f3d637fe640 1 -- 192.168.123.100:0/2998570730 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7f3d640bd790 con 0x7f3d74077620
2026-03-10T07:26:19.172 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.170+0000 7f3d7991f640 1 --2- 192.168.123.100:0/2998570730 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f3d480776c0 0x7f3d48079b80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:19.173 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.170+0000 7f3d637fe640 1 -- 192.168.123.100:0/2998570730 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f3d6408a120 con 0x7f3d74077620
2026-03-10T07:26:19.173 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.170+0000 7f3d7991f640 1 --2- 192.168.123.100:0/2998570730 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f3d480776c0 0x7f3d48079b80 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f3d74115000 tx=0x7f3d6800a400 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:19.317 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.314+0000 7f3d7c3ab640 1 -- 192.168.123.100:0/2998570730 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]} v 0) -- 0x7f3d74077a00 con 0x7f3d74077620
2026-03-10T07:26:19.323 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.318+0000 7f3d637fe640 1 -- 192.168.123.100:0/2998570730 <== mon.2 v2:192.168.123.100:3301/0 7 ==== mon_command_ack([{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]=0 v16) ==== 170+0+59 (secure 0 0 0) 0x7f3d6408efd0 con 0x7f3d74077620
2026-03-10T07:26:19.323 INFO:teuthology.orchestra.run.vm00.stdout:[client.0]
2026-03-10T07:26:19.323 INFO:teuthology.orchestra.run.vm00.stdout: key = AQCbx69pvIUiExAAuciiy57dZQ8J6fhZ0OYsMg==
2026-03-10T07:26:19.325 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.322+0000 7f3d7c3ab640 1 -- 192.168.123.100:0/2998570730 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f3d480776c0 msgr2=0x7f3d48079b80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:19.325 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.322+0000 7f3d7c3ab640 1 --2- 192.168.123.100:0/2998570730 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f3d480776c0 0x7f3d48079b80 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f3d74115000 tx=0x7f3d6800a400 comp rx=0 tx=0).stop
2026-03-10T07:26:19.325 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.322+0000 7f3d7c3ab640 1 -- 192.168.123.100:0/2998570730 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3d74077620 msgr2=0x7f3d741a0e40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:19.325 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.322+0000 7f3d7c3ab640 1 --2- 192.168.123.100:0/2998570730 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3d74077620 0x7f3d741a0e40 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f3d6402f770 tx=0x7f3d64002a50 comp rx=0 tx=0).stop
2026-03-10T07:26:19.325 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.322+0000 7f3d7c3ab640 1 -- 192.168.123.100:0/2998570730 shutdown_connections
2026-03-10T07:26:19.325 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.322+0000 7f3d7c3ab640 1 --2- 192.168.123.100:0/2998570730 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f3d480776c0 0x7f3d48079b80 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:19.325 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.322+0000 7f3d7c3ab640 1 --2- 192.168.123.100:0/2998570730 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3d74113b60 0x7f3d741a5710 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:19.326 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.322+0000 7f3d7c3ab640 1 --2- 192.168.123.100:0/2998570730 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3d74077f40 0x7f3d741a1380 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:19.326 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.322+0000 7f3d7c3ab640 1 --2- 192.168.123.100:0/2998570730 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3d74077620 0x7f3d741a0e40 unknown :-1 s=CLOSED pgs=51 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:19.326 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.322+0000 7f3d7c3ab640 1 -- 192.168.123.100:0/2998570730 >> 192.168.123.100:0/2998570730 conn(0x7f3d74100980 msgr2=0x7f3d74102d70 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:19.326 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.322+0000 7f3d7c3ab640 1 -- 192.168.123.100:0/2998570730 shutdown_connections
2026-03-10T07:26:19.326 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:19.322+0000 7f3d7c3ab640 1 -- 192.168.123.100:0/2998570730 wait complete.
2026-03-10T07:26:19.386 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T07:26:19.386 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.client.0.keyring
2026-03-10T07:26:19.386 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring
2026-03-10T07:26:19.401 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
2026-03-10T07:26:19.407 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: 455fd88e5221: Verifying Checksum
2026-03-10T07:26:19.407 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: 455fd88e5221: Download complete
2026-03-10T07:26:19.407 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: 2abcce694348: Verifying Checksum
2026-03-10T07:26:19.407 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: 2abcce694348: Download complete
2026-03-10T07:26:19.407 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: 2abcce694348: Pull complete
2026-03-10T07:26:19.407 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: 455fd88e5221: Pull complete
2026-03-10T07:26:19.407 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: 324153f2810a: Verifying Checksum
2026-03-10T07:26:19.407 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: 324153f2810a: Download complete
2026-03-10T07:26:19.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:19 vm00 bash[28005]: cluster 2026-03-10T07:26:18.551980+0000 mgr.y (mgr.24407) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T07:26:19.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:19 vm00 bash[28005]: audit 2026-03-10T07:26:18.613631+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:19.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:19 vm00 bash[28005]: audit 2026-03-10T07:26:19.320553+0000 mon.c (mon.2) 50 : audit [INF] from='client.? 192.168.123.100:0/2998570730' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T07:26:19.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:19 vm00 bash[28005]: audit 2026-03-10T07:26:19.320900+0000 mon.a (mon.0) 760 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T07:26:19.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:19 vm00 bash[28005]: audit 2026-03-10T07:26:19.323564+0000 mon.a (mon.0) 761 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T07:26:19.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:19 vm00 bash[20701]: cluster 2026-03-10T07:26:18.551980+0000 mgr.y (mgr.24407) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T07:26:19.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:19 vm00 bash[20701]: audit 2026-03-10T07:26:18.613631+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:19.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:19 vm00 bash[20701]: audit 2026-03-10T07:26:19.320553+0000 mon.c (mon.2) 50 : audit [INF] from='client.? 192.168.123.100:0/2998570730' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T07:26:19.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:19 vm00 bash[20701]: audit 2026-03-10T07:26:19.320900+0000 mon.a (mon.0) 760 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T07:26:19.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:19 vm00 bash[20701]: audit 2026-03-10T07:26:19.323564+0000 mon.a (mon.0) 761 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T07:26:19.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[23382]: cluster 2026-03-10T07:26:18.551980+0000 mgr.y (mgr.24407) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T07:26:19.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[23382]: audit 2026-03-10T07:26:18.613631+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:19.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[23382]: audit 2026-03-10T07:26:19.320553+0000 mon.c (mon.2) 50 : audit [INF] from='client.? 192.168.123.100:0/2998570730' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T07:26:19.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[23382]: audit 2026-03-10T07:26:19.320900+0000 mon.a (mon.0) 760 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T07:26:19.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[23382]: audit 2026-03-10T07:26:19.323564+0000 mon.a (mon.0) 761 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: 324153f2810a: Pull complete
2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80
2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0
2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.586Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.586Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.587Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.587Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.587Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.587Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=arp
2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=bcache
2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=bonding
2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10
07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=edac 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=filesystem 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info 
collector=netdev 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=os 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=stat 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=time 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 
07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=uname 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-10T07:26:19.768 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:19 vm03 bash[50879]: ts=2026-03-10T07:26:19.588Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100 2026-03-10T07:26:21.197 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:26:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:26:21.590 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:21.590 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:21.590 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:21.590 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:21.590 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:21.590 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:21.591 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:21.591 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:21.591 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:21.591 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:21.591 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:21.591 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:21.591 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:21.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:21.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:21 vm00 bash[20701]: cluster 2026-03-10T07:26:20.552485+0000 mgr.y (mgr.24407) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:26:21.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:21 vm00 bash[20701]: cluster 2026-03-10T07:26:20.552485+0000 mgr.y (mgr.24407) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:26:21.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:21.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:21 vm00 bash[20971]: [10/Mar/2026:07:26:21] ENGINE Bus STOPPING 2026-03-10T07:26:21.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:21.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:21 vm00 bash[28005]: cluster 2026-03-10T07:26:20.552485+0000 mgr.y (mgr.24407) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:26:21.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:21 vm00 bash[28005]: cluster 2026-03-10T07:26:20.552485+0000 mgr.y (mgr.24407) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:26:21.879 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T07:26:21.879 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:26:21.879 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:26:21.879 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:26:21.879 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:26:21.879 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:26:21.879 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:26:21.879 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:21 vm00 systemd[1]: Started Ceph alertmanager.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953.
2026-03-10T07:26:22.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:21 vm03 bash[23382]: cluster 2026-03-10T07:26:20.552485+0000 mgr.y (mgr.24407) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T07:26:22.133 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:21 vm00 bash[55893]: ts=2026-03-10T07:26:21.891Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
2026-03-10T07:26:22.134 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:21 vm00 bash[55893]: ts=2026-03-10T07:26:21.891Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
2026-03-10T07:26:22.134 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:21 vm00 bash[55893]: ts=2026-03-10T07:26:21.892Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.100 port=9094
2026-03-10T07:26:22.134 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:21 vm00 bash[55893]: ts=2026-03-10T07:26:21.893Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
2026-03-10T07:26:22.134 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:21 vm00 bash[55893]: ts=2026-03-10T07:26:21.913Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
2026-03-10T07:26:22.134 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:21 vm00 bash[55893]: ts=2026-03-10T07:26:21.913Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
2026-03-10T07:26:22.134 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:21 vm00 bash[55893]: ts=2026-03-10T07:26:21.916Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093
2026-03-10T07:26:22.134 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:21 vm00 bash[55893]: ts=2026-03-10T07:26:21.916Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9093
2026-03-10T07:26:22.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:22 vm00 bash[20971]: [10/Mar/2026:07:26:22] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-10T07:26:22.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:22 vm00 bash[20971]: [10/Mar/2026:07:26:22] ENGINE Bus STOPPED
2026-03-10T07:26:22.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:22 vm00 bash[20971]: [10/Mar/2026:07:26:22] ENGINE Bus STARTING
2026-03-10T07:26:22.517 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:22 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:26:22.633 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:22 vm00 bash[20971]: [10/Mar/2026:07:26:22] ENGINE Serving on http://:::9283 2026-03-10T07:26:22.634 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:22 vm00 bash[20971]: [10/Mar/2026:07:26:22] ENGINE Bus STARTED 2026-03-10T07:26:23.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:26:22 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: audit 2026-03-10T07:26:21.731999+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: audit 2026-03-10T07:26:21.731999+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: audit 2026-03-10T07:26:21.739602+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: audit 2026-03-10T07:26:21.739602+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: audit 2026-03-10T07:26:21.746760+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: audit 2026-03-10T07:26:21.746760+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: audit 2026-03-10T07:26:21.754257+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: audit 2026-03-10T07:26:21.754257+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: cephadm 2026-03-10T07:26:21.772090+0000 mgr.y (mgr.24407) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: cephadm 2026-03-10T07:26:21.772090+0000 mgr.y (mgr.24407) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: audit 2026-03-10T07:26:21.856499+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: audit 2026-03-10T07:26:21.856499+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: audit 2026-03-10T07:26:21.862666+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: audit 2026-03-10T07:26:21.862666+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: audit 2026-03-10T07:26:21.866492+0000 mon.c (mon.2) 51 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": 
"dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: audit 2026-03-10T07:26:21.866492+0000 mon.c (mon.2) 51 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: audit 2026-03-10T07:26:21.869164+0000 mgr.y (mgr.24407) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: audit 2026-03-10T07:26:21.869164+0000 mgr.y (mgr.24407) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: audit 2026-03-10T07:26:21.874048+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: audit 2026-03-10T07:26:21.874048+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: cephadm 2026-03-10T07:26:21.886473+0000 mgr.y (mgr.24407) 31 : cephadm [INF] Deploying daemon grafana.a on vm03 2026-03-10T07:26:23.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:22 vm03 bash[23382]: cephadm 2026-03-10T07:26:21.886473+0000 mgr.y (mgr.24407) 31 : cephadm [INF] Deploying daemon grafana.a on vm03 2026-03-10T07:26:23.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: audit 2026-03-10T07:26:21.731999+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: audit 2026-03-10T07:26:21.731999+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: audit 2026-03-10T07:26:21.739602+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: audit 2026-03-10T07:26:21.739602+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: audit 2026-03-10T07:26:21.746760+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: audit 2026-03-10T07:26:21.746760+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: audit 2026-03-10T07:26:21.754257+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: audit 2026-03-10T07:26:21.754257+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: cephadm 2026-03-10T07:26:21.772090+0000 mgr.y (mgr.24407) 29 : cephadm [INF] Regenerating 
cephadm self-signed grafana TLS certificates 2026-03-10T07:26:23.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: cephadm 2026-03-10T07:26:21.772090+0000 mgr.y (mgr.24407) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T07:26:23.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: audit 2026-03-10T07:26:21.856499+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: audit 2026-03-10T07:26:21.856499+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: audit 2026-03-10T07:26:21.862666+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: audit 2026-03-10T07:26:21.862666+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: audit 2026-03-10T07:26:21.866492+0000 mon.c (mon.2) 51 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: audit 2026-03-10T07:26:21.866492+0000 mon.c (mon.2) 51 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: audit 2026-03-10T07:26:21.869164+0000 mgr.y (mgr.24407) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: audit 2026-03-10T07:26:21.869164+0000 mgr.y (mgr.24407) 30 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: audit 2026-03-10T07:26:21.874048+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: audit 2026-03-10T07:26:21.874048+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: cephadm 2026-03-10T07:26:21.886473+0000 mgr.y (mgr.24407) 31 : cephadm [INF] Deploying daemon grafana.a on vm03 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:22 vm00 bash[20701]: cephadm 2026-03-10T07:26:21.886473+0000 mgr.y (mgr.24407) 31 : cephadm [INF] Deploying daemon grafana.a on vm03 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: audit 2026-03-10T07:26:21.731999+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: audit 2026-03-10T07:26:21.731999+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: audit 2026-03-10T07:26:21.739602+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: audit 2026-03-10T07:26:21.739602+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: audit 2026-03-10T07:26:21.746760+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: audit 2026-03-10T07:26:21.746760+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: audit 2026-03-10T07:26:21.754257+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: audit 2026-03-10T07:26:21.754257+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: cephadm 2026-03-10T07:26:21.772090+0000 mgr.y (mgr.24407) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: cephadm 2026-03-10T07:26:21.772090+0000 mgr.y (mgr.24407) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: audit 2026-03-10T07:26:21.856499+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: audit 2026-03-10T07:26:21.856499+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: audit 2026-03-10T07:26:21.862666+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24407 ' 
entity='mgr.y' 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: audit 2026-03-10T07:26:21.862666+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: audit 2026-03-10T07:26:21.866492+0000 mon.c (mon.2) 51 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: audit 2026-03-10T07:26:21.866492+0000 mon.c (mon.2) 51 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: audit 2026-03-10T07:26:21.869164+0000 mgr.y (mgr.24407) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: audit 2026-03-10T07:26:21.869164+0000 mgr.y (mgr.24407) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: audit 2026-03-10T07:26:21.874048+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: audit 2026-03-10T07:26:21.874048+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: cephadm 2026-03-10T07:26:21.886473+0000 mgr.y (mgr.24407) 31 : cephadm [INF] Deploying daemon grafana.a on vm03 2026-03-10T07:26:23.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:22 vm00 bash[28005]: cephadm 2026-03-10T07:26:21.886473+0000 mgr.y (mgr.24407) 31 : cephadm [INF] Deploying daemon grafana.a on vm03 2026-03-10T07:26:24.082 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.b/config 2026-03-10T07:26:24.128 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:23 vm03 bash[23382]: cluster 2026-03-10T07:26:22.552837+0000 mgr.y (mgr.24407) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T07:26:24.128 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:23 vm03 bash[23382]: cluster 2026-03-10T07:26:22.552837+0000 mgr.y (mgr.24407) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T07:26:24.128 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:23 vm03 bash[23382]: audit 2026-03-10T07:26:22.847457+0000 mgr.y (mgr.24407) 33 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:24.128 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:23 vm03 bash[23382]: audit 2026-03-10T07:26:22.847457+0000 mgr.y (mgr.24407) 33 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:24.128 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:23 vm03 bash[23382]: audit 2026-03-10T07:26:23.620422+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:24.128 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:23 vm03 bash[23382]: audit 2026-03-10T07:26:23.620422+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:24.128 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:23 vm03 bash[23382]: audit 2026-03-10T07:26:23.660628+0000 mon.c (mon.2) 52 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:26:24.128 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:23 vm03 bash[23382]: audit 2026-03-10T07:26:23.660628+0000 mon.c (mon.2) 52 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:26:24.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:23 vm00 bash[28005]: cluster 2026-03-10T07:26:22.552837+0000 mgr.y (mgr.24407) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T07:26:24.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:23 vm00 bash[28005]: cluster 2026-03-10T07:26:22.552837+0000 mgr.y (mgr.24407) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T07:26:24.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:23 vm00 bash[28005]: audit 2026-03-10T07:26:22.847457+0000 mgr.y (mgr.24407) 33 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:24.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:23 vm00 bash[28005]: audit 2026-03-10T07:26:22.847457+0000 mgr.y (mgr.24407) 33 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:24.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:23 vm00 bash[28005]: audit 2026-03-10T07:26:23.620422+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:24.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:23 vm00 bash[28005]: audit 2026-03-10T07:26:23.620422+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:24.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:23 vm00 bash[28005]: audit 2026-03-10T07:26:23.660628+0000 mon.c (mon.2) 52 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:26:24.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:23 vm00 bash[28005]: audit 2026-03-10T07:26:23.660628+0000 mon.c (mon.2) 52 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:26:24.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:23 vm00 bash[20701]: cluster 2026-03-10T07:26:22.552837+0000 mgr.y (mgr.24407) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T07:26:24.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:23 vm00 bash[20701]: cluster 2026-03-10T07:26:22.552837+0000 mgr.y (mgr.24407) 32 : 
cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T07:26:24.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:23 vm00 bash[20701]: audit 2026-03-10T07:26:22.847457+0000 mgr.y (mgr.24407) 33 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:24.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:23 vm00 bash[20701]: audit 2026-03-10T07:26:22.847457+0000 mgr.y (mgr.24407) 33 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:24.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:23 vm00 bash[20701]: audit 2026-03-10T07:26:23.620422+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:24.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:23 vm00 bash[20701]: audit 2026-03-10T07:26:23.620422+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:24.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:23 vm00 bash[20701]: audit 2026-03-10T07:26:23.660628+0000 mon.c (mon.2) 52 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:26:24.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:23 vm00 bash[20701]: audit 2026-03-10T07:26:23.660628+0000 mon.c (mon.2) 52 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:26:24.134 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:23 vm00 bash[55893]: ts=2026-03-10T07:26:23.893Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000293993s 2026-03-10T07:26:24.240 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d44c3640 1 -- 192.168.123.103:0/3182679386 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f62cc10a850 msgr2=0x7f62cc10acb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:24.240 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d44c3640 1 --2- 192.168.123.103:0/3182679386 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f62cc10a850 0x7f62cc10acb0 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f62c800b0a0 tx=0x7f62c802f450 comp rx=0 tx=0).stop 2026-03-10T07:26:24.240 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d44c3640 1 -- 192.168.123.103:0/3182679386 shutdown_connections 2026-03-10T07:26:24.240 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d44c3640 1 --2- 192.168.123.103:0/3182679386 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f62cc11c780 0x7f62cc11eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:24.240 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d44c3640 1 --2- 192.168.123.103:0/3182679386 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f62cc10a850 0x7f62cc10acb0 unknown :-1 s=CLOSED pgs=52 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:24.240 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d44c3640 1 --2- 192.168.123.103:0/3182679386 >> 
[v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f62cc10a470 0x7f62cc1114d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:24.240 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d44c3640 1 -- 192.168.123.103:0/3182679386 >> 192.168.123.103:0/3182679386 conn(0x7f62cc06dad0 msgr2=0x7f62cc06dee0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:24.240 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d44c3640 1 -- 192.168.123.103:0/3182679386 shutdown_connections 2026-03-10T07:26:24.240 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d44c3640 1 -- 192.168.123.103:0/3182679386 wait complete. 2026-03-10T07:26:24.241 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d44c3640 1 Processor -- start 2026-03-10T07:26:24.241 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d44c3640 1 -- start start 2026-03-10T07:26:24.241 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d44c3640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f62cc10a470 0x7f62cc1af610 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:24.241 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d44c3640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f62cc10a850 0x7f62cc1afb50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:24.241 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d44c3640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f62cc11c780 0x7f62cc1a96e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:24.241 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d44c3640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f62cc1212c0 con 0x7f62cc10a470 2026-03-10T07:26:24.241 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d44c3640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f62cc121140 con 0x7f62cc11c780 2026-03-10T07:26:24.241 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d44c3640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f62cc121440 con 0x7f62cc10a850 2026-03-10T07:26:24.242 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d1a37640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f62cc10a850 0x7f62cc1afb50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:24.242 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d2238640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f62cc10a470 0x7f62cc1af610 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:24.242 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d1a37640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f62cc10a850 0x7f62cc1afb50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello 
peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.103:46038/0 (socket says 192.168.123.103:46038) 2026-03-10T07:26:24.242 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d1a37640 1 -- 192.168.123.103:0/2986741483 learned_addr learned my addr 192.168.123.103:0/2986741483 (peer_addr_for_me v2:192.168.123.103:0/0) 2026-03-10T07:26:24.242 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d2a39640 1 --2- 192.168.123.103:0/2986741483 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f62cc11c780 0x7f62cc1a96e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:24.242 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d1a37640 1 -- 192.168.123.103:0/2986741483 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f62cc11c780 msgr2=0x7f62cc1a96e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:24.242 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d1a37640 1 --2- 192.168.123.103:0/2986741483 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f62cc11c780 0x7f62cc1a96e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:24.242 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d1a37640 1 -- 192.168.123.103:0/2986741483 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f62cc10a470 msgr2=0x7f62cc1af610 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:24.242 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d1a37640 1 --2- 192.168.123.103:0/2986741483 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f62cc10a470 0x7f62cc1af610 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:24.242 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d1a37640 1 -- 192.168.123.103:0/2986741483 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f62cc1a9f70 con 0x7f62cc10a850 2026-03-10T07:26:24.242 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d2238640 1 --2- 192.168.123.103:0/2986741483 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f62cc10a470 0x7f62cc1af610 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-10T07:26:24.242 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d2a39640 1 --2- 192.168.123.103:0/2986741483 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f62cc11c780 0x7f62cc1a96e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
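What this stretch of the trace shows is the standard librados/MonClient bootstrap: the client opens a msgr2 connection to every monitor in the monmap in parallel, completes the banner/hello/auth exchange with whichever monitor answers first (mon.2 here), learns its own address from the peer's hello, marks the now-redundant connections down, and subscribes to config and monmap updates over the surviving session. A minimal sketch of driving the same handshake from Python with the librados bindings (python3-rados); the conffile path and client name are illustrative assumptions, not taken from this run:

    import rados

    # connect() probes every mon listed in the monmap, keeps the first
    # session that finishes auth, and marks the other connections down --
    # the same connect/mark_down/stop churn visible in the trace above.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
    cluster.connect()
    print(cluster.get_fsid())  # one round-trip over the surviving session
    cluster.shutdown()         # shutdown_connections ... "wait complete."
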
2026-03-10T07:26:24.242 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d1a37640 1 --2- 192.168.123.103:0/2986741483 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f62cc10a850 0x7f62cc1afb50 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7f62c80048d0 tx=0x7f62c8004310 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:24.243 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62b77fe640 1 -- 192.168.123.103:0/2986741483 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f62c8047070 con 0x7f62cc10a850 2026-03-10T07:26:24.243 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62b77fe640 1 -- 192.168.123.103:0/2986741483 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f62c80093d0 con 0x7f62cc10a850 2026-03-10T07:26:24.243 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62b77fe640 1 -- 192.168.123.103:0/2986741483 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f62c8042420 con 0x7f62cc10a850 2026-03-10T07:26:24.243 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d44c3640 1 -- 192.168.123.103:0/2986741483 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f62cc1aa200 con 0x7f62cc10a850 2026-03-10T07:26:24.243 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.238+0000 7f62d44c3640 1 -- 192.168.123.103:0/2986741483 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f62cc1b63c0 con 0x7f62cc10a850 2026-03-10T07:26:24.244 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.242+0000 7f62d44c3640 1 -- 192.168.123.103:0/2986741483 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f629c005180 con 0x7f62cc10a850 2026-03-10T07:26:24.245 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.242+0000 7f62b77fe640 1 -- 192.168.123.103:0/2986741483 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f62c8002a60 con 0x7f62cc10a850 2026-03-10T07:26:24.245 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.242+0000 7f62b77fe640 1 --2- 192.168.123.103:0/2986741483 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f62ac077700 0x7f62ac079bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:24.245 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.242+0000 7f62b77fe640 1 -- 192.168.123.103:0/2986741483 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7f62c80bf0b0 con 0x7f62cc10a850 2026-03-10T07:26:24.248 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.246+0000 7f62d2238640 1 --2- 192.168.123.103:0/2986741483 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f62ac077700 0x7f62ac079bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:24.248 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.246+0000 7f62b77fe640 1 -- 192.168.123.103:0/2986741483 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 
72+0+195034 (secure 0 0 0) 0x7f62c803d070 con 0x7f62cc10a850 2026-03-10T07:26:24.248 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.246+0000 7f62d2238640 1 --2- 192.168.123.103:0/2986741483 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f62ac077700 0x7f62ac079bc0 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f62b8004620 tx=0x7f62b8009290 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:24.382 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.378+0000 7f62d44c3640 1 -- 192.168.123.103:0/2986741483 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]} v 0) -- 0x7f629c005470 con 0x7f62cc10a850 2026-03-10T07:26:24.389 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.386+0000 7f62b77fe640 1 -- 192.168.123.103:0/2986741483 <== mon.2 v2:192.168.123.100:3301/0 7 ==== mon_command_ack([{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]=0 v17) ==== 170+0+59 (secure 0 0 0) 0x7f62c808b9b0 con 0x7f62cc10a850 2026-03-10T07:26:24.389 INFO:teuthology.orchestra.run.vm03.stdout:[client.1] 2026-03-10T07:26:24.389 INFO:teuthology.orchestra.run.vm03.stdout: key = AQCgx69pqqgJFxAAYUvi3q7blzWnK7L+1fnt1A== 2026-03-10T07:26:24.392 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.390+0000 7f62d44c3640 1 -- 192.168.123.103:0/2986741483 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f62ac077700 msgr2=0x7f62ac079bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:24.392 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.390+0000 7f62d44c3640 1 --2- 192.168.123.103:0/2986741483 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f62ac077700 0x7f62ac079bc0 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f62b8004620 tx=0x7f62b8009290 comp rx=0 tx=0).stop 2026-03-10T07:26:24.392 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.390+0000 7f62d44c3640 1 -- 192.168.123.103:0/2986741483 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f62cc10a850 msgr2=0x7f62cc1afb50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:24.392 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.390+0000 7f62d44c3640 1 --2- 192.168.123.103:0/2986741483 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f62cc10a850 0x7f62cc1afb50 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7f62c80048d0 tx=0x7f62c8004310 comp rx=0 tx=0).stop 2026-03-10T07:26:24.392 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.390+0000 7f62d44c3640 1 -- 192.168.123.103:0/2986741483 shutdown_connections 2026-03-10T07:26:24.392 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.390+0000 7f62d44c3640 1 --2- 192.168.123.103:0/2986741483 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f62ac077700 0x7f62ac079bc0 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:24.392 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.390+0000 7f62d44c3640 1 --2- 192.168.123.103:0/2986741483 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f62cc11c780 0x7f62cc1a96e0 unknown :-1 s=CLOSED 
pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:24.392 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.390+0000 7f62d44c3640 1 --2- 192.168.123.103:0/2986741483 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f62cc10a850 0x7f62cc1afb50 unknown :-1 s=CLOSED pgs=53 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:24.392 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.390+0000 7f62d44c3640 1 --2- 192.168.123.103:0/2986741483 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f62cc10a470 0x7f62cc1af610 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:24.392 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.390+0000 7f62d44c3640 1 -- 192.168.123.103:0/2986741483 >> 192.168.123.103:0/2986741483 conn(0x7f62cc06dad0 msgr2=0x7f62cc11d170 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:24.392 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.390+0000 7f62d44c3640 1 -- 192.168.123.103:0/2986741483 shutdown_connections
2026-03-10T07:26:24.393 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-10T07:26:24.390+0000 7f62d44c3640 1 -- 192.168.123.103:0/2986741483 wait complete.
2026-03-10T07:26:24.456 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T07:26:24.456 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.client.1.keyring
2026-03-10T07:26:24.456 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring
2026-03-10T07:26:24.522 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean...
2026-03-10T07:26:24.522 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available
2026-03-10T07:26:24.522 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph mgr dump --format=json
2026-03-10T07:26:25.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:24 vm03 bash[23382]: audit 2026-03-10T07:26:24.386102+0000 mon.c (mon.2) 53 : audit [INF] from='client.? 192.168.123.103:0/2986741483' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T07:26:25.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:24 vm03 bash[23382]: audit 2026-03-10T07:26:24.386409+0000 mon.a (mon.0) 770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T07:26:25.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:24 vm03 bash[23382]: audit 2026-03-10T07:26:24.389429+0000 mon.a (mon.0) 771 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T07:26:25.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:24 vm00 bash[28005]: audit 2026-03-10T07:26:24.386102+0000 mon.c (mon.2) 53 : audit [INF] from='client.? 192.168.123.103:0/2986741483' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T07:26:25.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:24 vm00 bash[28005]: audit 2026-03-10T07:26:24.386409+0000 mon.a (mon.0) 770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T07:26:25.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:24 vm00 bash[28005]: audit 2026-03-10T07:26:24.389429+0000 mon.a (mon.0) 771 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T07:26:25.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:24 vm00 bash[20701]: audit 2026-03-10T07:26:24.386102+0000 mon.c (mon.2) 53 : audit [INF] from='client.? 192.168.123.103:0/2986741483' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T07:26:25.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:24 vm00 bash[20701]: audit 2026-03-10T07:26:24.386409+0000 mon.a (mon.0) 770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T07:26:25.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:24 vm00 bash[20701]: audit 2026-03-10T07:26:24.389429+0000 mon.a (mon.0) 771 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T07:26:26.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:25 vm03 bash[23382]: cluster 2026-03-10T07:26:24.553150+0000 mgr.y (mgr.24407) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T07:26:26.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:25 vm00 bash[20701]: cluster 2026-03-10T07:26:24.553150+0000 mgr.y (mgr.24407) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T07:26:26.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:25 vm00 bash[28005]: cluster 2026-03-10T07:26:24.553150+0000 mgr.y (mgr.24407) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T07:26:28.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:28 vm00 bash[20701]: cluster 2026-03-10T07:26:26.553730+0000 mgr.y (mgr.24407) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T07:26:28.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:28 vm00 bash[28005]: cluster 2026-03-10T07:26:26.553730+0000 mgr.y (mgr.24407) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T07:26:28.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:28 vm03 bash[23382]: cluster 2026-03-10T07:26:26.553730+0000 mgr.y (mgr.24407) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T07:26:29.181 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config
2026-03-10T07:26:29.360 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.358+0000 7ff55b420640 1 -- 192.168.123.100:0/207842511 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff554104f40 msgr2=0x7ff554105320 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:29.372 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.358+0000 7ff55b420640 1 --2- 192.168.123.100:0/207842511 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff554104f40 0x7ff554105320 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7ff548009a30 tx=0x7ff54802f240 comp rx=0 tx=0).stop
2026-03-10T07:26:29.372 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.358+0000 7ff55b420640 1 -- 192.168.123.100:0/207842511 shutdown_connections
2026-03-10T07:26:29.372 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.358+0000 7ff55b420640 1 --2- 192.168.123.100:0/207842511 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff55410a070 0x7ff554111bf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:29.372 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.358+0000 7ff55b420640 1 --2- 192.168.123.100:0/207842511 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff5541058f0 0x7ff554109940 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:29.372 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.358+0000 7ff55b420640 1 --2- 192.168.123.100:0/207842511 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff554104f40 0x7ff554105320 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:29.372 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.358+0000 7ff55b420640 1 -- 192.168.123.100:0/207842511 >> 192.168.123.100:0/207842511 conn(0x7ff5541009e0 msgr2=0x7ff554102e00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:29.373 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.358+0000 7ff55b420640 1 -- 192.168.123.100:0/207842511 shutdown_connections
2026-03-10T07:26:29.373 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.358+0000 7ff55b420640 1 -- 192.168.123.100:0/207842511 wait complete.
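The auth round-trip and remote commands above are the client.1 provisioning step: "ceph auth get-or-create" mints the key (the [client.1] keyring fragment echoed on stdout earlier), and the "sudo dd" / "sudo chmod 0644" pair installs it at /etc/ceph/ceph.client.1.keyring on vm03. A hedged sketch of the same step run locally; the subprocess plumbing here is illustrative, not teuthology's actual implementation:

    import subprocess

    # Capability strings and target path are taken verbatim from the log.
    CAPS = ['mon', 'allow *', 'osd', 'allow *', 'mds', 'allow *', 'mgr', 'allow *']
    PATH = '/etc/ceph/ceph.client.1.keyring'

    # Mint (or fetch) the key; stdout is the keyring fragment seen in the log.
    keyring = subprocess.check_output(
        ['ceph', 'auth', 'get-or-create', 'client.1', *CAPS])

    # Equivalent of the logged `sudo dd of=...` and `sudo chmod 0644 ...`.
    subprocess.run(['sudo', 'dd', 'of=' + PATH], input=keyring, check=True)
    subprocess.run(['sudo', 'chmod', '0644', PATH], check=True)
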
2026-03-10T07:26:29.373 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.358+0000 7ff55b420640 1 Processor -- start 2026-03-10T07:26:29.373 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.358+0000 7ff55b420640 1 -- start start 2026-03-10T07:26:29.373 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.358+0000 7ff55b420640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff554104f40 0x7ff5541a2610 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:29.373 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.358+0000 7ff55b420640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff5541058f0 0x7ff5541a2b50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:29.373 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.358+0000 7ff559195640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff554104f40 0x7ff5541a2610 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:29.373 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.358+0000 7ff559195640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff554104f40 0x7ff5541a2610 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:51130/0 (socket says 192.168.123.100:51130) 2026-03-10T07:26:29.373 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.358+0000 7ff559195640 1 -- 192.168.123.100:0/4153075582 learned_addr learned my addr 192.168.123.100:0/4153075582 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:26:29.373 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.358+0000 7ff55b420640 1 --2- 192.168.123.100:0/4153075582 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff55410a070 0x7ff55419c790 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:29.373 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.358+0000 7ff558994640 1 --2- 192.168.123.100:0/4153075582 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff5541058f0 0x7ff5541a2b50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.358+0000 7ff55b420640 1 -- 192.168.123.100:0/4153075582 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7ff554114370 con 0x7ff5541058f0 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.358+0000 7ff55b420640 1 -- 192.168.123.100:0/4153075582 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7ff5541141f0 con 0x7ff55410a070 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.358+0000 7ff55b420640 1 -- 192.168.123.100:0/4153075582 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7ff5541144f0 con 0x7ff554104f40 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.362+0000 7ff559996640 1 --2- 192.168.123.100:0/4153075582 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff55410a070 0x7ff55419c790 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.362+0000 7ff558994640 1 -- 192.168.123.100:0/4153075582 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff554104f40 msgr2=0x7ff5541a2610 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.362+0000 7ff558994640 1 --2- 192.168.123.100:0/4153075582 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff554104f40 0x7ff5541a2610 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.362+0000 7ff558994640 1 -- 192.168.123.100:0/4153075582 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff55410a070 msgr2=0x7ff55419c790 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.362+0000 7ff558994640 1 --2- 192.168.123.100:0/4153075582 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff55410a070 0x7ff55419c790 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.362+0000 7ff558994640 1 -- 192.168.123.100:0/4153075582 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff55419cf30 con 0x7ff5541058f0 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.362+0000 7ff559195640 1 --2- 192.168.123.100:0/4153075582 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff554104f40 0x7ff5541a2610 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
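This vm00 session is the first probe of the "waiting for mgr available" check logged above: each iteration spawns a fresh cephadm shell and runs "ceph mgr dump --format=json", whose "available" and "active_name" fields appear in the dump printed below. A minimal sketch of such a poll loop, assuming only those two fields; the timeout and interval values are illustrative, not teuthology's:

    import json
    import subprocess
    import time

    def wait_for_mgr_available(timeout=300, interval=5):
        deadline = time.time() + timeout
        while time.time() < deadline:
            # Re-run the dump each iteration, exactly like the repeated
            # `ceph mgr dump --format=json` invocations in this log.
            dump = json.loads(subprocess.check_output(
                ['ceph', 'mgr', 'dump', '--format=json']))
            if dump.get('available'):
                return dump['active_name']  # "y" in this run (gid 24407)
            time.sleep(interval)
        raise TimeoutError('no active mgr after %d seconds' % timeout)
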
2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.362+0000 7ff558994640 1 --2- 192.168.123.100:0/4153075582 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff5541058f0 0x7ff5541a2b50 secure :-1 s=READY pgs=161 cs=0 l=1 rev1=1 crypto rx=0x7ff54400c970 tx=0x7ff54400ce40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.362+0000 7ff5427fc640 1 -- 192.168.123.100:0/4153075582 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff544013070 con 0x7ff5541058f0 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.362+0000 7ff5427fc640 1 -- 192.168.123.100:0/4153075582 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7ff544004480 con 0x7ff5541058f0 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.362+0000 7ff5427fc640 1 -- 192.168.123.100:0/4153075582 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff544002cd0 con 0x7ff5541058f0 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.362+0000 7ff55b420640 1 -- 192.168.123.100:0/4153075582 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7ff55419d220 con 0x7ff5541058f0 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.362+0000 7ff55b420640 1 -- 192.168.123.100:0/4153075582 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7ff554077530 con 0x7ff5541058f0 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.366+0000 7ff55b420640 1 -- 192.168.123.100:0/4153075582 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff51c005180 con 0x7ff5541058f0 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.366+0000 7ff5427fc640 1 -- 192.168.123.100:0/4153075582 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7ff544020050 con 0x7ff5541058f0 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.366+0000 7ff5427fc640 1 --2- 192.168.123.100:0/4153075582 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7ff5300777d0 0x7ff530079c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.366+0000 7ff5427fc640 1 -- 192.168.123.100:0/4153075582 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7ff544099aa0 con 0x7ff5541058f0 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.370+0000 7ff559195640 1 --2- 192.168.123.100:0/4153075582 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7ff5300777d0 0x7ff530079c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.370+0000 7ff559195640 1 --2- 192.168.123.100:0/4153075582 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7ff5300777d0 0x7ff530079c90 
secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7ff5480097c0 tx=0x7ff548005830 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:29.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.370+0000 7ff5427fc640 1 -- 192.168.123.100:0/4153075582 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7ff544066420 con 0x7ff5541058f0 2026-03-10T07:26:29.508 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.506+0000 7ff55b420640 1 -- 192.168.123.100:0/4153075582 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "mgr dump", "format": "json"} v 0) -- 0x7ff51c005470 con 0x7ff5541058f0 2026-03-10T07:26:29.511 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.510+0000 7ff5427fc640 1 -- 192.168.123.100:0/4153075582 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "mgr dump", "format": "json"}]=0 v20) ==== 74+0+192102 (secure 0 0 0) 0x7ff54406b2d0 con 0x7ff5541058f0 2026-03-10T07:26:29.511 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T07:26:29.515 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.514+0000 7ff55b420640 1 -- 192.168.123.100:0/4153075582 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7ff5300777d0 msgr2=0x7ff530079c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:29.515 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.514+0000 7ff55b420640 1 --2- 192.168.123.100:0/4153075582 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7ff5300777d0 0x7ff530079c90 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7ff5480097c0 tx=0x7ff548005830 comp rx=0 tx=0).stop 2026-03-10T07:26:29.516 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.514+0000 7ff55b420640 1 -- 192.168.123.100:0/4153075582 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff5541058f0 msgr2=0x7ff5541a2b50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:29.516 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.514+0000 7ff55b420640 1 --2- 192.168.123.100:0/4153075582 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff5541058f0 0x7ff5541a2b50 secure :-1 s=READY pgs=161 cs=0 l=1 rev1=1 crypto rx=0x7ff54400c970 tx=0x7ff54400ce40 comp rx=0 tx=0).stop 2026-03-10T07:26:29.516 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.514+0000 7ff55b420640 1 -- 192.168.123.100:0/4153075582 shutdown_connections 2026-03-10T07:26:29.516 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.514+0000 7ff55b420640 1 --2- 192.168.123.100:0/4153075582 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7ff5300777d0 0x7ff530079c90 unknown :-1 s=CLOSED pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:29.516 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.514+0000 7ff55b420640 1 --2- 192.168.123.100:0/4153075582 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff55410a070 0x7ff55419c790 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:29.516 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.514+0000 7ff55b420640 1 --2- 192.168.123.100:0/4153075582 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff5541058f0 0x7ff5541a2b50 unknown :-1 
s=CLOSED pgs=161 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:29.516 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.514+0000 7ff55b420640 1 --2- 192.168.123.100:0/4153075582 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff554104f40 0x7ff5541a2610 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:29.516 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.514+0000 7ff55b420640 1 -- 192.168.123.100:0/4153075582 >> 192.168.123.100:0/4153075582 conn(0x7ff5541009e0 msgr2=0x7ff554101130 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:29.516 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.514+0000 7ff55b420640 1 -- 192.168.123.100:0/4153075582 shutdown_connections 2026-03-10T07:26:29.516 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:29.514+0000 7ff55b420640 1 -- 192.168.123.100:0/4153075582 wait complete. 2026-03-10T07:26:29.579 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":20,"flags":0,"active_gid":24407,"active_name":"y","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6800","nonce":3339031114},{"type":"v1","addr":"192.168.123.100:6801","nonce":3339031114}]},"active_addr":"192.168.123.100:6801/3339031114","active_change":"2026-03-10T07:26:08.524899+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":24394,"name":"x","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate 
with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send 
metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with 
`--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","prometheus","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = 
Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. 
Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger 
collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.100:8443/","prometheus":"http://192.168.123.100:9283/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":68,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":1926041675}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":1205230726}]},{"n
ame":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":1427336249}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":2501634331}]}]} 2026-03-10T07:26:29.580 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-10T07:26:29.580 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T07:26:29.580 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph osd dump --format=json 2026-03-10T07:26:30.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:30 vm00 bash[28005]: cluster 2026-03-10T07:26:28.554052+0000 mgr.y (mgr.24407) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:30.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:30 vm00 bash[28005]: cluster 2026-03-10T07:26:28.554052+0000 mgr.y (mgr.24407) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:30.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:30 vm00 bash[28005]: audit 2026-03-10T07:26:29.512065+0000 mon.a (mon.0) 772 : audit [DBG] from='client.? 192.168.123.100:0/4153075582' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T07:26:30.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:30 vm00 bash[28005]: audit 2026-03-10T07:26:29.512065+0000 mon.a (mon.0) 772 : audit [DBG] from='client.? 192.168.123.100:0/4153075582' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T07:26:30.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:30 vm00 bash[20701]: cluster 2026-03-10T07:26:28.554052+0000 mgr.y (mgr.24407) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:30.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:30 vm00 bash[20701]: cluster 2026-03-10T07:26:28.554052+0000 mgr.y (mgr.24407) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:30.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:30 vm00 bash[20701]: audit 2026-03-10T07:26:29.512065+0000 mon.a (mon.0) 772 : audit [DBG] from='client.? 192.168.123.100:0/4153075582' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T07:26:30.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:30 vm00 bash[20701]: audit 2026-03-10T07:26:29.512065+0000 mon.a (mon.0) 772 : audit [DBG] from='client.? 
192.168.123.100:0/4153075582' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T07:26:30.661 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:30 vm03 bash[23382]: cluster 2026-03-10T07:26:28.554052+0000 mgr.y (mgr.24407) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:30.661 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:30 vm03 bash[23382]: cluster 2026-03-10T07:26:28.554052+0000 mgr.y (mgr.24407) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:30.661 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:30 vm03 bash[23382]: audit 2026-03-10T07:26:29.512065+0000 mon.a (mon.0) 772 : audit [DBG] from='client.? 192.168.123.100:0/4153075582' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T07:26:30.661 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:30 vm03 bash[23382]: audit 2026-03-10T07:26:29.512065+0000 mon.a (mon.0) 772 : audit [DBG] from='client.? 192.168.123.100:0/4153075582' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T07:26:31.289 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:31.289 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:31.289 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:31.289 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:31.289 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
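
The "waiting for all up" step above polls the same "ceph osd dump --format=json" through cephadm shell until every OSD reports up and in; the actual loop lives in tasks.cephadm.ceph_manager and is not shown in this log. A rough manual equivalent, assuming jq is installed on the node (the image and fsid are taken verbatim from the command above):

    # exit 0 only when no OSD is down or out ("up"/"in" are 0/1 flags in osd dump)
    sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- \
        ceph osd dump --format=json \
      | jq -e '[.osds[] | select(.up == 0 or .in == 0)] | length == 0' >/dev/null \
      && echo 'all OSDs up+in'
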
2026-03-10T07:26:31.289 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:31.289 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:31.289 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:31.289 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:31.290 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:31.290 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:31.290 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:31.290 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:31.339 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:26:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:26:31.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:31 vm00 bash[20701]: cluster 2026-03-10T07:26:30.554486+0000 mgr.y (mgr.24407) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:26:31.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:31 vm00 bash[20701]: cluster 2026-03-10T07:26:30.554486+0000 mgr.y (mgr.24407) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:26:31.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:31 vm00 bash[28005]: cluster 2026-03-10T07:26:30.554486+0000 mgr.y (mgr.24407) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:26:31.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:31 vm00 bash[28005]: cluster 2026-03-10T07:26:30.554486+0000 mgr.y (mgr.24407) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:26:31.637 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:31.637 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:31.637 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:31.638 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
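
The 503 on "GET /metrics" above is Prometheus scraping mgr.y's exporter (the prometheus endpoint advertised in the mgr dump, http://192.168.123.100:9283/) before the module is ready; it usually clears once the active mgr finishes starting up. A quick manual probe of the same endpoint, as a sketch:

    # prints only the HTTP status; 503 typically means the exporter is still
    # warming up or this mgr is a standby
    curl -s -o /dev/null -w '%{http_code}\n' http://192.168.123.100:9283/metrics
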
2026-03-10T07:26:31.638 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: Started Ceph grafana.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953. 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=settings t=2026-03-10T07:26:31.637454535Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-03-10T07:26:31Z 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=settings t=2026-03-10T07:26:31.637783406Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=settings t=2026-03-10T07:26:31.637805537Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=settings t=2026-03-10T07:26:31.637815265Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=settings t=2026-03-10T07:26:31.637823601Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=settings t=2026-03-10T07:26:31.637831505Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=settings t=2026-03-10T07:26:31.637839392Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=settings t=2026-03-10T07:26:31.637847216Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=settings t=2026-03-10T07:26:31.637855371Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=settings t=2026-03-10T07:26:31.637863586Z level=info msg="Config overridden from Environment variable" 
var="GF_PATHS_LOGS=/var/log/grafana" 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=settings t=2026-03-10T07:26:31.637878805Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=settings t=2026-03-10T07:26:31.637890457Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=settings t=2026-03-10T07:26:31.637899684Z level=info msg=Target target=[all] 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=settings t=2026-03-10T07:26:31.637909472Z level=info msg="Path Home" path=/usr/share/grafana 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=settings t=2026-03-10T07:26:31.63792361Z level=info msg="Path Data" path=/var/lib/grafana 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=settings t=2026-03-10T07:26:31.637932827Z level=info msg="Path Logs" path=/var/log/grafana 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=settings t=2026-03-10T07:26:31.637940571Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=settings t=2026-03-10T07:26:31.637948236Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=settings t=2026-03-10T07:26:31.637964186Z level=info msg="App mode production" 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=sqlstore t=2026-03-10T07:26:31.638185913Z level=info msg="Connecting to DB" dbtype=sqlite3 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=sqlstore t=2026-03-10T07:26:31.638210912Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r----- 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.638657983Z level=info msg="Starting DB migrations" 2026-03-10T07:26:31.638 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.639553562Z level=info msg="Executing migration" id="create migration_log table" 2026-03-10T07:26:31.638 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T07:26:31.638 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:31 vm03 bash[23382]: cluster 2026-03-10T07:26:30.554486+0000 mgr.y (mgr.24407) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:26:31.638 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:31 vm03 bash[23382]: cluster 2026-03-10T07:26:30.554486+0000 mgr.y (mgr.24407) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:26:31.639 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:31.639 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:26:31.639 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:26:31 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
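
The repeated systemd complaint about KillMode=none in ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service appears deliberate on cephadm hosts (the unit's ExecStop stops the podman container itself, so systemd is told not to kill the cgroup) and can be ignored in this run. For a unit you author yourself, the recommended modes can be applied with a drop-in; a hypothetical sketch, where myservice.service is a placeholder and this should not be applied to the cephadm template unit:

    sudo systemctl edit myservice.service
    # in the editor that opens, add:
    #   [Service]
    #   KillMode=mixed
    # systemctl edit writes an override.conf drop-in and reloads systemd on save
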
2026-03-10T07:26:31.887 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.640155297Z level=info msg="Migration successfully executed" id="create migration_log table" duration=600.221µs 2026-03-10T07:26:31.887 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.641852245Z level=info msg="Executing migration" id="create user table" 2026-03-10T07:26:31.887 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.642283838Z level=info msg="Migration successfully executed" id="create user table" duration=432.526µs 2026-03-10T07:26:31.887 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.644473946Z level=info msg="Executing migration" id="add unique index user.login" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.644946848Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=473.032µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.646559829Z level=info msg="Executing migration" id="add unique index user.email" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.647033341Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=473.602µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.648614321Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.649106218Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=494.653µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.651028181Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.651513996Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=486.086µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.652653264Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.653821345Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=1.170125ms 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.655221936Z level=info msg="Executing migration" id="create user table v2" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.655697472Z level=info msg="Migration successfully executed" id="create user table v2" duration=475.737µs 
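
Each "Migration successfully executed" record is also written to the migration_log table created at the start of this block, which is what lets the migrator skip already-applied migrations on the next start. To peek at that ledger after the fact (column names per Grafana's migration_log schema; treat as illustrative):

    sqlite3 /var/lib/grafana/grafana.db \
      'SELECT migration_id, success, timestamp FROM migration_log ORDER BY id DESC LIMIT 5;'
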
2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.656925126Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.657367039Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=441.842µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.659131966Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.659639162Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=507.827µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.661287909Z level=info msg="Executing migration" id="copy data_source v1 to v2" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.661578137Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=287.804µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.662963698Z level=info msg="Executing migration" id="Drop old table user_v1" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.663348494Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=388.143µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.664958188Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.665631406Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=673.61µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.667034731Z level=info msg="Executing migration" id="Update user table charset" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.667104013Z level=info msg="Migration successfully executed" id="Update user table charset" duration=70.804µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.668395357Z level=info msg="Executing migration" id="Add last_seen_at column to user" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.669008632Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=612.885µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.670211059Z level=info msg="Executing migration" id="Add missing user data" 
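
The rename/copy/drop sequence the migrator logs here (rename user to user_v1, create user table v2, copy, drop old table user_v1; the *_tmp_qwerty tables further down follow the same dance) is the standard SQLite workaround for its limited ALTER TABLE: build the new shape as a fresh table and move the rows across. Schematically, with an illustrative table t and made-up columns rather than Grafana's real schema:

    sqlite3 grafana.db <<'SQL'
    ALTER TABLE t RENAME TO t_tmp_qwerty;                 -- park the old table
    CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT);   -- new schema
    INSERT INTO t (id, name) SELECT id, name FROM t_tmp_qwerty;
    DROP TABLE t_tmp_qwerty;                              -- discard the old copy
    SQL
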
2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.670548134Z level=info msg="Migration successfully executed" id="Add missing user data" duration=337.646µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.672371882Z level=info msg="Executing migration" id="Add is_disabled column to user" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.672987132Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=613.026µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.674165664Z level=info msg="Executing migration" id="Add index user.login/user.email" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.674630128Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=464.426µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.675960206Z level=info msg="Executing migration" id="Add is_service_account column to user" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.676609339Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=648.832µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.677863443Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.681238484Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=3.371425ms 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.683495169Z level=info msg="Executing migration" id="Add uid column to user" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.684431483Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=936.965µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.685750891Z level=info msg="Executing migration" id="Update uid column values for users" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.685939866Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=189.557µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.687302606Z level=info msg="Executing migration" id="Add unique index user_uid" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.687850378Z level=info msg="Migration successfully 
executed" id="Add unique index user_uid" duration=546.379µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.689863973Z level=info msg="Executing migration" id="create temp user table v1-7" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.690454997Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=596.365µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.691932994Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.692353997Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=420.684µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.693858332Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.694267403Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=409.052µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.696062417Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.696470185Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=407.618µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.697885613Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.698721098Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=832.309µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.700415722Z level=info msg="Executing migration" id="Update temp_user table charset" 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.700521041Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=107.042µs 2026-03-10T07:26:31.888 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.701870635Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.703760817Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.889793ms 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 
07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.705896473Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.706615408Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=733.974µs 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.708024064Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.708536009Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=512.024µs 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.710088856Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.710593757Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=505.583µs 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.711972266Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.713261004Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=1.288408ms 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.71439382Z level=info msg="Executing migration" id="create temp_user v2" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.714898502Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=505.313µs 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.716240131Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.716727429Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=486.667µs 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.718296807Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.718887992Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=591.615µs 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.720265989Z level=info msg="Executing migration" id="create index 
IDX_temp_user_code - v2" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.720845541Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=580.724µs 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.72211303Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.722627671Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=514.971µs 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.724513614Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.724835401Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=320.003µs 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.725926628Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.726306695Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=380.107µs 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.727589944Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.727858771Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=267.525µs 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.729353889Z level=info msg="Executing migration" id="create star table" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.729800681Z level=info msg="Migration successfully executed" id="create star table" duration=448.847µs 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.731149243Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.731649977Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=500.934µs 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.732942943Z level=info msg="Executing migration" id="create org table v1" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator 
t=2026-03-10T07:26:31.733430473Z level=info msg="Migration successfully executed" id="create org table v1" duration=487.218µs 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.734679376Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.735117702Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=438.376µs 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.736779766Z level=info msg="Executing migration" id="create org_user table v1" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.73718487Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=404.614µs 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.738556475Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.739009289Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=452.713µs 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.740312104Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.740731795Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=419.701µs 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.742116675Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.742547919Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=431.202µs 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.744353001Z level=info msg="Executing migration" id="Update org table charset" 2026-03-10T07:26:31.889 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.744445926Z level=info msg="Migration successfully executed" id="Update org table charset" duration=94.769µs 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.745508548Z level=info msg="Executing migration" id="Update org_user table charset" 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.745598007Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=86.783µs 2026-03-10T07:26:31.890 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.746907045Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.747098336Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=191.412µs 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.74824713Z level=info msg="Executing migration" id="create dashboard table" 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.748758153Z level=info msg="Migration successfully executed" id="create dashboard table" duration=510.772µs 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.750521618Z level=info msg="Executing migration" id="add index dashboard.account_id" 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.751137298Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=616µs 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.75411982Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.75486231Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=743.1µs 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.756369491Z level=info msg="Executing migration" id="create dashboard_tag table" 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.7568516Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=482.67µs 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.758490329Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.759145895Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=655.926µs 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.76103208Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.761667417Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=635.469µs 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.76309585Z 
level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.765847226Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=2.750293ms 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.767257644Z level=info msg="Executing migration" id="create dashboard v2" 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.768002258Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=744.583µs 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.769572347Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.770253451Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=683.91µs 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.771681093Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.772358088Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=676.745µs 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.773834932Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.774145458Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=310.245µs 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.775794295Z level=info msg="Executing migration" id="drop table dashboard_v1" 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.776652733Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=815.578µs 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.777991677Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.778136681Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=144.903µs 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.779495402Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: 
logger=migrator t=2026-03-10T07:26:31.780423852Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=928.22µs
2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.781891127Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.782775053Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=881.582µs
2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.783983221Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.784707515Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=723.964µs
2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.785766532Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.786224876Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=458.485µs
2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.787983622Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.78875777Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=774.159µs
2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.789924821Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.790459488Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=535.079µs
2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.791740983Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.792225356Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=484.964µs
2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.793870338Z level=info msg="Executing migration" id="Update dashboard table charset"
2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.793947243Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=77.626µs
2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.795294592Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.795409628Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=119.155µs
2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.796493732Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.797377117Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=883.045µs
2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.798966432Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.79973967Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=766.295µs
2026-03-10T07:26:31.890 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.800797464Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.801556374Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=758.751µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.802578842Z level=info msg="Executing migration" id="Add column uid in dashboard"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.803307375Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=728.643µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.80484844Z level=info msg="Executing migration" id="Update uid column values in dashboard"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.805043578Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=194.736µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.806183717Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.806749583Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=566.127µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.80823857Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.808782426Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=544.347µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.809877068Z level=info msg="Executing migration" id="Update dashboard title length"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.809950557Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=73.76µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.811599055Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.812080102Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=481.067µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.813192488Z level=info msg="Executing migration" id="create dashboard_provisioning"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.81360225Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=409.39µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.814932497Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.816910156Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=1.975454ms
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.819864264Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.820475025Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=611.371µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.821804451Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.822312397Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=507.587µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.823609403Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.824042309Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=433.046µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.825749075Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.826006471Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=257.816µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.827205892Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.827572864Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=366.831µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.828574642Z level=info msg="Executing migration" id="Add check_sum column"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.82937965Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=804.746µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.830909333Z level=info msg="Executing migration" id="Add index for dashboard_title"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.831382305Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=473.363µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.8324621Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.832615418Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=155.934µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.833838463Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.833990782Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=152.139µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.835655168Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.83618665Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=531.843µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.83753933Z level=info msg="Executing migration" id="Add isPublic for dashboard"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.838390534Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=850.895µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.83978378Z level=info msg="Executing migration" id="create data_source table"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.840311676Z level=info msg="Migration successfully executed" id="create data_source table" duration=527.736µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.842038791Z level=info msg="Executing migration" id="add index data_source.account_id"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.842585451Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=542.843µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.843973588Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.844507905Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=531.161µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.845844845Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.846338355Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=493.851µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.847936848Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.848440647Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=504.31µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.849498822Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.851505485Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=2.005851ms
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.852830493Z level=info msg="Executing migration" id="create data_source table v2"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.853425283Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=595.311µs
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.855330545Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
2026-03-10T07:26:31.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.855882595Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=549.165µs
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.857242589Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.857791253Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=549.235µs
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.859218574Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.859683259Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=466.729µs
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.861157758Z level=info msg="Executing migration" id="Add column with_credentials"
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.862148636Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=990.707µs
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.863451101Z level=info msg="Executing migration" id="Add secure json data column"
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.864348584Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=897.371µs
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.865448826Z level=info msg="Executing migration" id="Update data_source table charset"
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.865541993Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=93.787µs
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.867097154Z level=info msg="Executing migration" id="Update initial version to 1"
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.867289936Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=193.034µs
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.868320109Z level=info msg="Executing migration" id="Add read_only data column"
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.869269639Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=948.979µs
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.87031013Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.870501601Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=191.561µs
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.87174248Z level=info msg="Executing migration" id="Update json_data with nulls"
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.871888816Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=146.547µs
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.873248238Z level=info msg="Executing migration" id="Add uid column"
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.874200313Z level=info msg="Migration successfully executed" id="Add uid column" duration=954.48µs
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.875512095Z level=info msg="Executing migration" id="Update uid value"
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.875692245Z level=info msg="Migration successfully executed" id="Update uid value" duration=180.439µs
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.876827194Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.877423779Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=596.505µs
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.87910116Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.879624286Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=523.346µs
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.880959073Z level=info msg="Executing migration" id="create api_key table"
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.881553473Z level=info msg="Migration successfully executed" id="create api_key table" duration=594.16µs
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.883033263Z level=info msg="Executing migration" id="add index api_key.account_id"
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.883553032Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=519.889µs
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.885321615Z level=info msg="Executing migration" id="add index api_key.key"
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.885787935Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=460.709µs
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.88704258Z level=info msg="Executing migration" id="add index api_key.account_id_name"
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.887507746Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=465.526µs
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.88877797Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.88929733Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=519.54µs
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.890961136Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
2026-03-10T07:26:31.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.891474303Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=515.251µs
2026-03-10T07:26:32.139 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.893384924Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.89390245Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=518.306µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.895588116Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.897836546Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=2.253449ms
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.899201559Z level=info msg="Executing migration" id="create api_key table v2"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.899834892Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=633.585µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.900952359Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.901660193Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=707.854µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.903516892Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.904264983Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=750.845µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.905482948Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.905987649Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=504.841µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.907339708Z level=info msg="Executing migration" id="copy api_key v1 to v2"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.907617843Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=278.014µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.908962137Z level=info msg="Executing migration" id="Drop old table api_key_v1"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.909333075Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=370.809µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.910324304Z level=info msg="Executing migration" id="Update api_key table charset"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.910406388Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=80.282µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.911702992Z level=info msg="Executing migration" id="Add expires to api_key table"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.912598391Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=895.136µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.914015251Z level=info msg="Executing migration" id="Add service account foreign key"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.914947668Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=932.277µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.916304826Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.916472523Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=167.697µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.917458131Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.918379988Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=921.688µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.919610898Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.920524419Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=913.341µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.921881649Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.92229597Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=414.132µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.923518744Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.92385577Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=339.76µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.924959058Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.925407524Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=448.286µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.927052985Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.927508555Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=459.627µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.928880581Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.929476184Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=595.162µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.930772547Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.931306203Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=529.148µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.932985628Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.933102779Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=96.391µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.934253358Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.934336454Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=83.658µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.935752674Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.936738482Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=985.588µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.938266653Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.939544982Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=1.275654ms
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.941091467Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.941256578Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=181.361µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.942450528Z level=info msg="Executing migration" id="create quota table v1"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.942899135Z level=info msg="Migration successfully executed" id="create quota table v1" duration=448.506µs
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.946505282Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
2026-03-10T07:26:32.140 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.947035431Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=531.341µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.948267894Z level=info msg="Executing migration" id="Update quota table charset"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.948286849Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=22.342µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.949198759Z level=info msg="Executing migration" id="create plugin_setting table"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.949656923Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=457.764µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.951651031Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.952182684Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=532.504µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.954798524Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.955914307Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=1.127425ms
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.959227342Z level=info msg="Executing migration" id="Update plugin_setting table charset"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.959255816Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=31.88µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.960736356Z level=info msg="Executing migration" id="create session table"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.961384738Z level=info msg="Migration successfully executed" id="create session table" duration=644.695µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.963142843Z level=info msg="Executing migration" id="Drop old table playlist table"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.963298425Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=152.367µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.964733029Z level=info msg="Executing migration" id="Drop old table playlist_item table"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.965011905Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=278.936µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.966321944Z level=info msg="Executing migration" id="create playlist table v2"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.967029879Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=707.543µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.968801438Z level=info msg="Executing migration" id="create playlist item table v2"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.969559607Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=760.023µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.971058944Z level=info msg="Executing migration" id="Update playlist table charset"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.971073982Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=16.261µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.972369023Z level=info msg="Executing migration" id="Update playlist_item table charset"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.972385323Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=17.643µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.973919265Z level=info msg="Executing migration" id="Add playlist column created_at"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.975269069Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=1.344394ms
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.976639973Z level=info msg="Executing migration" id="Add playlist column updated_at"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.977927772Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=1.290041ms
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.980164186Z level=info msg="Executing migration" id="drop preferences table v2"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.980441218Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=277.011µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.981783859Z level=info msg="Executing migration" id="drop preferences table v3"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.982035944Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=251.725µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.984011979Z level=info msg="Executing migration" id="create preferences table v3"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.984971669Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=959.74µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.986554942Z level=info msg="Executing migration" id="Update preferences table charset"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.986572155Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=17.774µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.98898337Z level=info msg="Executing migration" id="Add column team_id in preferences"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.990546446Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=1.563026ms
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.9922942Z level=info msg="Executing migration" id="Update team_id column values in preferences"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.992621416Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=325.713µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.993770663Z level=info msg="Executing migration" id="Add column week_start in preferences"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.99501565Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=1.244085ms
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.996342582Z level=info msg="Executing migration" id="Add column preferences.json_data"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.997550038Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=1.207237ms
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.998869265Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:31 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:31.998900493Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=29.164µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.000750139Z level=info msg="Executing migration" id="Add preferences index org_id"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.00151455Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=765.163µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.003578551Z level=info msg="Executing migration" id="Add preferences index user_id"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.004239487Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=660.645µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.005816921Z level=info msg="Executing migration" id="create alert table v1"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.006659068Z level=info msg="Migration successfully executed" id="create alert table v1" duration=842.018µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.008571553Z level=info msg="Executing migration" id="add index alert org_id & id "
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.009368294Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=798.835µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.010912915Z level=info msg="Executing migration" id="add index alert state"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.011670013Z level=info msg="Migration successfully executed" id="add index alert state" duration=757.638µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.013630898Z level=info msg="Executing migration" id="add index alert dashboard_id"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.014508031Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=877.635µs
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.016068372Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
2026-03-10T07:26:32.141 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.016827133Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=754.953µs
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.018274923Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.018961216Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=686.194µs
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.020736262Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.02136702Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=630.939µs
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.022533358Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.026153353Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=3.6178ms
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.027956101Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.028685355Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=728.413µs
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.030394647Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.031233588Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=840.375µs
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.032759073Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.033202789Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=443.937µs
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.034395217Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.03494836Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=553.935µs
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.036653734Z level=info msg="Executing migration" id="create alert_notification table v1"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.037231935Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=578.09µs
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.038374919Z level=info msg="Executing migration" id="Add column is_default"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.03972846Z level=info msg="Migration successfully executed" id="Add column is_default" duration=1.353201ms
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.041117739Z level=info msg="Executing migration" id="Add column frequency"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.042392492Z level=info msg="Migration successfully executed" id="Add column frequency" duration=1.274843ms
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.044237689Z level=info msg="Executing migration" id="Add column send_reminder"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.045540646Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=1.302767ms
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.046619719Z level=info msg="Executing migration" id="Add column disable_resolve_message"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.047931933Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=1.307615ms
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.049199032Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.049752745Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=553.503µs
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.051411742Z level=info msg="Executing migration" id="Update alert table charset"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.051422824Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=11.643µs
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.052808545Z level=info msg="Executing migration" id="Update alert_notification table charset"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.052820047Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=12.154µs
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.053908238Z level=info msg="Executing migration" id="create notification_journal table v1"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.054469215Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=562.15µs
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.055990383Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.056558765Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=571.468µs
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.057955687Z level=info msg="Executing migration" id="drop alert_notification_journal"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.058522907Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=566.418µs
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.059752634Z level=info msg="Executing migration" id="create alert_notification_state table v1"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.060285839Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=533.005µs
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.061710535Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.062284898Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=574.132µs
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.063550363Z level=info msg="Executing migration" id="Add for to alert table"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.064857016Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=1.306031ms
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.065848295Z level=info msg="Executing migration" id="Add column uid in alert_notification"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.067166119Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=1.313877ms
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.0688011Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.069044588Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=243.329µs
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.070282773Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.070841987Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=559.534µs
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.072052258Z level=info msg="Executing migration" id="Remove unique index org_id_name"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.07261594Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=563.742µs
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.073932472Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.07539031Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=1.457307ms
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.076559815Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.076767005Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=207.451µs
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.077957479Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.07852565Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=568.351µs
2026-03-10T07:26:32.142 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.080346952Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.080945953Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=596.014µs
2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.082262603Z level=info msg="Executing migration" id="Drop old annotation table v4"
2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.082497577Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=235.064µs
2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.083637065Z level=info msg="Executing migration" id="create annotation table v5"
2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.084200176Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=562.922µs
2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.085830599Z level=info msg="Executing migration" id="add index annotation 0 v3"
2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.086373813Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=543.425µs
2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.087641833Z level=info msg="Executing migration" id="add index annotation 1 v3"
2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.088225683Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=587.156µs
2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.089596387Z level=info msg="Executing migration" id="add index annotation 2 v3"
2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.090224382Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=628.674µs
2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.091606336Z level=info msg="Executing migration" id="add index annotation 3 v3"
2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.092215824Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=609.628µs
2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.093920749Z level=info msg="Executing migration" id="add index annotation 4 v3"
2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.094629976Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=706.703µs
2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.095954172Z level=info msg="Executing migration" id="Update annotation table charset"
2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.095967728Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=14.527µs
2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.097435494Z level=info msg="Executing migration" id="Add column region_id to annotation table"
2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.099012807Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=1.576681ms
2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.100395703Z level=info msg="Executing migration" id="Drop category_id index"
2026-03-10T07:26:32.143
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.101039447Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=643.884µs 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.102113251Z level=info msg="Executing migration" id="Add column tags to annotation table" 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.103606526Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=1.493035ms 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.105281883Z level=info msg="Executing migration" id="Create annotation_tag table v2" 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.105881895Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=599.691µs 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.107263789Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.107848682Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=580.444µs 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.10907865Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.10968322Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=604.531µs 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.111231258Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.115113244Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=3.880795ms 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.116645784Z level=info msg="Executing migration" id="Create annotation_tag table v3" 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.117298935Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=652.59µs 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.118556294Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 
2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.119171985Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=617.364µs 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.120881227Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.121037622Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=156.696µs 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.121992171Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.122855338Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=862.697µs 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.124112397Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.124382456Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=272.804µs 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.125757468Z level=info msg="Executing migration" id="Add created time to annotation table" 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.127543184Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=1.785135ms 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.128944816Z level=info msg="Executing migration" id="Add updated time to annotation table" 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.130357899Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=1.413455ms 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.131687316Z level=info msg="Executing migration" id="Add index for created in annotation table" 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.132286706Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=599.822µs 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.133815949Z level=info msg="Executing migration" id="Add index for updated in annotation table" 2026-03-10T07:26:32.143 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.13456014Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=743.732µs 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.135978093Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.136298618Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=320.385µs 2026-03-10T07:26:32.143 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.137829203Z level=info msg="Executing migration" id="Add epoch_end column" 2026-03-10T07:26:32.383 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:31 vm00 bash[55893]: ts=2026-03-10T07:26:31.895Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.002498267s 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.140147713Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=2.318601ms 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.144495669Z level=info msg="Executing migration" id="Add index for epoch_end" 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.145235453Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=734.393µs 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.146667441Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.146924276Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=256.544µs 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.148234126Z level=info msg="Executing migration" id="Move region to single row" 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.148604834Z level=info msg="Migration successfully executed" id="Move region to single row" duration=370.738µs 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.150246809Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.15091023Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=663.39µs 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.152164063Z level=info 
msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.152816573Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=652.6µs 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.154131932Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.154835298Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=700.761µs 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.156461864Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.157106168Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=646.728µs 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.158487983Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.159462619Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=976.128µs 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.1613885Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.162097176Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=708.947µs 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.163609968Z level=info msg="Executing migration" id="Increase tags column to length 4096" 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.163873584Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=263.796µs 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.165368071Z level=info msg="Executing migration" id="create test_data table" 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.165981417Z level=info msg="Migration successfully executed" id="create test_data table" duration=613.144µs 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: 
logger=migrator t=2026-03-10T07:26:32.167439446Z level=info msg="Executing migration" id="create dashboard_version table v1" 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.168002106Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=562.861µs 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.169796068Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.170441505Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=648.882µs 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.17183954Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 2026-03-10T07:26:32.392 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.172464698Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=625.169µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.173846712Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.174111743Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=265.03µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.175649591Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.176001815Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=352.154µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.1770919Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.17731428Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=226.197µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.178455591Z level=info msg="Executing migration" id="create team table" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.179047816Z level=info msg="Migration successfully executed" id="create team table" duration=591.784µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator 
t=2026-03-10T07:26:32.181512161Z level=info msg="Executing migration" id="add index team.org_id" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.182523368Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.010916ms 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.184967345Z level=info msg="Executing migration" id="add unique index team_org_id_name" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.18592015Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=952.856µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.188104507Z level=info msg="Executing migration" id="Add column uid in team" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.18982965Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=1.728087ms 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.192231366Z level=info msg="Executing migration" id="Update uid column values in team" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.19263071Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=396.097µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.193846201Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.194603568Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=757.257µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.195980994Z level=info msg="Executing migration" id="create team member table" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.196581316Z level=info msg="Migration successfully executed" id="create team member table" duration=600.062µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.197952902Z level=info msg="Executing migration" id="add index team_member.org_id" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.198742339Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=789.448µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.200484013Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator 
t=2026-03-10T07:26:32.201243954Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=759.701µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.202517195Z level=info msg="Executing migration" id="add index team_member.team_id" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.203192638Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=675.434µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.204593298Z level=info msg="Executing migration" id="Add column email to team table" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.206800628Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=2.207129ms 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.208492878Z level=info msg="Executing migration" id="Add column external to team_member table" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.209969312Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=1.476193ms 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.211284009Z level=info msg="Executing migration" id="Add column permission to team_member table" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.212785369Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=1.501571ms 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.213907705Z level=info msg="Executing migration" id="create dashboard acl table" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.214493168Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=586.725µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.216219573Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.216793114Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=575.535µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.218166632Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.218786932Z level=info msg="Migration successfully executed" id="add unique index 
dashboard_acl_dashboard_id_user_id" duration=622.714µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.220222768Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.220830032Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=607.194µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.222532872Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.223134657Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=602.125µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.22493501Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.225616124Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=681.665µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.227893507Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.228621539Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=728.132µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.230536178Z level=info msg="Executing migration" id="add index dashboard_permission" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.23118949Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=651.6µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.232669529Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.233109158Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=439.608µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.235049075Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.235346615Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=297.341µs 2026-03-10T07:26:32.393 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.236545255Z level=info msg="Executing migration" id="create tag table" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.237076687Z level=info msg="Migration successfully executed" id="create tag table" duration=530.83µs 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.23855861Z level=info msg="Executing migration" id="add index tag.key_value" 2026-03-10T07:26:32.393 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.239180012Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=621.42µs 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.24104684Z level=info msg="Executing migration" id="create login attempt table" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.241582059Z level=info msg="Migration successfully executed" id="create login attempt table" duration=538.115µs 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.242957131Z level=info msg="Executing migration" id="add index login_attempt.username" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.243561691Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=604.62µs 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.244991715Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.24559343Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=603.328µs 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.247331417Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.251661258Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=4.328057ms 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.253332989Z level=info msg="Executing migration" id="create login_attempt v2" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.253983415Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=647.801µs 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.255471519Z level=info msg="Executing migration" id="create index 
IDX_login_attempt_username - v2" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.256119009Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=647.841µs 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.257899005Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.258245248Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=346.213µs 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.259828712Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.260306914Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=477.981µs 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.261522585Z level=info msg="Executing migration" id="create user auth table" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.262058996Z level=info msg="Migration successfully executed" id="create user auth table" duration=535.971µs 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.26376985Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.264375673Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=606.062µs 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.265818633Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.266024161Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=205.417µs 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.295879386Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.298168109Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=2.290406ms 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.30014219Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator 
t=2026-03-10T07:26:32.302121751Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=1.975493ms 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.304634207Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.307140592Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=2.506314ms 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.308678601Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.310799108Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=2.127981ms 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.312614811Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.313373621Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=759.251µs 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.314847709Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.316903915Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=2.055583ms 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.318506766Z level=info msg="Executing migration" id="create server_lock table" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.319128669Z level=info msg="Migration successfully executed" id="create server_lock table" duration=622.483µs 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.320928721Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.321610757Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=682.467µs 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.323101927Z level=info msg="Executing migration" id="create user auth token table" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.323771931Z level=info msg="Migration successfully executed" id="create user auth token table" duration=670.384µs 2026-03-10T07:26:32.394 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.325700826Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.326524379Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=823.353µs 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.328564735Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.329508403Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=948.327µs 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.331113699Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.331838806Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=725.286µs 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.333318385Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.335359552Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=2.039955ms 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.337162791Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.337983208Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=820.688µs 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.339704893Z level=info msg="Executing migration" id="create cache_data table" 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.340306768Z level=info msg="Migration successfully executed" id="create cache_data table" duration=601.564µs 2026-03-10T07:26:32.394 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.341776447Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.342404932Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=628.946µs 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator 
t=2026-03-10T07:26:32.344247756Z level=info msg="Executing migration" id="create short_url table v1" 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.344836676Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=584.642µs 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.346250792Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.346928549Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=677.897µs 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.348349517Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.348549454Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=200.048µs 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.350076332Z level=info msg="Executing migration" id="delete alert_definition table" 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.350302729Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=229.993µs 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.351663825Z level=info msg="Executing migration" id="recreate alert_definition table" 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.352314762Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=650.716µs 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.353736551Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.354394422Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=658.151µs 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.355904097Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.356519147Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=615.24µs 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.358211036Z level=info msg="Executing 
migration" id="alter alert_definition table data column to mediumtext in mysql" 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.358497746Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=284.185µs 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.359911851Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.360365157Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=459.134µs 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.361781375Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.362173273Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=392.088µs 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.363412169Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.363835327Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=422.967µs 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.365164632Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.365726692Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=561.779µs 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.366838198Z level=info msg="Executing migration" id="Add column paused in alert_definition" 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.36869685Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=1.858393ms 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.369645589Z level=info msg="Executing migration" id="drop alert_definition table" 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.370199202Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=553.343µs 2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 
07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.378524004Z level=info msg="Executing migration" id="delete alert_definition_version table"
2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.37866497Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=141.267µs
2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.37986426Z level=info msg="Executing migration" id="recreate alert_definition_version table"
2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.380401223Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=537.304µs
2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.381479094Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.382008272Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=529.148µs
2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.383581607Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.384122036Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=540.178µs
2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.385145926Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.385256935Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=114.727µs
2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.386297918Z level=info msg="Executing migration" id="drop alert_definition_version table"
2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.386849798Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=551.82µs
2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.388315449Z level=info msg="Executing migration" id="create alert_instance table"
2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.388820542Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=504.762µs
2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.389926587Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.390585219Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=656.177µs
2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.392083512Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
2026-03-10T07:26:32.395 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.392619463Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=535.84µs
2026-03-10T07:26:32.640 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:32 vm03 bash[23382]: audit 2026-03-10T07:26:31.510830+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.641 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:32 vm03 bash[23382]: audit 2026-03-10T07:26:31.510830+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.641 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:32 vm03 bash[23382]: audit 2026-03-10T07:26:31.530617+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.641 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:32 vm03 bash[23382]: audit 2026-03-10T07:26:31.530617+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.641 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:32 vm03 bash[23382]: audit 2026-03-10T07:26:31.537131+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.641 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:32 vm03 bash[23382]: audit 2026-03-10T07:26:31.537131+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.641 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:32 vm03 bash[23382]: audit 2026-03-10T07:26:31.554426+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.641 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:32 vm03 bash[23382]: audit 2026-03-10T07:26:31.554426+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.641 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:32 vm03 bash[23382]: audit 2026-03-10T07:26:31.572019+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:26:32.641 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:32 vm03 bash[23382]: audit 2026-03-10T07:26:31.572019+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.39682551Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.400357439Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=3.531809ms
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.402646975Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.403483019Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=838.24µs
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.404928845Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.405646577Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=718.644µs
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.407577328Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.41559472Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=8.014656ms
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.417313098Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.425145703Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=7.829489ms
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.427082593Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.427885777Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=803.785µs
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.429236984Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.42992466Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=688.357µs
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.432316348Z level=info msg="Executing migration" id="add current_reason column related to current_state"
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.434654606Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=2.332877ms
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.436483863Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.438503851Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=2.019187ms
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.439949557Z level=info msg="Executing migration" id="create alert_rule table"
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.440611094Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=661.075µs
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.442156035Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.442808966Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=653.211µs
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.44428102Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.44486478Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=583.689µs
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.446641029Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.447305541Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=664.503µs
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.448693749Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.4488853Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=191.481µs
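Each teuthology line above has the shape "<run timestamp> INFO:journalctl@<daemon unit>.stdout:<journal line>"; for the grafana unit the journal payload is a Grafana migrator record carrying msg=... and id=... fields. A minimal sketch, assuming the run log is saved as plain text at a hypothetical path teuthology.log, of pulling those migrator records back out:

import re

# One teuthology journalctl line: "<run ts> INFO:journalctl@<unit>.stdout:<payload>".
# The pattern is inferred from the entries in this log, not taken from teuthology itself.
ENTRY = re.compile(
    r'^(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+) '
    r'INFO:journalctl@(?P<unit>\S+?)\.stdout:(?P<payload>.*)$'
)
# Grafana migrator payloads carry msg="..." and either a quoted id="..." or a bare id=...
MIGRATION = re.compile(r'msg="(?P<msg>[^"]+)".*?id=(?:"(?P<qid>[^"]+)"|(?P<bid>\S+))')

def grafana_migrations(path="teuthology.log"):  # hypothetical file name
    """Yield (run timestamp, migrator message, migration id) for each grafana entry."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            m = ENTRY.match(line)
            if not m or ".grafana." not in m.group("unit"):
                continue
            mig = MIGRATION.search(m.group("payload"))
            if mig:
                yield m.group("ts"), mig.group("msg"), mig.group("qid") or mig.group("bid")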
2026-03-10T07:26:32.641 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.449992265Z level=info msg="Executing migration" id="add column for to alert_rule"
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.452008747Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=2.015079ms
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.453519485Z level=info msg="Executing migration" id="add column annotations to alert_rule"
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.455459431Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=1.939345ms
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.456828402Z level=info msg="Executing migration" id="add column labels to alert_rule"
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.458995527Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=2.166464ms
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.460346763Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.461021405Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=674.711µs
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.463898949Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.464594981Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=696.844µs
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.465717517Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.467666411Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=1.94658ms
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.469000775Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.470964397Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=1.964934ms
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.47268007Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.473283919Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=604.359µs
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.474671575Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.476503248Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=1.830981ms
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.477705494Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.47955528Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=1.849556ms
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.481050729Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.481280582Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=230.686µs
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.482386396Z level=info msg="Executing migration" id="create alert_rule_version table"
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.483049496Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=663.201µs
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.484577367Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.485359621Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=782.955µs
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.487243492Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.487986001Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=741.186µs
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.489559939Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.489754645Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=194.435µs
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.491118075Z level=info msg="Executing migration" id="add column for to alert_rule_version"
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.493332899Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=2.215496ms
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.495064604Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.497025059Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=1.959003ms
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.498217337Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.500190676Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=1.972798ms
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.501523288Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.503442435Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=1.918126ms
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.504871239Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.507278056Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=2.404542ms
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.508665921Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.508820162Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=154.872µs
2026-03-10T07:26:32.642 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.51023021Z level=info msg="Executing migration" id=create_alert_configuration_table
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.510992827Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=762.197µs
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.513758941Z level=info msg="Executing migration" id="Add column default in alert_configuration"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.517261603Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=3.499066ms
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.518607039Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.51880351Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=196.721µs
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.520248964Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.523479834Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=3.230629ms
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.524716646Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.525277072Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=560.165µs
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.526945768Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.52890448Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=1.958572ms
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.529959989Z level=info msg="Executing migration" id=create_ngalert_configuration_table
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.530401221Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=441.302µs
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.531737139Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.532288258Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=550.788µs
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.533882072Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.535961682Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=2.079469ms
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.537094918Z level=info msg="Executing migration" id="create provenance_type table"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.537810016Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=711.481µs
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.539811759Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.540528259Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=717.622µs
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.542363398Z level=info msg="Executing migration" id="create alert_image table"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.543005058Z level=info msg="Migration successfully executed" id="create alert_image table" duration=641.498µs
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.544399566Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.54495901Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=559.956µs
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.546251267Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.546445393Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=194.217µs
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.547630877Z level=info msg="Executing migration" id=create_alert_configuration_history_table
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.548231329Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=600.032µs
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.550020591Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.550614251Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=596.716µs
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.551767194Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.552070015Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.553506793Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.55394025Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=433.937µs
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.555081342Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.555706088Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=624.617µs
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.556665156Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.55905447Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=2.388933ms
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.560594803Z level=info msg="Executing migration" id="create library_element table v1"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.561175869Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=581.015µs
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.562696224Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.56326118Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=564.705µs
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.564594001Z level=info msg="Executing migration" id="create library_element_connection table v1"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.565066222Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=471.279µs
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.566628055Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.567158445Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=529.808µs
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.56850363Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.569037888Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=534.267µs
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.570329752Z level=info msg="Executing migration" id="increase max description length to 2048"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.570342396Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=13.215µs
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.571871108Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.572019337Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=147.698µs
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.573019814Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
2026-03-10T07:26:32.643 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.573290184Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=270.41µs
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.574267827Z level=info msg="Executing migration" id="create data_keys table"
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.574827882Z level=info msg="Migration successfully executed" id="create data_keys table" duration=559.925µs
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.576100561Z level=info msg="Executing migration" id="create secrets table"
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.57657235Z level=info msg="Migration successfully executed" id="create secrets table" duration=470.867µs
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.578103716Z level=info msg="Executing migration" id="rename data_keys name column to id"
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.588442344Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=10.337575ms
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.589659037Z level=info msg="Executing migration" id="add name column into data_keys"
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.591912926Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=2.255312ms
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.593121273Z level=info msg="Executing migration" id="copy data_keys id column values into name"
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.59335847Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=236.405µs
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.594917009Z level=info msg="Executing migration" id="rename data_keys name column to label"
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.605617939Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=10.70072ms
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.606899895Z level=info msg="Executing migration" id="rename data_keys id column back to name"
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.616503367Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=9.60255ms
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.617816652Z level=info msg="Executing migration" id="create kv_store table v1"
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.618758317Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=936.436µs
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.620172643Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.620868474Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=696.263µs
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.622774618Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.623063743Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=289.205µs
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.624379563Z level=info msg="Executing migration" id="create permission table"
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.624990134Z level=info msg="Migration successfully executed" id="create permission table" duration=610.491µs
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.626331512Z level=info msg="Executing migration" id="add unique index permission.role_id"
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.626904623Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=572.198µs
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.628567547Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.629135197Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=567.439µs
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.630526219Z level=info msg="Executing migration" id="create role table"
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.631136108Z level=info msg="Migration successfully executed" id="create role table" duration=609.428µs
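Every "Migration successfully executed" record above ends with a Go-style duration (e.g. 936.436µs, 10.337575ms). A minimal sketch of ranking migrations by that reported duration, normalized to milliseconds; the file name is again the hypothetical teuthology.log:

import re

# Matches the trailing id=... duration=<value><unit> pair of an executed migration.
DUR = re.compile(r'id="?(?P<id>[^"]+?)"? duration=(?P<val>[\d.]+)(?P<unit>µs|ms|s)')
TO_MS = {"µs": 1e-3, "ms": 1.0, "s": 1e3}

def slowest_migrations(lines, top=5):
    """Return the `top` migrations with the largest reported duration, in ms."""
    timed = []
    for line in lines:
        m = DUR.search(line)
        if m:
            timed.append((float(m.group("val")) * TO_MS[m.group("unit")], m.group("id")))
    return sorted(timed, reverse=True)[:top]

# e.g. slowest_migrations(open("teuthology.log", encoding="utf-8")) over this section
# would put "rename data_keys name column to label" (10.70072ms) near the top.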
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.632444955Z level=info msg="Executing migration" id="add column display_name"
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.635665546Z level=info msg="Migration successfully executed" id="add column display_name" duration=3.216122ms
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.637384276Z level=info msg="Executing migration" id="add column group_name"
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.639723956Z level=info msg="Migration successfully executed" id="add column group_name" duration=2.340221ms
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.64089321Z level=info msg="Executing migration" id="add index role.org_id"
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.641451342Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=558.011µs
2026-03-10T07:26:32.644 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.643176443Z level=info msg="Executing migration" id="add unique index role_org_id_name"
2026-03-10T07:26:32.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:32 vm00 bash[28005]: audit 2026-03-10T07:26:31.510830+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:32 vm00 bash[28005]: audit 2026-03-10T07:26:31.510830+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:32 vm00 bash[28005]: audit 2026-03-10T07:26:31.530617+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:32 vm00 bash[28005]: audit 2026-03-10T07:26:31.530617+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:32 vm00 bash[28005]: audit 2026-03-10T07:26:31.537131+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:32 vm00 bash[28005]: audit 2026-03-10T07:26:31.537131+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:32 vm00 bash[28005]: audit 2026-03-10T07:26:31.554426+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:32 vm00 bash[28005]: audit 2026-03-10T07:26:31.554426+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:32 vm00 bash[28005]: audit 2026-03-10T07:26:31.572019+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:26:32.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:32 vm00 bash[28005]: audit 2026-03-10T07:26:31.572019+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:26:32.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:32 vm00 bash[20701]: audit 2026-03-10T07:26:31.510830+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:32 vm00 bash[20701]: audit 2026-03-10T07:26:31.510830+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:32 vm00 bash[20701]: audit 2026-03-10T07:26:31.530617+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:32 vm00 bash[20701]: audit 2026-03-10T07:26:31.530617+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:32 vm00 bash[20701]: audit 2026-03-10T07:26:31.537131+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:32 vm00 bash[20701]: audit 2026-03-10T07:26:31.537131+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:32 vm00 bash[20701]: audit 2026-03-10T07:26:31.554426+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:32 vm00 bash[20701]: audit 2026-03-10T07:26:31.554426+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:32.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:32 vm00 bash[20701]: audit 2026-03-10T07:26:31.572019+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:26:32.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:32 vm00 bash[20701]: audit 2026-03-10T07:26:31.572019+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:26:32.891 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.645019778Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.843135ms
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.646692841Z level=info msg="Executing migration" id="add index role_org_id_uid"
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.647281842Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=587.728µs
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.648646253Z level=info msg="Executing migration" id="create team role table"
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.649121128Z level=info msg="Migration successfully executed" id="create team role table" duration=474.945µs
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.650460002Z level=info msg="Executing migration" id="add index team_role.org_id"
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.65105844Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=598.389µs
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.65275054Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.653416665Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=658.231µs
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.655406416Z level=info msg="Executing migration" id="add index team_role.team_id"
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.656269663Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=865.22µs
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.658089544Z level=info msg="Executing migration" id="create user role table"
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.658736543Z level=info msg="Migration successfully executed" id="create user role table" duration=646.909µs
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.660601108Z level=info msg="Executing migration" id="add index user_role.org_id"
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.661218771Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=617.915µs
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.662668455Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.663294174Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=625.931µs
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.665030105Z level=info msg="Executing migration" id="add index user_role.user_id"
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.665634565Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=606.503µs
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.667296087Z level=info msg="Executing migration" id="create builtin role table"
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.667856543Z level=info msg="Migration successfully executed" id="create builtin role table" duration=559.114µs
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.66928717Z level=info msg="Executing migration" id="add index builtin_role.role_id"
2026-03-10T07:26:32.891 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.669882433Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=595.273µs
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.671177984Z level=info msg="Executing migration" id="add index builtin_role.name"
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.671729383Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=551.519µs
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.673337565Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.675845132Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=2.507005ms
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.677147536Z level=info msg="Executing migration" id="add index builtin_role.org_id"
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.677778476Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=630.969µs
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.679185758Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.679782383Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=594.621µs
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.681380494Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.68198295Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=602.185µs
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.683085059Z level=info msg="Executing migration" id="add unique index role.uid"
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.683671553Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=585.643µs
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.684657863Z level=info msg="Executing migration" id="create seed assignment table"
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.685191008Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=533.416µs
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.686875453Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.687507334Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=632.232µs
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.688856718Z level=info msg="Executing migration" id="add column hidden to role table"
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.691465564Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=2.607744ms
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.692740548Z level=info msg="Executing migration" id="permission kind migration"
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.695423896Z level=info msg="Migration successfully executed" id="permission kind migration" duration=2.682456ms
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.696933061Z level=info msg="Executing migration" id="permission attribute migration"
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.699385424Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=2.451902ms
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.700460359Z level=info msg="Executing migration" id="permission identifier migration"
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.702967725Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=2.507116ms
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.704498501Z level=info msg="Executing migration" id="add permission identifier index"
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.705061723Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=562.11µs
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.707663888Z level=info msg="Executing migration" id="add permission action scope role_id index"
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.708272524Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=608.466µs
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.709711407Z level=info msg="Executing migration" id="remove permission role_id action scope index"
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.710241896Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=530.6µs
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.711880996Z level=info msg="Executing migration" id="create query_history table v1"
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.712393222Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=511.985µs
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.713418935Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
2026-03-10T07:26:32.892 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.714000962Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=581.898µs
2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.715329706Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.715486582Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=156.686µs
2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.716907581Z level=info msg="Executing migration" id="rbac disabled migrator"
2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.716946244Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=39.205µs
2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.718149442Z level=info msg="Executing migration" id="teams permissions migration"
2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.71853615Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=386.929µs
2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.719665619Z level=info msg="Executing migration" id="dashboard permissions"
2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.720052088Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=384.255µs
2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.721069966Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.725795323Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=4.719536ms
2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.727822124Z level=info msg="Executing migration" id="drop managed folder create actions"
2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.728133431Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=313.622µs
2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.7292207Z level=info msg="Executing migration" id="alerting notification permissions"
2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.729655249Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=434.669µs
2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.730887262Z level=info msg="Executing migration" id="create query_history_star table v1"
2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.731415778Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=528.738µs
2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.732740786Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.733396882Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=656.537µs
2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.735058825Z level=info msg="Executing migration" id="add column org_id in query_history_star"
2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.737610395Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=2.550707ms
2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10
07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.73885992Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.73902478Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=164.53µs 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.740111168Z level=info msg="Executing migration" id="create correlation table v1" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.740743689Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=632.392µs 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.742790869Z level=info msg="Executing migration" id="add index correlations.uid" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.743382615Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=592.207µs 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.744810455Z level=info msg="Executing migration" id="add index correlations.source_uid" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.745419594Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=609.328µs 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.746687464Z level=info msg="Executing migration" id="add correlation config column" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.749300779Z level=info msg="Migration successfully executed" id="add correlation config column" duration=2.612344ms 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.750891168Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.751556501Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=667.558µs 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.752693655Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.753396731Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=703.446µs 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.754894323Z level=info msg="Executing migration" id="Rename 
table correlation to correlation_tmp_qwerty - v1" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.761536823Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=6.639364ms 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.762828158Z level=info msg="Executing migration" id="create correlation v2" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.763468885Z level=info msg="Migration successfully executed" id="create correlation v2" duration=640.107µs 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.764476103Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.765023716Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=548.274µs 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.766693013Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.767289727Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=596.985µs 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.768642878Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.769159181Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=515.992µs 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.770443742Z level=info msg="Executing migration" id="copy correlation v1 to v2" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.77064435Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=198.995µs 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.77214021Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.772608764Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=467.983µs 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.773682769Z level=info msg="Executing migration" id="add provisioning column" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator 
t=2026-03-10T07:26:32.776220481Z level=info msg="Migration successfully executed" id="add provisioning column" duration=2.536992ms 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.777438147Z level=info msg="Executing migration" id="create entity_events table" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.777960621Z level=info msg="Migration successfully executed" id="create entity_events table" duration=522.264µs 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.779395516Z level=info msg="Executing migration" id="create dashboard public config v1" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.779985228Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=587.887µs 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.781308632Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.781577569Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.782586952Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.782841642Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.784355406Z level=info msg="Executing migration" id="Drop old dashboard public config table" 2026-03-10T07:26:32.893 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.784826963Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=471.318µs 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.785824614Z level=info msg="Executing migration" id="recreate dashboard public config v1" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.786373269Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=548.314µs 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.787739754Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.788334114Z 
level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=594.05µs 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.78978035Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.790349444Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=567.291µs 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.791632342Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.792144146Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=511.834µs 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.793126197Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.793646809Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=519.01µs 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.795070532Z level=info msg="Executing migration" id="Drop public config table" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.795503788Z level=info msg="Migration successfully executed" id="Drop public config table" duration=433.758µs 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.796475571Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.797013024Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=537.092µs 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.798270163Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.79885199Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=580.434µs 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.800009501Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: 
logger=migrator t=2026-03-10T07:26:32.800545522Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=536.081µs 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.801751605Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.802255154Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=503.64µs 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.803423156Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.811195165Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=7.768011ms 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.812535031Z level=info msg="Executing migration" id="add annotations_enabled column" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.815334939Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=2.799076ms 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.816578352Z level=info msg="Executing migration" id="add time_selection_enabled column" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.818982975Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=2.404311ms 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.820450542Z level=info msg="Executing migration" id="delete orphaned public dashboards" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.820655488Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=207.892µs 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.821638561Z level=info msg="Executing migration" id="add share column" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.824177426Z level=info msg="Migration successfully executed" id="add share column" duration=2.538395ms 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.825406323Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.82558038Z 
level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=174.298µs 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.82700674Z level=info msg="Executing migration" id="create file table" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.827533833Z level=info msg="Migration successfully executed" id="create file table" duration=527.463µs 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.828870041Z level=info msg="Executing migration" id="file table idx: path natural pk" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.829424266Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=552.552µs 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.83083759Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.831384472Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=547.232µs 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.833035473Z level=info msg="Executing migration" id="create file_meta table" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.833485251Z level=info msg="Migration successfully executed" id="create file_meta table" duration=448.987µs 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.834722944Z level=info msg="Executing migration" id="file table idx: path key" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.835327043Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=604.15µs 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.837090416Z level=info msg="Executing migration" id="set path collation in file table" 2026-03-10T07:26:32.894 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.837220091Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=129.795µs 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.838373955Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.838504672Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=75.523µs 2026-03-10T07:26:32.895 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.840070002Z level=info msg="Executing migration" id="managed permissions migration" 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.840442735Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=372.763µs 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.841514114Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.841716927Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=202.703µs 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.842887394Z level=info msg="Executing migration" id="RBAC action name migrator" 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.843669658Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=784.75µs 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.844640608Z level=info msg="Executing migration" id="Add UID column to playlist" 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.847338223Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=2.696783ms 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.848871743Z level=info msg="Executing migration" id="Update uid column values in playlist" 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.84902932Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=157.537µs 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.850019797Z level=info msg="Executing migration" id="Add index for uid in playlist" 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.850635167Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=615.28µs 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.851958762Z level=info msg="Executing migration" id="update group index for alert rules" 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.852232017Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=274.629µs 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.853585298Z level=info msg="Executing migration" id="managed 
folder permissions alert actions repeated migration" 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.853792349Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=207.051µs 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.85508789Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.855744839Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=657.01µs 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.857000956Z level=info msg="Executing migration" id="add action column to seed_assignment" 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.859776217Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=2.774611ms 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.86101389Z level=info msg="Executing migration" id="add scope column to seed_assignment" 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.86378841Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=2.773728ms 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.865269121Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.865913205Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=644.014µs 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.867010533Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 2026-03-10T07:26:32.895 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.892051865Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=25.033816ms 2026-03-10T07:26:33.204 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.893525852Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 2026-03-10T07:26:33.204 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.894338785Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=806.791µs 2026-03-10T07:26:33.204 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.896598083Z level=info msg="Executing migration" 
id="add unique index builtin_role_action_scope" 2026-03-10T07:26:33.204 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.897178037Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=579.984µs 2026-03-10T07:26:33.204 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.89854825Z level=info msg="Executing migration" id="add primary key to seed_assigment" 2026-03-10T07:26:33.204 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.909379856Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=10.826247ms 2026-03-10T07:26:33.204 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.911350711Z level=info msg="Executing migration" id="add origin column to seed_assignment" 2026-03-10T07:26:33.204 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.91391231Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=2.561348ms 2026-03-10T07:26:33.204 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.915157937Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 2026-03-10T07:26:33.204 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.915386188Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=228.381µs 2026-03-10T07:26:33.204 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.916355504Z level=info msg="Executing migration" id="prevent seeding OnCall access" 2026-03-10T07:26:33.204 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.916506209Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=148.801µs 2026-03-10T07:26:33.204 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.917753379Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 2026-03-10T07:26:33.204 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.917938649Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=186.973µs 2026-03-10T07:26:33.204 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.919189397Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 2026-03-10T07:26:33.204 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.919412988Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=221.627µs 2026-03-10T07:26:33.204 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.920523501Z level=info msg="Executing migration" id="migrate external alertmanagers 
to datsourcse" 2026-03-10T07:26:33.204 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.920752603Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=229.421µs 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.921893554Z level=info msg="Executing migration" id="create folder table" 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.922621827Z level=info msg="Migration successfully executed" id="create folder table" duration=727.762µs 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.924116714Z level=info msg="Executing migration" id="Add index for parent_uid" 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.925015158Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=897.161µs 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.926463909Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.927321515Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=857.827µs 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.928647615Z level=info msg="Executing migration" id="Update folder title length" 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.928685045Z level=info msg="Migration successfully executed" id="Update folder title length" duration=37.851µs 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.92998713Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.930824367Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=836.917µs 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.932717696Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.933589681Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=869.089µs 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.93491599Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 2026-03-10T07:26:33.205 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.936317772Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.40091ms 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.938717666Z level=info msg="Executing migration" id="Sync dashboard and folder table" 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.938989399Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=271.712µs 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.940361614Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.940527959Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=166.143µs 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.941504419Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.942021944Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=515.602µs 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.943785539Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.944787116Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.000165ms 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.947711699Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.948927992Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.218256ms 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.950502249Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.951175417Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=672.807µs 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.952889168Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 2026-03-10T07:26:33.205 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.9535321Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=639.886µs 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.955305413Z level=info msg="Executing migration" id="create anon_device table" 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.955845331Z level=info msg="Migration successfully executed" id="create anon_device table" duration=540.289µs 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.957098994Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.95774971Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=650.747µs 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.960654455Z level=info msg="Executing migration" id="add index anon_device.updated_at" 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.962749785Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=2.09507ms 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.965914791Z level=info msg="Executing migration" id="create signing_key table" 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.967058896Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.143726ms 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.968729105Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.969713832Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=984.786µs 2026-03-10T07:26:33.205 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.970992742Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.971605166Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=612.435µs 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.972597316Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.972822821Z 
level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=225.906µs 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.97421745Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.977363801Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=3.145389ms 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.978666125Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.979131452Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=465.948µs 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.98033421Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.98097657Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=642.361µs 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.98264742Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.983272318Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=626.792µs 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.984502127Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.985105333Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=603.809µs 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.986267675Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.987191516Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=923.811µs 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.988640075Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 
07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.989210462Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=570.375µs 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.990189386Z level=info msg="Executing migration" id="create sso_setting table" 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.990703846Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=514.3µs 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.992324961Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.992863697Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=537.774µs 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.994321756Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.99454703Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=225.515µs 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.995878651Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.995969391Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=90.5µs 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.99699302Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:32 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:32.999786596Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=2.793295ms 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:33.001630953Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:33.004614126Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=2.978454ms 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:33.006300595Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: 
logger=migrator t=2026-03-10T07:26:33.006687854Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=388.302µs 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=migrator t=2026-03-10T07:26:33.007846799Z level=info msg="migrations completed" performed=547 skipped=0 duration=1.368319606s 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=sqlstore t=2026-03-10T07:26:33.008619005Z level=info msg="Created default organization" 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=secrets t=2026-03-10T07:26:33.010054159Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=plugin.store t=2026-03-10T07:26:33.018622249Z level=info msg="Loading plugins..." 2026-03-10T07:26:33.206 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=local.finder t=2026-03-10T07:26:33.060308445Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=plugin.store t=2026-03-10T07:26:33.060331158Z level=info msg="Plugins loaded" count=55 duration=41.70951ms 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=query_data t=2026-03-10T07:26:33.062023778Z level=info msg="Query Service initialization" 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=live.push_http t=2026-03-10T07:26:33.068744918Z level=info msg="Live Push Gateway initialization" 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=ngalert.migration t=2026-03-10T07:26:33.071112581Z level=info msg=Starting 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=ngalert.migration t=2026-03-10T07:26:33.071651456Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=ngalert.migration orgID=1 t=2026-03-10T07:26:33.072054857Z level=info msg="Migrating alerts for organisation" 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=ngalert.migration orgID=1 t=2026-03-10T07:26:33.072586689Z level=info msg="Alerts found to migrate" alerts=0 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=ngalert.migration t=2026-03-10T07:26:33.073722921Z level=info msg="Completed alerting migration" 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=ngalert.state.manager t=2026-03-10T07:26:33.085683475Z level=info msg="Running in alternative execution of Error/NoData mode" 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=infra.usagestats.collector t=2026-03-10T07:26:33.087398208Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 2026-03-10T07:26:33.207 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=provisioning.datasources t=2026-03-10T07:26:33.089735103Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=provisioning.datasources t=2026-03-10T07:26:33.095472326Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=provisioning.alerting t=2026-03-10T07:26:33.100970561Z level=info msg="starting to provision alerting" 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=provisioning.alerting t=2026-03-10T07:26:33.100997942Z level=info msg="finished to provision alerting" 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=http.server t=2026-03-10T07:26:33.102385828Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=http.server t=2026-03-10T07:26:33.102607315Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=https subUrl= socket= 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=ngalert.state.manager t=2026-03-10T07:26:33.102664824Z level=info msg="Warming state cache for startup" 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=provisioning.dashboard t=2026-03-10T07:26:33.105732747Z level=info msg="starting to provision dashboards" 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=ngalert.multiorg.alertmanager t=2026-03-10T07:26:33.10612714Z level=info msg="Starting MultiOrg Alertmanager" 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=grafanaStorageLogger t=2026-03-10T07:26:33.126008495Z level=info msg="Storage starting" 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=ngalert.state.manager t=2026-03-10T07:26:33.105991073Z level=info msg="State cache has been initialized" states=0 duration=3.323014ms 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=ngalert.scheduler t=2026-03-10T07:26:33.132166182Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 2026-03-10T07:26:33.207 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=ticker t=2026-03-10T07:26:33.132287731Z level=info msg=starting first_tick=2026-03-10T07:26:40Z 2026-03-10T07:26:33.517 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=plugins.update.checker t=2026-03-10T07:26:33.201562669Z level=info msg="Update check succeeded" duration=84.40136ms 2026-03-10T07:26:33.517 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=provisioning.dashboard t=2026-03-10T07:26:33.276503427Z level=info msg="finished to provision dashboards" 2026-03-10T07:26:33.517 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=grafana-apiserver t=2026-03-10T07:26:33.306467887Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 2026-03-10T07:26:33.517 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:26:33 vm03 bash[51371]: logger=grafana-apiserver t=2026-03-10T07:26:33.30730229Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 2026-03-10T07:26:33.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:33 vm00 bash[28005]: cluster 2026-03-10T07:26:32.554763+0000 mgr.y (mgr.24407) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:33.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:33 vm00 bash[28005]: cluster 2026-03-10T07:26:32.554763+0000 mgr.y (mgr.24407) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:33.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:33 vm00 bash[28005]: audit 2026-03-10T07:26:32.855788+0000 mgr.y (mgr.24407) 39 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:33.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:33 vm00 bash[28005]: audit 2026-03-10T07:26:32.855788+0000 mgr.y (mgr.24407) 39 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:33.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:33 vm00 bash[20701]: cluster 2026-03-10T07:26:32.554763+0000 mgr.y (mgr.24407) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:33.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:33 vm00 bash[20701]: cluster 2026-03-10T07:26:32.554763+0000 mgr.y (mgr.24407) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:33.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:33 vm00 bash[20701]: audit 2026-03-10T07:26:32.855788+0000 mgr.y (mgr.24407) 39 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:33.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:33 vm00 bash[20701]: audit 2026-03-10T07:26:32.855788+0000 mgr.y (mgr.24407) 39 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:34.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:33 vm03 bash[23382]: cluster 2026-03-10T07:26:32.554763+0000 mgr.y (mgr.24407) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:34.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:33 vm03 bash[23382]: cluster 2026-03-10T07:26:32.554763+0000 mgr.y (mgr.24407) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:34.016 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:33 vm03 bash[23382]: audit 2026-03-10T07:26:32.855788+0000 mgr.y (mgr.24407) 39 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:34.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:33 vm03 bash[23382]: audit 2026-03-10T07:26:32.855788+0000 mgr.y (mgr.24407) 39 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:34.225 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config 2026-03-10T07:26:34.379 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f83529b2640 1 -- 192.168.123.100:0/1940736478 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f834c103410 msgr2=0x7f834c10f940 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:34.379 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f83529b2640 1 --2- 192.168.123.100:0/1940736478 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f834c103410 0x7f834c10f940 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7f834000b0d0 tx=0x7f834002f450 comp rx=0 tx=0).stop 2026-03-10T07:26:34.379 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f83529b2640 1 -- 192.168.123.100:0/1940736478 shutdown_connections 2026-03-10T07:26:34.379 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f83529b2640 1 --2- 192.168.123.100:0/1940736478 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f834c103410 0x7f834c10f940 unknown :-1 s=CLOSED pgs=55 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:34.379 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f83529b2640 1 --2- 192.168.123.100:0/1940736478 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f834c102a70 0x7f834c102ed0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:34.379 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f83529b2640 1 --2- 192.168.123.100:0/1940736478 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f834c108a70 0x7f834c108e50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:34.379 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f83529b2640 1 -- 192.168.123.100:0/1940736478 >> 192.168.123.100:0/1940736478 conn(0x7f834c0fe790 msgr2=0x7f834c100bb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:34.379 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f83529b2640 1 -- 192.168.123.100:0/1940736478 shutdown_connections 2026-03-10T07:26:34.379 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f83529b2640 1 -- 192.168.123.100:0/1940736478 wait complete. 
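Note that every journalctl-sourced record in the blocks above is captured twice (the pgmap v15 cluster lines and the client.iscsi.iscsi.a audit lines each appear back to back for mon.a, mon.b and mon.c); the doubling appears to be an artifact of the journalctl followers rather than of the cluster itself. When post-processing a run log like this one, a dedupe pass along the following lines can help; this is a hypothetical helper, not part of teuthology:

    # Hypothetical post-processing helper (not teuthology code): collapse
    # back-to-back duplicate journalctl records such as the doubled pgmap
    # and audit lines above. The comparison key drops the leading capture
    # timestamp, so a record re-emitted a few milliseconds later still
    # matches its first copy.
    import sys

    def dedupe(lines):
        prev_key = None
        for line in lines:
            # "2026-03-10T07:26:33.883 INFO:journalctl@..." -> keep the
            # part after the first space as the comparison key.
            key = line.split(" ", 1)[-1]
            if key != prev_key:
                yield line
            prev_key = key

    if __name__ == "__main__":
        sys.stdout.writelines(dedupe(sys.stdin))

Run as a filter (python3 dedupe.py < teuthology.log > deduped.log); it only collapses adjacent copies, so genuinely repeated events further apart are preserved.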
2026-03-10T07:26:34.380 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f83529b2640 1 Processor -- start 2026-03-10T07:26:34.380 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f83529b2640 1 -- start start 2026-03-10T07:26:34.380 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f83529b2640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f834c102a70 0x7f834c198380 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:34.380 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f834bfff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f834c102a70 0x7f834c198380 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:34.381 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f834bfff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f834c102a70 0x7f834c198380 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:47164/0 (socket says 192.168.123.100:47164) 2026-03-10T07:26:34.381 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f83529b2640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f834c103410 0x7f834c1988c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:34.381 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f83529b2640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f834c108a70 0x7f834c19cc50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:34.381 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f83529b2640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f834c076e80 con 0x7f834c102a70 2026-03-10T07:26:34.381 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f83529b2640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f834c076d00 con 0x7f834c103410 2026-03-10T07:26:34.381 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f83529b2640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f834c077000 con 0x7f834c108a70 2026-03-10T07:26:34.381 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f834bfff640 1 -- 192.168.123.100:0/3195515907 learned_addr learned my addr 192.168.123.100:0/3195515907 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:26:34.381 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f834bfff640 1 -- 192.168.123.100:0/3195515907 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f834c108a70 msgr2=0x7f834c19cc50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:34.381 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f834b7fe640 1 --2- 192.168.123.100:0/3195515907 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f834c103410 0x7f834c1988c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:34.381 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f8350f28640 1 --2- 192.168.123.100:0/3195515907 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f834c108a70 0x7f834c19cc50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:34.381 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f834bfff640 1 --2- 192.168.123.100:0/3195515907 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f834c108a70 0x7f834c19cc50 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:34.381 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f834bfff640 1 -- 192.168.123.100:0/3195515907 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f834c103410 msgr2=0x7f834c1988c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:34.381 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f834bfff640 1 --2- 192.168.123.100:0/3195515907 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f834c103410 0x7f834c1988c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:34.381 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f834bfff640 1 -- 192.168.123.100:0/3195515907 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f834c19d3d0 con 0x7f834c102a70 2026-03-10T07:26:34.381 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f834b7fe640 1 --2- 192.168.123.100:0/3195515907 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f834c103410 0x7f834c1988c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T07:26:34.381 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f8350f28640 1 --2- 192.168.123.100:0/3195515907 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f834c108a70 0x7f834c19cc50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
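The teuthology.orchestra.run.vm00.stderr lines above are the ceph CLI (run inside the cephadm shell) walking each msgr2 connection through its handshake: s=NONE at connect, then BANNER_CONNECTING, HELLO_CONNECTING and AUTH_CONNECTING, with the other two monitor connections marked down and stopped once the client settles on one mon; the surviving connection reaches s=READY just below. A sketch for reconstructing those per-connection state sequences from lines like these (a hypothetical analysis aid, not teuthology code):

    # Hypothetical analysis sketch: group msgr2 debug lines by connection
    # object (the first pointer inside "conn(...)") and record the state
    # transitions (the "s=" field) each connection goes through.
    import re
    import sys

    CONN_RE = re.compile(r"conn\((0x[0-9a-f]+).*?s=([A-Z_]+)")

    def state_sequences(lines):
        seqs = {}
        for line in lines:
            m = CONN_RE.search(line)
            if m:
                ptr, state = m.groups()
                seq = seqs.setdefault(ptr, [])
                if not seq or seq[-1] != state:
                    seq.append(state)
        return seqs

    if __name__ == "__main__":
        # e.g. "0x7f834c102a70: NONE -> BANNER_CONNECTING -> HELLO_CONNECTING -> ..."
        for ptr, seq in state_sequences(sys.stdin).items():
            print(f"{ptr}: {' -> '.join(seq)}")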
2026-03-10T07:26:34.381 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f834bfff640 1 --2- 192.168.123.100:0/3195515907 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f834c102a70 0x7f834c198380 secure :-1 s=READY pgs=162 cs=0 l=1 rev1=1 crypto rx=0x7f8338002990 tx=0x7f8338002e60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:34.382 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.378+0000 7f83497fa640 1 -- 192.168.123.100:0/3195515907 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f8338022070 con 0x7f834c102a70 2026-03-10T07:26:34.382 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.382+0000 7f83529b2640 1 -- 192.168.123.100:0/3195515907 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f834c19d6c0 con 0x7f834c102a70 2026-03-10T07:26:34.382 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.382+0000 7f83529b2640 1 -- 192.168.123.100:0/3195515907 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f834c1a4f00 con 0x7f834c102a70 2026-03-10T07:26:34.383 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.382+0000 7f83497fa640 1 -- 192.168.123.100:0/3195515907 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f833800bd40 con 0x7f834c102a70 2026-03-10T07:26:34.384 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.382+0000 7f83497fa640 1 -- 192.168.123.100:0/3195515907 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f833801dd60 con 0x7f834c102a70 2026-03-10T07:26:34.384 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.382+0000 7f83497fa640 1 -- 192.168.123.100:0/3195515907 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f8338014ce0 con 0x7f834c102a70 2026-03-10T07:26:34.384 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.382+0000 7f83497fa640 1 --2- 192.168.123.100:0/3195515907 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f8314077740 0x7f8314079c00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:34.384 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.382+0000 7f83497fa640 1 -- 192.168.123.100:0/3195515907 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7f83380a2da0 con 0x7f834c102a70 2026-03-10T07:26:34.384 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.382+0000 7f834b7fe640 1 --2- 192.168.123.100:0/3195515907 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f8314077740 0x7f8314079c00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:34.387 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.382+0000 7f83529b2640 1 -- 192.168.123.100:0/3195515907 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f834c1041b0 con 0x7f834c102a70 2026-03-10T07:26:34.387 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.386+0000 7f834b7fe640 1 --2- 192.168.123.100:0/3195515907 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f8314077740 0x7f8314079c00 
secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f834c108140 tx=0x7f833c009290 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:34.387 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.386+0000 7f83497fa640 1 -- 192.168.123.100:0/3195515907 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f833806f720 con 0x7f834c102a70 2026-03-10T07:26:34.483 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.482+0000 7f83529b2640 1 -- 192.168.123.100:0/3195515907 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7f834c102ed0 con 0x7f834c102a70 2026-03-10T07:26:34.483 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.482+0000 7f83497fa640 1 -- 192.168.123.100:0/3195515907 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v68) ==== 74+0+23504 (secure 0 0 0) 0x7f83380745d0 con 0x7f834c102a70 2026-03-10T07:26:34.484 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T07:26:34.484 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":68,"fsid":"534d9c8a-1c51-11f1-ac87-d1fb9a119953","created":"2026-03-10T07:19:29.470223+0000","modified":"2026-03-10T07:26:08.524766+0000","last_up_change":"2026-03-10T07:25:15.530223+0000","last_in_change":"2026-03-10T07:24:58.209477+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"luminous","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T07:22:27.928486+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"s
core_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":".rgw.root","create_time":"2026-03-10T07:25:34.918040+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"56","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":"default.rgw.log","create_time":"2026-03-10T07:25:37.015057+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"58","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"datapool","create_time":"2026-03-10T07:25:38.852316+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"64","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":64,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-10T07:25:38.976738+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"60","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-10T07:25:41.043916+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"62","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"103cba6f-bd9d-4169-adab-61ce873b1107","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":944390886},{"type":"v1","addr":"192.168.123.100:6803","nonce":944390886}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":944390886},{"type":"v1","addr":"192.168.123.100:6805","nonce":944390886}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":944390886},{"type":"v1","addr":"192.168.123.100:6809","nonce":944390886}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":944390886},{"type":"v1","addr":"192.168.123.100:6807","nonce":944390886}]},"public_addr":"192.168.123.100:6803/944390886","cluster_addr":"192.168.123.100:6805/944390886","heartbeat_back_addr":"192.168.123.100:6809/944390886","heartbeat_front_addr":"192.168.123.100:6807/944390886","state":["exists","up"]},{"osd":1,"uuid":"99ca2b37-ae0a-4199-ac17-e89aa50eb255","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":1715502331},{"type":"v1","addr":"192.168.123.100:6811","nonce":1715502331}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":1715502331},{"type":"v1","addr":"192.168.123.100:6813","nonce":1715502331}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":1715502331},{"type":"v1","addr":"192.168.123.100:6817","nonce":1715502331}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":1715502331},{"type":"v1","addr":"192.168.123.100:6815","nonce":1715502331}]},"public_addr":"192.168.123.100:6811/1715502331","cluster_addr":"192.168.123.100:6813/1715502331","heartbeat_back_addr":"192.168.123.100:6817/1715502331","heartbeat_front_addr":"192.168.123.100:6815/1715502331","state":["exists","up"]},{"osd":2,"uuid":"7d09342f-42e2-41fc-9c97-fa4b821fa628","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":65,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":3026087437},{"type":"v1","addr":"192.168.123.100:6819","nonce":3026087437}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":3026087437},{"type":"v1","addr":"192.168.123.100:6821","nonce":3026087437}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6824","nonce":3026087437},{"type":"v1","addr":"192.168.123.100:6825","nonce":3026087437}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":3026087437},{"type":"v1","addr":"192.168.123.100:6823","nonce":3026087437}]},"public_addr":"192.168.123.100:6819/3026087437","cluster_addr":"192.168.123.100:6821/3026087437","heartbeat_back_addr":"192.168.123.100:6825/3026087437","heartbeat_front_addr":"192.168.123.100:6823/3026087437","state":["exists","up"]},{"osd":3,"uuid":"76d2f5e3-81b1-4e08-917a-1bb3561d67e1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_fro
m":26,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6826","nonce":2171328275},{"type":"v1","addr":"192.168.123.100:6827","nonce":2171328275}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6828","nonce":2171328275},{"type":"v1","addr":"192.168.123.100:6829","nonce":2171328275}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6832","nonce":2171328275},{"type":"v1","addr":"192.168.123.100:6833","nonce":2171328275}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6830","nonce":2171328275},{"type":"v1","addr":"192.168.123.100:6831","nonce":2171328275}]},"public_addr":"192.168.123.100:6827/2171328275","cluster_addr":"192.168.123.100:6829/2171328275","heartbeat_back_addr":"192.168.123.100:6833/2171328275","heartbeat_front_addr":"192.168.123.100:6831/2171328275","state":["exists","up"]},{"osd":4,"uuid":"f7c9bda9-fb82-468f-b7f9-e588fcc193bf","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":31,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6800","nonce":2627693272},{"type":"v1","addr":"192.168.123.103:6801","nonce":2627693272}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":2627693272},{"type":"v1","addr":"192.168.123.103:6803","nonce":2627693272}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":2627693272},{"type":"v1","addr":"192.168.123.103:6807","nonce":2627693272}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":2627693272},{"type":"v1","addr":"192.168.123.103:6805","nonce":2627693272}]},"public_addr":"192.168.123.103:6801/2627693272","cluster_addr":"192.168.123.103:6803/2627693272","heartbeat_back_addr":"192.168.123.103:6807/2627693272","heartbeat_front_addr":"192.168.123.103:6805/2627693272","state":["exists","up"]},{"osd":5,"uuid":"361df97b-1006-4ba7-a86f-36dc13915955","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":38,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":3238215945},{"type":"v1","addr":"192.168.123.103:6809","nonce":3238215945}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6810","nonce":3238215945},{"type":"v1","addr":"192.168.123.103:6811","nonce":3238215945}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6814","nonce":3238215945},{"type":"v1","addr":"192.168.123.103:6815","nonce":3238215945}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6812","nonce":3238215945},{"type":"v1","addr":"192.168.123.103:6813","nonce":3238215945}]},"public_addr":"192.168.123.103:6809/3238215945","cluster_addr":"192.168.123.103:6811/3238215945","heartbeat_back_addr":"192.168.123.103:6815/3238215945","heartbeat_front_addr":"192.168.123.103:6813/3238215945","state":["exists","up"]},{"osd":6,"uuid":"a6dfdf0a-06d2-49ea-8222-a0f8f776983e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":44,"up_thru":58,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6816","nonce":665664252},{"type":"v1","addr":"192.168.123.103:6817","nonce":665664252}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6818","nonce":665664252},{"type":"v1","addr":"192.168.123.103:6819","nonce":665664252}]},"heartbeat_ba
ck_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6822","nonce":665664252},{"type":"v1","addr":"192.168.123.103:6823","nonce":665664252}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6820","nonce":665664252},{"type":"v1","addr":"192.168.123.103:6821","nonce":665664252}]},"public_addr":"192.168.123.103:6817/665664252","cluster_addr":"192.168.123.103:6819/665664252","heartbeat_back_addr":"192.168.123.103:6823/665664252","heartbeat_front_addr":"192.168.123.103:6821/665664252","state":["exists","up"]},{"osd":7,"uuid":"6b79230f-59b8-4c24-91c0-cf41cbad4dc5","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":51,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6824","nonce":3078297940},{"type":"v1","addr":"192.168.123.103:6825","nonce":3078297940}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6826","nonce":3078297940},{"type":"v1","addr":"192.168.123.103:6827","nonce":3078297940}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6830","nonce":3078297940},{"type":"v1","addr":"192.168.123.103:6831","nonce":3078297940}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6828","nonce":3078297940},{"type":"v1","addr":"192.168.123.103:6829","nonce":3078297940}]},"public_addr":"192.168.123.103:6825/3078297940","cluster_addr":"192.168.123.103:6827/3078297940","heartbeat_back_addr":"192.168.123.103:6831/3078297940","heartbeat_front_addr":"192.168.123.103:6829/3078297940","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:21:17.397942+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:21:51.044130+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:22:23.888280+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:22:57.672323+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:23:31.220146+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:24:04.406099+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:24:38.896132+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:25:13.178511+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[{"pgid":"2.8","mappings":[{"from":7,"to":2}]}],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:6801/2669938860":"2026-03-11T07:26:08.524737+0000","192.168.123.100:6800/2669938860":"2026-03-11T07:26:08.524737+0000","192.168.123.100:6800/2344477988":"2026-03-11T07:19:40.638072+0000","192.168.123.100
:0/2484343054":"2026-03-11T07:26:08.524737+0000","192.168.123.100:0/2755473020":"2026-03-11T07:26:08.524737+0000","192.168.123.100:0/1894884310":"2026-03-11T07:26:08.524737+0000","192.168.123.100:6801/2344477988":"2026-03-11T07:19:40.638072+0000","192.168.123.100:0/1054483043":"2026-03-11T07:19:40.638072+0000","192.168.123.100:0/2799046240":"2026-03-11T07:19:51.853862+0000","192.168.123.100:6801/1944661180":"2026-03-11T07:19:51.853862+0000","192.168.123.100:0/1284741572":"2026-03-11T07:26:08.524737+0000","192.168.123.100:0/57166232":"2026-03-11T07:19:40.638072+0000","192.168.123.100:0/1289482675":"2026-03-11T07:19:51.853862+0000","192.168.123.100:0/709545184":"2026-03-11T07:19:40.638072+0000","192.168.123.100:0/532732704":"2026-03-11T07:19:51.853862+0000","192.168.123.100:6800/1944661180":"2026-03-11T07:19:51.853862+0000","192.168.123.100:0/3071319423":"2026-03-11T07:26:08.524737+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T07:26:34.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.486+0000 7f83529b2640 1 -- 192.168.123.100:0/3195515907 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f8314077740 msgr2=0x7f8314079c00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:34.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.486+0000 7f83529b2640 1 --2- 192.168.123.100:0/3195515907 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f8314077740 0x7f8314079c00 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f834c108140 tx=0x7f833c009290 comp rx=0 tx=0).stop 2026-03-10T07:26:34.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.486+0000 7f83529b2640 1 -- 192.168.123.100:0/3195515907 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f834c102a70 msgr2=0x7f834c198380 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:34.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.486+0000 7f83529b2640 1 --2- 192.168.123.100:0/3195515907 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f834c102a70 0x7f834c198380 secure :-1 s=READY pgs=162 cs=0 l=1 rev1=1 crypto rx=0x7f8338002990 tx=0x7f8338002e60 comp rx=0 tx=0).stop 2026-03-10T07:26:34.487 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.486+0000 7f83529b2640 1 -- 192.168.123.100:0/3195515907 shutdown_connections 2026-03-10T07:26:34.487 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.486+0000 7f83529b2640 1 --2- 192.168.123.100:0/3195515907 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f8314077740 0x7f8314079c00 unknown :-1 s=CLOSED pgs=29 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:34.487 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.486+0000 7f83529b2640 1 --2- 192.168.123.100:0/3195515907 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f834c108a70 0x7f834c19cc50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:34.487 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.486+0000 
7f83529b2640 1 --2- 192.168.123.100:0/3195515907 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f834c103410 0x7f834c1988c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:34.487 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.486+0000 7f83529b2640 1 --2- 192.168.123.100:0/3195515907 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f834c102a70 0x7f834c198380 unknown :-1 s=CLOSED pgs=162 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:34.487 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.486+0000 7f83529b2640 1 -- 192.168.123.100:0/3195515907 >> 192.168.123.100:0/3195515907 conn(0x7f834c0fe790 msgr2=0x7f834c1000d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:34.487 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.486+0000 7f83529b2640 1 -- 192.168.123.100:0/3195515907 shutdown_connections 2026-03-10T07:26:34.487 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:34.486+0000 7f83529b2640 1 -- 192.168.123.100:0/3195515907 wait complete. 2026-03-10T07:26:34.544 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-10T07:26:34.544 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph osd dump --format=json 2026-03-10T07:26:34.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:34 vm00 bash[20701]: audit 2026-03-10T07:26:33.627403+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:34.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:34 vm00 bash[20701]: audit 2026-03-10T07:26:33.627403+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:34.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:34 vm00 bash[20701]: audit 2026-03-10T07:26:34.486272+0000 mon.a (mon.0) 778 : audit [DBG] from='client.? 192.168.123.100:0/3195515907' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:26:34.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:34 vm00 bash[20701]: audit 2026-03-10T07:26:34.486272+0000 mon.a (mon.0) 778 : audit [DBG] from='client.? 192.168.123.100:0/3195515907' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:26:34.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:34 vm00 bash[28005]: audit 2026-03-10T07:26:33.627403+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:34.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:34 vm00 bash[28005]: audit 2026-03-10T07:26:33.627403+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:34.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:34 vm00 bash[28005]: audit 2026-03-10T07:26:34.486272+0000 mon.a (mon.0) 778 : audit [DBG] from='client.? 192.168.123.100:0/3195515907' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:26:34.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:34 vm00 bash[28005]: audit 2026-03-10T07:26:34.486272+0000 mon.a (mon.0) 778 : audit [DBG] from='client.? 
192.168.123.100:0/3195515907' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:26:35.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:34 vm03 bash[23382]: audit 2026-03-10T07:26:33.627403+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:35.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:34 vm03 bash[23382]: audit 2026-03-10T07:26:33.627403+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:35.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:34 vm03 bash[23382]: audit 2026-03-10T07:26:34.486272+0000 mon.a (mon.0) 778 : audit [DBG] from='client.? 192.168.123.100:0/3195515907' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:26:35.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:34 vm03 bash[23382]: audit 2026-03-10T07:26:34.486272+0000 mon.a (mon.0) 778 : audit [DBG] from='client.? 192.168.123.100:0/3195515907' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:26:36.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:35 vm03 bash[23382]: cluster 2026-03-10T07:26:34.555058+0000 mgr.y (mgr.24407) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:36.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:35 vm03 bash[23382]: cluster 2026-03-10T07:26:34.555058+0000 mgr.y (mgr.24407) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:36.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:35 vm00 bash[28005]: cluster 2026-03-10T07:26:34.555058+0000 mgr.y (mgr.24407) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:36.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:35 vm00 bash[28005]: cluster 2026-03-10T07:26:34.555058+0000 mgr.y (mgr.24407) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:36.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:35 vm00 bash[20701]: cluster 2026-03-10T07:26:34.555058+0000 mgr.y (mgr.24407) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:36.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:35 vm00 bash[20701]: cluster 2026-03-10T07:26:34.555058+0000 mgr.y (mgr.24407) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:37 vm00 bash[28005]: cluster 2026-03-10T07:26:36.555473+0000 mgr.y (mgr.24407) 41 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:37 vm00 bash[28005]: cluster 2026-03-10T07:26:36.555473+0000 mgr.y (mgr.24407) 41 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:37 vm00 bash[28005]: audit 2026-03-10T07:26:36.608260+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24407 ' 
entity='mgr.y' 2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:37 vm00 bash[28005]: audit 2026-03-10T07:26:36.608260+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:37 vm00 bash[28005]: audit 2026-03-10T07:26:36.616353+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:37 vm00 bash[28005]: audit 2026-03-10T07:26:36.616353+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:37 vm00 bash[28005]: audit 2026-03-10T07:26:37.131447+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:37 vm00 bash[28005]: audit 2026-03-10T07:26:37.131447+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:37 vm00 bash[28005]: audit 2026-03-10T07:26:37.138135+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:37 vm00 bash[28005]: audit 2026-03-10T07:26:37.138135+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:37 vm00 bash[28005]: audit 2026-03-10T07:26:37.140869+0000 mon.c (mon.2) 55 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:37 vm00 bash[28005]: audit 2026-03-10T07:26:37.140869+0000 mon.c (mon.2) 55 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:37 vm00 bash[28005]: audit 2026-03-10T07:26:37.141711+0000 mon.c (mon.2) 56 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:37 vm00 bash[28005]: audit 2026-03-10T07:26:37.141711+0000 mon.c (mon.2) 56 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:37 vm00 bash[28005]: audit 2026-03-10T07:26:37.145994+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:37 vm00 bash[28005]: audit 2026-03-10T07:26:37.145994+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:37 vm00 bash[28005]: cephadm 2026-03-10T07:26:37.159676+0000 mgr.y (mgr.24407) 42 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:37 vm00 bash[28005]: cephadm 2026-03-10T07:26:37.159676+0000 mgr.y (mgr.24407) 42 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 
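The large JSON document a few lines up is the ceph osd dump --format=json output at epoch 68: six pools, eight OSDs, and every OSD reporting "up":1,"in":1, which is the condition the tasks.cephadm.ceph_manager.ceph "all up!" check is looking for before teuthology re-runs the same dump through cephadm shell. A minimal sketch of that style of all-up check, reusing the invocation the log shows but as a hypothetical helper rather than the actual ceph_manager code:

    # Minimal "all OSDs up and in" sketch over `ceph osd dump --format=json`
    # output like the dump above. Hypothetical helper, not the actual
    # teuthology/ceph_manager implementation.
    import json
    import subprocess

    def all_osds_up_in(fsid: str, image: str) -> bool:
        # Same invocation the log shows teuthology using:
        #   sudo cephadm --image <image> shell --fsid <fsid> -- \
        #       ceph osd dump --format=json
        out = subprocess.check_output([
            "sudo", "cephadm", "--image", image,
            "shell", "--fsid", fsid, "--",
            "ceph", "osd", "dump", "--format=json",
        ])
        osds = json.loads(out)["osds"]
        return bool(osds) and all(o["up"] == 1 and o["in"] == 1 for o in osds)

The JSON arrives on stdout while the msgr debug chatter goes to stderr, so check_output sees only the document to parse.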
2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:37 vm00 bash[28005]: cephadm 2026-03-10T07:26:37.162257+0000 mgr.y (mgr.24407) 43 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm00
2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:37 vm00 bash[28005]: cephadm 2026-03-10T07:26:37.162257+0000 mgr.y (mgr.24407) 43 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm00
2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[20701]: cluster 2026-03-10T07:26:36.555473+0000 mgr.y (mgr.24407) 41 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[20701]: cluster 2026-03-10T07:26:36.555473+0000 mgr.y (mgr.24407) 41 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[20701]: audit 2026-03-10T07:26:36.608260+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[20701]: audit 2026-03-10T07:26:36.608260+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[20701]: audit 2026-03-10T07:26:36.616353+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[20701]: audit 2026-03-10T07:26:36.616353+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[20701]: audit 2026-03-10T07:26:37.131447+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[20701]: audit 2026-03-10T07:26:37.131447+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[20701]: audit 2026-03-10T07:26:37.138135+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[20701]: audit 2026-03-10T07:26:37.138135+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[20701]: audit 2026-03-10T07:26:37.140869+0000 mon.c (mon.2) 55 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:26:37.706 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[20701]: audit 2026-03-10T07:26:37.140869+0000 mon.c (mon.2) 55 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:26:37.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[20701]: audit 2026-03-10T07:26:37.141711+0000 mon.c (mon.2) 56 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:26:37.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[20701]: audit 2026-03-10T07:26:37.141711+0000 mon.c (mon.2) 56 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:26:37.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[20701]: audit 2026-03-10T07:26:37.145994+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:37.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[20701]: audit 2026-03-10T07:26:37.145994+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:37.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[20701]: cephadm 2026-03-10T07:26:37.159676+0000 mgr.y (mgr.24407) 42 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)...
2026-03-10T07:26:37.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[20701]: cephadm 2026-03-10T07:26:37.159676+0000 mgr.y (mgr.24407) 42 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)...
2026-03-10T07:26:37.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[20701]: cephadm 2026-03-10T07:26:37.162257+0000 mgr.y (mgr.24407) 43 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm00
2026-03-10T07:26:37.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[20701]: cephadm 2026-03-10T07:26:37.162257+0000 mgr.y (mgr.24407) 43 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm00
2026-03-10T07:26:37.707 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:37 vm00 systemd[1]: Stopping Ceph alertmanager.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953...
2026-03-10T07:26:37.960 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[55893]: ts=2026-03-10T07:26:37.705Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
2026-03-10T07:26:37.960 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[56646]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953-alertmanager-a
2026-03-10T07:26:37.960 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:37 vm00 systemd[1]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@alertmanager.a.service: Deactivated successfully.
2026-03-10T07:26:37.960 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:37 vm00 systemd[1]: Stopped Ceph alertmanager.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953.
2026-03-10T07:26:37.960 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:37 vm00 systemd[1]: Started Ceph alertmanager.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953.
2026-03-10T07:26:37.960 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[56723]: ts=2026-03-10T07:26:37.939Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
2026-03-10T07:26:37.960 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[56723]: ts=2026-03-10T07:26:37.939Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
2026-03-10T07:26:37.960 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[56723]: ts=2026-03-10T07:26:37.941Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.100 port=9094
2026-03-10T07:26:37.960 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[56723]: ts=2026-03-10T07:26:37.942Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
2026-03-10T07:26:38.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:37 vm03 bash[23382]: cluster 2026-03-10T07:26:36.555473+0000 mgr.y (mgr.24407) 41 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:26:38.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:37 vm03 bash[23382]: audit 2026-03-10T07:26:36.608260+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:38.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:37 vm03 bash[23382]: audit 2026-03-10T07:26:36.616353+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:38.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:37 vm03 bash[23382]: audit 2026-03-10T07:26:37.131447+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:38.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:37 vm03 bash[23382]: audit 2026-03-10T07:26:37.138135+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:38.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:37 vm03 bash[23382]: audit 2026-03-10T07:26:37.140869+0000 mon.c (mon.2) 55 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:26:38.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:37 vm03 bash[23382]: audit 2026-03-10T07:26:37.141711+0000 mon.c (mon.2) 56 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:26:38.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:37 vm03 bash[23382]: audit 2026-03-10T07:26:37.145994+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:38.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:37 vm03 bash[23382]: cephadm 2026-03-10T07:26:37.159676+0000 mgr.y (mgr.24407) 42 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)...
2026-03-10T07:26:38.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:37 vm03 bash[23382]: cephadm 2026-03-10T07:26:37.162257+0000 mgr.y (mgr.24407) 43 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm00
2026-03-10T07:26:38.260 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config
2026-03-10T07:26:38.333 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[56723]: ts=2026-03-10T07:26:37.963Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
2026-03-10T07:26:38.333 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[56723]: ts=2026-03-10T07:26:37.964Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
2026-03-10T07:26:38.333 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[56723]: ts=2026-03-10T07:26:37.965Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093
2026-03-10T07:26:38.333 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:37 vm00 bash[56723]: ts=2026-03-10T07:26:37.965Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9093
2026-03-10T07:26:38.441 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.438+0000 7fabc7d91640 1 -- 192.168.123.100:0/4058622725 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fabc0077620 msgr2=0x7fabc0077a00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:38.441 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.438+0000 7fabc7d91640 1 --2- 192.168.123.100:0/4058622725 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fabc0077620 0x7fabc0077a00 secure :-1 s=READY pgs=163 cs=0 l=1 rev1=1 crypto rx=0x7fabb0009a30 tx=0x7fabb002f220 comp rx=0 tx=0).stop
2026-03-10T07:26:38.441 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.438+0000 7fabc7d91640 1 -- 192.168.123.100:0/4058622725 shutdown_connections
2026-03-10T07:26:38.441 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.438+0000 7fabc7d91640 1 --2- 192.168.123.100:0/4058622725 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fabc0113bb0 0x7fabc0115fa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:38.441 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.438+0000 7fabc7d91640 1 --2- 192.168.123.100:0/4058622725 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fabc0077f40 0x7fabc0113670 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:38.441 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.438+0000 7fabc7d91640 1 --2- 192.168.123.100:0/4058622725 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fabc0077620 0x7fabc0077a00 unknown :-1 s=CLOSED pgs=163 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:38.441 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.438+0000 7fabc7d91640 1 -- 192.168.123.100:0/4058622725 >> 192.168.123.100:0/4058622725 conn(0x7fabc0100a70 msgr2=0x7fabc0102e90 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:38.441 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.438+0000 7fabc7d91640 1 -- 192.168.123.100:0/4058622725 shutdown_connections
2026-03-10T07:26:38.441 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.438+0000 7fabc7d91640 1 -- 192.168.123.100:0/4058622725 wait complete.
2026-03-10T07:26:38.441 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.438+0000 7fabc7d91640 1 Processor -- start
2026-03-10T07:26:38.442 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.438+0000 7fabc7d91640 1 -- start start
2026-03-10T07:26:38.442 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabc7d91640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fabc0077620 0x7fabc01a0ee0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:38.442 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabc7d91640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fabc0077f40 0x7fabc01a1420 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:38.442 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabc7d91640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fabc0113bb0 0x7fabc01a57b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:38.443 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabc7d91640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fabc01187d0 con 0x7fabc0077f40
2026-03-10T07:26:38.443 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabc7d91640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7fabc0118650 con 0x7fabc0077620
2026-03-10T07:26:38.443 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabc7d91640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fabc0118950 con 0x7fabc0113bb0
2026-03-10T07:26:38.443 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabc5b06640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fabc0077620 0x7fabc01a0ee0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:38.443 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabc5b06640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fabc0077620 0x7fabc01a0ee0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.103:3300/0 says I am v2:192.168.123.100:41676/0 (socket says 192.168.123.100:41676)
2026-03-10T07:26:38.443 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabc5b06640 1 -- 192.168.123.100:0/3130234245 learned_addr learned my addr 192.168.123.100:0/3130234245 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:26:38.443 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabc6307640 1 --2- 192.168.123.100:0/3130234245 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fabc0113bb0 0x7fabc01a57b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:38.443 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabc5305640 1 --2- 192.168.123.100:0/3130234245 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fabc0077f40 0x7fabc01a1420 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:38.443 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabc6307640 1 -- 192.168.123.100:0/3130234245 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fabc0077620 msgr2=0x7fabc01a0ee0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:38.443 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabc6307640 1 --2- 192.168.123.100:0/3130234245 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fabc0077620 0x7fabc01a0ee0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:38.443 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabc6307640 1 -- 192.168.123.100:0/3130234245 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fabc0077f40 msgr2=0x7fabc01a1420 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:38.443 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabc6307640 1 --2- 192.168.123.100:0/3130234245 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fabc0077f40 0x7fabc01a1420 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:38.443 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabc6307640 1 -- 192.168.123.100:0/3130234245 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fabc01a5e90 con 0x7fabc0113bb0
2026-03-10T07:26:38.444 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabc6307640 1 --2- 192.168.123.100:0/3130234245 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fabc0113bb0 0x7fabc01a57b0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7fabbc00ce80 tx=0x7fabbc00a430 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:38.444 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabaeffd640 1 -- 192.168.123.100:0/3130234245 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fabbc017070 con 0x7fabc0113bb0
2026-03-10T07:26:38.445 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabaeffd640 1 -- 192.168.123.100:0/3130234245 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fabbc00adf0 con 0x7fabc0113bb0
2026-03-10T07:26:38.445 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabaeffd640 1 -- 192.168.123.100:0/3130234245 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fabbc012400 con 0x7fabc0113bb0
2026-03-10T07:26:38.445 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabc7d91640 1 -- 192.168.123.100:0/3130234245 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fabc01a6180 con 0x7fabc0113bb0
2026-03-10T07:26:38.445 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabc7d91640 1 -- 192.168.123.100:0/3130234245 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7fabc01ada60 con 0x7fabc0113bb0
2026-03-10T07:26:38.446 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.442+0000 7fabc7d91640 1 -- 192.168.123.100:0/3130234245 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fab88005180 con 0x7fabc0113bb0
2026-03-10T07:26:38.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.446+0000 7fabaeffd640 1 -- 192.168.123.100:0/3130234245 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7fabbc002a60 con 0x7fabc0113bb0
2026-03-10T07:26:38.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.446+0000 7fabaeffd640 1 --2- 192.168.123.100:0/3130234245 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fab9c077700 0x7fab9c079bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:38.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.446+0000 7fabaeffd640 1 -- 192.168.123.100:0/3130234245 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7fabbc099730 con 0x7fabc0113bb0
2026-03-10T07:26:38.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.450+0000 7fabc5b06640 1 --2- 192.168.123.100:0/3130234245 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fab9c077700 0x7fab9c079bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:38.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.450+0000 7fabaeffd640 1 -- 192.168.123.100:0/3130234245 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fabbc0660b0 con 0x7fabc0113bb0
2026-03-10T07:26:38.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.450+0000 7fabc5b06640 1 --2- 192.168.123.100:0/3130234245 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fab9c077700 0x7fab9c079bc0 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7fabb00097c0 tx=0x7fabb003a040 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:38.546 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.546+0000 7fabc7d91640 1 -- 192.168.123.100:0/3130234245 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7fab88005470 con 0x7fabc0113bb0
2026-03-10T07:26:38.548 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.546+0000 7fabaeffd640 1 -- 192.168.123.100:0/3130234245 <== mon.2 v2:192.168.123.100:3301/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v68) ==== 74+0+23504 (secure 0 0 0) 0x7fabbc06af60 con 0x7fabc0113bb0
2026-03-10T07:26:38.549 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T07:26:38.549
INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":68,"fsid":"534d9c8a-1c51-11f1-ac87-d1fb9a119953","created":"2026-03-10T07:19:29.470223+0000","modified":"2026-03-10T07:26:08.524766+0000","last_up_change":"2026-03-10T07:25:15.530223+0000","last_in_change":"2026-03-10T07:24:58.209477+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"luminous","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T07:22:27.928486+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":".rgw.root","create_time":"2026-03-10T07:25:34.918040+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"56","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":"default.rgw.log","create_time":"2026-03-10T07:25:37.015057+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"58","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"datapool","create_time":"2026-03-10T07:25:38.852316+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"64","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":64,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-10T07:25:38.976738+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"60","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-10T07:25:41.043916+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"62","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"103cba6f-bd9d-4169-adab-61ce873b1107","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":944390886},{"type":"v1","addr":"192.168.123.100:6803","nonce":944390886}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":944390886},{"type":"v1","addr":"192.168.123.100:6805","nonce":944390886}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":944390886},{"type":"v1","addr":"192.168.123.100:6809","nonce":944390886}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":944390886},{"type":"v1","addr":"192.168.123.100:6807","nonce":944390886}]},"public_addr":"192.168.123.100:6803/944390886","cluster_addr":"192.168.123.100:6805/944390886","heartbeat_back_addr":"192.168.123.100:6809/944390886","heartbeat_front_addr":"192.168.123.100:6807/944390886","state":["exists","up"]},{"osd":1,"uuid":"99ca2b37-ae0a-4199-ac17-e89aa50eb255","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":1715502331},{"type":"v1","addr":"192.168.123.100:6811","nonce":1715502331}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":1715502331},{"type":"v1","addr":"192.168.123.100:6813","nonce":1715502331}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":1715502331},{"type":"v1","addr":"192.168.123.100:6817","nonce":1715502331}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":1715502331},{"type":"v1","addr":"192.168.123.100:6815","nonce":1715502331}]},"public_addr":"192.168.123.100:6811/1715502331","cluster_addr":"192.168.123.100:6813/1715502331","heartbeat_back_addr":"192.168.123.100:6817/1715502331","heartbeat_front_addr":"192.168.123.100:6815/1715502331","state":["exists","up"]},{"osd":2,"uuid":"7d09342f-42e2-41fc-9c97-fa4b821fa628","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":65,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":3026087437},{"type":"v1","addr":"192.168.123.100:6819","nonce":3026087437}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":3026087437},{"type":"v1","addr":"192.168.123.100:6821","nonce":3026087437}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6824","nonce":3026087437},{"type":"v1","addr":"192.168.123.100:6825","nonce":3026087437}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":3026087437},{"type":"v1","addr":"192.168.123.100:6823","nonce":3026087437}]},"public_addr":"192.168.123.100:6819/3026087437","cluster_addr":"192.168.123.100:6821/3026087437","heartbeat_back_addr":"192.168.123.100:6825/3026087437","heartbeat_front_addr":"192.168.123.100:6823/3026087437","state":["exists","up"]},{"osd":3,"uuid":"76d2f5e3-81b1-4e08-917a-1bb3561d67e1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_fro
m":26,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6826","nonce":2171328275},{"type":"v1","addr":"192.168.123.100:6827","nonce":2171328275}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6828","nonce":2171328275},{"type":"v1","addr":"192.168.123.100:6829","nonce":2171328275}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6832","nonce":2171328275},{"type":"v1","addr":"192.168.123.100:6833","nonce":2171328275}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6830","nonce":2171328275},{"type":"v1","addr":"192.168.123.100:6831","nonce":2171328275}]},"public_addr":"192.168.123.100:6827/2171328275","cluster_addr":"192.168.123.100:6829/2171328275","heartbeat_back_addr":"192.168.123.100:6833/2171328275","heartbeat_front_addr":"192.168.123.100:6831/2171328275","state":["exists","up"]},{"osd":4,"uuid":"f7c9bda9-fb82-468f-b7f9-e588fcc193bf","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":31,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6800","nonce":2627693272},{"type":"v1","addr":"192.168.123.103:6801","nonce":2627693272}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":2627693272},{"type":"v1","addr":"192.168.123.103:6803","nonce":2627693272}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":2627693272},{"type":"v1","addr":"192.168.123.103:6807","nonce":2627693272}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":2627693272},{"type":"v1","addr":"192.168.123.103:6805","nonce":2627693272}]},"public_addr":"192.168.123.103:6801/2627693272","cluster_addr":"192.168.123.103:6803/2627693272","heartbeat_back_addr":"192.168.123.103:6807/2627693272","heartbeat_front_addr":"192.168.123.103:6805/2627693272","state":["exists","up"]},{"osd":5,"uuid":"361df97b-1006-4ba7-a86f-36dc13915955","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":38,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":3238215945},{"type":"v1","addr":"192.168.123.103:6809","nonce":3238215945}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6810","nonce":3238215945},{"type":"v1","addr":"192.168.123.103:6811","nonce":3238215945}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6814","nonce":3238215945},{"type":"v1","addr":"192.168.123.103:6815","nonce":3238215945}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6812","nonce":3238215945},{"type":"v1","addr":"192.168.123.103:6813","nonce":3238215945}]},"public_addr":"192.168.123.103:6809/3238215945","cluster_addr":"192.168.123.103:6811/3238215945","heartbeat_back_addr":"192.168.123.103:6815/3238215945","heartbeat_front_addr":"192.168.123.103:6813/3238215945","state":["exists","up"]},{"osd":6,"uuid":"a6dfdf0a-06d2-49ea-8222-a0f8f776983e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":44,"up_thru":58,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6816","nonce":665664252},{"type":"v1","addr":"192.168.123.103:6817","nonce":665664252}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6818","nonce":665664252},{"type":"v1","addr":"192.168.123.103:6819","nonce":665664252}]},"heartbeat_ba
ck_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6822","nonce":665664252},{"type":"v1","addr":"192.168.123.103:6823","nonce":665664252}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6820","nonce":665664252},{"type":"v1","addr":"192.168.123.103:6821","nonce":665664252}]},"public_addr":"192.168.123.103:6817/665664252","cluster_addr":"192.168.123.103:6819/665664252","heartbeat_back_addr":"192.168.123.103:6823/665664252","heartbeat_front_addr":"192.168.123.103:6821/665664252","state":["exists","up"]},{"osd":7,"uuid":"6b79230f-59b8-4c24-91c0-cf41cbad4dc5","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":51,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6824","nonce":3078297940},{"type":"v1","addr":"192.168.123.103:6825","nonce":3078297940}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6826","nonce":3078297940},{"type":"v1","addr":"192.168.123.103:6827","nonce":3078297940}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6830","nonce":3078297940},{"type":"v1","addr":"192.168.123.103:6831","nonce":3078297940}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6828","nonce":3078297940},{"type":"v1","addr":"192.168.123.103:6829","nonce":3078297940}]},"public_addr":"192.168.123.103:6825/3078297940","cluster_addr":"192.168.123.103:6827/3078297940","heartbeat_back_addr":"192.168.123.103:6831/3078297940","heartbeat_front_addr":"192.168.123.103:6829/3078297940","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:21:17.397942+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:21:51.044130+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:22:23.888280+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:22:57.672323+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:23:31.220146+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:24:04.406099+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:24:38.896132+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:25:13.178511+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[{"pgid":"2.8","mappings":[{"from":7,"to":2}]}],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:6801/2669938860":"2026-03-11T07:26:08.524737+0000","192.168.123.100:6800/2669938860":"2026-03-11T07:26:08.524737+0000","192.168.123.100:6800/2344477988":"2026-03-11T07:19:40.638072+0000","192.168.123.100
:0/2484343054":"2026-03-11T07:26:08.524737+0000","192.168.123.100:0/2755473020":"2026-03-11T07:26:08.524737+0000","192.168.123.100:0/1894884310":"2026-03-11T07:26:08.524737+0000","192.168.123.100:6801/2344477988":"2026-03-11T07:19:40.638072+0000","192.168.123.100:0/1054483043":"2026-03-11T07:19:40.638072+0000","192.168.123.100:0/2799046240":"2026-03-11T07:19:51.853862+0000","192.168.123.100:6801/1944661180":"2026-03-11T07:19:51.853862+0000","192.168.123.100:0/1284741572":"2026-03-11T07:26:08.524737+0000","192.168.123.100:0/57166232":"2026-03-11T07:19:40.638072+0000","192.168.123.100:0/1289482675":"2026-03-11T07:19:51.853862+0000","192.168.123.100:0/709545184":"2026-03-11T07:19:40.638072+0000","192.168.123.100:0/532732704":"2026-03-11T07:19:51.853862+0000","192.168.123.100:6800/1944661180":"2026-03-11T07:19:51.853862+0000","192.168.123.100:0/3071319423":"2026-03-11T07:26:08.524737+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}}
2026-03-10T07:26:38.551 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.550+0000 7fabc7d91640 1 -- 192.168.123.100:0/3130234245 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fab9c077700 msgr2=0x7fab9c079bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:38.551 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.550+0000 7fabc7d91640 1 --2- 192.168.123.100:0/3130234245 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fab9c077700 0x7fab9c079bc0 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7fabb00097c0 tx=0x7fabb003a040 comp rx=0 tx=0).stop
2026-03-10T07:26:38.552 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.550+0000 7fabc7d91640 1 -- 192.168.123.100:0/3130234245 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fabc0113bb0 msgr2=0x7fabc01a57b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:38.552 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.550+0000 7fabc7d91640 1 --2- 192.168.123.100:0/3130234245 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fabc0113bb0 0x7fabc01a57b0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7fabbc00ce80 tx=0x7fabbc00a430 comp rx=0 tx=0).stop
2026-03-10T07:26:38.553 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.550+0000 7fabc7d91640 1 -- 192.168.123.100:0/3130234245 shutdown_connections
2026-03-10T07:26:38.553 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.550+0000 7fabc7d91640 1 --2- 192.168.123.100:0/3130234245 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fab9c077700 0x7fab9c079bc0 unknown :-1 s=CLOSED pgs=30 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:38.553 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.550+0000 7fabc7d91640 1 --2- 192.168.123.100:0/3130234245 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fabc0113bb0 0x7fabc01a57b0 unknown :-1 s=CLOSED pgs=56 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:38.553 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.550+0000 7fabc7d91640 1 --2- 192.168.123.100:0/3130234245 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fabc0077f40 0x7fabc01a1420 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
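The `osd dump --format json` payload above is a single JSON object. As a minimal illustration (not part of the test run), a sketch like the following parses such a dump and summarizes OSD and pool state; the field names (`epoch`, `max_osd`, `osds`, `up`, `in`, `pools`, `blocklist`) are taken from the output printed above:

import json

def summarize_osd_dump(dump_text):
    # dump_text: the JSON payload of `ceph osd dump --format json`,
    # e.g. the {"epoch":68,...} object captured above (assumed input).
    dump = json.loads(dump_text)
    up = [o["osd"] for o in dump["osds"] if o["up"] == 1]
    in_ = [o["osd"] for o in dump["osds"] if o["in"] == 1]
    print(f"epoch {dump['epoch']}: {len(up)}/{dump['max_osd']} OSDs up, "
          f"{len(in_)} in, {len(dump['blocklist'])} blocklist entries")
    for pool in dump["pools"]:
        print(f"pool {pool['pool']} ({pool['pool_name']}): "
              f"pg_num={pool['pg_num']} size={pool['size']} min_size={pool['min_size']}")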
2026-03-10T07:26:38.553 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.550+0000 7fabc7d91640 1 --2- 192.168.123.100:0/3130234245 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fabc0077620 0x7fabc01a0ee0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:38.553 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.550+0000 7fabc7d91640 1 -- 192.168.123.100:0/3130234245 >> 192.168.123.100:0/3130234245 conn(0x7fabc0100a70 msgr2=0x7fabc0102e60 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:38.553 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.550+0000 7fabc7d91640 1 -- 192.168.123.100:0/3130234245 shutdown_connections
2026-03-10T07:26:38.553 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:38.550+0000 7fabc7d91640 1 -- 192.168.123.100:0/3130234245 wait complete.
2026-03-10T07:26:38.568 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 systemd[1]: Stopping Ceph prometheus.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953...
2026-03-10T07:26:38.612 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph tell osd.0 flush_pg_stats
2026-03-10T07:26:38.612 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph tell osd.1 flush_pg_stats
2026-03-10T07:26:38.612 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph tell osd.2 flush_pg_stats
2026-03-10T07:26:38.612 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph tell osd.3 flush_pg_stats
2026-03-10T07:26:38.612 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph tell osd.4 flush_pg_stats
2026-03-10T07:26:38.612 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph tell osd.5 flush_pg_stats
2026-03-10T07:26:38.612 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph tell osd.6 flush_pg_stats
2026-03-10T07:26:38.613 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph tell osd.7 flush_pg_stats
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[50125]: ts=2026-03-10T07:26:38.569Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..."
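The eight `cephadm shell ... flush_pg_stats` commands above follow one fixed pattern per OSD id. A minimal sketch of the same batch, assuming the image and fsid shown in the log (an illustration, not how teuthology itself is implemented):

import subprocess

IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
FSID = "534d9c8a-1c51-11f1-ac87-d1fb9a119953"

def flush_pg_stats(osd_ids):
    # One `cephadm shell -- ceph tell osd.N flush_pg_stats` per OSD,
    # mirroring the DEBUG command lines above.
    for osd in osd_ids:
        subprocess.run(
            ["sudo", "cephadm", "--image", IMAGE, "shell", "--fsid", FSID,
             "--", "ceph", "tell", f"osd.{osd}", "flush_pg_stats"],
            check=True,
        )

flush_pg_stats(range(8))  # osd.0 .. osd.7, as in the run above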
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[50125]: ts=2026-03-10T07:26:38.569Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..."
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[50125]: ts=2026-03-10T07:26:38.569Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..."
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[50125]: ts=2026-03-10T07:26:38.569Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..."
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[50125]: ts=2026-03-10T07:26:38.569Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped"
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[50125]: ts=2026-03-10T07:26:38.569Z caller=main.go:1039 level=info msg="Stopping scrape manager..."
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[50125]: ts=2026-03-10T07:26:38.569Z caller=main.go:984 level=info msg="Scrape discovery manager stopped"
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[50125]: ts=2026-03-10T07:26:38.570Z caller=main.go:998 level=info msg="Notify discovery manager stopped"
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[50125]: ts=2026-03-10T07:26:38.570Z caller=main.go:1031 level=info msg="Scrape manager stopped"
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[50125]: ts=2026-03-10T07:26:38.572Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..."
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[50125]: ts=2026-03-10T07:26:38.572Z caller=main.go:1261 level=info msg="Notifier manager stopped"
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[50125]: ts=2026-03-10T07:26:38.572Z caller=main.go:1273 level=info msg="See you next time!"
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[51938]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953-prometheus-a
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 systemd[1]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@prometheus.a.service: Deactivated successfully.
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 systemd[1]: Stopped Ceph prometheus.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953.
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 systemd[1]: Started Ceph prometheus.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953.
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[52014]: ts=2026-03-10T07:26:38.772Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[52014]: ts=2026-03-10T07:26:38.772Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[52014]: ts=2026-03-10T07:26:38.772Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm03 (none))"
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[52014]: ts=2026-03-10T07:26:38.772Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[52014]: ts=2026-03-10T07:26:38.772Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[52014]: ts=2026-03-10T07:26:38.774Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[52014]: ts=2026-03-10T07:26:38.775Z caller=main.go:1129 level=info msg="Starting TSDB ..."
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[52014]: ts=2026-03-10T07:26:38.777Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[52014]: ts=2026-03-10T07:26:38.777Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.423µs
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[52014]: ts=2026-03-10T07:26:38.777Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[52014]: ts=2026-03-10T07:26:38.777Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[52014]: ts=2026-03-10T07:26:38.777Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9095
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[52014]: ts=2026-03-10T07:26:38.778Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=1
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[52014]: ts=2026-03-10T07:26:38.778Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=1
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[52014]: ts=2026-03-10T07:26:38.779Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=18.355µs wal_replay_duration=2.632573ms wbl_replay_duration=129ns total_replay_duration=2.665433ms
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[52014]: ts=2026-03-10T07:26:38.781Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[52014]: ts=2026-03-10T07:26:38.781Z caller=main.go:1153 level=info msg="TSDB started"
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[52014]: ts=2026-03-10T07:26:38.781Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[52014]: ts=2026-03-10T07:26:38.794Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=13.302595ms db_storage=541ns remote_storage=912ns web_handler=140ns query_engine=291ns scrape=744.734µs scrape_sd=91.743µs notify=5.45µs notify_sd=3.787µs rules=12.029004ms tracing=3.257µs
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[52014]: ts=2026-03-10T07:26:38.794Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
2026-03-10T07:26:38.820 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 07:26:38 vm03 bash[52014]: ts=2026-03-10T07:26:38.794Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
2026-03-10T07:26:38.821 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:37.811795+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.821 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:37.811795+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:37.811795+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:37.811795+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:37.821244+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:37.821244+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: cephadm 2026-03-10T07:26:37.825703+0000 mgr.y (mgr.24407) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: cephadm 2026-03-10T07:26:37.825703+0000 mgr.y (mgr.24407) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: cephadm 2026-03-10T07:26:38.029802+0000 mgr.y (mgr.24407) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm03 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: cephadm 2026-03-10T07:26:38.029802+0000 mgr.y (mgr.24407) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm03 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.550197+0000 mon.c (mon.2) 57 : audit [DBG] from='client.? 192.168.123.100:0/3130234245' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.550197+0000 mon.c (mon.2) 57 : audit [DBG] from='client.? 
192.168.123.100:0/3130234245' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.667332+0000 mon.c (mon.2) 58 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.667332+0000 mon.c (mon.2) 58 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.684565+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.684565+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.700731+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.700731+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.708722+0000 mon.c (mon.2) 59 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.708722+0000 mon.c (mon.2) 59 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.713050+0000 mon.c (mon.2) 60 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.713050+0000 mon.c (mon.2) 60 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.718855+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.718855+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.741719+0000 mon.c (mon.2) 61 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.741719+0000 
mon.c (mon.2) 61 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.744537+0000 mon.c (mon.2) 62 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.744537+0000 mon.c (mon.2) 62 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.751080+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.751080+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.778196+0000 mon.c (mon.2) 63 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.778196+0000 mon.c (mon.2) 63 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.782823+0000 mon.c (mon.2) 64 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.782823+0000 mon.c (mon.2) 64 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.788603+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:38 vm00 bash[20701]: audit 2026-03-10T07:26:38.788603+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:38 vm00 bash[20971]: [10/Mar/2026:07:26:38] ENGINE Bus STOPPING 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:37.811795+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:37.811795+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:37.821244+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24407 ' 
entity='mgr.y' 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:37.821244+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: cephadm 2026-03-10T07:26:37.825703+0000 mgr.y (mgr.24407) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: cephadm 2026-03-10T07:26:37.825703+0000 mgr.y (mgr.24407) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: cephadm 2026-03-10T07:26:38.029802+0000 mgr.y (mgr.24407) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm03 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: cephadm 2026-03-10T07:26:38.029802+0000 mgr.y (mgr.24407) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm03 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.550197+0000 mon.c (mon.2) 57 : audit [DBG] from='client.? 192.168.123.100:0/3130234245' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.550197+0000 mon.c (mon.2) 57 : audit [DBG] from='client.? 192.168.123.100:0/3130234245' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:26:38.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.667332+0000 mon.c (mon.2) 58 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.667332+0000 mon.c (mon.2) 58 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.684565+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.684565+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.700731+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.700731+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.708722+0000 mon.c (mon.2) 59 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.708722+0000 mon.c (mon.2) 59 : audit [DBG] from='mgr.24407 
192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.713050+0000 mon.c (mon.2) 60 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.713050+0000 mon.c (mon.2) 60 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.718855+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.718855+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.741719+0000 mon.c (mon.2) 61 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.741719+0000 mon.c (mon.2) 61 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.744537+0000 mon.c (mon.2) 62 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.744537+0000 mon.c (mon.2) 62 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.751080+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.751080+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.778196+0000 mon.c (mon.2) 63 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.778196+0000 mon.c (mon.2) 63 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.782823+0000 mon.c (mon.2) 64 : audit [INF] from='mgr.24407 
192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.782823+0000 mon.c (mon.2) 64 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.788603+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:38.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:38 vm00 bash[28005]: audit 2026-03-10T07:26:38.788603+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:39.079 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:39 vm00 bash[20971]: [10/Mar/2026:07:26:39] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T07:26:39.079 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:39 vm00 bash[20971]: [10/Mar/2026:07:26:39] ENGINE Bus STOPPED 2026-03-10T07:26:39.079 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:39 vm00 bash[20971]: [10/Mar/2026:07:26:39] ENGINE Bus STARTING 2026-03-10T07:26:39.180 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:39 vm00 bash[20971]: [10/Mar/2026:07:26:39] ENGINE Serving on http://:::9283 2026-03-10T07:26:39.180 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:39 vm00 bash[20971]: [10/Mar/2026:07:26:39] ENGINE Bus STARTED 2026-03-10T07:26:39.180 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:39 vm00 bash[20971]: [10/Mar/2026:07:26:39] ENGINE Bus STOPPING 2026-03-10T07:26:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:37.821244+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:37.821244+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: cephadm 2026-03-10T07:26:37.825703+0000 mgr.y (mgr.24407) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T07:26:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: cephadm 2026-03-10T07:26:37.825703+0000 mgr.y (mgr.24407) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T07:26:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: cephadm 2026-03-10T07:26:38.029802+0000 mgr.y (mgr.24407) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm03 2026-03-10T07:26:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: cephadm 2026-03-10T07:26:38.029802+0000 mgr.y (mgr.24407) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm03 2026-03-10T07:26:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.550197+0000 mon.c (mon.2) 57 : audit [DBG] from='client.? 192.168.123.100:0/3130234245' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:26:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.550197+0000 mon.c (mon.2) 57 : audit [DBG] from='client.? 
192.168.123.100:0/3130234245' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:26:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.667332+0000 mon.c (mon.2) 58 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:26:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.667332+0000 mon.c (mon.2) 58 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:26:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.684565+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.684565+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.700731+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.700731+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.708722+0000 mon.c (mon.2) 59 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T07:26:39.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.708722+0000 mon.c (mon.2) 59 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T07:26:39.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.713050+0000 mon.c (mon.2) 60 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-10T07:26:39.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.713050+0000 mon.c (mon.2) 60 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-10T07:26:39.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.718855+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:39.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.718855+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:39.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.741719+0000 mon.c (mon.2) 61 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T07:26:39.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.741719+0000 
mon.c (mon.2) 61 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T07:26:39.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.744537+0000 mon.c (mon.2) 62 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-10T07:26:39.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.744537+0000 mon.c (mon.2) 62 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-10T07:26:39.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.751080+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:39.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.751080+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:39.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.778196+0000 mon.c (mon.2) 63 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T07:26:39.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.778196+0000 mon.c (mon.2) 63 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T07:26:39.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.782823+0000 mon.c (mon.2) 64 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T07:26:39.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.782823+0000 mon.c (mon.2) 64 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T07:26:39.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.788603+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:39.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:38 vm03 bash[23382]: audit 2026-03-10T07:26:38.788603+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:26:39.383 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:39 vm00 bash[20971]: [10/Mar/2026:07:26:39] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T07:26:39.383 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:39 vm00 bash[20971]: [10/Mar/2026:07:26:39] ENGINE Bus STOPPED 2026-03-10T07:26:39.383 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:39 vm00 bash[20971]: [10/Mar/2026:07:26:39] ENGINE Bus STARTING 2026-03-10T07:26:39.383 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:39 vm00 bash[20971]: [10/Mar/2026:07:26:39] ENGINE Serving on http://:::9283 2026-03-10T07:26:39.383 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:39 vm00 bash[20971]: 
[10/Mar/2026:07:26:39] ENGINE Bus STARTED 2026-03-10T07:26:39.383 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:39 vm00 bash[20971]: [10/Mar/2026:07:26:39] ENGINE Bus STOPPING 2026-03-10T07:26:39.383 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:39 vm00 bash[20971]: [10/Mar/2026:07:26:39] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T07:26:39.383 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:39 vm00 bash[20971]: [10/Mar/2026:07:26:39] ENGINE Bus STOPPED 2026-03-10T07:26:39.384 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:39 vm00 bash[20971]: [10/Mar/2026:07:26:39] ENGINE Bus STARTING 2026-03-10T07:26:39.848 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:39 vm00 bash[20971]: [10/Mar/2026:07:26:39] ENGINE Serving on http://:::9283 2026-03-10T07:26:39.848 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:39 vm00 bash[20971]: [10/Mar/2026:07:26:39] ENGINE Bus STARTED 2026-03-10T07:26:40.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:39 vm00 bash[28005]: cluster 2026-03-10T07:26:38.555773+0000 mgr.y (mgr.24407) 46 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:40.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:39 vm00 bash[28005]: cluster 2026-03-10T07:26:38.555773+0000 mgr.y (mgr.24407) 46 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:40.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:39 vm00 bash[28005]: audit 2026-03-10T07:26:38.709647+0000 mgr.y (mgr.24407) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T07:26:40.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:39 vm00 bash[28005]: audit 2026-03-10T07:26:38.709647+0000 mgr.y (mgr.24407) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T07:26:40.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:39 vm00 bash[28005]: audit 2026-03-10T07:26:38.713340+0000 mgr.y (mgr.24407) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-10T07:26:40.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:39 vm00 bash[28005]: audit 2026-03-10T07:26:38.713340+0000 mgr.y (mgr.24407) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-10T07:26:40.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:39 vm00 bash[28005]: audit 2026-03-10T07:26:38.742119+0000 mgr.y (mgr.24407) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T07:26:40.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:39 vm00 bash[28005]: audit 2026-03-10T07:26:38.742119+0000 mgr.y (mgr.24407) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T07:26:40.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:39 vm00 bash[28005]: audit 2026-03-10T07:26:38.744862+0000 mgr.y (mgr.24407) 50 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-10T07:26:40.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:39 vm00 bash[28005]: audit 2026-03-10T07:26:38.744862+0000 mgr.y (mgr.24407) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-10T07:26:40.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:39 vm00 bash[28005]: audit 2026-03-10T07:26:38.781769+0000 mgr.y (mgr.24407) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T07:26:40.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:39 vm00 bash[28005]: audit 2026-03-10T07:26:38.781769+0000 mgr.y (mgr.24407) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T07:26:40.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:39 vm00 bash[28005]: audit 2026-03-10T07:26:38.783108+0000 mgr.y (mgr.24407) 52 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T07:26:40.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:39 vm00 bash[28005]: audit 2026-03-10T07:26:38.783108+0000 mgr.y (mgr.24407) 52 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T07:26:40.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:39 vm00 bash[28005]: audit 2026-03-10T07:26:38.879469+0000 mon.c (mon.2) 65 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:26:40.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:39 vm00 bash[28005]: audit 2026-03-10T07:26:38.879469+0000 mon.c (mon.2) 65 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:26:40.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:39 vm00 bash[20701]: cluster 2026-03-10T07:26:38.555773+0000 mgr.y (mgr.24407) 46 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:40.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:39 vm00 bash[20701]: cluster 2026-03-10T07:26:38.555773+0000 mgr.y (mgr.24407) 46 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:40.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:39 vm00 bash[20701]: audit 2026-03-10T07:26:38.709647+0000 mgr.y (mgr.24407) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T07:26:40.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:39 vm00 bash[20701]: audit 2026-03-10T07:26:38.709647+0000 mgr.y (mgr.24407) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T07:26:40.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:39 vm00 bash[20701]: audit 2026-03-10T07:26:38.713340+0000 mgr.y (mgr.24407) 48 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-10T07:26:40.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:39 vm00 bash[20701]: audit 2026-03-10T07:26:38.713340+0000 mgr.y (mgr.24407) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-10T07:26:40.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:39 vm00 bash[20701]: audit 2026-03-10T07:26:38.742119+0000 mgr.y (mgr.24407) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T07:26:40.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:39 vm00 bash[20701]: audit 2026-03-10T07:26:38.742119+0000 mgr.y (mgr.24407) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T07:26:40.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:39 vm00 bash[20701]: audit 2026-03-10T07:26:38.744862+0000 mgr.y (mgr.24407) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-10T07:26:40.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:39 vm00 bash[20701]: audit 2026-03-10T07:26:38.744862+0000 mgr.y (mgr.24407) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-10T07:26:40.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:39 vm00 bash[20701]: audit 2026-03-10T07:26:38.781769+0000 mgr.y (mgr.24407) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T07:26:40.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:39 vm00 bash[20701]: audit 2026-03-10T07:26:38.781769+0000 mgr.y (mgr.24407) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T07:26:40.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:39 vm00 bash[20701]: audit 2026-03-10T07:26:38.783108+0000 mgr.y (mgr.24407) 52 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T07:26:40.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:39 vm00 bash[20701]: audit 2026-03-10T07:26:38.783108+0000 mgr.y (mgr.24407) 52 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T07:26:40.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:39 vm00 bash[20701]: audit 2026-03-10T07:26:38.879469+0000 mon.c (mon.2) 65 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:26:40.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:39 vm00 bash[20701]: audit 2026-03-10T07:26:38.879469+0000 mon.c (mon.2) 65 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:26:40.134 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:39 vm00 bash[56723]: ts=2026-03-10T07:26:39.943Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000907963s 2026-03-10T07:26:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:39 vm03 bash[23382]: cluster 2026-03-10T07:26:38.555773+0000 mgr.y (mgr.24407) 46 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:39 vm03 bash[23382]: cluster 2026-03-10T07:26:38.555773+0000 mgr.y (mgr.24407) 46 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:39 vm03 bash[23382]: audit 2026-03-10T07:26:38.709647+0000 mgr.y (mgr.24407) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T07:26:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:39 vm03 bash[23382]: audit 2026-03-10T07:26:38.709647+0000 mgr.y (mgr.24407) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T07:26:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:39 vm03 bash[23382]: audit 2026-03-10T07:26:38.713340+0000 mgr.y (mgr.24407) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-10T07:26:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:39 vm03 bash[23382]: audit 2026-03-10T07:26:38.713340+0000 mgr.y (mgr.24407) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-10T07:26:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:39 vm03 bash[23382]: audit 2026-03-10T07:26:38.742119+0000 mgr.y (mgr.24407) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T07:26:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:39 vm03 bash[23382]: audit 2026-03-10T07:26:38.742119+0000 mgr.y (mgr.24407) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T07:26:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:39 vm03 bash[23382]: audit 2026-03-10T07:26:38.744862+0000 mgr.y (mgr.24407) 50 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-10T07:26:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:39 vm03 bash[23382]: audit 2026-03-10T07:26:38.744862+0000 mgr.y (mgr.24407) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-10T07:26:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:39 vm03 bash[23382]: audit 2026-03-10T07:26:38.781769+0000 mgr.y (mgr.24407) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T07:26:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:39 vm03 bash[23382]: audit 2026-03-10T07:26:38.781769+0000 mgr.y (mgr.24407) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T07:26:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:39 vm03 bash[23382]: audit 2026-03-10T07:26:38.783108+0000 mgr.y (mgr.24407) 52 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T07:26:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:39 vm03 bash[23382]: audit 2026-03-10T07:26:38.783108+0000 mgr.y (mgr.24407) 52 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T07:26:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:39 vm03 bash[23382]: audit 2026-03-10T07:26:38.879469+0000 mon.c (mon.2) 65 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:26:40.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:39 vm03 bash[23382]: audit 2026-03-10T07:26:38.879469+0000 mon.c (mon.2) 65 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:26:42.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:41 vm00 bash[28005]: cluster 2026-03-10T07:26:40.556222+0000 mgr.y (mgr.24407) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:26:42.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:41 vm00 bash[28005]: cluster 2026-03-10T07:26:40.556222+0000 mgr.y (mgr.24407) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:26:42.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:41 vm00 bash[20701]: cluster 2026-03-10T07:26:40.556222+0000 mgr.y (mgr.24407) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:26:42.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:41 vm00 bash[20701]: cluster 2026-03-10T07:26:40.556222+0000 mgr.y (mgr.24407) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:26:42.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:41 vm03 bash[23382]: cluster 2026-03-10T07:26:40.556222+0000 mgr.y (mgr.24407) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:26:42.266 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:41 vm03 bash[23382]: cluster 2026-03-10T07:26:40.556222+0000 mgr.y (mgr.24407) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:26:43.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:26:42 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:26:43.739 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config 2026-03-10T07:26:43.741 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config 2026-03-10T07:26:43.743 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config 2026-03-10T07:26:43.747 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config 2026-03-10T07:26:43.750 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config 2026-03-10T07:26:43.751 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config 2026-03-10T07:26:43.752 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config 2026-03-10T07:26:43.754 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config 2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.062+0000 7f035ce8d640 1 -- 192.168.123.100:0/2906614366 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f035810a850 msgr2=0x7f035810acd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.062+0000 7f035ce8d640 1 --2- 192.168.123.100:0/2906614366 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f035810a850 0x7f035810acd0 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f0348009960 tx=0x7f034802f140 comp rx=0 tx=0).stop 2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.066+0000 7f035ce8d640 1 -- 192.168.123.100:0/2906614366 shutdown_connections 2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.066+0000 7f035ce8d640 1 --2- 192.168.123.100:0/2906614366 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f035811c780 0x7f035811eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.066+0000 7f035ce8d640 1 --2- 192.168.123.100:0/2906614366 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f035810a850 0x7f035810acd0 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.066+0000 7f035ce8d640 1 --2- 192.168.123.100:0/2906614366 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f035810a470 0x7f03581114d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.066+0000 7f035ce8d640 1 -- 192.168.123.100:0/2906614366 >> 192.168.123.100:0/2906614366 conn(0x7f035806d9f0 
msgr2=0x7f035806de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.066+0000 7f035ce8d640 1 -- 192.168.123.100:0/2906614366 shutdown_connections 2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.066+0000 7f035ce8d640 1 -- 192.168.123.100:0/2906614366 wait complete. 2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.066+0000 7f035ce8d640 1 Processor -- start 2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.066+0000 7f035ce8d640 1 -- start start 2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.066+0000 7f035ce8d640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f035810a470 0x7f0358119b80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.066+0000 7f035ce8d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f035810a850 0x7f035811a0c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.066+0000 7f035ce8d640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f035811c780 0x7f0358112d10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.066+0000 7f035ce8d640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f03581211f0 con 0x7f035810a850 2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.066+0000 7f035ce8d640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f0358121070 con 0x7f035811c780 2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.066+0000 7f035ce8d640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f0358121370 con 0x7f035810a470 2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.066+0000 7f0356575640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f035810a470 0x7f0358119b80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.066+0000 7f0356575640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f035810a470 0x7f0358119b80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:52204/0 (socket says 192.168.123.100:52204) 2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.066+0000 7f0356575640 1 -- 192.168.123.100:0/2066070426 learned_addr learned my addr 192.168.123.100:0/2066070426 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.070+0000 7f0356575640 1 -- 192.168.123.100:0/2066070426 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f035811c780 msgr2=0x7f0358112d10 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-10T07:26:44.071 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.070+0000 7f0356575640 1 --2- 192.168.123.100:0/2066070426 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f035811c780 0x7f0358112d10 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.070+0000 7f0356575640 1 -- 192.168.123.100:0/2066070426 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f035810a850 msgr2=0x7f035811a0c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.070+0000 7f0356575640 1 --2- 192.168.123.100:0/2066070426 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f035810a850 0x7f035811a0c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.070+0000 7f0356575640 1 -- 192.168.123.100:0/2066070426 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f03581135d0 con 0x7f035810a470
2026-03-10T07:26:44.073 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:43 vm00 bash[28005]: cluster 2026-03-10T07:26:42.556553+0000 mgr.y (mgr.24407) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:26:44.073 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:43 vm00 bash[28005]: cluster 2026-03-10T07:26:42.556553+0000 mgr.y (mgr.24407) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:26:44.073 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:43 vm00 bash[28005]: audit 2026-03-10T07:26:42.866514+0000 mgr.y (mgr.24407) 55 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:26:44.073 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:43 vm00 bash[28005]: audit 2026-03-10T07:26:42.866514+0000 mgr.y (mgr.24407) 55 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:26:44.073 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:43 vm00 bash[20701]: cluster 2026-03-10T07:26:42.556553+0000 mgr.y (mgr.24407) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:26:44.073 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:43 vm00 bash[20701]: cluster 2026-03-10T07:26:42.556553+0000 mgr.y (mgr.24407) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:26:44.073 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:43 vm00 bash[20701]: audit 2026-03-10T07:26:42.866514+0000 mgr.y (mgr.24407) 55 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:26:44.073 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:43 vm00 bash[20701]: audit 2026-03-10T07:26:42.866514+0000 mgr.y (mgr.24407) 55 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:26:44.075 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.070+0000 7f0356575640 1 --2- 192.168.123.100:0/2066070426 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f035810a470 0x7f0358119b80 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f035000c0c0 tx=0x7f035000c590 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:44.075 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.070+0000 7f03477fe640 1 -- 192.168.123.100:0/2066070426 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0350019070 con 0x7f035810a470
2026-03-10T07:26:44.075 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.070+0000 7f035ce8d640 1 -- 192.168.123.100:0/2066070426 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f0358113860 con 0x7f035810a470
2026-03-10T07:26:44.075 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.070+0000 7f035ce8d640 1 -- 192.168.123.100:0/2066070426 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f0358117880 con 0x7f035810a470
2026-03-10T07:26:44.075 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.074+0000 7f03477fe640 1 -- 192.168.123.100:0/2066070426 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f03500092d0 con 0x7f035810a470
2026-03-10T07:26:44.075 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.074+0000 7f03477fe640 1 -- 192.168.123.100:0/2066070426 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0350004850 con 0x7f035810a470
2026-03-10T07:26:44.080 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.074+0000 7f03477fe640 1 -- 192.168.123.100:0/2066070426 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f0350002a60 con 0x7f035810a470
2026-03-10T07:26:44.080 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.074+0000 7f03477fe640 1 --2- 192.168.123.100:0/2066070426 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f03280777d0 0x7f0328079c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:44.080 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.074+0000 7f035ce8d640 1 -- 192.168.123.100:0/2066070426 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_get_version(what=osdmap handle=1) -- 0x7f031c000f80 con 0x7f035810a470
2026-03-10T07:26:44.080 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.074+0000 7f0355d74640 1 --2- 192.168.123.100:0/2066070426 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f03280777d0 0x7f0328079c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:44.080 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.078+0000 7f03477fe640 1 -- 192.168.123.100:0/2066070426 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7f0350099df0 con 0x7f035810a470
2026-03-10T07:26:44.080 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.078+0000 7f03477fe640 1 --2- 192.168.123.100:0/2066070426 >> [v2:192.168.123.100:6818/3026087437,v1:192.168.123.100:6819/3026087437] conn(0x7f0328081640 0x7f0328083aa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:44.080 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.078+0000 7f03477fe640 1 -- 192.168.123.100:0/2066070426 --> [v2:192.168.123.100:6818/3026087437,v1:192.168.123.100:6819/3026087437] -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f0328084170 con 0x7f0328081640
2026-03-10T07:26:44.080 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.078+0000 7f03477fe640 1 -- 192.168.123.100:0/2066070426 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_get_version_reply(handle=1 version=68) ==== 24+0+0 (secure 0 0 0) 0x7f0350062240 con 0x7f035810a470
2026-03-10T07:26:44.080 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.078+0000 7f0355d74640 1 --2- 192.168.123.100:0/2066070426 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f03280777d0 0x7f0328079c90 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f03480096f0 tx=0x7f0348005810 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:44.080 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.078+0000 7f0356d76640 1 --2- 192.168.123.100:0/2066070426 >> [v2:192.168.123.100:6818/3026087437,v1:192.168.123.100:6819/3026087437] conn(0x7f0328081640 0x7f0328083aa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:44.080 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.078+0000 7f0356d76640 1 --2- 192.168.123.100:0/2066070426 >> [v2:192.168.123.100:6818/3026087437,v1:192.168.123.100:6819/3026087437] conn(0x7f0328081640 0x7f0328083aa0 crc :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:44.082 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.078+0000 7f03477fe640 1 -- 192.168.123.100:0/2066070426 <== osd.2 v2:192.168.123.100:6818/3026087437 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7f0328084170 con 0x7f0328081640
2026-03-10T07:26:44.141 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.134+0000 7f035ce8d640 1 -- 192.168.123.100:0/2066070426 --> [v2:192.168.123.100:6818/3026087437,v1:192.168.123.100:6819/3026087437] -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f031c002d70 con 0x7f0328081640
2026-03-10T07:26:44.141 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.134+0000 7f03477fe640 1 -- 192.168.123.100:0/2066070426 <== osd.2 v2:192.168.123.100:6818/3026087437 2 ==== command_reply(tid 2: 0 ) ==== 8+0+11 (crc 0 0 0) 0x7f031c002d70 con 0x7f0328081640
2026-03-10T07:26:44.141 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.134+0000 7f035ce8d640 1 -- 192.168.123.100:0/2066070426 >> [v2:192.168.123.100:6818/3026087437,v1:192.168.123.100:6819/3026087437] conn(0x7f0328081640 msgr2=0x7f0328083aa0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.141 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.134+0000 7f035ce8d640 1 --2- 192.168.123.100:0/2066070426 >> [v2:192.168.123.100:6818/3026087437,v1:192.168.123.100:6819/3026087437] conn(0x7f0328081640 0x7f0328083aa0 crc :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.141 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.134+0000 7f035ce8d640 1 -- 192.168.123.100:0/2066070426 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f03280777d0 msgr2=0x7f0328079c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.141 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.134+0000 7f035ce8d640 1 --2- 192.168.123.100:0/2066070426 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f03280777d0 0x7f0328079c90 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f03480096f0 tx=0x7f0348005810 comp rx=0 tx=0).stop
2026-03-10T07:26:44.141 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.134+0000 7f035ce8d640 1 -- 192.168.123.100:0/2066070426 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f035810a470 msgr2=0x7f0358119b80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.141 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.134+0000 7f035ce8d640 1 --2- 192.168.123.100:0/2066070426 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f035810a470 0x7f0358119b80 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f035000c0c0 tx=0x7f035000c590 comp rx=0 tx=0).stop
2026-03-10T07:26:44.141 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.134+0000 7f0356d76640 1 -- 192.168.123.100:0/2066070426 reap_dead start
2026-03-10T07:26:44.141 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.134+0000 7f035ce8d640 1 -- 192.168.123.100:0/2066070426 shutdown_connections
2026-03-10T07:26:44.141 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.134+0000 7f035ce8d640 1 -- 192.168.123.100:0/2066070426 >> 192.168.123.100:0/2066070426 conn(0x7f035806d9f0 msgr2=0x7f035811cf60 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:44.141 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.134+0000 7f035ce8d640 1 -- 192.168.123.100:0/2066070426 shutdown_connections
2026-03-10T07:26:44.141 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.134+0000 7f035ce8d640 1 -- 192.168.123.100:0/2066070426 wait complete.
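The burst above is one short-lived CLI client (messenger instance 192.168.123.100:0/2066070426) going through its whole lifecycle at debug ms = 1: connect to the mons, learn the monmap and config, subscribe to mgrmap/osdmap, issue command tid 1 (get_command_descriptions) and tid 2 (flush_pg_stats) to osd.2, then mark_down/stop every connection, shutdown_connections, and end with "wait complete.". A minimal sketch for following one such client through a teuthology log; this is throwaway tooling assumed for illustration, not part of teuthology, and the regex assumes only the record format visible above:

    import collections
    import re
    import sys

    # One messenger record per line, e.g.
    #   2026-03-10T07:26:44.141 INFO:teuthology.orchestra.run.vm00.stderr:...
    #   1 -- 192.168.123.100:0/2066070426 >> [v2:...] conn(...).mark_down
    # Records without a learned client address (early "--2- >> ..." lines)
    # simply will not match, which is fine for a per-client timeline.
    REC = re.compile(
        r'^(?P<ts>\S+) INFO:teuthology\.orchestra\.run\.\w+\.stderr:'
        r'.*? (?P<client>\d+(?:\.\d+){3}:0/\d+) >> (?P<peer>\S+) '
        r'conn\(.*\)\.(?P<event>\w+)'
    )

    timelines = collections.defaultdict(list)
    with open(sys.argv[1]) as log:
        for line in log:
            m = REC.search(line)
            if m:
                timelines[m['client']].append((m['ts'], m['event'], m['peer']))

    # Print each client's connection events in order: connect, ready,
    # mark_down, stop, and so on, per peer address.
    for client, events in timelines.items():
        print(client)
        for ts, event, peer in events:
            print(f'    {ts}  {event:<12} {peer}')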
2026-03-10T07:26:44.230 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.218+0000 7f416ea45640 1 -- 192.168.123.100:0/3504745958 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f416810a6d0 msgr2=0x7f416810aab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.230 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.218+0000 7f416ea45640 1 --2- 192.168.123.100:0/3504745958 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f416810a6d0 0x7f416810aab0 secure :-1 s=READY pgs=164 cs=0 l=1 rev1=1 crypto rx=0x7f4158009ef0 tx=0x7f4158030440 comp rx=0 tx=0).stop
2026-03-10T07:26:44.230 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.218+0000 7f416ea45640 1 -- 192.168.123.100:0/3504745958 shutdown_connections
2026-03-10T07:26:44.230 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.218+0000 7f416ea45640 1 --2- 192.168.123.100:0/3504745958 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f4168075470 0x7f416807be20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.230 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.218+0000 7f416ea45640 1 --2- 192.168.123.100:0/3504745958 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f416810b080 0x7f4168074d30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.230 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.218+0000 7f416ea45640 1 --2- 192.168.123.100:0/3504745958 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f416810a6d0 0x7f416810aab0 unknown :-1 s=CLOSED pgs=164 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.230 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.222+0000 7f416ea45640 1 -- 192.168.123.100:0/3504745958 >> 192.168.123.100:0/3504745958 conn(0x7f416806d9f0 msgr2=0x7f416806de00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:44.230 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.222+0000 7f416ea45640 1 -- 192.168.123.100:0/3504745958 shutdown_connections
2026-03-10T07:26:44.230 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.222+0000 7f416ea45640 1 -- 192.168.123.100:0/3504745958 wait complete.
2026-03-10T07:26:44.230 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.222+0000 7f416ea45640 1 Processor -- start
2026-03-10T07:26:44.230 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.222+0000 7f416ea45640 1 -- start start
2026-03-10T07:26:44.230 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.222+0000 7f416ea45640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f4168075470 0x7f4168085980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:44.230 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.222+0000 7f416ea45640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f416810b080 0x7f4168085ec0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:44.230 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.222+0000 7f416ea45640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f416807fa00 0x7f416807feb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:44.230 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.222+0000 7f416ea45640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f416807e2d0 con 0x7f416810b080
2026-03-10T07:26:44.230 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.222+0000 7f416ea45640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f416807e150 con 0x7f4168075470
2026-03-10T07:26:44.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.222+0000 7f416ea45640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f416807e450 con 0x7f416807fa00
2026-03-10T07:26:44.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.222+0000 7f41677fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f416810b080 0x7f4168085ec0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:44.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.222+0000 7f41677fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f416810b080 0x7f4168085ec0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:59546/0 (socket says 192.168.123.100:59546)
2026-03-10T07:26:44.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.222+0000 7f41677fe640 1 -- 192.168.123.100:0/1110449066 learned_addr learned my addr 192.168.123.100:0/1110449066 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:26:44.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.222+0000 7f416cfbb640 1 --2- 192.168.123.100:0/1110449066 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f416807fa00 0x7f416807feb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:44.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.222+0000 7f4167fff640 1 --2- 192.168.123.100:0/1110449066 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f4168075470 0x7f4168085980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:44.240 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.222+0000 7f416cfbb640 1 -- 192.168.123.100:0/1110449066 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f4168075470 msgr2=0x7f4168085980 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.241 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.222+0000 7f416cfbb640 1 --2- 192.168.123.100:0/1110449066 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f4168075470 0x7f4168085980 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.241 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.222+0000 7f416cfbb640 1 -- 192.168.123.100:0/1110449066 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f416810b080 msgr2=0x7f4168085ec0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.241 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.222+0000 7f416cfbb640 1 --2- 192.168.123.100:0/1110449066 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f416810b080 0x7f4168085ec0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.241 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.222+0000 7f416cfbb640 1 -- 192.168.123.100:0/1110449066 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4168080680 con 0x7f416807fa00
2026-03-10T07:26:44.241 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.238+0000 7f416cfbb640 1 --2- 192.168.123.100:0/1110449066 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f416807fa00 0x7f416807feb0 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f415c00d6a0 tx=0x7f415c00db70 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:44.246 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.238+0000 7f41657fa640 1 -- 192.168.123.100:0/1110449066 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f415c014070 con 0x7f416807fa00
2026-03-10T07:26:44.246 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.238+0000 7f416ea45640 1 -- 192.168.123.100:0/1110449066 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f4168137d00 con 0x7f416807fa00
2026-03-10T07:26:44.246 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.238+0000 7f416ea45640 1 -- 192.168.123.100:0/1110449066 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f4168138290 con 0x7f416807fa00
2026-03-10T07:26:44.246 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.238+0000 7f41657fa640 1 -- 192.168.123.100:0/1110449066 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f415c004ce0 con 0x7f416807fa00
2026-03-10T07:26:44.246 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.238+0000 7f41657fa640 1 -- 192.168.123.100:0/1110449066 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f415c00bd70 con 0x7f416807fa00
2026-03-10T07:26:44.246 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.242+0000 7f41657fa640 1 -- 192.168.123.100:0/1110449066 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f415c005000 con 0x7f416807fa00
2026-03-10T07:26:44.246 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.242+0000 7f41657fa640 1 --2- 192.168.123.100:0/1110449066 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f4154077700 0x7f4154079bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:44.246 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.242+0000 7f41657fa640 1 -- 192.168.123.100:0/1110449066 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7f415c09ac90 con 0x7f416807fa00
2026-03-10T07:26:44.246 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.242+0000 7f416ea45640 1 --2- 192.168.123.100:0/1110449066 >> [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331] conn(0x7f4134001650 0x7f4134003b10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:44.249 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.242+0000 7f4167fff640 1 --2- 192.168.123.100:0/1110449066 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f4154077700 0x7f4154079bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:44.249 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.246+0000 7f41677fe640 1 --2- 192.168.123.100:0/1110449066 >> [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331] conn(0x7f4134001650 0x7f4134003b10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:44.249 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.246+0000 7f416ea45640 1 -- 192.168.123.100:0/1110449066 --> [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331] -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f4134006c00 con 0x7f4134001650
2026-03-10T07:26:44.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:43 vm03 bash[23382]: cluster 2026-03-10T07:26:42.556553+0000 mgr.y (mgr.24407) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:26:44.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:43 vm03 bash[23382]: cluster 2026-03-10T07:26:42.556553+0000 mgr.y (mgr.24407) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:26:44.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:43 vm03 bash[23382]: audit 2026-03-10T07:26:42.866514+0000 mgr.y (mgr.24407) 55 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:26:44.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:43 vm03 bash[23382]: audit 2026-03-10T07:26:42.866514+0000 mgr.y (mgr.24407) 55 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:26:44.279 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.258+0000 7f4167fff640 1 --2- 192.168.123.100:0/1110449066 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f4154077700 0x7f4154079bc0 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f4158005ec0 tx=0x7f4158005e50 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:44.279 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.258+0000 7f41677fe640 1 --2- 192.168.123.100:0/1110449066 >> [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331] conn(0x7f4134001650 0x7f4134003b10 crc :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:44.279 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.262+0000 7f41657fa640 1 -- 192.168.123.100:0/1110449066 <== osd.1 v2:192.168.123.100:6810/1715502331 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7f4134006c00 con 0x7f4134001650
2026-03-10T07:26:44.290 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.282+0000 7f416ea45640 1 -- 192.168.123.100:0/1110449066 --> [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331] -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f4134005ce0 con 0x7f4134001650
2026-03-10T07:26:44.298 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.290+0000 7f41657fa640 1 -- 192.168.123.100:0/1110449066 <== osd.1 v2:192.168.123.100:6810/1715502331 2 ==== command_reply(tid 2: 0 ) ==== 8+0+11 (crc 0 0 0) 0x7f4134005ce0 con 0x7f4134001650
2026-03-10T07:26:44.298 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.290+0000 7f4146ffd640 1 -- 192.168.123.100:0/1110449066 >> [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331] conn(0x7f4134001650 msgr2=0x7f4134003b10 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.298 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.290+0000 7f4146ffd640 1 --2- 192.168.123.100:0/1110449066 >> [v2:192.168.123.100:6810/1715502331,v1:192.168.123.100:6811/1715502331] conn(0x7f4134001650 0x7f4134003b10 crc :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.298 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.290+0000 7f4146ffd640 1 -- 192.168.123.100:0/1110449066 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f4154077700 msgr2=0x7f4154079bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.298 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.290+0000 7f4146ffd640 1 --2- 192.168.123.100:0/1110449066 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f4154077700 0x7f4154079bc0 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f4158005ec0 tx=0x7f4158005e50 comp rx=0 tx=0).stop
2026-03-10T07:26:44.298 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.290+0000 7f4146ffd640 1 -- 192.168.123.100:0/1110449066 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f416807fa00 msgr2=0x7f416807feb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.298 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.290+0000 7f4146ffd640 1 --2- 192.168.123.100:0/1110449066 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f416807fa00 0x7f416807feb0 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f415c00d6a0 tx=0x7f415c00db70 comp rx=0 tx=0).stop
2026-03-10T07:26:44.298 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.290+0000 7f416cfbb640 1 -- 192.168.123.100:0/1110449066 reap_dead start
2026-03-10T07:26:44.298 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.294+0000 7f4146ffd640 1 -- 192.168.123.100:0/1110449066 shutdown_connections
2026-03-10T07:26:44.298 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.294+0000 7f4146ffd640 1 -- 192.168.123.100:0/1110449066 >> 192.168.123.100:0/1110449066 conn(0x7f416806d9f0 msgr2=0x7f416810d490 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:44.298 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.294+0000 7f4146ffd640 1 -- 192.168.123.100:0/1110449066 shutdown_connections
2026-03-10T07:26:44.298 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.294+0000 7f4146ffd640 1 -- 192.168.123.100:0/1110449066 wait complete.
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.334+0000 7f7de0d8b640 1 -- 192.168.123.100:0/260806184 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7ddc10a6d0 msgr2=0x7f7ddc10aab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.334+0000 7f7de0d8b640 1 --2- 192.168.123.100:0/260806184 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7ddc10a6d0 0x7f7ddc10aab0 secure :-1 s=READY pgs=165 cs=0 l=1 rev1=1 crypto rx=0x7f7dcc009a80 tx=0x7f7dcc02f2f0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.334+0000 7f7de0d8b640 1 -- 192.168.123.100:0/260806184 shutdown_connections
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.334+0000 7f7de0d8b640 1 --2- 192.168.123.100:0/260806184 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7ddc075470 0x7f7ddc07be20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.334+0000 7f7de0d8b640 1 --2- 192.168.123.100:0/260806184 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f7ddc10b080 0x7f7ddc074d30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.334+0000 7f7de0d8b640 1 --2- 192.168.123.100:0/260806184 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7ddc10a6d0 0x7f7ddc10aab0 unknown :-1 s=CLOSED pgs=165 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.334+0000 7f7de0d8b640 1 -- 192.168.123.100:0/260806184 >> 192.168.123.100:0/260806184 conn(0x7f7ddc06d9f0 msgr2=0x7f7ddc06de00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.334+0000 7f7de0d8b640 1 -- 192.168.123.100:0/260806184 shutdown_connections
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.334+0000 7f7de0d8b640 1 -- 192.168.123.100:0/260806184 wait complete.
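Each of these bursts has the same footprint: a fresh messenger ("Processor -- start"), mon_getmap probes raced to all three mon addresses, one surviving mon session that pulls config/monmap/mgrmap/osdmap, then command tid 1 (get_command_descriptions) and tid 2 (flush_pg_stats) against a single OSD before teardown. That is the wire-level shape of one ceph CLI invocation per OSD, consistent with repeated flush_pg_stats helper calls from the workunit; the CLI command lines themselves are not in this excerpt. A sketch, assumed tooling only, that pairs command/command_reply tids and reports per-command round trips using the outer teuthology timestamps:

    import re
    import sys
    from datetime import datetime

    # '... 1 -- <client> --> [v2:...] -- command(tid N: {"prefix": "..."}) -- ...'
    SEND = re.compile(
        r"^(\S+) .*? 1 -- (\S+:0/\d+) --> \S+ -- command\(tid (\d+): ({[^}]*})\)"
    )
    # '... 1 -- <client> <== osd.N v2:... 2 ==== command_reply(tid N: 0 ) ...'
    REPLY = re.compile(
        r"^(\S+) .*? 1 -- (\S+:0/\d+) <== (\S+) \S+ \d+ ==== command_reply\(tid (\d+):"
    )

    pending = {}
    with open(sys.argv[1]) as log:
        for line in log:
            m = SEND.search(line)
            if m:
                ts, client, tid, payload = m.groups()
                pending[(client, tid)] = (ts, payload)
                continue
            m = REPLY.search(line)
            if m:
                ts, client, target, tid = m.groups()
                sent = pending.pop((client, tid), None)
                if sent:
                    # Round trip as seen by the teuthology log timestamps.
                    dt = datetime.fromisoformat(ts) - datetime.fromisoformat(sent[0])
                    print(f"{target} tid {tid} {sent[1]}: {dt.total_seconds() * 1000:.0f} ms")

Against the records above this would report, for example, the ~60 ms gap between the tid 1 command to osd.2 and its 27504-byte command_reply.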
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.338+0000 7f7de0d8b640 1 Processor -- start
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.338+0000 7f7de0d8b640 1 -- start start
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.338+0000 7f7de0d8b640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7ddc075470 0x7f7ddc07fe50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.338+0000 7f7de0d8b640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f7ddc10b080 0x7f7ddc080390 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.338+0000 7f7de0d8b640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7ddc085670 0x7f7ddc085af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.338+0000 7f7de0d8b640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f7ddc07e590 con 0x7f7ddc075470
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.338+0000 7f7de0d8b640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f7ddc07e410 con 0x7f7ddc10b080
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.338+0000 7f7de0d8b640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f7ddc07e710 con 0x7f7ddc085670
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.338+0000 7f7dda575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7ddc075470 0x7f7ddc07fe50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.338+0000 7f7dd9d74640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f7ddc10b080 0x7f7ddc080390 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.338+0000 7f7dd9d74640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f7ddc10b080 0x7f7ddc080390 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.103:3300/0 says I am v2:192.168.123.100:55656/0 (socket says 192.168.123.100:55656)
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.338+0000 7f7dd9d74640 1 -- 192.168.123.100:0/387763562 learned_addr learned my addr 192.168.123.100:0/387763562 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.338+0000 7f7dda575640 1 -- 192.168.123.100:0/387763562 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7ddc085670 msgr2=0x7f7ddc085af0 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.338+0000 7f7ddad76640 1 --2- 192.168.123.100:0/387763562 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7ddc085670 0x7f7ddc085af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.338+0000 7f7dda575640 1 --2- 192.168.123.100:0/387763562 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7ddc085670 0x7f7ddc085af0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.338+0000 7f7dda575640 1 -- 192.168.123.100:0/387763562 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f7ddc10b080 msgr2=0x7f7ddc080390 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.338+0000 7f7dda575640 1 --2- 192.168.123.100:0/387763562 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f7ddc10b080 0x7f7ddc080390 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.338+0000 7f7dda575640 1 -- 192.168.123.100:0/387763562 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f7ddc0863b0 con 0x7f7ddc075470
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.338+0000 7f7dd9d74640 1 --2- 192.168.123.100:0/387763562 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f7ddc10b080 0x7f7ddc080390 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.338+0000 7f7dda575640 1 --2- 192.168.123.100:0/387763562 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7ddc075470 0x7f7ddc07fe50 secure :-1 s=READY pgs=166 cs=0 l=1 rev1=1 crypto rx=0x7f7dcc0099a0 tx=0x7f7dcc004600 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:44.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.338+0000 7f7ddad76640 1 --2- 192.168.123.100:0/387763562 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7ddc085670 0x7f7ddc085af0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
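The boot sequence above shows the mon client's hunting behavior: connections are raced to all three mon addresses, mon_getmap is sent on each, and as soon as one session authenticates the others are marked down mid-handshake. That is what produces the AUTH_CONNECTING .stop records and the "state changed!" lines on connections that never reached READY. A sketch, under the same assumptions as the earlier snippet, that reports which mon each one-shot client ended up on:

    import re
    import sys

    # '... <client> >> [v2:...:3300/0,...] conn(...).ready entity=mon.0 ...'
    # Only mon sessions are matched; osd/mgr ready lines fall through.
    WINNER = re.compile(
        r" (\S+:0/\d+) >> (\S+) conn\(.*?\)\.ready entity=(mon\.\d+)"
    )

    with open(sys.argv[1]) as log:
        for line in log:
            m = WINNER.search(line)
            if m:
                client, addr, mon = m.groups()
                print(f"{client} -> {mon} at {addr}")

Run over this excerpt it would show, e.g., client 192.168.123.100:0/1110449066 landing on mon.2 at :3301 while 192.168.123.100:0/387763562 lands on mon.0 at :3300.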
2026-03-10T07:26:44.349 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.346+0000 7f7dcb7fe640 1 -- 192.168.123.100:0/387763562 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f7dcc02fd10 con 0x7f7ddc075470
2026-03-10T07:26:44.349 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.346+0000 7f7de0d8b640 1 -- 192.168.123.100:0/387763562 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f7ddc1be950 con 0x7f7ddc075470
2026-03-10T07:26:44.349 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.346+0000 7f7de0d8b640 1 -- 192.168.123.100:0/387763562 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f7ddc1bed50 con 0x7f7ddc075470
2026-03-10T07:26:44.349 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.346+0000 7f7dcb7fe640 1 -- 192.168.123.100:0/387763562 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f7dcc03e070 con 0x7f7ddc075470
2026-03-10T07:26:44.349 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.346+0000 7f7dcb7fe640 1 -- 192.168.123.100:0/387763562 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f7dcc005290 con 0x7f7ddc075470
2026-03-10T07:26:44.349 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.346+0000 7f7de0d8b640 1 -- 192.168.123.100:0/387763562 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_get_version(what=osdmap handle=1) -- 0x7f7ddc10ee40 con 0x7f7ddc075470
2026-03-10T07:26:44.357 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.354+0000 7f7dcb7fe640 1 -- 192.168.123.100:0/387763562 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f7dcc04a070 con 0x7f7ddc075470
2026-03-10T07:26:44.357 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.354+0000 7f7dcb7fe640 1 --2- 192.168.123.100:0/387763562 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f7dbc0777d0 0x7f7dbc079c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:44.357 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.354+0000 7f7dcb7fe640 1 -- 192.168.123.100:0/387763562 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7f7dcc0be6d0 con 0x7f7ddc075470
2026-03-10T07:26:44.357 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.354+0000 7f7dcb7fe640 1 --2- 192.168.123.100:0/387763562 >> [v2:192.168.123.100:6826/2171328275,v1:192.168.123.100:6827/2171328275] conn(0x7f7dbc0816c0 0x7f7dbc083b20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:44.357 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.354+0000 7f7dcb7fe640 1 -- 192.168.123.100:0/387763562 --> [v2:192.168.123.100:6826/2171328275,v1:192.168.123.100:6827/2171328275] -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f7dcc04ae50 con 0x7f7dbc0816c0
2026-03-10T07:26:44.357 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.354+0000 7f7dcb7fe640 1 -- 192.168.123.100:0/387763562 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_get_version_reply(handle=1 version=68) ==== 24+0+0 (secure 0 0 0) 0x7f7dcc0beac0 con 0x7f7ddc075470
2026-03-10T07:26:44.357 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.354+0000 7f7dd9d74640 1 --2- 192.168.123.100:0/387763562 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f7dbc0777d0 0x7f7dbc079c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:44.357 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.354+0000 7f7ddad76640 1 --2- 192.168.123.100:0/387763562 >> [v2:192.168.123.100:6826/2171328275,v1:192.168.123.100:6827/2171328275] conn(0x7f7dbc0816c0 0x7f7dbc083b20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:44.357 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.354+0000 7f7ddad76640 1 --2- 192.168.123.100:0/387763562 >> [v2:192.168.123.100:6826/2171328275,v1:192.168.123.100:6827/2171328275] conn(0x7f7dbc0816c0 0x7f7dbc083b20 crc :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.3 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:44.357 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.354+0000 7f7dd9d74640 1 --2- 192.168.123.100:0/387763562 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f7dbc0777d0 0x7f7dbc079c90 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f7ddc0812b0 tx=0x7f7dd400b040 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:44.367 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.354+0000 7f7dcb7fe640 1 -- 192.168.123.100:0/387763562 <== osd.3 v2:192.168.123.100:6826/2171328275 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7f7dcc04ae50 con 0x7f7dbc0816c0
2026-03-10T07:26:44.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.366+0000 7f7dc97fa640 1 -- 192.168.123.100:0/387763562 --> [v2:192.168.123.100:6826/2171328275,v1:192.168.123.100:6827/2171328275] -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f7d9c000f40 con 0x7f7dbc0816c0
2026-03-10T07:26:44.374 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.374+0000 7f7dcb7fe640 1 -- 192.168.123.100:0/387763562 <== osd.3 v2:192.168.123.100:6826/2171328275 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (crc 0 0 0) 0x7f7d9c000f40 con 0x7f7dbc0816c0
2026-03-10T07:26:44.382 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.374+0000 7f7dc97fa640 1 -- 192.168.123.100:0/387763562 >> [v2:192.168.123.100:6826/2171328275,v1:192.168.123.100:6827/2171328275] conn(0x7f7dbc0816c0 msgr2=0x7f7dbc083b20 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.382 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.374+0000 7f7dc97fa640 1 --2- 192.168.123.100:0/387763562 >> [v2:192.168.123.100:6826/2171328275,v1:192.168.123.100:6827/2171328275] conn(0x7f7dbc0816c0 0x7f7dbc083b20 crc :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.382 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.374+0000 7f7dc97fa640 1 -- 192.168.123.100:0/387763562 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f7dbc0777d0 msgr2=0x7f7dbc079c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.382 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.374+0000 7f7dc97fa640 1 --2- 192.168.123.100:0/387763562 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f7dbc0777d0 0x7f7dbc079c90 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f7ddc0812b0 tx=0x7f7dd400b040 comp rx=0 tx=0).stop
2026-03-10T07:26:44.382 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.374+0000 7f7dc97fa640 1 -- 192.168.123.100:0/387763562 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7ddc075470 msgr2=0x7f7ddc07fe50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.382 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.374+0000 7f7dc97fa640 1 --2- 192.168.123.100:0/387763562 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7ddc075470 0x7f7ddc07fe50 secure :-1 s=READY pgs=166 cs=0 l=1 rev1=1 crypto rx=0x7f7dcc0099a0 tx=0x7f7dcc004600 comp rx=0 tx=0).stop
2026-03-10T07:26:44.382 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.374+0000 7f7ddad76640 1 -- 192.168.123.100:0/387763562 reap_dead start
2026-03-10T07:26:44.382 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.374+0000 7f7dc97fa640 1 -- 192.168.123.100:0/387763562 shutdown_connections
2026-03-10T07:26:44.382 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.374+0000 7f7dc97fa640 1 -- 192.168.123.100:0/387763562 >> 192.168.123.100:0/387763562 conn(0x7f7ddc06d9f0 msgr2=0x7f7ddc10d490 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:44.382 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.374+0000 7f7dc97fa640 1 -- 192.168.123.100:0/387763562 shutdown_connections
2026-03-10T07:26:44.382 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.374+0000 7f7dc97fa640 1 -- 192.168.123.100:0/387763562 wait complete.
2026-03-10T07:26:44.412 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.410+0000 7feb3fa1c640 1 -- 192.168.123.100:0/2122726040 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feb3810a6d0 msgr2=0x7feb3810aab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.412 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.410+0000 7feb3fa1c640 1 --2- 192.168.123.100:0/2122726040 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feb3810a6d0 0x7feb3810aab0 secure :-1 s=READY pgs=167 cs=0 l=1 rev1=1 crypto rx=0x7feb30009f90 tx=0x7feb3002f3d0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.410+0000 7feb3fa1c640 1 -- 192.168.123.100:0/2122726040 shutdown_connections
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.410+0000 7feb3fa1c640 1 --2- 192.168.123.100:0/2122726040 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7feb38075470 0x7feb3807be20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.410+0000 7feb3fa1c640 1 --2- 192.168.123.100:0/2122726040 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7feb3810b080 0x7feb38074d30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.410+0000 7feb3fa1c640 1 --2- 192.168.123.100:0/2122726040 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feb3810a6d0 0x7feb3810aab0 unknown :-1 s=CLOSED pgs=167 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.410+0000 7feb3fa1c640 1 -- 192.168.123.100:0/2122726040 >> 192.168.123.100:0/2122726040 conn(0x7feb3806d9f0 msgr2=0x7feb3806de00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.410+0000 7feb3fa1c640 1 -- 192.168.123.100:0/2122726040 shutdown_connections
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3fa1c640 1 -- 192.168.123.100:0/2122726040 wait complete.
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3fa1c640 1 Processor -- start
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3fa1c640 1 -- start start
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3fa1c640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7feb38075470 0x7feb38085c40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3fa1c640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feb3810a6d0 0x7feb3807fcc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3fa1c640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7feb3810b080 0x7feb38080200 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3fa1c640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7feb3807e590 con 0x7feb3810a6d0
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3fa1c640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7feb3807e410 con 0x7feb38075470
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3fa1c640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7feb3807e710 con 0x7feb3810b080
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3cf90640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feb3810a6d0 0x7feb3807fcc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3cf90640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feb3810a6d0 0x7feb3807fcc0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:59590/0 (socket says 192.168.123.100:59590)
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3cf90640 1 -- 192.168.123.100:0/207768486 learned_addr learned my addr 192.168.123.100:0/207768486 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3df92640 1 --2- 192.168.123.100:0/207768486 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7feb3810b080 0x7feb38080200 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3cf90640 1 -- 192.168.123.100:0/207768486 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7feb3810b080 msgr2=0x7feb38080200 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3cf90640 1 --2- 192.168.123.100:0/207768486 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7feb3810b080 0x7feb38080200 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3cf90640 1 -- 192.168.123.100:0/207768486 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7feb38075470 msgr2=0x7feb38085c40 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3cf90640 1 --2- 192.168.123.100:0/207768486 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7feb38075470 0x7feb38085c40 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3cf90640 1 -- 192.168.123.100:0/207768486 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7feb38080840 con 0x7feb3810a6d0
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3cf90640 1 --2- 192.168.123.100:0/207768486 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feb3810a6d0 0x7feb3807fcc0 secure :-1 s=READY pgs=168 cs=0 l=1 rev1=1 crypto rx=0x7feb3400b570 tx=0x7feb3400ba40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb2e7fc640 1 -- 192.168.123.100:0/207768486 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7feb34013020 con 0x7feb3810a6d0
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3fa1c640 1 -- 192.168.123.100:0/207768486 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7feb381be950 con 0x7feb3810a6d0
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3fa1c640 1 -- 192.168.123.100:0/207768486 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7feb381bee90 con 0x7feb3810a6d0
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb2e7fc640 1 -- 192.168.123.100:0/207768486 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7feb34004480 con 0x7feb3810a6d0
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb2e7fc640 1 -- 192.168.123.100:0/207768486 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7feb3400f980 con 0x7feb3810a6d0
2026-03-10T07:26:44.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.414+0000 7feb3fa1c640 1 -- 192.168.123.100:0/207768486 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_get_version(what=osdmap handle=1) -- 0x7feb38073530 con 0x7feb3810a6d0
2026-03-10T07:26:44.426 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.418+0000 7feb2e7fc640 1 -- 192.168.123.100:0/207768486 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7feb34020020 con 0x7feb3810a6d0
2026-03-10T07:26:44.426 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.418+0000 7feb2e7fc640 1 --2- 192.168.123.100:0/207768486 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7feb140779d0 0x7feb14079e90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:44.426 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.418+0000 7feb2e7fc640 1 -- 192.168.123.100:0/207768486 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7feb34099510 con 0x7feb3810a6d0
2026-03-10T07:26:44.426 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.418+0000 7feb3d791640 1 --2- 192.168.123.100:0/207768486 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7feb140779d0 0x7feb14079e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:44.426 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.418+0000 7feb3d791640 1 --2- 192.168.123.100:0/207768486 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7feb140779d0 0x7feb14079e90 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7feb30004a70 tx=0x7feb30002750 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:44.426 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.422+0000 7feb2e7fc640 1 --2- 192.168.123.100:0/207768486 >> [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252] conn(0x7feb14081840 0x7feb14083ca0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:44.426 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.422+0000 7feb2e7fc640 1 -- 192.168.123.100:0/207768486 --> [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252] -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7feb34011ec0 con 0x7feb14081840
2026-03-10T07:26:44.426 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.422+0000 7feb2e7fc640 1 -- 192.168.123.100:0/207768486 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_get_version_reply(handle=1 version=68) ==== 24+0+0 (secure 0 0 0) 0x7feb340618b0 con 0x7feb3810a6d0
2026-03-10T07:26:44.426 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.422+0000 7feb3df92640 1 --2- 192.168.123.100:0/207768486 >> [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252] conn(0x7feb14081840 0x7feb14083ca0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:44.434 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.426+0000 7feb3df92640 1 --2- 192.168.123.100:0/207768486 >> [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252] conn(0x7feb14081840 0x7feb14083ca0 crc :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.6 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:44.455 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.426+0000 7feb2e7fc640 1 -- 192.168.123.100:0/207768486 <== osd.6 v2:192.168.123.103:6816/665664252 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7feb34011ec0 con 0x7feb14081840
2026-03-10T07:26:44.455 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.442+0000 7feb3fa1c640 1 -- 192.168.123.100:0/207768486 --> [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252] -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7feb38073740 con 0x7feb14081840
2026-03-10T07:26:44.455 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.442+0000 7feb2e7fc640 1 -- 192.168.123.100:0/207768486 <== osd.6 v2:192.168.123.103:6816/665664252 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (crc 0 0 0) 0x7feb38073740 con 0x7feb14081840
2026-03-10T07:26:44.455 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.442+0000 7feb03fff640 1 -- 192.168.123.100:0/207768486 >> [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252] conn(0x7feb14081840 msgr2=0x7feb14083ca0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.455 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.442+0000 7feb03fff640 1 --2- 192.168.123.100:0/207768486 >> [v2:192.168.123.103:6816/665664252,v1:192.168.123.103:6817/665664252] conn(0x7feb14081840 0x7feb14083ca0 crc :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.455 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.442+0000 7feb03fff640 1 -- 192.168.123.100:0/207768486 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7feb140779d0 msgr2=0x7feb14079e90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.455 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.442+0000 7feb03fff640 1 --2- 192.168.123.100:0/207768486 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7feb140779d0 0x7feb14079e90 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7feb30004a70 tx=0x7feb30002750 comp rx=0 tx=0).stop
2026-03-10T07:26:44.455 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.442+0000 7feb03fff640 1 -- 192.168.123.100:0/207768486 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feb3810a6d0 msgr2=0x7feb3807fcc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.455 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.442+0000 7feb03fff640 1 --2- 192.168.123.100:0/207768486 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7feb3810a6d0 0x7feb3807fcc0 secure :-1 s=READY pgs=168 cs=0 l=1 rev1=1 crypto rx=0x7feb3400b570 tx=0x7feb3400ba40 comp rx=0 tx=0).stop
2026-03-10T07:26:44.455 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.442+0000 7feb3df92640 1 -- 192.168.123.100:0/207768486 reap_dead start
2026-03-10T07:26:44.455 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.446+0000 7feb03fff640 1 -- 192.168.123.100:0/207768486 shutdown_connections
2026-03-10T07:26:44.455 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.446+0000 7feb03fff640 1 -- 192.168.123.100:0/207768486 >> 192.168.123.100:0/207768486 conn(0x7feb3806d9f0 msgr2=0x7feb38072b90 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:44.455 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.446+0000 7feb03fff640 1 -- 192.168.123.100:0/207768486 shutdown_connections
2026-03-10T07:26:44.456 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.446+0000 7feb03fff640 1 -- 192.168.123.100:0/207768486 wait complete.
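Note the split in negotiated modes visible in the ".ready" records throughout: sessions to mon.* and mgr.* come up "secure", while sessions to osd.* come up "crc". If this cluster runs the usual msgr2 defaults, that is the expected outcome of mode negotiation (mon client traffic prefers encryption, other client traffic prefers crc integrity-only); the job config shown earlier does not override those options. A quick tally in the same assumed log-scraping style as the earlier snippets:

    import re
    import sys
    from collections import Counter

    # '... conn(0x... 0x... secure :-1 s=READY ...).ready entity=mon.2 ...'
    READY = re.compile(r"conn\(0x\w+ 0x\w+ (secure|crc) .*?\)\.ready entity=(\w+)\.")

    modes = Counter()
    with open(sys.argv[1]) as log:
        for line in log:
            for mode, entity_type in READY.findall(line):
                modes[(entity_type, mode)] += 1

    # Expected on this excerpt: (mon, secure), (mgr, secure), (osd, crc).
    for (entity_type, mode), n in sorted(modes.items()):
        print(f"{entity_type:>4} {mode:>7} {n}")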
2026-03-10T07:26:44.473 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3346e9b640 1 -- 192.168.123.100:0/4039806858 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f334010a850 msgr2=0x7f334010acd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.511 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3346e9b640 1 --2- 192.168.123.100:0/4039806858 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f334010a850 0x7f334010acd0 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f3338009f90 tx=0x7f333802f390 comp rx=0 tx=0).stop
2026-03-10T07:26:44.511 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3346e9b640 1 -- 192.168.123.100:0/4039806858 shutdown_connections
2026-03-10T07:26:44.511 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3346e9b640 1 --2- 192.168.123.100:0/4039806858 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f334011c780 0x7f334011eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.511 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3346e9b640 1 --2- 192.168.123.100:0/4039806858 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f334010a850 0x7f334010acd0 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.511 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3346e9b640 1 --2- 192.168.123.100:0/4039806858 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f334010a470 0x7f33401114d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.511 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3346e9b640 1 -- 192.168.123.100:0/4039806858 >> 192.168.123.100:0/4039806858 conn(0x7f334006d9f0 msgr2=0x7f334006de00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3346e9b640 1 -- 192.168.123.100:0/4039806858 shutdown_connections
2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3346e9b640 1 -- 192.168.123.100:0/4039806858 wait complete.
2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3346e9b640 1 Processor -- start 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3346e9b640 1 -- start start 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3346e9b640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f334010a470 0x7f3340112b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3346e9b640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f334011c780 0x7f3340113040 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3346e9b640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f33401bb9e0 0x7f33401bddd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3346e9b640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f33401211f0 con 0x7f33401bb9e0 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3346e9b640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f3340121070 con 0x7f334010a470 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3346e9b640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f3340121370 con 0x7f334011c780 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3344c10640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f334010a470 0x7f3340112b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3344c10640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f334010a470 0x7f3340112b00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.103:3300/0 says I am v2:192.168.123.100:55698/0 (socket says 192.168.123.100:55698) 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3344c10640 1 -- 192.168.123.100:0/548805746 learned_addr learned my addr 192.168.123.100:0/548805746 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3345411640 1 --2- 192.168.123.100:0/548805746 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f33401bb9e0 0x7f33401bddd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f333ffff640 1 --2- 192.168.123.100:0/548805746 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f334011c780 0x7f3340113040 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3344c10640 1 -- 192.168.123.100:0/548805746 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f334011c780 msgr2=0x7f3340113040 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3344c10640 1 --2- 192.168.123.100:0/548805746 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f334011c780 0x7f3340113040 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3344c10640 1 -- 192.168.123.100:0/548805746 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f33401bb9e0 msgr2=0x7f33401bddd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3344c10640 1 --2- 192.168.123.100:0/548805746 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f33401bb9e0 0x7f33401bddd0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3344c10640 1 -- 192.168.123.100:0/548805746 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f33401be4a0 con 0x7f334010a470 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3344c10640 1 --2- 192.168.123.100:0/548805746 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f334010a470 0x7f3340112b00 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f333400ea10 tx=0x7f333400eee0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f333dffb640 1 -- 192.168.123.100:0/548805746 <== mon.1 v2:192.168.123.103:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f333400ce50 con 0x7f334010a470 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3346e9b640 1 -- 192.168.123.100:0/548805746 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f33401be790 con 0x7f334010a470 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.470+0000 7f3346e9b640 1 -- 192.168.123.100:0/548805746 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f33401becd0 con 0x7f334010a470 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.474+0000 7f333dffb640 1 -- 192.168.123.100:0/548805746 <== mon.1 v2:192.168.123.103:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f3334004510 con 0x7f334010a470 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.474+0000 7f333dffb640 1 -- 192.168.123.100:0/548805746 <== mon.1 v2:192.168.123.103:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3334010690 con 0x7f334010a470 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.474+0000 7f3346e9b640 1 -- 192.168.123.100:0/548805746 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_get_version(what=osdmap handle=1) -- 0x7f334011d540 con 0x7f334010a470 2026-03-10T07:26:44.512 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.474+0000 7f333dffb640 1 -- 192.168.123.100:0/548805746 <== mon.1 v2:192.168.123.103:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f33340040d0 con 0x7f334010a470 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.474+0000 7f333dffb640 1 --2- 192.168.123.100:0/548805746 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f3318077970 0x7f3318079e30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.474+0000 7f333dffb640 1 -- 192.168.123.100:0/548805746 <== mon.1 v2:192.168.123.103:3300/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7f333409a5b0 con 0x7f334010a470 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.474+0000 7f333dffb640 1 --2- 192.168.123.100:0/548805746 >> [v2:192.168.123.103:6824/3078297940,v1:192.168.123.103:6825/3078297940] conn(0x7f3318081860 0x7f3318083cc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.474+0000 7f333dffb640 1 -- 192.168.123.100:0/548805746 --> [v2:192.168.123.103:6824/3078297940,v1:192.168.123.103:6825/3078297940] -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f333401fe50 con 0x7f3318081860 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.474+0000 7f333dffb640 1 -- 192.168.123.100:0/548805746 <== mon.1 v2:192.168.123.103:3300/0 6 ==== mon_get_version_reply(handle=1 version=68) ==== 24+0+0 (secure 0 0 0) 0x7f333409a9a0 con 0x7f334010a470 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.474+0000 7f333ffff640 1 --2- 192.168.123.100:0/548805746 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f3318077970 0x7f3318079e30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.474+0000 7f3345411640 1 --2- 192.168.123.100:0/548805746 >> [v2:192.168.123.103:6824/3078297940,v1:192.168.123.103:6825/3078297940] conn(0x7f3318081860 0x7f3318083cc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.478+0000 7f3345411640 1 --2- 192.168.123.100:0/548805746 >> [v2:192.168.123.103:6824/3078297940,v1:192.168.123.103:6825/3078297940] conn(0x7f3318081860 0x7f3318083cc0 crc :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.7 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.478+0000 7f333ffff640 1 --2- 192.168.123.100:0/548805746 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f3318077970 0x7f3318079e30 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7f33401140f0 tx=0x7f3338002750 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.478+0000 7f333dffb640 1 -- 192.168.123.100:0/548805746 <== osd.7 
v2:192.168.123.103:6824/3078297940 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7f333401fe50 con 0x7f3318081860 2026-03-10T07:26:44.512 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.486+0000 7f330f7fe640 1 -- 192.168.123.100:0/548805746 --> [v2:192.168.123.103:6824/3078297940,v1:192.168.123.103:6825/3078297940] -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f334011d750 con 0x7f3318081860 2026-03-10T07:26:44.513 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.490+0000 7f333dffb640 1 -- 192.168.123.100:0/548805746 <== osd.7 v2:192.168.123.103:6824/3078297940 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (crc 0 0 0) 0x7f334011d750 con 0x7f3318081860 2026-03-10T07:26:44.513 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.490+0000 7f330f7fe640 1 -- 192.168.123.100:0/548805746 >> [v2:192.168.123.103:6824/3078297940,v1:192.168.123.103:6825/3078297940] conn(0x7f3318081860 msgr2=0x7f3318083cc0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:44.513 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.490+0000 7f330f7fe640 1 --2- 192.168.123.100:0/548805746 >> [v2:192.168.123.103:6824/3078297940,v1:192.168.123.103:6825/3078297940] conn(0x7f3318081860 0x7f3318083cc0 crc :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.513 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.490+0000 7f330f7fe640 1 -- 192.168.123.100:0/548805746 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f3318077970 msgr2=0x7f3318079e30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:44.513 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.490+0000 7f330f7fe640 1 --2- 192.168.123.100:0/548805746 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f3318077970 0x7f3318079e30 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7f33401140f0 tx=0x7f3338002750 comp rx=0 tx=0).stop 2026-03-10T07:26:44.513 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.490+0000 7f330f7fe640 1 -- 192.168.123.100:0/548805746 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f334010a470 msgr2=0x7f3340112b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:44.513 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.490+0000 7f330f7fe640 1 --2- 192.168.123.100:0/548805746 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f334010a470 0x7f3340112b00 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f333400ea10 tx=0x7f333400eee0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.513 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.490+0000 7f3345411640 1 -- 192.168.123.100:0/548805746 reap_dead start 2026-03-10T07:26:44.513 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.490+0000 7f330f7fe640 1 -- 192.168.123.100:0/548805746 shutdown_connections 2026-03-10T07:26:44.513 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.490+0000 7f330f7fe640 1 -- 192.168.123.100:0/548805746 >> 192.168.123.100:0/548805746 conn(0x7f334006d9f0 msgr2=0x7f334011cb60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:44.513 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.490+0000 7f330f7fe640 1 -- 192.168.123.100:0/548805746 shutdown_connections 2026-03-10T07:26:44.513 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.490+0000 7f330f7fe640 1 -- 192.168.123.100:0/548805746 wait complete. 
2026-03-10T07:26:44.591 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de3327640 1 -- 192.168.123.100:0/2905534724 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ddc10b080 msgr2=0x7f6ddc074d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:44.591 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de3327640 1 --2- 192.168.123.100:0/2905534724 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ddc10b080 0x7f6ddc074d30 secure :-1 s=READY pgs=169 cs=0 l=1 rev1=1 crypto rx=0x7f6dd400b0a0 tx=0x7f6dd402f450 comp rx=0 tx=0).stop 2026-03-10T07:26:44.591 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de3327640 1 -- 192.168.123.100:0/2905534724 shutdown_connections 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de3327640 1 --2- 192.168.123.100:0/2905534724 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6ddc075470 0x7f6ddc07be20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de3327640 1 --2- 192.168.123.100:0/2905534724 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ddc10b080 0x7f6ddc074d30 unknown :-1 s=CLOSED pgs=169 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de3327640 1 --2- 192.168.123.100:0/2905534724 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6ddc10a6d0 0x7f6ddc10aab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de3327640 1 -- 192.168.123.100:0/2905534724 >> 192.168.123.100:0/2905534724 conn(0x7f6ddc06d9f0 msgr2=0x7f6ddc06de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de3327640 1 -- 192.168.123.100:0/2905534724 shutdown_connections 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de3327640 1 -- 192.168.123.100:0/2905534724 wait complete. 
2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de3327640 1 Processor -- start 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de3327640 1 -- start start 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de3327640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ddc075470 0x7f6ddc07fb50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de3327640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6ddc10a6d0 0x7f6ddc080090 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de3327640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6ddc084490 0x7f6ddc084940 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de3327640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f6ddc07e520 con 0x7f6ddc075470 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de3327640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f6ddc07e3a0 con 0x7f6ddc10a6d0 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de3327640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f6ddc07e6a0 con 0x7f6ddc084490 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de089b640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6ddc10a6d0 0x7f6ddc080090 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de089b640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6ddc10a6d0 0x7f6ddc080090 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.103:3300/0 says I am v2:192.168.123.100:55716/0 (socket says 192.168.123.100:55716) 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de089b640 1 -- 192.168.123.100:0/21605790 learned_addr learned my addr 192.168.123.100:0/21605790 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de089b640 1 -- 192.168.123.100:0/21605790 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6ddc084490 msgr2=0x7f6ddc084940 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de089b640 1 --2- 192.168.123.100:0/21605790 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6ddc084490 0x7f6ddc084940 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de089b640 1 -- 
192.168.123.100:0/21605790 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ddc075470 msgr2=0x7f6ddc07fb50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de089b640 1 --2- 192.168.123.100:0/21605790 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6ddc075470 0x7f6ddc07fb50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de089b640 1 -- 192.168.123.100:0/21605790 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6ddc085070 con 0x7f6ddc10a6d0 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de089b640 1 --2- 192.168.123.100:0/21605790 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6ddc10a6d0 0x7f6ddc080090 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f6dd4009300 tx=0x7f6dd4009330 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6dd27fc640 1 -- 192.168.123.100:0/21605790 <== mon.1 v2:192.168.123.103:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f6dd40075c0 con 0x7f6ddc10a6d0 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de3327640 1 -- 192.168.123.100:0/21605790 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f6ddc137d90 con 0x7f6ddc10a6d0 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.582+0000 7f6de3327640 1 -- 192.168.123.100:0/21605790 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f6ddc138320 con 0x7f6ddc10a6d0 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.586+0000 7f6dd27fc640 1 -- 192.168.123.100:0/21605790 <== mon.1 v2:192.168.123.103:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f6dd4033070 con 0x7f6ddc10a6d0 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.586+0000 7f6dd27fc640 1 -- 192.168.123.100:0/21605790 <== mon.1 v2:192.168.123.103:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f6dd4038420 con 0x7f6ddc10a6d0 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.586+0000 7f6de3327640 1 -- 192.168.123.100:0/21605790 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_get_version(what=osdmap handle=1) -- 0x7f6ddc10ee40 con 0x7f6ddc10a6d0 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.586+0000 7f6dd27fc640 1 -- 192.168.123.100:0/21605790 <== mon.1 v2:192.168.123.103:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f6dd4049020 con 0x7f6ddc10a6d0 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.586+0000 7f6dd27fc640 1 --2- 192.168.123.100:0/21605790 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f6dc0077700 0x7f6dc0079bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.586+0000 7f6de109c640 1 --2- 192.168.123.100:0/21605790 >> 
[v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f6dc0077700 0x7f6dc0079bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.586+0000 7f6dd27fc640 1 -- 192.168.123.100:0/21605790 <== mon.1 v2:192.168.123.103:3300/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7f6dd40be980 con 0x7f6ddc10a6d0 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.586+0000 7f6dd27fc640 1 --2- 192.168.123.100:0/21605790 >> [v2:192.168.123.103:6808/3238215945,v1:192.168.123.103:6809/3238215945] conn(0x7f6dc00815f0 0x7f6dc0083a50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.586+0000 7f6dd27fc640 1 -- 192.168.123.100:0/21605790 --> [v2:192.168.123.103:6808/3238215945,v1:192.168.123.103:6809/3238215945] -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f6dd403bec0 con 0x7f6dc00815f0 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.586+0000 7f6dd27fc640 1 -- 192.168.123.100:0/21605790 <== mon.1 v2:192.168.123.103:3300/0 6 ==== mon_get_version_reply(handle=1 version=68) ==== 24+0+0 (secure 0 0 0) 0x7f6dd40bed70 con 0x7f6ddc10a6d0 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.586+0000 7f6de189d640 1 --2- 192.168.123.100:0/21605790 >> [v2:192.168.123.103:6808/3238215945,v1:192.168.123.103:6809/3238215945] conn(0x7f6dc00815f0 0x7f6dc0083a50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.586+0000 7f6de189d640 1 --2- 192.168.123.100:0/21605790 >> [v2:192.168.123.103:6808/3238215945,v1:192.168.123.103:6809/3238215945] conn(0x7f6dc00815f0 0x7f6dc0083a50 crc :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.5 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:44.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.586+0000 7f6dd27fc640 1 -- 192.168.123.100:0/21605790 <== osd.5 v2:192.168.123.103:6808/3238215945 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7f6dd403bec0 con 0x7f6dc00815f0 2026-03-10T07:26:44.597 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.590+0000 7f6de109c640 1 --2- 192.168.123.100:0/21605790 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f6dc0077700 0x7f6dc0079bc0 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7f6dd800ba60 tx=0x7f6dd8005f50 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:44.604 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.594+0000 7f6da7fff640 1 -- 192.168.123.100:0/21605790 --> [v2:192.168.123.103:6808/3238215945,v1:192.168.123.103:6809/3238215945] -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f6ddc10f050 con 0x7f6dc00815f0 2026-03-10T07:26:44.612 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.598+0000 7f6dd27fc640 1 -- 192.168.123.100:0/21605790 <== osd.5 v2:192.168.123.103:6808/3238215945 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (crc 0 0 0) 0x7f6ddc10f050 con 0x7f6dc00815f0 2026-03-10T07:26:44.612 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.598+0000 7f6da7fff640 1 -- 192.168.123.100:0/21605790 >> [v2:192.168.123.103:6808/3238215945,v1:192.168.123.103:6809/3238215945] conn(0x7f6dc00815f0 msgr2=0x7f6dc0083a50 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:44.612 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.598+0000 7f6da7fff640 1 --2- 192.168.123.100:0/21605790 >> [v2:192.168.123.103:6808/3238215945,v1:192.168.123.103:6809/3238215945] conn(0x7f6dc00815f0 0x7f6dc0083a50 crc :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.612 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.598+0000 7f6da7fff640 1 -- 192.168.123.100:0/21605790 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f6dc0077700 msgr2=0x7f6dc0079bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:44.612 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.598+0000 7f6da7fff640 1 --2- 192.168.123.100:0/21605790 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f6dc0077700 0x7f6dc0079bc0 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7f6dd800ba60 tx=0x7f6dd8005f50 comp rx=0 tx=0).stop 2026-03-10T07:26:44.612 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.598+0000 7f6da7fff640 1 -- 192.168.123.100:0/21605790 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6ddc10a6d0 msgr2=0x7f6ddc080090 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:44.612 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.598+0000 7f6da7fff640 1 --2- 192.168.123.100:0/21605790 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6ddc10a6d0 0x7f6ddc080090 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f6dd4009300 tx=0x7f6dd4009330 comp rx=0 tx=0).stop 2026-03-10T07:26:44.612 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.598+0000 7f6de189d640 1 -- 192.168.123.100:0/21605790 reap_dead start 2026-03-10T07:26:44.612 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.598+0000 7f6da7fff640 1 -- 192.168.123.100:0/21605790 shutdown_connections 2026-03-10T07:26:44.612 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.598+0000 7f6da7fff640 1 -- 192.168.123.100:0/21605790 >> 192.168.123.100:0/21605790 conn(0x7f6ddc06d9f0 msgr2=0x7f6ddc10d490 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:44.612 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.598+0000 7f6da7fff640 1 -- 192.168.123.100:0/21605790 shutdown_connections 2026-03-10T07:26:44.612 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.598+0000 7f6da7fff640 1 -- 192.168.123.100:0/21605790 wait complete. 
2026-03-10T07:26:44.647 INFO:teuthology.orchestra.run.vm00.stdout:55834574907 2026-03-10T07:26:44.647 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph osd last-stat-seq osd.1 2026-03-10T07:26:44.678 INFO:teuthology.orchestra.run.vm00.stdout:77309411380 2026-03-10T07:26:44.678 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph osd last-stat-seq osd.2 2026-03-10T07:26:44.694 INFO:teuthology.orchestra.run.vm00.stdout:111669149741 2026-03-10T07:26:44.694 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph osd last-stat-seq osd.3 2026-03-10T07:26:44.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.714+0000 7fdad142c640 1 -- 192.168.123.100:0/2611428196 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fdacc10a470 msgr2=0x7fdacc1114d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:44.716 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.714+0000 7fdad142c640 1 --2- 192.168.123.100:0/2611428196 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fdacc10a470 0x7fdacc1114d0 secure :-1 s=READY pgs=170 cs=0 l=1 rev1=1 crypto rx=0x7fdabc01c080 tx=0x7fdabc040410 comp rx=0 tx=0).stop 2026-03-10T07:26:44.726 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.714+0000 7fdad142c640 1 -- 192.168.123.100:0/2611428196 shutdown_connections 2026-03-10T07:26:44.726 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.714+0000 7fdad142c640 1 --2- 192.168.123.100:0/2611428196 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fdacc11c780 0x7fdacc11eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.726 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.714+0000 7fdad142c640 1 --2- 192.168.123.100:0/2611428196 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fdacc10a850 0x7fdacc10acd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.726 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.714+0000 7fdad142c640 1 --2- 192.168.123.100:0/2611428196 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fdacc10a470 0x7fdacc1114d0 unknown :-1 s=CLOSED pgs=170 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.726 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.714+0000 7fdad142c640 1 -- 192.168.123.100:0/2611428196 >> 192.168.123.100:0/2611428196 conn(0x7fdacc06d9f0 msgr2=0x7fdacc06de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:44.726 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.714+0000 7fdad142c640 1 -- 192.168.123.100:0/2611428196 shutdown_connections 2026-03-10T07:26:44.726 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.714+0000 7fdad142c640 1 -- 192.168.123.100:0/2611428196 wait complete. 
2026-03-10T07:26:44.726 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.714+0000 7fdad142c640 1 Processor -- start 2026-03-10T07:26:44.726 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.718+0000 7fdad142c640 1 -- start start 2026-03-10T07:26:44.726 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.718+0000 7fdad142c640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fdacc10a470 0x7fdacc11bfa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:44.727 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.718+0000 7fdad142c640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fdacc10a850 0x7fdacc117030 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:44.727 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.718+0000 7fdad142c640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fdacc11c780 0x7fdacc117570 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:44.727 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.718+0000 7fdad142c640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fdacc1139c0 con 0x7fdacc10a850 2026-03-10T07:26:44.727 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.718+0000 7fdad142c640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7fdacc113840 con 0x7fdacc11c780 2026-03-10T07:26:44.727 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.718+0000 7fdad142c640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fdacc113b40 con 0x7fdacc10a470 2026-03-10T07:26:44.727 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.718+0000 7fdaca7fc640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fdacc10a850 0x7fdacc117030 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:44.727 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.718+0000 7fdaca7fc640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fdacc10a850 0x7fdacc117030 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:59650/0 (socket says 192.168.123.100:59650) 2026-03-10T07:26:44.727 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.718+0000 7fdaca7fc640 1 -- 192.168.123.100:0/4067639946 learned_addr learned my addr 192.168.123.100:0/4067639946 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:26:44.727 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.722+0000 7fdaca7fc640 1 -- 192.168.123.100:0/4067639946 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fdacc10a470 msgr2=0x7fdacc11bfa0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T07:26:44.727 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.722+0000 7fdaca7fc640 1 --2- 192.168.123.100:0/4067639946 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fdacc10a470 0x7fdacc11bfa0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.727 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.722+0000 7fdaca7fc640 1 -- 
192.168.123.100:0/4067639946 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fdacc11c780 msgr2=0x7fdacc117570 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T07:26:44.727 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.722+0000 7fdaca7fc640 1 --2- 192.168.123.100:0/4067639946 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fdacc11c780 0x7fdacc117570 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.727 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.722+0000 7fdaca7fc640 1 -- 192.168.123.100:0/4067639946 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fdacc117de0 con 0x7fdacc10a850 2026-03-10T07:26:44.727 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.722+0000 7fdaca7fc640 1 --2- 192.168.123.100:0/4067639946 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fdacc10a850 0x7fdacc117030 secure :-1 s=READY pgs=171 cs=0 l=1 rev1=1 crypto rx=0x7fdac400e3f0 tx=0x7fdac400e8c0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:44.727 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.722+0000 7fdaabfff640 1 -- 192.168.123.100:0/4067639946 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fdac4002c60 con 0x7fdacc10a850 2026-03-10T07:26:44.727 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.722+0000 7fdad142c640 1 -- 192.168.123.100:0/4067639946 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fdacc07b330 con 0x7fdacc10a850 2026-03-10T07:26:44.727 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.722+0000 7fdad142c640 1 -- 192.168.123.100:0/4067639946 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fdacc07b840 con 0x7fdacc10a850 2026-03-10T07:26:44.727 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.722+0000 7fdaabfff640 1 -- 192.168.123.100:0/4067639946 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fdac4018070 con 0x7fdacc10a850 2026-03-10T07:26:44.727 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.722+0000 7fdaabfff640 1 -- 192.168.123.100:0/4067639946 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fdac4013700 con 0x7fdacc10a850 2026-03-10T07:26:44.727 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.722+0000 7fdaabfff640 1 -- 192.168.123.100:0/4067639946 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7fdac40138e0 con 0x7fdacc10a850 2026-03-10T07:26:44.727 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.722+0000 7fdaabfff640 1 --2- 192.168.123.100:0/4067639946 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fdaa4077800 0x7fdaa4079cc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:44.742 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.738+0000 7fdacaffd640 1 --2- 192.168.123.100:0/4067639946 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fdaa4077800 0x7fdaa4079cc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:44.742 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.738+0000 7fdacaffd640 1 --2- 192.168.123.100:0/4067639946 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fdaa4077800 0x7fdaa4079cc0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fdabc0027c0 tx=0x7fdabc0023d0 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:44.742 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.738+0000 7fdaabfff640 1 -- 192.168.123.100:0/4067639946 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7fdac409af50 con 0x7fdacc10a850 2026-03-10T07:26:44.742 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.738+0000 7fdad142c640 1 --2- 192.168.123.100:0/4067639946 >> [v2:192.168.123.100:6802/944390886,v1:192.168.123.100:6803/944390886] conn(0x7fda98001650 0x7fda98003b10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:44.742 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.738+0000 7fdad142c640 1 -- 192.168.123.100:0/4067639946 --> [v2:192.168.123.100:6802/944390886,v1:192.168.123.100:6803/944390886] -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7fda98006c00 con 0x7fda98001650 2026-03-10T07:26:44.750 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.746+0000 7fdacb7fe640 1 --2- 192.168.123.100:0/4067639946 >> [v2:192.168.123.100:6802/944390886,v1:192.168.123.100:6803/944390886] conn(0x7fda98001650 0x7fda98003b10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:44.761 INFO:teuthology.orchestra.run.vm00.stdout:188978561050 2026-03-10T07:26:44.762 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph osd last-stat-seq osd.6 2026-03-10T07:26:44.763 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.746+0000 7fdacb7fe640 1 --2- 192.168.123.100:0/4067639946 >> [v2:192.168.123.100:6802/944390886,v1:192.168.123.100:6803/944390886] conn(0x7fda98001650 0x7fda98003b10 crc :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:44.763 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.746+0000 7fdaabfff640 1 -- 192.168.123.100:0/4067639946 <== osd.0 v2:192.168.123.100:6802/944390886 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7fda98006c00 con 0x7fda98001650 2026-03-10T07:26:44.784 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.782+0000 7fdad142c640 1 -- 192.168.123.100:0/4067639946 --> [v2:192.168.123.100:6802/944390886,v1:192.168.123.100:6803/944390886] -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7fda98005ce0 con 0x7fda98001650 2026-03-10T07:26:44.784 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.782+0000 7fdaabfff640 1 -- 192.168.123.100:0/4067639946 <== osd.0 v2:192.168.123.100:6802/944390886 2 ==== command_reply(tid 2: 0 ) ==== 8+0+11 (crc 0 0 0) 0x7fda98005ce0 con 0x7fda98001650 2026-03-10T07:26:44.788 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.786+0000 7fdad142c640 1 -- 192.168.123.100:0/4067639946 >> [v2:192.168.123.100:6802/944390886,v1:192.168.123.100:6803/944390886] conn(0x7fda98001650 msgr2=0x7fda98003b10 crc :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:44.788 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.786+0000 7fdad142c640 1 --2- 192.168.123.100:0/4067639946 >> [v2:192.168.123.100:6802/944390886,v1:192.168.123.100:6803/944390886] conn(0x7fda98001650 0x7fda98003b10 crc :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.786+0000 7fdad142c640 1 -- 192.168.123.100:0/4067639946 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fdaa4077800 msgr2=0x7fdaa4079cc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:44.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.786+0000 7fdad142c640 1 --2- 192.168.123.100:0/4067639946 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fdaa4077800 0x7fdaa4079cc0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fdabc0027c0 tx=0x7fdabc0023d0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.790+0000 7fdad142c640 1 -- 192.168.123.100:0/4067639946 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fdacc10a850 msgr2=0x7fdacc117030 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:44.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.790+0000 7fdad142c640 1 --2- 192.168.123.100:0/4067639946 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fdacc10a850 0x7fdacc117030 secure :-1 s=READY pgs=171 cs=0 l=1 rev1=1 crypto rx=0x7fdac400e3f0 tx=0x7fdac400e8c0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.790+0000 7fdacb7fe640 1 -- 192.168.123.100:0/4067639946 reap_dead start 2026-03-10T07:26:44.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.790+0000 7fdad142c640 1 -- 192.168.123.100:0/4067639946 shutdown_connections 2026-03-10T07:26:44.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.790+0000 7fdad142c640 1 -- 192.168.123.100:0/4067639946 >> 192.168.123.100:0/4067639946 conn(0x7fdacc06d9f0 msgr2=0x7fdacc10b460 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:44.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.790+0000 7fdad142c640 1 -- 192.168.123.100:0/4067639946 shutdown_connections 2026-03-10T07:26:44.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.790+0000 7fdad142c640 1 -- 192.168.123.100:0/4067639946 wait complete. 
2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.822+0000 7fc1c0020640 1 -- 192.168.123.100:0/2349927765 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc1b810b080 msgr2=0x7fc1b8074d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.822+0000 7fc1c0020640 1 --2- 192.168.123.100:0/2349927765 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc1b810b080 0x7fc1b8074d30 secure :-1 s=READY pgs=172 cs=0 l=1 rev1=1 crypto rx=0x7fc1b000b600 tx=0x7fc1b00305a0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.834+0000 7fc1c0020640 1 -- 192.168.123.100:0/2349927765 shutdown_connections 2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.834+0000 7fc1c0020640 1 --2- 192.168.123.100:0/2349927765 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fc1b8075470 0x7fc1b807be20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.834+0000 7fc1c0020640 1 --2- 192.168.123.100:0/2349927765 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc1b810b080 0x7fc1b8074d30 unknown :-1 s=CLOSED pgs=172 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.834+0000 7fc1c0020640 1 --2- 192.168.123.100:0/2349927765 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fc1b810a6d0 0x7fc1b810aab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.834+0000 7fc1c0020640 1 -- 192.168.123.100:0/2349927765 >> 192.168.123.100:0/2349927765 conn(0x7fc1b806d9f0 msgr2=0x7fc1b806de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.834+0000 7fc1c0020640 1 -- 192.168.123.100:0/2349927765 shutdown_connections 2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.834+0000 7fc1c0020640 1 -- 192.168.123.100:0/2349927765 wait complete. 
2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.834+0000 7fc1c0020640 1 Processor -- start 2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.834+0000 7fc1c0020640 1 -- start start 2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.834+0000 7fc1c0020640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fc1b8075470 0x7fc1b80859e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.834+0000 7fc1c0020640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc1b810a6d0 0x7fc1b8085f20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.834+0000 7fc1c0020640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fc1b807fab0 0x7fc1b807ff60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.834+0000 7fc1c0020640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fc1b807e270 con 0x7fc1b810a6d0 2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.834+0000 7fc1c0020640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7fc1b807e0f0 con 0x7fc1b807fab0 2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.834+0000 7fc1c0020640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fc1b807e3f0 con 0x7fc1b8075470 2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.834+0000 7fc1be596640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fc1b807fab0 0x7fc1b807ff60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.834+0000 7fc1be596640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fc1b807fab0 0x7fc1b807ff60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.103:3300/0 says I am v2:192.168.123.100:55736/0 (socket says 192.168.123.100:55736) 2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.834+0000 7fc1be596640 1 -- 192.168.123.100:0/3117772818 learned_addr learned my addr 192.168.123.100:0/3117772818 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.838+0000 7fc1bd594640 1 --2- 192.168.123.100:0/3117772818 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc1b810a6d0 0x7fc1b8085f20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.838+0000 7fc1be596640 1 -- 192.168.123.100:0/3117772818 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fc1b8075470 msgr2=0x7fc1b80859e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:44.839 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.838+0000 7fc1be596640 1 --2- 192.168.123.100:0/3117772818 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fc1b8075470 0x7fc1b80859e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.838+0000 7fc1be596640 1 -- 192.168.123.100:0/3117772818 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc1b810a6d0 msgr2=0x7fc1b8085f20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.838+0000 7fc1be596640 1 --2- 192.168.123.100:0/3117772818 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc1b810a6d0 0x7fc1b8085f20 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.838+0000 7fc1be596640 1 -- 192.168.123.100:0/3117772818 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc1b8080820 con 0x7fc1b807fab0
2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.838+0000 7fc1be596640 1 --2- 192.168.123.100:0/3117772818 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fc1b807fab0 0x7fc1b807ff60 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7fc1a800c970 tx=0x7fc1a800ce40 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:44.839 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.838+0000 7fc1aeffd640 1 -- 192.168.123.100:0/3117772818 <== mon.1 v2:192.168.123.103:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fc1a8007bf0 con 0x7fc1b807fab0
2026-03-10T07:26:44.840 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.838+0000 7fc1c0020640 1 -- 192.168.123.100:0/3117772818 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fc1b8137d00 con 0x7fc1b807fab0
2026-03-10T07:26:44.840 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.838+0000 7fc1c0020640 1 -- 192.168.123.100:0/3117772818 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fc1b8138290 con 0x7fc1b807fab0
2026-03-10T07:26:44.840 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.838+0000 7fc1aeffd640 1 -- 192.168.123.100:0/3117772818 <== mon.1 v2:192.168.123.103:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fc1a8007d90 con 0x7fc1b807fab0
2026-03-10T07:26:44.840 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.838+0000 7fc1aeffd640 1 -- 192.168.123.100:0/3117772818 <== mon.1 v2:192.168.123.103:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fc1a8005730 con 0x7fc1b807fab0
2026-03-10T07:26:44.840 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.838+0000 7fc1acff9640 1 -- 192.168.123.100:0/3117772818 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_get_version(what=osdmap handle=1) -- 0x7fc188000f80 con 0x7fc1b807fab0
2026-03-10T07:26:44.843 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.842+0000 7fc1aeffd640 1 -- 192.168.123.100:0/3117772818 <== mon.1 v2:192.168.123.103:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7fc1a8020020 con 0x7fc1b807fab0
2026-03-10T07:26:44.843 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.842+0000 7fc1aeffd640 1 --2- 192.168.123.100:0/3117772818 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fc19c077700 0x7fc19c079bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:44.844 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.842+0000 7fc1bdd95640 1 --2- 192.168.123.100:0/3117772818 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fc19c077700 0x7fc19c079bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:44.844 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.842+0000 7fc1aeffd640 1 -- 192.168.123.100:0/3117772818 <== mon.1 v2:192.168.123.103:3300/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7fc1a8099ce0 con 0x7fc1b807fab0
2026-03-10T07:26:44.850 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.842+0000 7fc1bdd95640 1 --2- 192.168.123.100:0/3117772818 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fc19c077700 0x7fc19c079bc0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fc1b40098a0 tx=0x7fc1b4006d90 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:44.850 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.842+0000 7fc1aeffd640 1 --2- 192.168.123.100:0/3117772818 >> [v2:192.168.123.103:6800/2627693272,v1:192.168.123.103:6801/2627693272] conn(0x7fc19c0815f0 0x7fc19c083a50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:44.850 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.842+0000 7fc1aeffd640 1 -- 192.168.123.100:0/3117772818 --> [v2:192.168.123.103:6800/2627693272,v1:192.168.123.103:6801/2627693272] -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7fc1a8012ec0 con 0x7fc19c0815f0
2026-03-10T07:26:44.850 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.842+0000 7fc1aeffd640 1 -- 192.168.123.100:0/3117772818 <== mon.1 v2:192.168.123.103:3300/0 6 ==== mon_get_version_reply(handle=1 version=68) ==== 24+0+0 (secure 0 0 0) 0x7fc1a809a0d0 con 0x7fc1b807fab0
2026-03-10T07:26:44.851 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.850+0000 7fc1bd594640 1 --2- 192.168.123.100:0/3117772818 >> [v2:192.168.123.103:6800/2627693272,v1:192.168.123.103:6801/2627693272] conn(0x7fc19c0815f0 0x7fc19c083a50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:44.852 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.850+0000 7fc1bd594640 1 --2- 192.168.123.100:0/3117772818 >> [v2:192.168.123.103:6800/2627693272,v1:192.168.123.103:6801/2627693272] conn(0x7fc19c0815f0 0x7fc19c083a50 crc :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.4 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:44.854 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.850+0000 7fc1aeffd640 1 -- 192.168.123.100:0/3117772818 <== osd.4 v2:192.168.123.103:6800/2627693272 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7fc1a8012ec0 con 0x7fc19c0815f0
2026-03-10T07:26:44.872 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.870+0000 7fc1acff9640 1 -- 192.168.123.100:0/3117772818 --> [v2:192.168.123.103:6800/2627693272,v1:192.168.123.103:6801/2627693272] -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7fc188002cf0 con 0x7fc19c0815f0
2026-03-10T07:26:44.877 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.874+0000 7fc1aeffd640 1 -- 192.168.123.100:0/3117772818 <== osd.4 v2:192.168.123.103:6800/2627693272 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (crc 0 0 0) 0x7fc188002cf0 con 0x7fc19c0815f0
2026-03-10T07:26:44.877 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.874+0000 7fc1c0020640 1 -- 192.168.123.100:0/3117772818 >> [v2:192.168.123.103:6800/2627693272,v1:192.168.123.103:6801/2627693272] conn(0x7fc19c0815f0 msgr2=0x7fc19c083a50 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.877 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.874+0000 7fc1c0020640 1 --2- 192.168.123.100:0/3117772818 >> [v2:192.168.123.103:6800/2627693272,v1:192.168.123.103:6801/2627693272] conn(0x7fc19c0815f0 0x7fc19c083a50 crc :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:44.877 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.874+0000 7fc1c0020640 1 -- 192.168.123.100:0/3117772818 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fc19c077700 msgr2=0x7fc19c079bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.877 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.874+0000 7fc1c0020640 1 --2- 192.168.123.100:0/3117772818 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fc19c077700 0x7fc19c079bc0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fc1b40098a0 tx=0x7fc1b4006d90 comp rx=0 tx=0).stop
2026-03-10T07:26:44.878 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.874+0000 7fc1c0020640 1 -- 192.168.123.100:0/3117772818 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fc1b807fab0 msgr2=0x7fc1b807ff60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:44.878 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.874+0000 7fc1c0020640 1 --2- 192.168.123.100:0/3117772818 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fc1b807fab0 0x7fc1b807ff60 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7fc1a800c970 tx=0x7fc1a800ce40 comp rx=0 tx=0).stop
2026-03-10T07:26:44.878 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.874+0000 7fc1be596640 1 -- 192.168.123.100:0/3117772818 reap_dead start
2026-03-10T07:26:44.878 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.874+0000 7fc1c0020640 1 -- 192.168.123.100:0/3117772818 shutdown_connections
2026-03-10T07:26:44.878 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.874+0000 7fc1c0020640 1 -- 192.168.123.100:0/3117772818 >> 192.168.123.100:0/3117772818 conn(0x7fc1b806d9f0 msgr2=0x7fc1b810d490 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:44.878 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.874+0000 7fc1c0020640 1 -- 192.168.123.100:0/3117772818 shutdown_connections
2026-03-10T07:26:44.878 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:44.874+0000 7fc1c0020640 1 -- 192.168.123.100:0/3117772818 wait complete.
2026-03-10T07:26:45.007 INFO:teuthology.orchestra.run.vm00.stdout:219043332114
2026-03-10T07:26:45.007 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph osd last-stat-seq osd.7
2026-03-10T07:26:45.023 INFO:teuthology.orchestra.run.vm00.stdout:163208757281
2026-03-10T07:26:45.023 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph osd last-stat-seq osd.5
2026-03-10T07:26:45.048 INFO:teuthology.orchestra.run.vm00.stdout:34359738435
2026-03-10T07:26:45.048 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph osd last-stat-seq osd.0
2026-03-10T07:26:45.080 INFO:teuthology.orchestra.run.vm00.stdout:133143986216
2026-03-10T07:26:45.080 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph osd last-stat-seq osd.4
2026-03-10T07:26:45.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:45 vm00 bash[20701]: audit 2026-03-10T07:26:44.256335+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:45.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:45 vm00 bash[20701]: audit 2026-03-10T07:26:44.256335+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:45.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:45 vm00 bash[20701]: audit 2026-03-10T07:26:44.277037+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:45.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:45 vm00 bash[20701]: audit 2026-03-10T07:26:44.277037+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:45.258 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:45 vm00 bash[28005]: audit 2026-03-10T07:26:44.256335+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:45.258 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:45 vm00 bash[28005]: audit 2026-03-10T07:26:44.256335+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:45.258 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:45 vm00 bash[28005]: audit 2026-03-10T07:26:44.277037+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:45.258 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:45 vm00 bash[28005]: audit 2026-03-10T07:26:44.277037+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:45.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:45 vm03 bash[23382]: audit 2026-03-10T07:26:44.256335+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:45.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:45 vm03 bash[23382]: audit 2026-03-10T07:26:44.256335+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:45.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:45 vm03 bash[23382]: audit 2026-03-10T07:26:44.277037+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:45.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:45 vm03 bash[23382]: audit 2026-03-10T07:26:44.277037+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24407 ' entity='mgr.y'
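[editor's note] The exchange above is teuthology's PG-stat flush cycle: the client sends {"prefix": "flush_pg_stats"} to osd.4 over the command channel, then shells into the cluster with cephadm and reads `ceph osd last-stat-seq osd.N` (the large integers on stdout) for each OSD until the reported sequence catches up with the flush. A minimal sketch of that wait loop in Python, assuming the same image and fsid as this run; the helper names are illustrative, not teuthology's actual API:

import subprocess
import time

IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
FSID = "534d9c8a-1c51-11f1-ac87-d1fb9a119953"

def ceph(*args):
    # Run a ceph command inside `cephadm shell`, exactly as the DEBUG lines above do.
    cmd = ["sudo", "cephadm", "--image", IMAGE, "shell",
           "--fsid", FSID, "--", "ceph", *args]
    return subprocess.check_output(cmd, text=True).strip()

def wait_for_flush(osd_id, timeout=60):
    # Tell one OSD to flush its PG stats (prints a sequence number),
    # then poll last-stat-seq until the mon has seen at least that sequence.
    seq = int(ceph("tell", f"osd.{osd_id}", "flush_pg_stats"))
    deadline = time.time() + timeout
    while int(ceph("osd", "last-stat-seq", f"osd.{osd_id}")) < seq:
        if time.time() > deadline:
            raise TimeoutError(f"osd.{osd_id} stats not flushed after {timeout}s")
        time.sleep(1)

for osd_id in (7, 5, 0, 4):  # the OSDs polled in this excerpt
    wait_for_flush(osd_id)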
2026-03-10T07:26:46.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:46 vm03 bash[23382]: cluster 2026-03-10T07:26:44.557028+0000 mgr.y (mgr.24407) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:26:46.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:46 vm03 bash[23382]: cluster 2026-03-10T07:26:44.557028+0000 mgr.y (mgr.24407) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:26:46.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:46 vm03 bash[23382]: audit 2026-03-10T07:26:45.366369+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:46.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:46 vm03 bash[23382]: audit 2026-03-10T07:26:45.366369+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:46.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:46 vm03 bash[23382]: audit 2026-03-10T07:26:45.377295+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:46.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:46 vm03 bash[23382]: audit 2026-03-10T07:26:45.377295+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:46.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:46 vm03 bash[23382]: audit 2026-03-10T07:26:45.378693+0000 mon.c (mon.2) 66 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:26:46.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:46 vm03 bash[23382]: audit 2026-03-10T07:26:45.378693+0000 mon.c (mon.2) 66 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:26:46.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:46 vm03 bash[23382]: audit 2026-03-10T07:26:45.382545+0000 mon.c (mon.2) 67 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:26:46.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:46 vm03 bash[23382]: audit 2026-03-10T07:26:45.382545+0000 mon.c (mon.2) 67 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:26:46.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:46 vm03 bash[23382]: audit 2026-03-10T07:26:45.388722+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:46.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:46 vm03 bash[23382]: audit 2026-03-10T07:26:45.388722+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:46 vm00 bash[28005]: cluster 2026-03-10T07:26:44.557028+0000 mgr.y (mgr.24407) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:46 vm00 bash[28005]: cluster 2026-03-10T07:26:44.557028+0000 mgr.y (mgr.24407) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:46 vm00 bash[28005]: audit 2026-03-10T07:26:45.366369+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:46 vm00 bash[28005]: audit 2026-03-10T07:26:45.366369+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:46 vm00 bash[28005]: audit 2026-03-10T07:26:45.377295+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:46 vm00 bash[28005]: audit 2026-03-10T07:26:45.377295+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:46 vm00 bash[28005]: audit 2026-03-10T07:26:45.378693+0000 mon.c (mon.2) 66 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:46 vm00 bash[28005]: audit 2026-03-10T07:26:45.378693+0000 mon.c (mon.2) 66 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:46 vm00 bash[28005]: audit 2026-03-10T07:26:45.382545+0000 mon.c (mon.2) 67 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:46 vm00 bash[28005]: audit 2026-03-10T07:26:45.382545+0000 mon.c (mon.2) 67 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:46 vm00 bash[28005]: audit 2026-03-10T07:26:45.388722+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:46 vm00 bash[28005]: audit 2026-03-10T07:26:45.388722+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:46 vm00 bash[20701]: cluster 2026-03-10T07:26:44.557028+0000 mgr.y (mgr.24407) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:46 vm00 bash[20701]: cluster 2026-03-10T07:26:44.557028+0000 mgr.y (mgr.24407) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:46 vm00 bash[20701]: audit 2026-03-10T07:26:45.366369+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:46 vm00 bash[20701]: audit 2026-03-10T07:26:45.366369+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:46 vm00 bash[20701]: audit 2026-03-10T07:26:45.377295+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:46 vm00 bash[20701]: audit 2026-03-10T07:26:45.377295+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:46 vm00 bash[20701]: audit 2026-03-10T07:26:45.378693+0000 mon.c (mon.2) 66 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:46 vm00 bash[20701]: audit 2026-03-10T07:26:45.378693+0000 mon.c (mon.2) 66 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:46 vm00 bash[20701]: audit 2026-03-10T07:26:45.382545+0000 mon.c (mon.2) 67 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:46 vm00 bash[20701]: audit 2026-03-10T07:26:45.382545+0000 mon.c (mon.2) 67 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:46 vm00 bash[20701]: audit 2026-03-10T07:26:45.388722+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:46.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:46 vm00 bash[20701]: audit 2026-03-10T07:26:45.388722+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:26:47.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:47 vm00 bash[28005]: cluster 2026-03-10T07:26:46.557509+0000 mgr.y (mgr.24407) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:26:47.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:47 vm00 bash[28005]: cluster 2026-03-10T07:26:46.557509+0000 mgr.y (mgr.24407) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:26:47.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:47 vm00 bash[20701]: cluster 2026-03-10T07:26:46.557509+0000 mgr.y (mgr.24407) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:26:47.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:47 vm00 bash[20701]: cluster 2026-03-10T07:26:46.557509+0000 mgr.y (mgr.24407) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:26:47.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:47 vm03 bash[23382]: cluster 2026-03-10T07:26:46.557509+0000 mgr.y (mgr.24407) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:26:47.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:47 vm03 bash[23382]: cluster 2026-03-10T07:26:46.557509+0000 mgr.y (mgr.24407) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:26:48.383 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 07:26:47 vm00 bash[56723]: ts=2026-03-10T07:26:47.945Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.00273467s
2026-03-10T07:26:49.765 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config
2026-03-10T07:26:49.766 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config
2026-03-10T07:26:49.769 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config
2026-03-10T07:26:49.771 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config
2026-03-10T07:26:49.771 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config
2026-03-10T07:26:49.771 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config
2026-03-10T07:26:49.772 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config
2026-03-10T07:26:49.775 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config
2026-03-10T07:26:49.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:49 vm00 bash[28005]: cluster 2026-03-10T07:26:48.557776+0000 mgr.y (mgr.24407) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:26:49.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:49 vm00 bash[28005]: cluster 2026-03-10T07:26:48.557776+0000 mgr.y (mgr.24407) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:26:49.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:49 vm00 bash[20701]: cluster 2026-03-10T07:26:48.557776+0000 mgr.y (mgr.24407) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:26:49.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:49 vm00 bash[20701]: cluster 2026-03-10T07:26:48.557776+0000 mgr.y (mgr.24407) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:26:50.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:49 vm03 bash[23382]: cluster 2026-03-10T07:26:48.557776+0000 mgr.y (mgr.24407) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:26:50.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:49 vm03 bash[23382]: cluster 2026-03-10T07:26:48.557776+0000 mgr.y (mgr.24407) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:26:50.083 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.078+0000 7fe332e23640 1 -- 192.168.123.100:0/2839902338 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe32c10a6d0 msgr2=0x7fe32c10aab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:50.083 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.078+0000 7fe332e23640 1 --2- 192.168.123.100:0/2839902338 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe32c10a6d0 0x7fe32c10aab0 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7fe31c009a30 tx=0x7fe31c02f030 comp rx=0 tx=0).stop
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.086+0000 7fe332e23640 1 -- 192.168.123.100:0/2839902338 shutdown_connections
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.086+0000 7fe332e23640 1 --2- 192.168.123.100:0/2839902338 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe32c075580 0x7fe32c07bf30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.086+0000 7fe332e23640 1 --2- 192.168.123.100:0/2839902338 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fe32c10b080 0x7fe32c074e40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.086+0000 7fe332e23640 1 --2- 192.168.123.100:0/2839902338 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe32c10a6d0 0x7fe32c10aab0 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.086+0000 7fe332e23640 1 -- 192.168.123.100:0/2839902338 >> 192.168.123.100:0/2839902338 conn(0x7fe32c06db00 msgr2=0x7fe32c06df10 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.086+0000 7fe332e23640 1 -- 192.168.123.100:0/2839902338 shutdown_connections
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.094+0000 7fe332e23640 1 -- 192.168.123.100:0/2839902338 wait complete.
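[editor's note] The pgmap digests repeated by each mon's journalctl stream ("132 pgs: 132 active+clean") can be asserted directly instead of scraped from logs. A small check reusing the hypothetical ceph() wrapper sketched earlier; the JSON field names below match current releases of `ceph status --format json`, but treat the sketch as illustrative rather than a stable contract:

import json

def all_pgs_active_clean():
    # Parse the same summary the mgr publishes in the pgmap digests above.
    status = json.loads(ceph("status", "--format", "json"))
    pgmap = status["pgmap"]
    clean = sum(s["count"] for s in pgmap.get("pgs_by_state", [])
                if s["state_name"] == "active+clean")
    return clean == pgmap["num_pgs"]

assert all_pgs_active_clean(), "cluster not fully active+clean"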
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.094+0000 7fe332e23640 1 Processor -- start
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.094+0000 7fe332e23640 1 -- start start
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.094+0000 7fe332e23640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe32c075580 0x7fe32c085d00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.094+0000 7fe332e23640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fe32c10a6d0 0x7fe32c07fd90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.094+0000 7fe332e23640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe32c10b080 0x7fe32c0802d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.094+0000 7fe332e23640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fe32c07e600 con 0x7fe32c075580
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.094+0000 7fe332e23640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7fe32c07e480 con 0x7fe32c10a6d0
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.094+0000 7fe332e23640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fe32c07e780 con 0x7fe32c10b080
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.094+0000 7fe331399640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe32c10b080 0x7fe32c0802d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.094+0000 7fe330b98640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe32c075580 0x7fe32c085d00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.094+0000 7fe330b98640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe32c075580 0x7fe32c085d00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:59688/0 (socket says 192.168.123.100:59688)
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.094+0000 7fe330b98640 1 -- 192.168.123.100:0/3330457087 learned_addr learned my addr 192.168.123.100:0/3330457087 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.094+0000 7fe330b98640 1 -- 192.168.123.100:0/3330457087 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe32c10b080 msgr2=0x7fe32c0802d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.094+0000 7fe32bfff640 1 --2- 192.168.123.100:0/3330457087 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fe32c10a6d0 0x7fe32c07fd90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.094+0000 7fe330b98640 1 --2- 192.168.123.100:0/3330457087 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe32c10b080 0x7fe32c0802d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.094+0000 7fe330b98640 1 -- 192.168.123.100:0/3330457087 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fe32c10a6d0 msgr2=0x7fe32c07fd90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.094+0000 7fe330b98640 1 --2- 192.168.123.100:0/3330457087 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fe32c10a6d0 0x7fe32c07fd90 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.094+0000 7fe330b98640 1 -- 192.168.123.100:0/3330457087 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe32c080a00 con 0x7fe32c075580
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.094+0000 7fe330b98640 1 --2- 192.168.123.100:0/3330457087 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe32c075580 0x7fe32c085d00 secure :-1 s=READY pgs=173 cs=0 l=1 rev1=1 crypto rx=0x7fe31c009b60 tx=0x7fe31c02f850 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.098+0000 7fe32bfff640 1 --2- 192.168.123.100:0/3330457087 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fe32c10a6d0 0x7fe32c07fd90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.098+0000 7fe331399640 1 --2- 192.168.123.100:0/3330457087 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe32c10b080 0x7fe32c0802d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.098+0000 7fe329ffb640 1 -- 192.168.123.100:0/3330457087 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fe31c046070 con 0x7fe32c075580
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.098+0000 7fe332e23640 1 -- 192.168.123.100:0/3330457087 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fe32c1c7360 con 0x7fe32c075580
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.098+0000 7fe332e23640 1 -- 192.168.123.100:0/3330457087 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fe32c1c7700 con 0x7fe32c075580
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.098+0000 7fe329ffb640 1 -- 192.168.123.100:0/3330457087 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fe31c03e070 con 0x7fe32c075580
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.098+0000 7fe329ffb640 1 -- 192.168.123.100:0/3330457087 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fe31c0042f0 con 0x7fe32c075580
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.098+0000 7fe329ffb640 1 -- 192.168.123.100:0/3330457087 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7fe31c041400 con 0x7fe32c075580
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.098+0000 7fe329ffb640 1 --2- 192.168.123.100:0/3330457087 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fe300077750 0x7fe300079c10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.098+0000 7fe32bfff640 1 --2- 192.168.123.100:0/3330457087 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fe300077750 0x7fe300079c10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.098+0000 7fe329ffb640 1 -- 192.168.123.100:0/3330457087 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7fe31c0bed90 con 0x7fe32c075580
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.102+0000 7fe332e23640 1 -- 192.168.123.100:0/3330457087 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fe2f4005180 con 0x7fe32c075580
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.106+0000 7fe32bfff640 1 --2- 192.168.123.100:0/3330457087 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fe300077750 0x7fe300079c10 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7fe32c081540 tx=0x7fe324008910 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:50.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.106+0000 7fe329ffb640 1 -- 192.168.123.100:0/3330457087 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fe31c08b710 con 0x7fe32c075580
2026-03-10T07:26:50.122 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.118+0000 7efd69189640 1 -- 192.168.123.100:0/2591892839 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd6410a850 msgr2=0x7efd6410acd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:50.122 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.118+0000 7efd69189640 1 --2- 192.168.123.100:0/2591892839 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd6410a850 0x7efd6410acd0 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7efd54009a30 tx=0x7efd5402f260 comp rx=0 tx=0).stop
2026-03-10T07:26:50.122 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.118+0000 7efd69189640 1 -- 192.168.123.100:0/2591892839 shutdown_connections
2026-03-10T07:26:50.122 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.118+0000 7efd69189640 1 --2- 192.168.123.100:0/2591892839 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7efd6411c780 0x7efd6411eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:50.122 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.118+0000 7efd69189640 1 --2- 192.168.123.100:0/2591892839 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd6410a850 0x7efd6410acd0 unknown :-1 s=CLOSED pgs=61 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:50.122 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.118+0000 7efd69189640 1 --2- 192.168.123.100:0/2591892839 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd6410a470 0x7efd641114d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:50.122 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.118+0000 7efd69189640 1 -- 192.168.123.100:0/2591892839 >> 192.168.123.100:0/2591892839 conn(0x7efd6406d9f0 msgr2=0x7efd6406de00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:50.122 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.118+0000 7efd69189640 1 -- 192.168.123.100:0/2591892839 shutdown_connections
2026-03-10T07:26:50.122 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.122+0000 7efd69189640 1 -- 192.168.123.100:0/2591892839 wait complete.
2026-03-10T07:26:50.122 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.122+0000 7efd69189640 1 Processor -- start
2026-03-10T07:26:50.126 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.122+0000 7efd69189640 1 -- start start
2026-03-10T07:26:50.126 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.122+0000 7efd69189640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd6410a470 0x7efd64112970 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:50.126 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.122+0000 7efd69189640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd6410a850 0x7efd64112eb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:50.126 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.122+0000 7efd69189640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7efd6411c780 0x7efd641bdf40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:50.126 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.122+0000 7efd69189640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7efd641211f0 con 0x7efd6410a850
2026-03-10T07:26:50.126 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.122+0000 7efd69189640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7efd64121070 con 0x7efd6411c780
2026-03-10T07:26:50.126 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.122+0000 7efd69189640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7efd64121370 con 0x7efd6410a470
2026-03-10T07:26:50.126 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.122+0000 7efd63577640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7efd6411c780 0x7efd641bdf40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:50.126 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.122+0000 7efd63577640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7efd6411c780 0x7efd641bdf40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.103:3300/0 says I am v2:192.168.123.100:55772/0 (socket says 192.168.123.100:55772)
2026-03-10T07:26:50.126 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.122+0000 7efd63577640 1 -- 192.168.123.100:0/1423248671 learned_addr learned my addr 192.168.123.100:0/1423248671 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:26:50.126 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.122+0000 7efd62575640 1 --2- 192.168.123.100:0/1423248671 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd6410a850 0x7efd64112eb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:50.126 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.122+0000 7efd62d76640 1 --2- 192.168.123.100:0/1423248671 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd6410a470 0x7efd64112970 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:50.126 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.122+0000 7efd62575640 1 -- 192.168.123.100:0/1423248671 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd6410a470 msgr2=0x7efd64112970 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:50.126 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.122+0000 7efd62575640 1 --2- 192.168.123.100:0/1423248671 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd6410a470 0x7efd64112970 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:50.126 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.122+0000 7efd62575640 1 -- 192.168.123.100:0/1423248671 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7efd6411c780 msgr2=0x7efd641bdf40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:50.126 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.122+0000 7efd62575640 1 --2- 192.168.123.100:0/1423248671 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7efd6411c780 0x7efd641bdf40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:50.126 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.122+0000 7efd62575640 1 -- 192.168.123.100:0/1423248671 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7efd641be480 con 0x7efd6410a850
2026-03-10T07:26:50.126 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.126+0000 7efd62575640 1 --2- 192.168.123.100:0/1423248671 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd6410a850 0x7efd64112eb0 secure :-1 s=READY pgs=174 cs=0 l=1 rev1=1 crypto rx=0x7efd54002410 tx=0x7efd54031d10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:50.127 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.126+0000 7efd43fff640 1 -- 192.168.123.100:0/1423248671 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7efd5402fcd0 con 0x7efd6410a850
2026-03-10T07:26:50.127 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.126+0000 7efd69189640 1 -- 192.168.123.100:0/1423248671 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7efd641be710 con 0x7efd6410a850
2026-03-10T07:26:50.127 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.126+0000 7efd69189640 1 -- 192.168.123.100:0/1423248671 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7efd641bebf0 con 0x7efd6410a850
2026-03-10T07:26:50.129 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.126+0000 7efd43fff640 1 -- 192.168.123.100:0/1423248671 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7efd54033040 con 0x7efd6410a850
2026-03-10T07:26:50.129 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.126+0000 7efd43fff640 1 -- 192.168.123.100:0/1423248671 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7efd540314b0 con 0x7efd6410a850
2026-03-10T07:26:50.133 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.130+0000 7efd43fff640 1 -- 192.168.123.100:0/1423248671 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7efd540316d0 con 0x7efd6410a850
2026-03-10T07:26:50.133 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.130+0000 7efd43fff640 1 --2- 192.168.123.100:0/1423248671 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7efd34077800 0x7efd34079cc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:50.133 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.130+0000 7efd62d76640 1 --2- 192.168.123.100:0/1423248671 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7efd34077800 0x7efd34079cc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:50.133 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.130+0000 7efd43fff640 1 -- 192.168.123.100:0/1423248671 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7efd540bf450 con 0x7efd6410a850
2026-03-10T07:26:50.133 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.130+0000 7efd62d76640 1 --2- 192.168.123.100:0/1423248671 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7efd34077800 0x7efd34079cc0 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7efd58004500 tx=0x7efd58009290 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:50.137 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.130+0000 7efd69189640 1 -- 192.168.123.100:0/1423248671 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7efd30005180 con 0x7efd6410a850
2026-03-10T07:26:50.138 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.138+0000 7efd43fff640 1 -- 192.168.123.100:0/1423248671 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7efd5408bdd0 con 0x7efd6410a850
2026-03-10T07:26:50.226 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.218+0000 7fbbaf63d640 1 -- 192.168.123.100:0/3558922362 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbba810a6d0 msgr2=0x7fbba810aab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:50.226 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.218+0000 7fbbaf63d640 1 --2- 192.168.123.100:0/3558922362 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbba810a6d0 0x7fbba810aab0 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7fbba400c2f0 tx=0x7fbba4030690 comp rx=0 tx=0).stop
2026-03-10T07:26:50.226 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.226+0000 7fbbaf63d640 1 -- 192.168.123.100:0/3558922362 shutdown_connections
2026-03-10T07:26:50.226 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.226+0000 7fbbaf63d640 1 --2- 192.168.123.100:0/3558922362 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbba8113b50 0x7fbba8115fa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:50.226 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.226+0000 7fbbaf63d640 1 --2- 192.168.123.100:0/3558922362 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fbba810b080 0x7fbba81134e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:50.226 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.226+0000 7fbbaf63d640 1 --2- 192.168.123.100:0/3558922362 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbba810a6d0 0x7fbba810aab0 unknown :-1 s=CLOSED pgs=62 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:50.226 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.226+0000 7fbbaf63d640 1 -- 192.168.123.100:0/3558922362 >> 192.168.123.100:0/3558922362 conn(0x7fbba806deb0 msgr2=0x7fbba806e2c0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:50.226 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.226+0000 7fbbaf63d640 1 -- 192.168.123.100:0/3558922362 shutdown_connections
2026-03-10T07:26:50.230 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbaf63d640 1 -- 192.168.123.100:0/3558922362 wait complete.
2026-03-10T07:26:50.230 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbaf63d640 1 Processor -- start
2026-03-10T07:26:50.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbaf63d640 1 -- start start
2026-03-10T07:26:50.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbaf63d640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fbba810a6d0 0x7fbba81a0c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:50.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbaf63d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbba810b080 0x7fbba81a11d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:50.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbaf63d640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbba8113b50 0x7fbba81bdec0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:50.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbaf63d640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fbba8118720 con 0x7fbba810b080
2026-03-10T07:26:50.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbaf63d640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7fbba81185a0 con 0x7fbba810a6d0
2026-03-10T07:26:50.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbaf63d640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fbba81188a0 con 0x7fbba8113b50
2026-03-10T07:26:50.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbacbb1640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbba810b080 0x7fbba81a11d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:50.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbadbb3640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbba8113b50 0x7fbba81bdec0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:50.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbacbb1640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbba810b080 0x7fbba81a11d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:59724/0 (socket says 192.168.123.100:59724)
2026-03-10T07:26:50.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbacbb1640 1 -- 192.168.123.100:0/3520712859 learned_addr learned my addr 192.168.123.100:0/3520712859 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:26:50.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbadbb3640 1 -- 192.168.123.100:0/3520712859 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fbba810a6d0 msgr2=0x7fbba81a0c90 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T07:26:50.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbad3b2640 1 --2- 192.168.123.100:0/3520712859 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fbba810a6d0 0x7fbba81a0c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:50.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbadbb3640 1 --2- 192.168.123.100:0/3520712859 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fbba810a6d0 0x7fbba81a0c90 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:50.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbadbb3640 1 -- 192.168.123.100:0/3520712859 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbba810b080 msgr2=0x7fbba81a11d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:50.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbadbb3640 1 --2- 192.168.123.100:0/3520712859 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbba810b080 0x7fbba81a11d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:50.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbadbb3640 1 -- 192.168.123.100:0/3520712859 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fbba81be400 con 0x7fbba8113b50
2026-03-10T07:26:50.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbadbb3640 1 --2- 192.168.123.100:0/3520712859 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbba8113b50 0x7fbba81bdec0 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7fbb9c002990 tx=0x7fbb9c002e60 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:50.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbacbb1640 1 --2- 192.168.123.100:0/3520712859 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbba810b080 0x7fbba81a11d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-10T07:26:50.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbad3b2640 1 --2- 192.168.123.100:0/3520712859 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fbba810a6d0 0x7fbba81a0c90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
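[editor's note] Each of these nearly identical bursts is one fresh `ceph` CLI invocation bootstrapping from scratch: probe all three mons with mon_getmap, learn its own address, keep the first mon to answer and mark_down the rest, subscribe to config/monmap/mgrmap/osdmap, and fetch get_command_descriptions before running the actual command. When issuing many commands, holding a single librados session avoids repeating that handshake. A sketch assuming the python3-rados bindings and an admin conffile/keyring are available on the host; the command prefixes shown are ordinary mon commands:

import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()  # one mon handshake and map subscription for everything below
try:
    for prefix in ("status", "health", "mon stat"):
        # mon_command takes a JSON command string and an input buffer,
        # and returns (return code, output buffer, status string).
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": prefix, "format": "json"}), b"")
        assert ret == 0, outs
        print(prefix, "->", len(outbuf), "bytes")
finally:
    cluster.shutdown()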
2026-03-10T07:26:50.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbb967fc640 1 -- 192.168.123.100:0/3520712859 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fbb9c00eb90 con 0x7fbba8113b50
2026-03-10T07:26:50.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbaf63d640 1 -- 192.168.123.100:0/3520712859 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fbba81be6f0 con 0x7fbba8113b50
2026-03-10T07:26:50.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbbaf63d640 1 -- 192.168.123.100:0/3520712859 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7fbba81bec30 con 0x7fbba8113b50
2026-03-10T07:26:50.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbb967fc640 1 -- 192.168.123.100:0/3520712859 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fbb9c00ed30 con 0x7fbba8113b50
2026-03-10T07:26:50.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.230+0000 7fbb967fc640 1 -- 192.168.123.100:0/3520712859 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fbb9c00f620 con 0x7fbba8113b50
2026-03-10T07:26:50.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.238+0000 7fbbaf63d640 1 -- 192.168.123.100:0/3520712859 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fbba8074a90 con 0x7fbba8113b50
2026-03-10T07:26:50.248 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.246+0000 7fbb967fc640 1 -- 192.168.123.100:0/3520712859 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7fbb9c010430 con 0x7fbba8113b50
2026-03-10T07:26:50.248 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.246+0000 7fbb967fc640 1 --2- 192.168.123.100:0/3520712859 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fbb780777d0 0x7fbb78079c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:50.248 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.246+0000 7fbbad3b2640 1 --2- 192.168.123.100:0/3520712859 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fbb780777d0 0x7fbb78079c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:50.248 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.246+0000 7fbbad3b2640 1 --2- 192.168.123.100:0/3520712859 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fbb780777d0 0x7fbb78079c90 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7fbba40062a0 tx=0x7fbba4002750 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:26:50.248 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.246+0000 7fbb967fc640 1 -- 192.168.123.100:0/3520712859 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7fbb9c09a970 con 0x7fbba8113b50
2026-03-10T07:26:50.248 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.246+0000 7fbb967fc640 1 -- 192.168.123.100:0/3520712859 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fbb9c014070 con 0x7fbba8113b50
2026-03-10T07:26:50.347 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a4e17e640 1 -- 192.168.123.100:0/1549860704 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3a48113b50 msgr2=0x7f3a48115fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:26:50.347 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a4e17e640 1 --2- 192.168.123.100:0/1549860704 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3a48113b50 0x7f3a48115fa0 secure :-1 s=READY pgs=175 cs=0 l=1 rev1=1 crypto rx=0x7f3a3c00c2f0 tx=0x7f3a3c030670 comp rx=0 tx=0).stop
2026-03-10T07:26:50.347 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a4e17e640 1 -- 192.168.123.100:0/1549860704 shutdown_connections
2026-03-10T07:26:50.347 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a4e17e640 1 --2- 192.168.123.100:0/1549860704 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3a48113b50 0x7f3a48115fa0 unknown :-1 s=CLOSED pgs=175 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:50.347 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a4e17e640 1 --2- 192.168.123.100:0/1549860704 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3a4810b080 0x7f3a481134e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:50.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a4e17e640 1 --2- 192.168.123.100:0/1549860704 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3a4810a6d0 0x7f3a4810aab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:26:50.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a4e17e640 1 -- 192.168.123.100:0/1549860704 >> 192.168.123.100:0/1549860704 conn(0x7f3a4806deb0 msgr2=0x7f3a4806e2c0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:26:50.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a4e17e640 1 -- 192.168.123.100:0/1549860704 shutdown_connections
2026-03-10T07:26:50.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a4e17e640 1 -- 192.168.123.100:0/1549860704 wait complete.
2026-03-10T07:26:50.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a4e17e640 1 Processor -- start
2026-03-10T07:26:50.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a4e17e640 1 -- start start
2026-03-10T07:26:50.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a4e17e640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3a4810a6d0 0x7f3a481a09c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:50.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a4e17e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3a4810b080 0x7f3a481a0f00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:50.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a4e17e640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3a48113b50 0x7f3a481bdec0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:26:50.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a4e17e640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f3a48118740 con 0x7f3a4810b080
2026-03-10T07:26:50.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a4e17e640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f3a481185c0 con 0x7f3a4810a6d0
2026-03-10T07:26:50.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a4e17e640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f3a481188c0 con 0x7f3a48113b50
2026-03-10T07:26:50.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a477fe640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3a4810a6d0 0x7f3a481a09c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:50.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a46ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3a4810b080 0x7f3a481a0f00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:50.348 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a46ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3a4810b080 0x7f3a481a0f00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:59742/0 (socket says 192.168.123.100:59742)
2026-03-10T07:26:50.349 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a46ffd640 1 -- 192.168.123.100:0/2200884082 learned_addr learned my addr 192.168.123.100:0/2200884082 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:26:50.349 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a47fff640 1 --2- 192.168.123.100:0/2200884082 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3a48113b50 0x7f3a481bdec0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:26:50.349
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a46ffd640 1 -- 192.168.123.100:0/2200884082 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3a48113b50 msgr2=0x7f3a481bdec0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.349 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a46ffd640 1 --2- 192.168.123.100:0/2200884082 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3a48113b50 0x7f3a481bdec0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.349 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a46ffd640 1 -- 192.168.123.100:0/2200884082 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3a4810a6d0 msgr2=0x7f3a481a09c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.349 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a46ffd640 1 --2- 192.168.123.100:0/2200884082 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3a4810a6d0 0x7f3a481a09c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.349 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a46ffd640 1 -- 192.168.123.100:0/2200884082 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3a481be400 con 0x7f3a4810b080 2026-03-10T07:26:50.349 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a47fff640 1 --2- 192.168.123.100:0/2200884082 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3a48113b50 0x7f3a481bdec0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-10T07:26:50.350 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a46ffd640 1 --2- 192.168.123.100:0/2200884082 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3a4810b080 0x7f3a481a0f00 secure :-1 s=READY pgs=176 cs=0 l=1 rev1=1 crypto rx=0x7f3a3400e9e0 tx=0x7f3a3400eeb0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:50.350 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.346+0000 7f3a44ff9640 1 -- 192.168.123.100:0/2200884082 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3a3400cde0 con 0x7f3a4810b080 2026-03-10T07:26:50.350 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.350+0000 7f3a4e17e640 1 -- 192.168.123.100:0/2200884082 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f3a481be630 con 0x7f3a4810b080 2026-03-10T07:26:50.350 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.350+0000 7f3a4e17e640 1 -- 192.168.123.100:0/2200884082 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f3a481beb90 con 0x7f3a4810b080 2026-03-10T07:26:50.350 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.350+0000 7f3a477fe640 1 --2- 192.168.123.100:0/2200884082 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3a4810a6d0 0x7f3a481a09c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
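
Each --2- line above exposes the msgr2 connection state in its s= field, so the client side of a v2 handshake can be read straight off the log: NONE at connect, BANNER_CONNECTING once banners are exchanged (supported=3 required=0 appear to be the peer's banner feature bits), HELLO_CONNECTING where handle_hello reports the address the peer sees and learned_addr pins the messenger's own address and nonce, then AUTH_CONNECTING and finally READY in secure mode with live rx/tx crypto handles. Not every transition is logged at this debug level, so per-connection sequences will have gaps. A throwaway tracker for auditing those progressions in a captured log (hypothetical helper, not part of Ceph or teuthology):

    import re
    from collections import defaultdict

    # First group: the Connection pointer; second: the s=... state field.
    # The \b keeps fields like pgs=0 from being mistaken for a state.
    CONN_RE = re.compile(r'conn\((0x[0-9a-f]+).*?\bs=([A-Z_]+)')

    def conn_states(lines):
        """Map each conn pointer to its de-duplicated state sequence."""
        states = defaultdict(list)
        for line in lines:
            m = CONN_RE.search(line)
            if m:
                ptr, state = m.groups()
                if not states[ptr] or states[ptr][-1] != state:
                    states[ptr].append(state)
        return dict(states)

Connection pointers are recycled across client sessions, so feed this one client's lines at a time (see the nonce demultiplexer further down).
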
2026-03-10T07:26:50.351 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.350+0000 7f3a44ff9640 1 -- 192.168.123.100:0/2200884082 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f3a34004ce0 con 0x7f3a4810b080 2026-03-10T07:26:50.351 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.350+0000 7f3a44ff9640 1 -- 192.168.123.100:0/2200884082 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3a34005700 con 0x7f3a4810b080 2026-03-10T07:26:50.354 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.354+0000 7f3a44ff9640 1 -- 192.168.123.100:0/2200884082 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f3a340047c0 con 0x7f3a4810b080 2026-03-10T07:26:50.354 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.354+0000 7f3a44ff9640 1 --2- 192.168.123.100:0/2200884082 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f3a18077700 0x7f3a18079bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:50.355 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.354+0000 7f3a477fe640 1 --2- 192.168.123.100:0/2200884082 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f3a18077700 0x7f3a18079bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:50.362 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.362+0000 7f3a44ff9640 1 -- 192.168.123.100:0/2200884082 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7f3a3409a5a0 con 0x7f3a4810b080 2026-03-10T07:26:50.362 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.362+0000 7f3a477fe640 1 --2- 192.168.123.100:0/2200884082 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f3a18077700 0x7f3a18079bc0 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f3a380059c0 tx=0x7f3a38005950 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:50.362 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.362+0000 7f3a2a7fc640 1 -- 192.168.123.100:0/2200884082 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3a14005180 con 0x7f3a4810b080 2026-03-10T07:26:50.366 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.366+0000 7f3a44ff9640 1 -- 192.168.123.100:0/2200884082 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f3a34066ea0 con 0x7f3a4810b080 2026-03-10T07:26:50.387 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.386+0000 7ff28f591640 1 -- 192.168.123.100:0/1836600184 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff288113b50 msgr2=0x7ff288115fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.393 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.386+0000 7ff28f591640 1 --2- 192.168.123.100:0/1836600184 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff288113b50 0x7ff288115fa0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7ff284009f90 tx=0x7ff28402f3d0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.393 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.390+0000 7ff28f591640 1 -- 192.168.123.100:0/1836600184 shutdown_connections 2026-03-10T07:26:50.393 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.390+0000 7ff28f591640 1 --2- 192.168.123.100:0/1836600184 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff288113b50 0x7ff288115fa0 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.393 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.390+0000 7ff28f591640 1 --2- 192.168.123.100:0/1836600184 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff28810b080 0x7ff2881134e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.393 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.390+0000 7ff28f591640 1 --2- 192.168.123.100:0/1836600184 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff28810a6d0 0x7ff28810aab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.393 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.390+0000 7ff28f591640 1 -- 192.168.123.100:0/1836600184 >> 192.168.123.100:0/1836600184 conn(0x7ff28806deb0 msgr2=0x7ff28806e2c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:50.393 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.390+0000 7ff28f591640 1 -- 192.168.123.100:0/1836600184 shutdown_connections 2026-03-10T07:26:50.394 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.394+0000 7f8b39ff1640 1 -- 192.168.123.100:0/1448345320 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8b34113b50 msgr2=0x7f8b34115fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.398 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.394+0000 7f8b39ff1640 1 --2- 192.168.123.100:0/1448345320 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8b34113b50 0x7f8b34115fa0 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f8b28009f90 tx=0x7f8b2802f390 comp rx=0 tx=0).stop 2026-03-10T07:26:50.398 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.390+0000 7ff28f591640 1 -- 192.168.123.100:0/1836600184 wait complete. 
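
The mon_subscribe payloads in these exchanges are the client's map subscriptions, rendered as what=start with a trailing '+' when the subscription is ongoing rather than one-shot: {config=0+,monmap=0+} asks for config and monmap updates from epoch 0 on, {mgrmap=0+} keeps the mgrmap current (which is how the client learns the active mgr's v2:...:6800 address that it then connects to as entity=mgr.24407), and {osdmap=0} is a single osdmap fetch. A small decoder for that rendering, assuming the MonClient print format holds (illustrative only):

    import re

    SUB_RE = re.compile(r'mon_subscribe\(\{([^}]*)\}\)')

    def decode_subs(line):
        """Parse 'what=start[+]' items out of a mon_subscribe debug line."""
        m = SUB_RE.search(line)
        if not m:
            return {}
        subs = {}
        for item in m.group(1).split(','):
            what, spec = item.split('=')
            subs[what] = {'start': int(spec.rstrip('+')),
                          'ongoing': spec.endswith('+')}
        return subs

    decode_subs('-- mon_subscribe({config=0+,monmap=0+}) --')
    # {'config': {'start': 0, 'ongoing': True}, 'monmap': {'start': 0, 'ongoing': True}}
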
2026-03-10T07:26:50.398 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7f8b39ff1640 1 -- 192.168.123.100:0/1448345320 shutdown_connections 2026-03-10T07:26:50.398 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7f8b39ff1640 1 --2- 192.168.123.100:0/1448345320 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8b34113b50 0x7f8b34115fa0 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.398 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7f8b39ff1640 1 --2- 192.168.123.100:0/1448345320 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f8b3410b080 0x7f8b341134e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.398 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7ff28f591640 1 Processor -- start 2026-03-10T07:26:50.399 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7f8b39ff1640 1 --2- 192.168.123.100:0/1448345320 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8b3410a6d0 0x7f8b3410aab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.399 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7f8b39ff1640 1 -- 192.168.123.100:0/1448345320 >> 192.168.123.100:0/1448345320 conn(0x7f8b3406deb0 msgr2=0x7f8b3406e2c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:50.399 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7f8b39ff1640 1 -- 192.168.123.100:0/1448345320 shutdown_connections 2026-03-10T07:26:50.399 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7ff28f591640 1 -- start start 2026-03-10T07:26:50.401 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7f8b39ff1640 1 -- 192.168.123.100:0/1448345320 wait complete. 
2026-03-10T07:26:50.401 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7f8b39ff1640 1 Processor -- start 2026-03-10T07:26:50.401 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7ff28f591640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff28810a6d0 0x7ff28810db30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:50.401 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7ff28f591640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff28810b080 0x7ff28810e070 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:50.401 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7ff28f591640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff2881b2fe0 0x7ff2881b53d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:50.401 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7ff28f591640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7ff288118740 con 0x7ff28810a6d0 2026-03-10T07:26:50.401 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7ff28f591640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7ff2881185c0 con 0x7ff2881b2fe0 2026-03-10T07:26:50.401 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7ff28f591640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7ff2881188c0 con 0x7ff28810b080 2026-03-10T07:26:50.401 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7f8b39ff1640 1 -- start start 2026-03-10T07:26:50.401 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7f8b39ff1640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8b3410a6d0 0x7f8b3410fe60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:50.401 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7f8b39ff1640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f8b3410b080 0x7f8b341103a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:50.401 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7f8b39ff1640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8b341b2f30 0x7f8b341b5320 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:50.401 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7f8b39ff1640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f8b341185d0 con 0x7f8b3410a6d0 2026-03-10T07:26:50.401 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.398+0000 7f8b39ff1640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f8b34118450 con 0x7f8b3410b080 2026-03-10T07:26:50.402 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7f8b32ffd640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f8b3410b080 0x7f8b341103a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:50.402 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7ff28cb05640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff28810b080 0x7ff28810e070 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:50.402 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7f8b39ff1640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f8b34118750 con 0x7f8b341b2f30 2026-03-10T07:26:50.402 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7f8b32ffd640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f8b3410b080 0x7f8b341103a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.103:3300/0 says I am v2:192.168.123.100:55836/0 (socket says 192.168.123.100:55836) 2026-03-10T07:26:50.402 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7f8b32ffd640 1 -- 192.168.123.100:0/3414056613 learned_addr learned my addr 192.168.123.100:0/3414056613 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:26:50.402 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7f8b32ffd640 1 -- 192.168.123.100:0/3414056613 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8b341b2f30 msgr2=0x7f8b341b5320 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T07:26:50.402 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7f8b32ffd640 1 --2- 192.168.123.100:0/3414056613 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8b341b2f30 0x7f8b341b5320 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.402 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7ff28d306640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff28810a6d0 0x7ff28810db30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:50.402 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7f8b32ffd640 1 -- 192.168.123.100:0/3414056613 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8b3410a6d0 msgr2=0x7f8b3410fe60 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T07:26:50.402 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7f8b32ffd640 1 --2- 192.168.123.100:0/3414056613 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8b3410a6d0 0x7f8b3410fe60 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.402 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7f8b32ffd640 1 -- 192.168.123.100:0/3414056613 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f8b3411d9c0 con 0x7f8b3410b080 2026-03-10T07:26:50.402 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7f8b32ffd640 1 --2- 192.168.123.100:0/3414056613 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f8b3410b080 0x7f8b341103a0 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f8b2400e9e0 tx=0x7f8b2400eeb0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:50.403 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 
7ff28cb05640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff28810b080 0x7ff28810e070 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:52412/0 (socket says 192.168.123.100:52412) 2026-03-10T07:26:50.403 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7f8b30ff9640 1 -- 192.168.123.100:0/3414056613 <== mon.1 v2:192.168.123.103:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f8b2400cde0 con 0x7f8b3410b080 2026-03-10T07:26:50.403 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7ff28cb05640 1 -- 192.168.123.100:0/3654255621 learned_addr learned my addr 192.168.123.100:0/3654255621 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:26:50.403 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7ff28cb05640 1 -- 192.168.123.100:0/3654255621 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff2881b2fe0 msgr2=0x7ff2881b53d0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T07:26:50.403 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7ff28cb05640 1 --2- 192.168.123.100:0/3654255621 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff2881b2fe0 0x7ff2881b53d0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.403 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7ff28cb05640 1 -- 192.168.123.100:0/3654255621 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff28810a6d0 msgr2=0x7ff28810db30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.403 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7ff28cb05640 1 --2- 192.168.123.100:0/3654255621 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff28810a6d0 0x7ff28810db30 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.403 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7ff28cb05640 1 -- 192.168.123.100:0/3654255621 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff2881a46c0 con 0x7ff28810b080 2026-03-10T07:26:50.404 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7ff28d306640 1 --2- 192.168.123.100:0/3654255621 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff28810a6d0 0x7ff28810db30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
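
The mark_down calls against connections still in START_CONNECT or AUTH_CONNECTING, issued right after a sibling connection reaches .ready entity=mon.N, are the tail of monclient hunting: the client probed all three monitors in parallel (the triple mon_getmap sends above), keeps whichever session finishes authentication first, and drops the rest. The "handle_auth_done state changed!" and "handle_auth_reply_more state changed!" lines look alarming at this debug level but appear to be just the losing connections noticing they were stopped mid-handshake. A quick tally of which monitor each interleaved client settled on (hypothetical snippet):

    import re
    from collections import Counter

    READY_RE = re.compile(r'\)\.ready entity=(mon\.\d+)')

    def winning_mons(lines):
        """Count how often each monitor won a client's hunt."""
        return Counter(m.group(1) for m in map(READY_RE.search, lines) if m)
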
2026-03-10T07:26:50.405 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7ff28cb05640 1 --2- 192.168.123.100:0/3654255621 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff28810b080 0x7ff28810e070 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7ff27c002a50 tx=0x7ff27c002f20 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:50.405 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7ff2767fc640 1 -- 192.168.123.100:0/3654255621 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff27c00ecc0 con 0x7ff28810b080 2026-03-10T07:26:50.405 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7f8b39ff1640 1 -- 192.168.123.100:0/3414056613 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f8b3411dcb0 con 0x7f8b3410b080 2026-03-10T07:26:50.405 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7f8b39ff1640 1 -- 192.168.123.100:0/3414056613 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f8b3411e1f0 con 0x7f8b3410b080 2026-03-10T07:26:50.406 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.406+0000 7f8b30ff9640 1 -- 192.168.123.100:0/3414056613 <== mon.1 v2:192.168.123.103:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f8b24004ce0 con 0x7f8b3410b080 2026-03-10T07:26:50.406 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.406+0000 7f8b30ff9640 1 -- 192.168.123.100:0/3414056613 <== mon.1 v2:192.168.123.103:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f8b24010430 con 0x7f8b3410b080 2026-03-10T07:26:50.406 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.406+0000 7f8b30ff9640 1 -- 192.168.123.100:0/3414056613 <== mon.1 v2:192.168.123.103:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f8b24010650 con 0x7f8b3410b080 2026-03-10T07:26:50.407 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7ff28f591640 1 -- 192.168.123.100:0/3654255621 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7ff2881a49b0 con 0x7ff28810b080 2026-03-10T07:26:50.407 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.402+0000 7ff28f591640 1 -- 192.168.123.100:0/3654255621 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7ff2881a4ef0 con 0x7ff28810b080 2026-03-10T07:26:50.408 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.406+0000 7f8b30ff9640 1 --2- 192.168.123.100:0/3414056613 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f8b0c077830 0x7f8b0c079cf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:50.409 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.406+0000 7ff2767fc640 1 -- 192.168.123.100:0/3654255621 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7ff27c00ee60 con 0x7ff28810b080 2026-03-10T07:26:50.410 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.410+0000 7f8b337fe640 1 --2- 192.168.123.100:0/3414056613 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f8b0c077830 0x7f8b0c079cf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:50.410 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.410+0000 7f8b30ff9640 1 -- 192.168.123.100:0/3414056613 <== mon.1 v2:192.168.123.103:3300/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7f8b2409b370 con 0x7f8b3410b080 2026-03-10T07:26:50.414 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.410+0000 7f8b337fe640 1 --2- 192.168.123.100:0/3414056613 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f8b0c077830 0x7f8b0c079cf0 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f8b20005ec0 tx=0x7f8b2000c040 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:50.414 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.414+0000 7f8b39ff1640 1 -- 192.168.123.100:0/3414056613 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f8b00005180 con 0x7f8b3410b080 2026-03-10T07:26:50.414 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.414+0000 7ff2767fc640 1 -- 192.168.123.100:0/3654255621 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff27c0106c0 con 0x7ff28810b080 2026-03-10T07:26:50.414 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.414+0000 7ff2767fc640 1 -- 192.168.123.100:0/3654255621 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7ff27c010860 con 0x7ff28810b080 2026-03-10T07:26:50.418 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.418+0000 7ff2767fc640 1 --2- 192.168.123.100:0/3654255621 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7ff260077800 0x7ff260079cc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:50.419 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.418+0000 7ff28d306640 1 --2- 192.168.123.100:0/3654255621 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7ff260077800 0x7ff260079cc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:50.419 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.418+0000 7ff28d306640 1 --2- 192.168.123.100:0/3654255621 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7ff260077800 0x7ff260079cc0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7ff278007380 tx=0x7ff27800c040 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:50.419 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.418+0000 7ff2767fc640 1 -- 192.168.123.100:0/3654255621 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7ff27c09b550 con 0x7ff28810b080 2026-03-10T07:26:50.420 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.418+0000 7ff28f591640 1 -- 192.168.123.100:0/3654255621 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff250005180 con 0x7ff28810b080 2026-03-10T07:26:50.423 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.422+0000 7ff2767fc640 1 -- 192.168.123.100:0/3654255621 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7ff27c014070 con 0x7ff28810b080 
2026-03-10T07:26:50.447 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.442+0000 7f8b30ff9640 1 -- 192.168.123.100:0/3414056613 <== mon.1 v2:192.168.123.103:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f8b24005000 con 0x7f8b3410b080 2026-03-10T07:26:50.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7fda05ed9640 1 -- 192.168.123.100:0/2988068566 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fda0010e4d0 msgr2=0x7fda00116050 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7fda05ed9640 1 --2- 192.168.123.100:0/2988068566 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fda0010e4d0 0x7fda00116050 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7fd9f4009f90 tx=0x7fd9f402f3b0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7fda05ed9640 1 -- 192.168.123.100:0/2988068566 shutdown_connections 2026-03-10T07:26:50.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7fda05ed9640 1 --2- 192.168.123.100:0/2988068566 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fda0010e4d0 0x7fda00116050 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7fda05ed9640 1 --2- 192.168.123.100:0/2988068566 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fda00109e30 0x7fda0010dda0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7fda05ed9640 1 --2- 192.168.123.100:0/2988068566 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fda00109480 0x7fda00109860 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7fda05ed9640 1 -- 192.168.123.100:0/2988068566 >> 192.168.123.100:0/2988068566 conn(0x7fda00078920 msgr2=0x7fda0007ad40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:50.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7fda05ed9640 1 -- 192.168.123.100:0/2988068566 shutdown_connections 2026-03-10T07:26:50.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7fda05ed9640 1 -- 192.168.123.100:0/2988068566 wait complete. 
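
By this point several CLI processes are interleaved in the one stderr stream -- the distinct instance nonces above (3520712859, 3414056613, 3654255621, 2988068566, ...) -- evidently because the harness drives its per-OSD commands in parallel (see the teuthology.parallel line further down). Grouping lines by the /<nonce> suffix of the client address recovers each session as a linear trace; a throwaway helper, not a teuthology API:

    import re
    from collections import defaultdict

    # Client messenger addresses bind port 0, so match ':0/<nonce>'; lines
    # logged before learned_addr carry no client address and are skipped.
    NONCE_RE = re.compile(r'\d{1,3}(?:\.\d{1,3}){3}:0/(\d+)')

    def demux_sessions(log_lines):
        """Group messenger debug lines by client instance nonce."""
        sessions = defaultdict(list)
        for line in log_lines:
            m = NONCE_RE.search(line)
            if m:
                sessions[m.group(1)].append(line)
        return sessions
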
2026-03-10T07:26:50.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7fda05ed9640 1 Processor -- start 2026-03-10T07:26:50.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7fda05ed9640 1 -- start start 2026-03-10T07:26:50.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7fda05ed9640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fda00109480 0x7fda0006ced0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:50.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7fda05ed9640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fda00109e30 0x7fda0006d410 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:50.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7fda05ed9640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fda0006d950 0x7fda001a4100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:50.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7fda05ed9640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fda001187e0 con 0x7fda0006d950 2026-03-10T07:26:50.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7fda05ed9640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7fda00118660 con 0x7fda00109480 2026-03-10T07:26:50.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7fda05ed9640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fda00118960 con 0x7fda00109e30 2026-03-10T07:26:50.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7f58bdc1d640 1 -- 192.168.123.100:0/90463908 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f58b00aaa30 msgr2=0x7f58b00aae10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7f58bdc1d640 1 --2- 192.168.123.100:0/90463908 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f58b00aaa30 0x7f58b00aae10 secure :-1 s=READY pgs=177 cs=0 l=1 rev1=1 crypto rx=0x7f58ac009e90 tx=0x7f58ac02f6b0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7f58bdc1d640 1 -- 192.168.123.100:0/90463908 shutdown_connections 2026-03-10T07:26:50.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7f58bdc1d640 1 --2- 192.168.123.100:0/90463908 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f58b00a5760 0x7f58b000b920 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7f58bdc1d640 1 --2- 192.168.123.100:0/90463908 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f58b00a4de0 0x7f58b00a5220 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7f58bdc1d640 1 --2- 192.168.123.100:0/90463908 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f58b00aaa30 0x7f58b00aae10 unknown :-1 s=CLOSED pgs=177 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-10T07:26:50.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7f58bdc1d640 1 -- 192.168.123.100:0/90463908 >> 192.168.123.100:0/90463908 conn(0x7f58b001a730 msgr2=0x7f58b001ab40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:50.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7f58bdc1d640 1 -- 192.168.123.100:0/90463908 shutdown_connections 2026-03-10T07:26:50.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7f58bdc1d640 1 -- 192.168.123.100:0/90463908 wait complete. 2026-03-10T07:26:50.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7f58bdc1d640 1 Processor -- start 2026-03-10T07:26:50.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7f58bdc1d640 1 -- start start 2026-03-10T07:26:50.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7f58bdc1d640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f58b00a4de0 0x7f58b0016630 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:50.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7f58bdc1d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f58b00a5760 0x7f58b000f6c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:50.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7f58bdc1d640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f58b000fc00 0x7f58b00100b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:50.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7f58bdc1d640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f58b000e0a0 con 0x7f58b00a5760 2026-03-10T07:26:50.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7f58bdc1d640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f58b000df20 con 0x7f58b000fc00 2026-03-10T07:26:50.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7f58bdc1d640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f58b000e220 con 0x7f58b00a4de0 2026-03-10T07:26:50.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7f58b7fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f58b00a5760 0x7f58b000f6c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:50.454 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7fd9ff7fe640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fda00109480 0x7fda0006ced0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:50.454 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7fd9ff7fe640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fda00109480 0x7fda0006ced0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.103:3300/0 says I am v2:192.168.123.100:55862/0 (socket says 192.168.123.100:55862) 2026-03-10T07:26:50.454 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7fd9ff7fe640 1 -- 192.168.123.100:0/2956207474 learned_addr learned my addr 192.168.123.100:0/2956207474 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:26:50.454 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7fd9feffd640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fda00109e30 0x7fda0006d410 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:50.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7f58bd41c640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f58b000fc00 0x7f58b00100b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:50.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.450+0000 7f58bd41c640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f58b000fc00 0x7f58b00100b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.103:3300/0 says I am v2:192.168.123.100:55876/0 (socket says 192.168.123.100:55876) 2026-03-10T07:26:50.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7fd9ff7fe640 1 -- 192.168.123.100:0/2956207474 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fda00109e30 msgr2=0x7fda0006d410 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7fd9ff7fe640 1 --2- 192.168.123.100:0/2956207474 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fda00109e30 0x7fda0006d410 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7fd9ff7fe640 1 -- 192.168.123.100:0/2956207474 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fda0006d950 msgr2=0x7fda001a4100 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T07:26:50.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7fd9ff7fe640 1 --2- 192.168.123.100:0/2956207474 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fda0006d950 0x7fda001a4100 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7fd9ff7fe640 1 -- 192.168.123.100:0/2956207474 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fda001a4640 con 0x7fda00109480 2026-03-10T07:26:50.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7fd9feffd640 1 --2- 192.168.123.100:0/2956207474 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fda00109e30 0x7fda0006d410 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
2026-03-10T07:26:50.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7f58bd41c640 1 -- 192.168.123.100:0/3426861038 learned_addr learned my addr 192.168.123.100:0/3426861038 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:26:50.463 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7f58bd41c640 1 -- 192.168.123.100:0/3426861038 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f58b00a4de0 msgr2=0x7f58b0016630 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T07:26:50.464 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7f58bcc1b640 1 --2- 192.168.123.100:0/3426861038 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f58b00a4de0 0x7f58b0016630 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:50.464 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7f58bd41c640 1 --2- 192.168.123.100:0/3426861038 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f58b00a4de0 0x7f58b0016630 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.464 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.462+0000 7fd9ff7fe640 1 --2- 192.168.123.100:0/2956207474 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fda00109480 0x7fda0006ced0 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fd9ec002fd0 tx=0x7fd9ec00daf0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:50.464 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.462+0000 7fd9fcff9640 1 -- 192.168.123.100:0/2956207474 <== mon.1 v2:192.168.123.103:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fd9ec014070 con 0x7fda00109480 2026-03-10T07:26:50.464 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.462+0000 7fda05ed9640 1 -- 192.168.123.100:0/2956207474 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fda001a4930 con 0x7fda00109480 2026-03-10T07:26:50.464 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7f58bd41c640 1 -- 192.168.123.100:0/3426861038 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f58b00a5760 msgr2=0x7f58b000f6c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.464 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.462+0000 7fda05ed9640 1 -- 192.168.123.100:0/2956207474 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fda001a4e70 con 0x7fda00109480 2026-03-10T07:26:50.466 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7f58bd41c640 1 --2- 192.168.123.100:0/3426861038 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f58b00a5760 0x7f58b000f6c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.466 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.462+0000 7fd9fcff9640 1 -- 192.168.123.100:0/2956207474 <== mon.1 v2:192.168.123.103:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fd9ec009dd0 con 0x7fda00109480 2026-03-10T07:26:50.466 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.462+0000 7fd9fcff9640 1 -- 192.168.123.100:0/2956207474 <== mon.1 v2:192.168.123.103:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 
0x7fd9ec010dd0 con 0x7fda00109480 2026-03-10T07:26:50.466 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7f58bd41c640 1 -- 192.168.123.100:0/3426861038 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f58b0014350 con 0x7f58b000fc00 2026-03-10T07:26:50.466 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7f58bd41c640 1 --2- 192.168.123.100:0/3426861038 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f58b000fc00 0x7f58b00100b0 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f58b80517c0 tx=0x7f58b80747f0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:50.466 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7f58b5ffb640 1 -- 192.168.123.100:0/3426861038 <== mon.1 v2:192.168.123.103:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f58b8076840 con 0x7f58b000fc00 2026-03-10T07:26:50.466 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7f58bdc1d640 1 -- 192.168.123.100:0/3426861038 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f58b0014640 con 0x7f58b000fc00 2026-03-10T07:26:50.466 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7f58bdc1d640 1 -- 192.168.123.100:0/3426861038 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f58b0014ba0 con 0x7f58b000fc00 2026-03-10T07:26:50.466 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7f58b5ffb640 1 -- 192.168.123.100:0/3426861038 <== mon.1 v2:192.168.123.103:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f58b8076e20 con 0x7f58b000fc00 2026-03-10T07:26:50.466 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7f58b5ffb640 1 -- 192.168.123.100:0/3426861038 <== mon.1 v2:192.168.123.103:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f58b8075d80 con 0x7f58b000fc00 2026-03-10T07:26:50.467 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7f58b5ffb640 1 -- 192.168.123.100:0/3426861038 <== mon.1 v2:192.168.123.103:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f58b80769e0 con 0x7f58b000fc00 2026-03-10T07:26:50.468 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.454+0000 7f58b5ffb640 1 --2- 192.168.123.100:0/3426861038 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f5898077700 0x7f5898079bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:50.468 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.462+0000 7fd9e27fc640 1 -- 192.168.123.100:0/2956207474 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd9bc005180 con 0x7fda00109480 2026-03-10T07:26:50.468 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.466+0000 7f58bcc1b640 1 --2- 192.168.123.100:0/3426861038 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f5898077700 0x7f5898079bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:50.468 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.466+0000 7f58bcc1b640 1 --2- 192.168.123.100:0/3426861038 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] 
conn(0x7f5898077700 0x7f5898079bc0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f58ac009950 tx=0x7f58ac039040 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:50.470 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.466+0000 7f58b5ffb640 1 -- 192.168.123.100:0/3426861038 <== mon.1 v2:192.168.123.103:3300/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7f58b80fefc0 con 0x7f58b000fc00 2026-03-10T07:26:50.470 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.466+0000 7f588b7fe640 1 -- 192.168.123.100:0/3426861038 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5880005180 con 0x7f58b000fc00 2026-03-10T07:26:50.471 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.466+0000 7fd9fcff9640 1 -- 192.168.123.100:0/2956207474 <== mon.1 v2:192.168.123.103:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7fd9ec021020 con 0x7fda00109480 2026-03-10T07:26:50.471 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.466+0000 7fd9fcff9640 1 --2- 192.168.123.100:0/2956207474 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fd9d40777d0 0x7fd9d4079c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:50.474 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.474+0000 7fd9feffd640 1 --2- 192.168.123.100:0/2956207474 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fd9d40777d0 0x7fd9d4079c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:50.474 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.474+0000 7f58b5ffb640 1 -- 192.168.123.100:0/3426861038 <== mon.1 v2:192.168.123.103:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f58b807a030 con 0x7f58b000fc00 2026-03-10T07:26:50.474 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.474+0000 7fd9feffd640 1 --2- 192.168.123.100:0/2956207474 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fd9d40777d0 0x7fd9d4079c90 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7fda0006e690 tx=0x7fd9f0009290 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:50.478 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.474+0000 7fd9fcff9640 1 -- 192.168.123.100:0/2956207474 <== mon.1 v2:192.168.123.103:3300/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7fd9ec099ef0 con 0x7fda00109480 2026-03-10T07:26:50.478 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.474+0000 7fd9fcff9640 1 -- 192.168.123.100:0/2956207474 <== mon.1 v2:192.168.123.103:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fd9ec09a310 con 0x7fda00109480 2026-03-10T07:26:50.524 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.522+0000 7fe332e23640 1 -- 192.168.123.100:0/3330457087 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 7} v 0) -- 0x7fe2f4005470 con 0x7fe32c075580 2026-03-10T07:26:50.524 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.522+0000 7fe329ffb640 1 -- 192.168.123.100:0/3330457087 <== 
mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 7}]=0 v0) ==== 74+0+13 (secure 0 0 0) 0x7fe31c0905c0 con 0x7fe32c075580 2026-03-10T07:26:50.524 INFO:teuthology.orchestra.run.vm00.stdout:219043332115 2026-03-10T07:26:50.534 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.534+0000 7fe2fb7fe640 1 -- 192.168.123.100:0/3330457087 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fe300077750 msgr2=0x7fe300079c10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.534 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.534+0000 7fe2fb7fe640 1 --2- 192.168.123.100:0/3330457087 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fe300077750 0x7fe300079c10 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7fe32c081540 tx=0x7fe324008910 comp rx=0 tx=0).stop 2026-03-10T07:26:50.534 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.534+0000 7fe2fb7fe640 1 -- 192.168.123.100:0/3330457087 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe32c075580 msgr2=0x7fe32c085d00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.534 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.534+0000 7fe2fb7fe640 1 --2- 192.168.123.100:0/3330457087 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe32c075580 0x7fe32c085d00 secure :-1 s=READY pgs=173 cs=0 l=1 rev1=1 crypto rx=0x7fe31c009b60 tx=0x7fe31c02f850 comp rx=0 tx=0).stop 2026-03-10T07:26:50.537 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.534+0000 7fe2fb7fe640 1 -- 192.168.123.100:0/3330457087 shutdown_connections 2026-03-10T07:26:50.537 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.534+0000 7fe2fb7fe640 1 --2- 192.168.123.100:0/3330457087 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fe300077750 0x7fe300079c10 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.537 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.534+0000 7fe2fb7fe640 1 --2- 192.168.123.100:0/3330457087 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fe32c10b080 0x7fe32c0802d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.537 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.534+0000 7fe2fb7fe640 1 --2- 192.168.123.100:0/3330457087 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fe32c10a6d0 0x7fe32c07fd90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.537 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.534+0000 7fe2fb7fe640 1 --2- 192.168.123.100:0/3330457087 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fe32c075580 0x7fe32c085d00 unknown :-1 s=CLOSED pgs=173 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.537 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.534+0000 7fe2fb7fe640 1 -- 192.168.123.100:0/3330457087 >> 192.168.123.100:0/3330457087 conn(0x7fe32c06db00 msgr2=0x7fe32c072b50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:50.538 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.534+0000 7fe2fb7fe640 1 -- 192.168.123.100:0/3330457087 shutdown_connections 2026-03-10T07:26:50.542 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.542+0000 7fe2fb7fe640 1 -- 
192.168.123.100:0/3330457087 wait complete. 2026-03-10T07:26:50.670 INFO:tasks.cephadm.ceph_manager.ceph:need seq 219043332114 got 219043332115 for osd.7 2026-03-10T07:26:50.670 DEBUG:teuthology.parallel:result is None 2026-03-10T07:26:50.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.746+0000 7efd69189640 1 -- 192.168.123.100:0/1423248671 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 5} v 0) -- 0x7efd30005470 con 0x7efd6410a850 2026-03-10T07:26:50.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.746+0000 7efd43fff640 1 -- 192.168.123.100:0/1423248671 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 5}]=0 v0) ==== 74+0+13 (secure 0 0 0) 0x7efd54090c80 con 0x7efd6410a850 2026-03-10T07:26:50.747 INFO:teuthology.orchestra.run.vm00.stdout:163208757282 2026-03-10T07:26:50.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.746+0000 7fbbaf63d640 1 -- 192.168.123.100:0/3520712859 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 0} v 0) -- 0x7fbba81a1ff0 con 0x7fbba8113b50 2026-03-10T07:26:50.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.750+0000 7efd41ffb640 1 -- 192.168.123.100:0/1423248671 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7efd34077800 msgr2=0x7efd34079cc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.750 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.750+0000 7efd41ffb640 1 --2- 192.168.123.100:0/1423248671 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7efd34077800 0x7efd34079cc0 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7efd58004500 tx=0x7efd58009290 comp rx=0 tx=0).stop 2026-03-10T07:26:50.750 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.750+0000 7efd41ffb640 1 -- 192.168.123.100:0/1423248671 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd6410a850 msgr2=0x7efd64112eb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.750 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.750+0000 7efd41ffb640 1 --2- 192.168.123.100:0/1423248671 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd6410a850 0x7efd64112eb0 secure :-1 s=READY pgs=174 cs=0 l=1 rev1=1 crypto rx=0x7efd54002410 tx=0x7efd54031d10 comp rx=0 tx=0).stop 2026-03-10T07:26:50.754 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.754+0000 7efd41ffb640 1 -- 192.168.123.100:0/1423248671 shutdown_connections 2026-03-10T07:26:50.755 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.754+0000 7efd41ffb640 1 --2- 192.168.123.100:0/1423248671 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7efd34077800 0x7efd34079cc0 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.755 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.754+0000 7efd41ffb640 1 --2- 192.168.123.100:0/1423248671 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7efd6411c780 0x7efd641bdf40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.755 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.754+0000 7efd41ffb640 1 --2- 192.168.123.100:0/1423248671 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] 
conn(0x7efd6410a850 0x7efd64112eb0 unknown :-1 s=CLOSED pgs=174 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.755 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.754+0000 7efd41ffb640 1 --2- 192.168.123.100:0/1423248671 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd6410a470 0x7efd64112970 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.755 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.754+0000 7fbb967fc640 1 -- 192.168.123.100:0/3520712859 <== mon.2 v2:192.168.123.100:3301/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 0}]=0 v0) ==== 74+0+12 (secure 0 0 0) 0x7fbb9c0672f0 con 0x7fbba8113b50 2026-03-10T07:26:50.755 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.754+0000 7efd41ffb640 1 -- 192.168.123.100:0/1423248671 >> 192.168.123.100:0/1423248671 conn(0x7efd6406d9f0 msgr2=0x7efd6411cb60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:50.755 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.754+0000 7efd41ffb640 1 -- 192.168.123.100:0/1423248671 shutdown_connections 2026-03-10T07:26:50.755 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.754+0000 7efd41ffb640 1 -- 192.168.123.100:0/1423248671 wait complete. 2026-03-10T07:26:50.756 INFO:teuthology.orchestra.run.vm00.stdout:34359738436 2026-03-10T07:26:50.768 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.766+0000 7fbbaf63d640 1 -- 192.168.123.100:0/3520712859 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fbb780777d0 msgr2=0x7fbb78079c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.769 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.766+0000 7fbbaf63d640 1 --2- 192.168.123.100:0/3520712859 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fbb780777d0 0x7fbb78079c90 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7fbba40062a0 tx=0x7fbba4002750 comp rx=0 tx=0).stop 2026-03-10T07:26:50.769 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.766+0000 7fbbaf63d640 1 -- 192.168.123.100:0/3520712859 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbba8113b50 msgr2=0x7fbba81bdec0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.769 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.766+0000 7fbbaf63d640 1 --2- 192.168.123.100:0/3520712859 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbba8113b50 0x7fbba81bdec0 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7fbb9c002990 tx=0x7fbb9c002e60 comp rx=0 tx=0).stop 2026-03-10T07:26:50.769 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.766+0000 7fbbaf63d640 1 -- 192.168.123.100:0/3520712859 shutdown_connections 2026-03-10T07:26:50.769 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.766+0000 7fbbaf63d640 1 --2- 192.168.123.100:0/3520712859 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fbb780777d0 0x7fbb78079c90 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.769 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.766+0000 7fbbaf63d640 1 --2- 192.168.123.100:0/3520712859 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fbba8113b50 0x7fbba81bdec0 unknown :-1 s=CLOSED pgs=63 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.770 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.766+0000 7fbbaf63d640 1 --2- 192.168.123.100:0/3520712859 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fbba810b080 0x7fbba81a11d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.770 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.766+0000 7fbbaf63d640 1 --2- 192.168.123.100:0/3520712859 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fbba810a6d0 0x7fbba81a0c90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.770 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.766+0000 7fbbaf63d640 1 -- 192.168.123.100:0/3520712859 >> 192.168.123.100:0/3520712859 conn(0x7fbba806deb0 msgr2=0x7fbba810b480 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:50.770 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.766+0000 7fbbaf63d640 1 -- 192.168.123.100:0/3520712859 shutdown_connections 2026-03-10T07:26:50.770 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.766+0000 7fbbaf63d640 1 -- 192.168.123.100:0/3520712859 wait complete. 2026-03-10T07:26:50.781 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.770+0000 7f3a2a7fc640 1 -- 192.168.123.100:0/2200884082 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 1} v 0) -- 0x7f3a14005470 con 0x7f3a4810b080 2026-03-10T07:26:50.789 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.786+0000 7f3a44ff9640 1 -- 192.168.123.100:0/2200884082 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 1}]=0 v0) ==== 74+0+12 (secure 0 0 0) 0x7f3a34063e20 con 0x7f3a4810b080 2026-03-10T07:26:50.789 INFO:teuthology.orchestra.run.vm00.stdout:55834574909 2026-03-10T07:26:50.805 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.802+0000 7f3a2a7fc640 1 -- 192.168.123.100:0/2200884082 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f3a18077700 msgr2=0x7f3a18079bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.806+0000 7f3a2a7fc640 1 --2- 192.168.123.100:0/2200884082 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f3a18077700 0x7f3a18079bc0 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f3a380059c0 tx=0x7f3a38005950 comp rx=0 tx=0).stop 2026-03-10T07:26:50.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.806+0000 7f3a2a7fc640 1 -- 192.168.123.100:0/2200884082 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3a4810b080 msgr2=0x7f3a481a0f00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.806+0000 7f3a2a7fc640 1 --2- 192.168.123.100:0/2200884082 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3a4810b080 0x7f3a481a0f00 secure :-1 s=READY pgs=176 cs=0 l=1 rev1=1 crypto rx=0x7f3a3400e9e0 tx=0x7f3a3400eeb0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.810 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.810+0000 7f3a2a7fc640 1 -- 192.168.123.100:0/2200884082 shutdown_connections 2026-03-10T07:26:50.810 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.810+0000 7f3a2a7fc640 1 --2- 192.168.123.100:0/2200884082 >> 
[v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f3a18077700 0x7f3a18079bc0 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.810 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.810+0000 7f3a2a7fc640 1 --2- 192.168.123.100:0/2200884082 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f3a48113b50 0x7f3a481bdec0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.810 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.810+0000 7f3a2a7fc640 1 --2- 192.168.123.100:0/2200884082 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f3a4810b080 0x7f3a481a0f00 unknown :-1 s=CLOSED pgs=176 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.810 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.810+0000 7f3a2a7fc640 1 --2- 192.168.123.100:0/2200884082 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f3a4810a6d0 0x7f3a481a09c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.810 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.810+0000 7f3a2a7fc640 1 -- 192.168.123.100:0/2200884082 >> 192.168.123.100:0/2200884082 conn(0x7f3a4806deb0 msgr2=0x7f3a4810b480 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:50.810 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.810+0000 7f3a2a7fc640 1 -- 192.168.123.100:0/2200884082 shutdown_connections 2026-03-10T07:26:50.814 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.814+0000 7f3a2a7fc640 1 -- 192.168.123.100:0/2200884082 wait complete. 2026-03-10T07:26:50.820 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:50 vm00 bash[20701]: audit 2026-03-10T07:26:50.527619+0000 mon.a (mon.0) 796 : audit [DBG] from='client.? 192.168.123.100:0/3330457087' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T07:26:50.820 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:50 vm00 bash[20701]: audit 2026-03-10T07:26:50.527619+0000 mon.a (mon.0) 796 : audit [DBG] from='client.? 192.168.123.100:0/3330457087' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T07:26:50.820 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:50 vm00 bash[28005]: audit 2026-03-10T07:26:50.527619+0000 mon.a (mon.0) 796 : audit [DBG] from='client.? 192.168.123.100:0/3330457087' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T07:26:50.820 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:50 vm00 bash[28005]: audit 2026-03-10T07:26:50.527619+0000 mon.a (mon.0) 796 : audit [DBG] from='client.? 
192.168.123.100:0/3330457087' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T07:26:50.863 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.862+0000 7f8b39ff1640 1 -- 192.168.123.100:0/3414056613 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 3} v 0) -- 0x7f8b00005470 con 0x7f8b3410b080 2026-03-10T07:26:50.866 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.862+0000 7f8b30ff9640 1 -- 192.168.123.100:0/3414056613 <== mon.1 v2:192.168.123.103:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 3}]=0 v0) ==== 74+0+13 (secure 0 0 0) 0x7f8b24067cf0 con 0x7f8b3410b080 2026-03-10T07:26:50.866 INFO:teuthology.orchestra.run.vm00.stdout:111669149743 2026-03-10T07:26:50.875 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.874+0000 7f8b39ff1640 1 -- 192.168.123.100:0/3414056613 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f8b0c077830 msgr2=0x7f8b0c079cf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.875 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.874+0000 7f8b39ff1640 1 --2- 192.168.123.100:0/3414056613 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f8b0c077830 0x7f8b0c079cf0 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f8b20005ec0 tx=0x7f8b2000c040 comp rx=0 tx=0).stop 2026-03-10T07:26:50.875 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.874+0000 7f8b39ff1640 1 -- 192.168.123.100:0/3414056613 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f8b3410b080 msgr2=0x7f8b341103a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.875 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.874+0000 7f8b39ff1640 1 --2- 192.168.123.100:0/3414056613 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f8b3410b080 0x7f8b341103a0 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f8b2400e9e0 tx=0x7f8b2400eeb0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.891 INFO:teuthology.orchestra.run.vm00.stdout:133143986217 2026-03-10T07:26:50.892 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.878+0000 7ff28f591640 1 -- 192.168.123.100:0/3654255621 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 4} v 0) -- 0x7ff250005470 con 0x7ff28810b080 2026-03-10T07:26:50.892 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.882+0000 7ff2767fc640 1 -- 192.168.123.100:0/3654255621 <== mon.2 v2:192.168.123.100:3301/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 4}]=0 v0) ==== 74+0+13 (secure 0 0 0) 0x7ff27c067ed0 con 0x7ff28810b080 2026-03-10T07:26:50.892 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.882+0000 7f8b39ff1640 1 -- 192.168.123.100:0/3414056613 shutdown_connections 2026-03-10T07:26:50.892 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.882+0000 7f8b39ff1640 1 --2- 192.168.123.100:0/3414056613 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f8b0c077830 0x7f8b0c079cf0 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.892 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.882+0000 7f8b39ff1640 1 --2- 192.168.123.100:0/3414056613 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f8b341b2f30 0x7f8b341b5320 unknown :-1 
s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.892 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.882+0000 7f8b39ff1640 1 --2- 192.168.123.100:0/3414056613 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f8b3410b080 0x7f8b341103a0 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.892 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.882+0000 7f8b39ff1640 1 --2- 192.168.123.100:0/3414056613 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8b3410a6d0 0x7f8b3410fe60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.892 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.882+0000 7f8b39ff1640 1 -- 192.168.123.100:0/3414056613 >> 192.168.123.100:0/3414056613 conn(0x7f8b3406deb0 msgr2=0x7f8b34119330 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:50.892 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.882+0000 7f8b39ff1640 1 -- 192.168.123.100:0/3414056613 shutdown_connections 2026-03-10T07:26:50.893 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.890+0000 7f588b7fe640 1 -- 192.168.123.100:0/3426861038 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 2} v 0) -- 0x7f5880005470 con 0x7f58b000fc00 2026-03-10T07:26:50.894 INFO:tasks.cephadm.ceph_manager.ceph:need seq 163208757281 got 163208757282 for osd.5 2026-03-10T07:26:50.894 DEBUG:teuthology.parallel:result is None 2026-03-10T07:26:50.897 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.894+0000 7ff257fff640 1 -- 192.168.123.100:0/3654255621 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7ff260077800 msgr2=0x7ff260079cc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.897 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.894+0000 7ff257fff640 1 --2- 192.168.123.100:0/3654255621 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7ff260077800 0x7ff260079cc0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7ff278007380 tx=0x7ff27800c040 comp rx=0 tx=0).stop 2026-03-10T07:26:50.897 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.894+0000 7ff257fff640 1 -- 192.168.123.100:0/3654255621 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff28810b080 msgr2=0x7ff28810e070 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.897 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.894+0000 7ff257fff640 1 --2- 192.168.123.100:0/3654255621 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff28810b080 0x7ff28810e070 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7ff27c002a50 tx=0x7ff27c002f20 comp rx=0 tx=0).stop 2026-03-10T07:26:50.897 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.894+0000 7ff257fff640 1 -- 192.168.123.100:0/3654255621 shutdown_connections 2026-03-10T07:26:50.897 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.894+0000 7ff257fff640 1 --2- 192.168.123.100:0/3654255621 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7ff260077800 0x7ff260079cc0 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.897 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.894+0000 7ff257fff640 1 --2- 192.168.123.100:0/3654255621 >> 
[v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff2881b2fe0 0x7ff2881b53d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.897 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.894+0000 7ff257fff640 1 --2- 192.168.123.100:0/3654255621 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff28810b080 0x7ff28810e070 unknown :-1 s=CLOSED pgs=65 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.897 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.894+0000 7ff257fff640 1 --2- 192.168.123.100:0/3654255621 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff28810a6d0 0x7ff28810db30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.897 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.894+0000 7ff257fff640 1 -- 192.168.123.100:0/3654255621 >> 192.168.123.100:0/3654255621 conn(0x7ff28806deb0 msgr2=0x7ff288114370 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:50.897 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.894+0000 7ff257fff640 1 -- 192.168.123.100:0/3654255621 shutdown_connections 2026-03-10T07:26:50.897 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.894+0000 7ff257fff640 1 -- 192.168.123.100:0/3654255621 wait complete. 2026-03-10T07:26:50.897 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.890+0000 7f8b39ff1640 1 -- 192.168.123.100:0/3414056613 wait complete. 2026-03-10T07:26:50.897 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.894+0000 7f58b5ffb640 1 -- 192.168.123.100:0/3426861038 <== mon.1 v2:192.168.123.103:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 2}]=0 v0) ==== 74+0+12 (secure 0 0 0) 0x7f58b80fe070 con 0x7f58b000fc00 2026-03-10T07:26:50.899 INFO:teuthology.orchestra.run.vm00.stdout:77309411382 2026-03-10T07:26:50.908 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.906+0000 7f588b7fe640 1 -- 192.168.123.100:0/3426861038 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f5898077700 msgr2=0x7f5898079bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.908 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.906+0000 7f588b7fe640 1 --2- 192.168.123.100:0/3426861038 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f5898077700 0x7f5898079bc0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f58ac009950 tx=0x7f58ac039040 comp rx=0 tx=0).stop 2026-03-10T07:26:50.908 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.906+0000 7f588b7fe640 1 -- 192.168.123.100:0/3426861038 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f58b000fc00 msgr2=0x7f58b00100b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.908 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.906+0000 7f588b7fe640 1 --2- 192.168.123.100:0/3426861038 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f58b000fc00 0x7f58b00100b0 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f58b80517c0 tx=0x7f58b80747f0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.909 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.906+0000 7f588b7fe640 1 -- 192.168.123.100:0/3426861038 shutdown_connections 2026-03-10T07:26:50.909 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.906+0000 7f588b7fe640 1 --2- 192.168.123.100:0/3426861038 >> 
[v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f5898077700 0x7f5898079bc0 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.909 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.906+0000 7f588b7fe640 1 --2- 192.168.123.100:0/3426861038 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f58b000fc00 0x7f58b00100b0 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.909 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.906+0000 7f588b7fe640 1 --2- 192.168.123.100:0/3426861038 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f58b00a5760 0x7f58b000f6c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.909 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.906+0000 7f588b7fe640 1 --2- 192.168.123.100:0/3426861038 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f58b00a4de0 0x7f58b0016630 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.909 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.906+0000 7f588b7fe640 1 -- 192.168.123.100:0/3426861038 >> 192.168.123.100:0/3426861038 conn(0x7f58b001a730 msgr2=0x7f58b00a65c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:50.909 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.906+0000 7f588b7fe640 1 -- 192.168.123.100:0/3426861038 shutdown_connections 2026-03-10T07:26:50.909 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.906+0000 7f588b7fe640 1 -- 192.168.123.100:0/3426861038 wait complete. 2026-03-10T07:26:50.922 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.918+0000 7fd9e27fc640 1 -- 192.168.123.100:0/2956207474 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 6} v 0) -- 0x7fd9bc005740 con 0x7fda00109480 2026-03-10T07:26:50.922 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.922+0000 7fd9fcff9640 1 -- 192.168.123.100:0/2956207474 <== mon.1 v2:192.168.123.103:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 6}]=0 v0) ==== 74+0+13 (secure 0 0 0) 0x7fd9ec066870 con 0x7fda00109480 2026-03-10T07:26:50.946 INFO:teuthology.orchestra.run.vm00.stdout:188978561052 2026-03-10T07:26:50.946 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.930+0000 7fd9e27fc640 1 -- 192.168.123.100:0/2956207474 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fd9d40777d0 msgr2=0x7fd9d4079c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.946 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.930+0000 7fd9e27fc640 1 --2- 192.168.123.100:0/2956207474 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fd9d40777d0 0x7fd9d4079c90 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7fda0006e690 tx=0x7fd9f0009290 comp rx=0 tx=0).stop 2026-03-10T07:26:50.946 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.930+0000 7fd9e27fc640 1 -- 192.168.123.100:0/2956207474 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fda00109480 msgr2=0x7fda0006ced0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:50.946 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.930+0000 7fd9e27fc640 1 --2- 192.168.123.100:0/2956207474 >> 
[v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fda00109480 0x7fda0006ced0 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fd9ec002fd0 tx=0x7fd9ec00daf0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.946 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.930+0000 7fd9e27fc640 1 -- 192.168.123.100:0/2956207474 shutdown_connections 2026-03-10T07:26:50.946 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.930+0000 7fd9e27fc640 1 --2- 192.168.123.100:0/2956207474 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fd9d40777d0 0x7fd9d4079c90 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.946 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.930+0000 7fd9e27fc640 1 --2- 192.168.123.100:0/2956207474 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fda0006d950 0x7fda001a4100 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.946 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.930+0000 7fd9e27fc640 1 --2- 192.168.123.100:0/2956207474 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fda00109e30 0x7fda0006d410 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.946 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.930+0000 7fd9e27fc640 1 --2- 192.168.123.100:0/2956207474 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fda00109480 0x7fda0006ced0 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:50.946 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.930+0000 7fd9e27fc640 1 -- 192.168.123.100:0/2956207474 >> 192.168.123.100:0/2956207474 conn(0x7fda00078920 msgr2=0x7fda0007a390 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:50.946 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.930+0000 7fd9e27fc640 1 -- 192.168.123.100:0/2956207474 shutdown_connections 2026-03-10T07:26:50.946 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:50.930+0000 7fd9e27fc640 1 -- 192.168.123.100:0/2956207474 wait complete. 2026-03-10T07:26:50.951 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738435 got 34359738436 for osd.0 2026-03-10T07:26:50.951 DEBUG:teuthology.parallel:result is None 2026-03-10T07:26:50.994 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574907 got 55834574909 for osd.1 2026-03-10T07:26:50.994 DEBUG:teuthology.parallel:result is None 2026-03-10T07:26:51.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:50 vm03 bash[23382]: audit 2026-03-10T07:26:50.527619+0000 mon.a (mon.0) 796 : audit [DBG] from='client.? 192.168.123.100:0/3330457087' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T07:26:51.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:50 vm03 bash[23382]: audit 2026-03-10T07:26:50.527619+0000 mon.a (mon.0) 796 : audit [DBG] from='client.? 
192.168.123.100:0/3330457087' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T07:26:51.060 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411380 got 77309411382 for osd.2 2026-03-10T07:26:51.060 DEBUG:teuthology.parallel:result is None 2026-03-10T07:26:51.082 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:26:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:26:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:26:51.088 INFO:tasks.cephadm.ceph_manager.ceph:need seq 133143986216 got 133143986217 for osd.4 2026-03-10T07:26:51.088 DEBUG:teuthology.parallel:result is None 2026-03-10T07:26:51.093 INFO:tasks.cephadm.ceph_manager.ceph:need seq 111669149741 got 111669149743 for osd.3 2026-03-10T07:26:51.093 DEBUG:teuthology.parallel:result is None 2026-03-10T07:26:51.100 INFO:tasks.cephadm.ceph_manager.ceph:need seq 188978561050 got 188978561052 for osd.6 2026-03-10T07:26:51.100 DEBUG:teuthology.parallel:result is None 2026-03-10T07:26:51.101 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-10T07:26:51.101 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph pg dump --format=json 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:51 vm00 bash[28005]: cluster 2026-03-10T07:26:50.558274+0000 mgr.y (mgr.24407) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:51 vm00 bash[28005]: cluster 2026-03-10T07:26:50.558274+0000 mgr.y (mgr.24407) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:51 vm00 bash[28005]: audit 2026-03-10T07:26:50.750088+0000 mon.a (mon.0) 797 : audit [DBG] from='client.? 192.168.123.100:0/1423248671' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:51 vm00 bash[28005]: audit 2026-03-10T07:26:50.750088+0000 mon.a (mon.0) 797 : audit [DBG] from='client.? 192.168.123.100:0/1423248671' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:51 vm00 bash[28005]: audit 2026-03-10T07:26:50.757996+0000 mon.c (mon.2) 68 : audit [DBG] from='client.? 192.168.123.100:0/3520712859' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:51 vm00 bash[28005]: audit 2026-03-10T07:26:50.757996+0000 mon.c (mon.2) 68 : audit [DBG] from='client.? 192.168.123.100:0/3520712859' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:51 vm00 bash[28005]: audit 2026-03-10T07:26:50.782400+0000 mon.a (mon.0) 798 : audit [DBG] from='client.? 192.168.123.100:0/2200884082' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:51 vm00 bash[28005]: audit 2026-03-10T07:26:50.782400+0000 mon.a (mon.0) 798 : audit [DBG] from='client.? 
192.168.123.100:0/2200884082' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:51 vm00 bash[28005]: audit 2026-03-10T07:26:50.866517+0000 mon.b (mon.1) 26 : audit [DBG] from='client.? 192.168.123.100:0/3414056613' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:51 vm00 bash[28005]: audit 2026-03-10T07:26:50.866517+0000 mon.b (mon.1) 26 : audit [DBG] from='client.? 192.168.123.100:0/3414056613' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:51 vm00 bash[28005]: audit 2026-03-10T07:26:50.885871+0000 mon.c (mon.2) 69 : audit [DBG] from='client.? 192.168.123.100:0/3654255621' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:51 vm00 bash[28005]: audit 2026-03-10T07:26:50.885871+0000 mon.c (mon.2) 69 : audit [DBG] from='client.? 192.168.123.100:0/3654255621' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:51 vm00 bash[28005]: audit 2026-03-10T07:26:50.897088+0000 mon.b (mon.1) 27 : audit [DBG] from='client.? 192.168.123.100:0/3426861038' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:51 vm00 bash[28005]: audit 2026-03-10T07:26:50.897088+0000 mon.b (mon.1) 27 : audit [DBG] from='client.? 192.168.123.100:0/3426861038' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:51 vm00 bash[28005]: audit 2026-03-10T07:26:50.925194+0000 mon.b (mon.1) 28 : audit [DBG] from='client.? 192.168.123.100:0/2956207474' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:51 vm00 bash[28005]: audit 2026-03-10T07:26:50.925194+0000 mon.b (mon.1) 28 : audit [DBG] from='client.? 192.168.123.100:0/2956207474' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:51 vm00 bash[20701]: cluster 2026-03-10T07:26:50.558274+0000 mgr.y (mgr.24407) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:51 vm00 bash[20701]: cluster 2026-03-10T07:26:50.558274+0000 mgr.y (mgr.24407) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:51 vm00 bash[20701]: audit 2026-03-10T07:26:50.750088+0000 mon.a (mon.0) 797 : audit [DBG] from='client.? 192.168.123.100:0/1423248671' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:51 vm00 bash[20701]: audit 2026-03-10T07:26:50.750088+0000 mon.a (mon.0) 797 : audit [DBG] from='client.? 
192.168.123.100:0/1423248671' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:51 vm00 bash[20701]: audit 2026-03-10T07:26:50.757996+0000 mon.c (mon.2) 68 : audit [DBG] from='client.? 192.168.123.100:0/3520712859' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:51 vm00 bash[20701]: audit 2026-03-10T07:26:50.757996+0000 mon.c (mon.2) 68 : audit [DBG] from='client.? 192.168.123.100:0/3520712859' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:51 vm00 bash[20701]: audit 2026-03-10T07:26:50.782400+0000 mon.a (mon.0) 798 : audit [DBG] from='client.? 192.168.123.100:0/2200884082' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:51 vm00 bash[20701]: audit 2026-03-10T07:26:50.782400+0000 mon.a (mon.0) 798 : audit [DBG] from='client.? 192.168.123.100:0/2200884082' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:51 vm00 bash[20701]: audit 2026-03-10T07:26:50.866517+0000 mon.b (mon.1) 26 : audit [DBG] from='client.? 192.168.123.100:0/3414056613' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:51 vm00 bash[20701]: audit 2026-03-10T07:26:50.866517+0000 mon.b (mon.1) 26 : audit [DBG] from='client.? 192.168.123.100:0/3414056613' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:51 vm00 bash[20701]: audit 2026-03-10T07:26:50.885871+0000 mon.c (mon.2) 69 : audit [DBG] from='client.? 192.168.123.100:0/3654255621' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:51 vm00 bash[20701]: audit 2026-03-10T07:26:50.885871+0000 mon.c (mon.2) 69 : audit [DBG] from='client.? 192.168.123.100:0/3654255621' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:51 vm00 bash[20701]: audit 2026-03-10T07:26:50.897088+0000 mon.b (mon.1) 27 : audit [DBG] from='client.? 192.168.123.100:0/3426861038' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:51 vm00 bash[20701]: audit 2026-03-10T07:26:50.897088+0000 mon.b (mon.1) 27 : audit [DBG] from='client.? 192.168.123.100:0/3426861038' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:51 vm00 bash[20701]: audit 2026-03-10T07:26:50.925194+0000 mon.b (mon.1) 28 : audit [DBG] from='client.? 192.168.123.100:0/2956207474' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T07:26:51.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:51 vm00 bash[20701]: audit 2026-03-10T07:26:50.925194+0000 mon.b (mon.1) 28 : audit [DBG] from='client.? 
192.168.123.100:0/2956207474' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T07:26:52.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:51 vm03 bash[23382]: cluster 2026-03-10T07:26:50.558274+0000 mgr.y (mgr.24407) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:26:52.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:51 vm03 bash[23382]: cluster 2026-03-10T07:26:50.558274+0000 mgr.y (mgr.24407) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:26:52.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:51 vm03 bash[23382]: audit 2026-03-10T07:26:50.750088+0000 mon.a (mon.0) 797 : audit [DBG] from='client.? 192.168.123.100:0/1423248671' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T07:26:52.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:51 vm03 bash[23382]: audit 2026-03-10T07:26:50.750088+0000 mon.a (mon.0) 797 : audit [DBG] from='client.? 192.168.123.100:0/1423248671' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T07:26:52.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:51 vm03 bash[23382]: audit 2026-03-10T07:26:50.757996+0000 mon.c (mon.2) 68 : audit [DBG] from='client.? 192.168.123.100:0/3520712859' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T07:26:52.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:51 vm03 bash[23382]: audit 2026-03-10T07:26:50.757996+0000 mon.c (mon.2) 68 : audit [DBG] from='client.? 192.168.123.100:0/3520712859' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T07:26:52.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:51 vm03 bash[23382]: audit 2026-03-10T07:26:50.782400+0000 mon.a (mon.0) 798 : audit [DBG] from='client.? 192.168.123.100:0/2200884082' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T07:26:52.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:51 vm03 bash[23382]: audit 2026-03-10T07:26:50.782400+0000 mon.a (mon.0) 798 : audit [DBG] from='client.? 192.168.123.100:0/2200884082' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T07:26:52.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:51 vm03 bash[23382]: audit 2026-03-10T07:26:50.866517+0000 mon.b (mon.1) 26 : audit [DBG] from='client.? 192.168.123.100:0/3414056613' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T07:26:52.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:51 vm03 bash[23382]: audit 2026-03-10T07:26:50.866517+0000 mon.b (mon.1) 26 : audit [DBG] from='client.? 192.168.123.100:0/3414056613' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T07:26:52.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:51 vm03 bash[23382]: audit 2026-03-10T07:26:50.885871+0000 mon.c (mon.2) 69 : audit [DBG] from='client.? 192.168.123.100:0/3654255621' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T07:26:52.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:51 vm03 bash[23382]: audit 2026-03-10T07:26:50.885871+0000 mon.c (mon.2) 69 : audit [DBG] from='client.? 
192.168.123.100:0/3654255621' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T07:26:52.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:51 vm03 bash[23382]: audit 2026-03-10T07:26:50.897088+0000 mon.b (mon.1) 27 : audit [DBG] from='client.? 192.168.123.100:0/3426861038' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T07:26:52.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:51 vm03 bash[23382]: audit 2026-03-10T07:26:50.897088+0000 mon.b (mon.1) 27 : audit [DBG] from='client.? 192.168.123.100:0/3426861038' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T07:26:52.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:51 vm03 bash[23382]: audit 2026-03-10T07:26:50.925194+0000 mon.b (mon.1) 28 : audit [DBG] from='client.? 192.168.123.100:0/2956207474' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T07:26:52.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:51 vm03 bash[23382]: audit 2026-03-10T07:26:50.925194+0000 mon.b (mon.1) 28 : audit [DBG] from='client.? 192.168.123.100:0/2956207474' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T07:26:53.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:26:52 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:26:53.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:53 vm00 bash[20701]: cluster 2026-03-10T07:26:52.558623+0000 mgr.y (mgr.24407) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:53.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:53 vm00 bash[20701]: cluster 2026-03-10T07:26:52.558623+0000 mgr.y (mgr.24407) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:53.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:53 vm00 bash[20701]: audit 2026-03-10T07:26:52.877254+0000 mgr.y (mgr.24407) 61 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:53.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:53 vm00 bash[20701]: audit 2026-03-10T07:26:52.877254+0000 mgr.y (mgr.24407) 61 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:53.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:53 vm00 bash[28005]: cluster 2026-03-10T07:26:52.558623+0000 mgr.y (mgr.24407) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:53.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:53 vm00 bash[28005]: cluster 2026-03-10T07:26:52.558623+0000 mgr.y (mgr.24407) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:53.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:53 vm00 bash[28005]: audit 2026-03-10T07:26:52.877254+0000 mgr.y (mgr.24407) 61 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:53.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:53 vm00 bash[28005]: audit 2026-03-10T07:26:52.877254+0000 mgr.y 
(mgr.24407) 61 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:54.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:53 vm03 bash[23382]: cluster 2026-03-10T07:26:52.558623+0000 mgr.y (mgr.24407) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:54.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:53 vm03 bash[23382]: cluster 2026-03-10T07:26:52.558623+0000 mgr.y (mgr.24407) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:54.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:53 vm03 bash[23382]: audit 2026-03-10T07:26:52.877254+0000 mgr.y (mgr.24407) 61 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:54.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:53 vm03 bash[23382]: audit 2026-03-10T07:26:52.877254+0000 mgr.y (mgr.24407) 61 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:26:55.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:54 vm03 bash[23382]: audit 2026-03-10T07:26:53.679448+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:26:55.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:54 vm03 bash[23382]: audit 2026-03-10T07:26:53.679448+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:26:55.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:54 vm00 bash[28005]: audit 2026-03-10T07:26:53.679448+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:26:55.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:54 vm00 bash[28005]: audit 2026-03-10T07:26:53.679448+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:26:55.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:54 vm00 bash[20701]: audit 2026-03-10T07:26:53.679448+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:26:55.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:54 vm00 bash[20701]: audit 2026-03-10T07:26:53.679448+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:26:55.785 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config 2026-03-10T07:26:55.927 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1be5fb640 1 -- 192.168.123.100:0/2691716817 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fd1b8101760 msgr2=0x7fd1b8101b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:55.927 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1be5fb640 1 --2- 192.168.123.100:0/2691716817 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fd1b8101760 0x7fd1b8101b40 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7fd1a4009a30 tx=0x7fd1a402f220 comp rx=0 tx=0).stop 2026-03-10T07:26:55.927 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1be5fb640 1 -- 192.168.123.100:0/2691716817 shutdown_connections 2026-03-10T07:26:55.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1be5fb640 1 --2- 192.168.123.100:0/2691716817 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd1b810f490 0x7fd1b8111880 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:55.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1be5fb640 1 --2- 192.168.123.100:0/2691716817 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd1b8102080 0x7fd1b810ef50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:55.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1be5fb640 1 --2- 192.168.123.100:0/2691716817 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fd1b8101760 0x7fd1b8101b40 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:55.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1be5fb640 1 -- 192.168.123.100:0/2691716817 >> 192.168.123.100:0/2691716817 conn(0x7fd1b80fd630 msgr2=0x7fd1b80ffa50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:55.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1be5fb640 1 -- 192.168.123.100:0/2691716817 shutdown_connections 2026-03-10T07:26:55.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1be5fb640 1 -- 192.168.123.100:0/2691716817 wait complete. 
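[annotation] The tasks.cephadm.ceph_manager.ceph lines above show the flush-pg-stats barrier: each OSD is told to flush its PG stats, and the task then polls the "osd last-stat-seq" mon command (visible in the messenger traffic) until the monitor-side sequence catches up with the value the OSD reported, e.g. "need seq 219043332114 got 219043332115 for osd.7". Once all eight OSDs pass, "waiting for clean" polls `ceph pg dump --format=json` through `cephadm shell` until every PG is active+clean. A minimal sketch of that polling logic, using the fsid and image from this run; the ceph() wrapper is a hypothetical helper for illustration, not teuthology's actual ceph_manager code:

    import json
    import subprocess
    import time

    # Values taken from the run above.
    FSID = "534d9c8a-1c51-11f1-ac87-d1fb9a119953"
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"

    def ceph(*args):
        # Run a ceph CLI command inside a cephadm shell, mirroring the
        # "sudo cephadm ... shell --fsid ... -- ceph pg dump" call in the log.
        cmd = ["sudo", "cephadm", "--image", IMAGE, "shell",
               "--fsid", FSID, "--", "ceph", *args]
        return subprocess.check_output(cmd, text=True)

    def wait_for_clean(timeout=300, interval=3):
        # Poll the PG dump until every PG reports an active+clean state.
        deadline = time.time() + timeout
        while time.time() < deadline:
            dump = json.loads(ceph("pg", "dump", "--format=json"))
            # Newer releases nest the stats under "pg_map"; fall back to the
            # top level otherwise.
            pg_stats = dump.get("pg_map", dump).get("pg_stats", [])
            if pg_stats and all("active" in pg["state"] and "clean" in pg["state"]
                                for pg in pg_stats):
                return
            time.sleep(interval)
        raise TimeoutError("PGs did not reach active+clean in time")

The pgmap lines from the mons below ("132 pgs: 132 active+clean") confirm the cluster is already in the state this loop waits for.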
2026-03-10T07:26:55.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1be5fb640 1 Processor -- start 2026-03-10T07:26:55.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1be5fb640 1 -- start start 2026-03-10T07:26:55.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1be5fb640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fd1b8101760 0x7fd1b81a2660 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:55.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1be5fb640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd1b8102080 0x7fd1b81a2ba0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:55.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1be5fb640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd1b810f490 0x7fd1b819c7e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:55.929 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1b7fff640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fd1b8101760 0x7fd1b81a2660 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:55.929 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1b7fff640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fd1b8101760 0x7fd1b81a2660 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.103:3300/0 says I am v2:192.168.123.100:49864/0 (socket says 192.168.123.100:49864) 2026-03-10T07:26:55.929 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1b7fff640 1 -- 192.168.123.100:0/3221780231 learned_addr learned my addr 192.168.123.100:0/3221780231 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:26:55.929 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1be5fb640 1 -- 192.168.123.100:0/3221780231 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fd1b81142c0 con 0x7fd1b8102080 2026-03-10T07:26:55.929 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1be5fb640 1 -- 192.168.123.100:0/3221780231 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7fd1b8114140 con 0x7fd1b8101760 2026-03-10T07:26:55.929 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1be5fb640 1 -- 192.168.123.100:0/3221780231 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fd1b8114440 con 0x7fd1b810f490 2026-03-10T07:26:55.929 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1bcb71640 1 --2- 192.168.123.100:0/3221780231 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd1b810f490 0x7fd1b819c7e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:55.929 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.926+0000 7fd1b77fe640 1 --2- 192.168.123.100:0/3221780231 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd1b8102080 0x7fd1b81a2ba0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 
rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:55.929 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.930+0000 7fd1bcb71640 1 -- 192.168.123.100:0/3221780231 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fd1b8101760 msgr2=0x7fd1b81a2660 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:55.929 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.930+0000 7fd1bcb71640 1 --2- 192.168.123.100:0/3221780231 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fd1b8101760 0x7fd1b81a2660 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:55.929 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.930+0000 7fd1bcb71640 1 -- 192.168.123.100:0/3221780231 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd1b8102080 msgr2=0x7fd1b81a2ba0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:55.929 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.930+0000 7fd1bcb71640 1 --2- 192.168.123.100:0/3221780231 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd1b8102080 0x7fd1b81a2ba0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:55.930 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.930+0000 7fd1bcb71640 1 -- 192.168.123.100:0/3221780231 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd1b819d010 con 0x7fd1b810f490 2026-03-10T07:26:55.930 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.930+0000 7fd1b77fe640 1 --2- 192.168.123.100:0/3221780231 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd1b8102080 0x7fd1b81a2ba0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-10T07:26:55.930 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.930+0000 7fd1bcb71640 1 --2- 192.168.123.100:0/3221780231 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd1b810f490 0x7fd1b819c7e0 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7fd1ac007ed0 tx=0x7fd1ac00e510 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:55.930 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.930+0000 7fd1b57fa640 1 -- 192.168.123.100:0/3221780231 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fd1ac002a60 con 0x7fd1b810f490 2026-03-10T07:26:55.930 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.930+0000 7fd1be5fb640 1 -- 192.168.123.100:0/3221780231 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fd1b819d2a0 con 0x7fd1b810f490 2026-03-10T07:26:55.930 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.930+0000 7fd1be5fb640 1 -- 192.168.123.100:0/3221780231 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7fd1b81a94c0 con 0x7fd1b810f490 2026-03-10T07:26:55.931 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.930+0000 7fd1b57fa640 1 -- 192.168.123.100:0/3221780231 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fd1ac002c00 con 0x7fd1b810f490 2026-03-10T07:26:55.931 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.930+0000 7fd1b57fa640 1 -- 192.168.123.100:0/3221780231 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fd1ac013660 con 0x7fd1b810f490 2026-03-10T07:26:55.931 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.930+0000 7fd18affd640 1 -- 192.168.123.100:0/3221780231 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd180005180 con 0x7fd1b810f490 2026-03-10T07:26:55.933 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.930+0000 7fd1b57fa640 1 -- 192.168.123.100:0/3221780231 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7fd1ac0040d0 con 0x7fd1b810f490 2026-03-10T07:26:55.933 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.930+0000 7fd1b57fa640 1 --2- 192.168.123.100:0/3221780231 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fd198077670 0x7fd198079b30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:55.934 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.930+0000 7fd1b57fa640 1 -- 192.168.123.100:0/3221780231 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7fd1ac099960 con 0x7fd1b810f490 2026-03-10T07:26:55.935 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.934+0000 7fd1b57fa640 1 -- 192.168.123.100:0/3221780231 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fd1ac010040 con 0x7fd1b810f490 2026-03-10T07:26:55.935 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.934+0000 7fd1b7fff640 1 --2- 192.168.123.100:0/3221780231 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fd198077670 0x7fd198079b30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:55.935 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:55.934+0000 7fd1b7fff640 1 --2- 192.168.123.100:0/3221780231 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fd198077670 0x7fd198079b30 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7fd1a40097c0 tx=0x7fd1a4005e50 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:56.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:55 vm03 bash[23382]: cluster 2026-03-10T07:26:54.558991+0000 mgr.y (mgr.24407) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:56.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:56.030+0000 7fd18affd640 1 -- 192.168.123.100:0/3221780231 --> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] -- mgr_command(tid 0: {"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}) -- 0x7fd180002bf0 con 0x7fd198077670 2026-03-10T07:26:56.039 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:56.038+0000 7fd1b57fa640 1 -- 192.168.123.100:0/3221780231 <== mgr.24407 v2:192.168.123.100:6800/3339031114 1 ==== mgr_command_reply(tid 0: 0 dumped all) ==== 18+0+346482 (secure 0 0 0) 0x7fd180002bf0 con 0x7fd198077670 2026-03-10T07:26:56.039 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T07:26:56.041 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-10T07:26:56.043 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:56.042+0000 7fd18affd640 1 -- 192.168.123.100:0/3221780231 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fd198077670 msgr2=0x7fd198079b30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:56.043 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:56.042+0000 7fd18affd640 1 --2- 192.168.123.100:0/3221780231 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fd198077670 0x7fd198079b30 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7fd1a40097c0 tx=0x7fd1a4005e50 comp rx=0 tx=0).stop 2026-03-10T07:26:56.043 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:56.042+0000 7fd18affd640 1 -- 192.168.123.100:0/3221780231 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd1b810f490 msgr2=0x7fd1b819c7e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:56.043 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:56.042+0000 7fd18affd640 1 --2- 192.168.123.100:0/3221780231 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd1b810f490 0x7fd1b819c7e0 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7fd1ac007ed0 tx=0x7fd1ac00e510 comp rx=0 tx=0).stop 2026-03-10T07:26:56.044 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:56.042+0000 7fd18affd640 1 -- 192.168.123.100:0/3221780231 shutdown_connections 2026-03-10T07:26:56.044 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:56.042+0000 7fd18affd640 1 --2- 192.168.123.100:0/3221780231 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114]
conn(0x7fd198077670 0x7fd198079b30 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:56.044 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:56.042+0000 7fd18affd640 1 --2- 192.168.123.100:0/3221780231 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fd1b810f490 0x7fd1b819c7e0 unknown :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:56.044 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:56.042+0000 7fd18affd640 1 --2- 192.168.123.100:0/3221780231 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd1b8102080 0x7fd1b81a2ba0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:56.044 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:56.042+0000 7fd18affd640 1 --2- 192.168.123.100:0/3221780231 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fd1b8101760 0x7fd1b81a2660 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:56.044 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:56.042+0000 7fd18affd640 1 -- 192.168.123.100:0/3221780231 >> 192.168.123.100:0/3221780231 conn(0x7fd1b80fd630 msgr2=0x7fd1b810fd40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:56.044 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:56.042+0000 7fd18affd640 1 -- 192.168.123.100:0/3221780231 shutdown_connections 2026-03-10T07:26:56.044 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:56.042+0000 7fd18affd640 1 -- 192.168.123.100:0/3221780231 wait complete. 2026-03-10T07:26:56.052 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:55 vm00 bash[28005]: cluster 2026-03-10T07:26:54.558991+0000 mgr.y (mgr.24407) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:56.052 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:55 vm00 bash[20701]: cluster 2026-03-10T07:26:54.558991+0000 mgr.y (mgr.24407) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:56.096
INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":26,"stamp":"2026-03-10T07:26:54.558797+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":906,"num_read_kb":765,"num_write":505,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":538,"ondisk_log_size":538,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":396,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":222028,"kb_used_data":7324,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167517364,"statfs":{"total":171765137408,"available":171537780736,"internally_reserved":0,"allocated":7499776,"data_stored":3924625,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12711,"internal_metadata":219663961},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.002424"},"pg_stats":[{"pgid":"6.1b","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537359+0000","last_change":"2026-03-10T07:25:43.202330+0000
","last_active":"2026-03-10T07:26:08.537359+0000","last_peered":"2026-03-10T07:26:08.537359+0000","last_clean":"2026-03-10T07:26:08.537359+0000","last_became_active":"2026-03-10T07:25:43.201823+0000","last_became_peered":"2026-03-10T07:25:43.201823+0000","last_unstale":"2026-03-10T07:26:08.537359+0000","last_undegraded":"2026-03-10T07:26:08.537359+0000","last_fullsized":"2026-03-10T07:26:08.537359+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T13:15:41.777780+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1f","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.638866+0000","last_change":"2026-03-10T07:25:36.944849+0000","last_active":"2026-03-10T07:26:09.638866+0000","last_peered":"2026-03-10T07:26:09.638866+0000","last_clean":"2026-03-10T07:26:09.638866+0000","last_became_active":"2026-03-10T07:25:36.944724+0000","last_became_peered":"2026-03-10T07:25:36.944724+0000","last_unstale":"2026-03-10T07:26:09.638866+0000","last_undegraded":"2026-03-10T07:26:09.638866+0000","last_fullsized":"2026-03-10T07:26:09.638866+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:51:06.434127+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,4],"acting":[0,7,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1e","version":"62'10","reported_seq":48,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537400+0000","last_change":"2026-03-10T07:25:38.955716+0000","last_active":"2026-03-10T07:26:08.537400+0000","last_peered":"2026-03-10T07:26:08.537400+0000","last_clean":"2026-03-10T07:26:08.537400+0000","last_became_active":"2026-03-10T07:25:38.955307+0000","last_became_peered":"2026-03-10T07:25:38.955307+0000","last_unstale":"2026-03-10T07:26:08.537400+0000","last_undegraded":"2026-03-10T07:26:08.537400+0000","last_fullsized":"2026-03-10T07:26:08.537400+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:42:45.356494+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.668802+0000","last_change":"2026-03-10T07:25:40.962768+0000","last_active":"2026-03-10T07:26:08.668802+0000","last_peered":"2026-03-10T07:26:08.668802+0000","last_clean":"2026-03-10T07:26:08.668802+0000","last_became_active":"2026-03-10T07:25:40.962623+0000","last_became_peered":"2026-03-10T07:25:40.962623+0000","last_unstale":"2026-03-10T07:26:08.668802+0000","last_undegraded":"2026-03-10T07:26:08.668802+0000","last_fullsized":"2026-03-10T07:26:08.668802+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:40:46.353878+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1e","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537062+0000","last_change":"2026-03-10T07:25:36.937025+0000","last_active":"2026-03-10T07:26:08.537062+0000","last_peered":"2026-03-10T07:26:08.537062+0000","last_clean":"2026-03-10T07:26:08.537062+0000","last_became_active":"2026-03-10T07:25:36.936915+0000","last_became_peered":"2026-03-10T07:25:36.936915+0000","last_unstale":"2026-03-10T07:26:08.537062+0000","last_undegraded":"2026-03-10T07:26:08.537062+0000","last_fullsized":"2026-03-10T07:26:08.537062+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:29:30.586622+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1f","version":"62'11","reported_seq":52,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.639399+0000","last_change":"2026-03-10T07:25:38.942598+0000","last_active":"2026-03-10T07:26:09.639399+0000","last_peered":"2026-03-10T07:26:09.639399+0000","last_clean":"2026-03-10T07:26:09.639399+0000","last_became_active":"2026-03-10T07:25:38.942521+0000","last_became_peered":"2026-03-10T07:25:38.942521+0000","last_unstale":"2026-03-10T07:26:09.639399+0000","last_undegraded":"2026-03-10T07:26:09.639399+0000","last_fullsized":"2026-03-10T07:26:09.639399+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:12:12.132136+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.643354+0000","last_change":"2026-03-10T07:25:40.956195+0000","last_active":"2026-03-10T07:26:08.643354+0000","last_peered":"2026-03-10T07:26:08.643354+0000","last_clean":"2026-03-10T07:26:08.643354+0000","last_became_active":"2026-03-10T07:25:40.956106+0000","last_became_peered":"2026-03-10T07:25:40.956106+0000","last_unstale":"2026-03-10T07:26:08.643354+0000","last_undegraded":"2026-03-10T07:26:08.643354+0000","last_fullsized":"2026-03-10T07:26:08.643354+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:30:47.995875+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1a","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.668311+0000","last_change":"2026-03-10T07:25:42.973744+0000","last_active":"2026-03-10T07:26:08.668311+0000","last_peered":"2026-03-10T07:26:08.668311+0000","last_clean":"2026-03-10T07:26:08.668311+0000","last_became_active":"2026-03-10T07:25:42.973548+0000","last_became_peered":"2026-03-10T07:25:42.973548+0000","last_unstale":"2026-03-10T07:26:08.668311+0000","last_undegraded":"2026-03-10T07:26:08.668311+0000","last_fullsized":"2026-03-10T07:26:08.668311+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:10:53.823226+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1d","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.037097+0000","last_change":"2026-03-10T07:25:36.957998+0000","last_active":"2026-03-10T07:26:09.037097+0000","last_peered":"2026-03-10T07:26:09.037097+0000","last_clean":"2026-03-10T07:26:09.037097+0000","last_became_active":"2026-03-10T07:25:36.956931+0000","last_became_peered":"2026-03-10T07:25:36.956931+0000","last_unstale":"2026-03-10T07:26:09.037097+0000","last_undegraded":"2026-03-10T07:26:09.037097+0000","last_fullsized":"2026-03-10T07:26:09.037097+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:57:32.973481+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,0],"acting":[7,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1c","version":"62'15","reported_seq":58,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537438+0000","last_change":"2026-03-10T07:25:38.963336+0000","last_active":"2026-03-10T07:26:08.537438+0000","last_peered":"2026-03-10T07:26:08.537438+0000","last_clean":"2026-03-10T07:26:08.537438+0000","last_became_active":"2026-03-10T07:25:38.963168+0000","last_became_peered":"2026-03-10T07:25:38.963168+0000","last_unstale":"2026-03-10T07:26:08.537438+0000","last_undegraded":"2026-03-10T07:26:08.537438+0000","last_fullsized":"2026-03-10T07:26:08.537438+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:12:38.948372+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1a","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.039332+0000","last_change":"2026-03-10T07:25:40.967300+0000","last_active":"2026-03-10T07:26:09.039332+0000","last_peered":"2026-03-10T07:26:09.039332+0000","last_clean":"2026-03-10T07:26:09.039332+0000","last_became_active":"2026-03-10T07:25:40.967173+0000","last_became_peered":"2026-03-10T07:25:40.967173+0000","last_unstale":"2026-03-10T07:26:09.039332+0000","last_undegraded":"2026-03-10T07:26:09.039332+0000","last_fullsized":"2026-03-10T07:26:09.039332+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:18:52.767495+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537396+0000","last_change":"2026-03-10T07:25:42.976658+0000","last_active":"2026-03-10T07:26:08.537396+0000","last_peered":"2026-03-10T07:26:08.537396+0000","last_clean":"2026-03-10T07:26:08.537396+0000","last_became_active":"2026-03-10T07:25:42.976337+0000","last_became_peered":"2026-03-10T07:25:42.976337+0000","last_unstale":"2026-03-10T07:26:08.537396+0000","last_undegraded":"2026-03-10T07:26:08.537396+0000","last_fullsized":"2026-03-10T07:26:08.537396+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:35:20.695170+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.1c","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.037512+0000","last_change":"2026-03-10T07:25:36.954845+0000","last_active":"2026-03-10T07:26:09.037512+0000","last_peered":"2026-03-10T07:26:09.037512+0000","last_clean":"2026-03-10T07:26:09.037512+0000","last_became_active":"2026-03-10T07:25:36.951106+0000","last_became_peered":"2026-03-10T07:25:36.951106+0000","last_unstale":"2026-03-10T07:26:09.037512+0000","last_undegraded":"2026-03-10T07:26:09.037512+0000","last_fullsized":"2026-03-10T07:26:09.037512+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:48:45.687745+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1d","version":"62'12","reported_seq":56,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536303+0000","last_change":"2026-03-10T07:25:38.958439+0000","last_active":"2026-03-10T07:26:08.536303+0000","last_peered":"2026-03-10T07:26:08.536303+0000","last_clean":"2026-03-10T07:26:08.536303+0000","last_became_active":"2026-03-10T07:25:38.958251+0000","last_became_peered":"2026-03-10T07:25:38.958251+0000","last_unstale":"2026-03-10T07:26:08.536303+0000","last_undegraded":"2026-03-10T07:26:08.536303+0000","last_fullsized":"2026-03-10T07:26:08.536303+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:00:37.389406+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536260+0000","last_change":"2026-03-10T07:25:40.960067+0000","last_active":"2026-03-10T07:26:08.536260+0000","last_peered":"2026-03-10T07:26:08.536260+0000","last_clean":"2026-03-10T07:26:08.536260+0000","last_became_active":"2026-03-10T07:25:40.954903+0000","last_became_peered":"2026-03-10T07:25:40.954903+0000","last_unstale":"2026-03-10T07:26:08.536260+0000","last_undegraded":"2026-03-10T07:26:08.536260+0000","last_fullsized":"2026-03-10T07:26:08.536260+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:16:03.315943+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.639150+0000","last_change":"2026-03-10T07:25:42.971903+0000","last_active":"2026-03-10T07:26:09.639150+0000","last_peered":"2026-03-10T07:26:09.639150+0000","last_clean":"2026-03-10T07:26:09.639150+0000","last_became_active":"2026-03-10T07:25:42.971771+0000","last_became_peered":"2026-03-10T07:25:42.971771+0000","last_unstale":"2026-03-10T07:26:09.639150+0000","last_undegraded":"2026-03-10T07:26:09.639150+0000","last_fullsized":"2026-03-10T07:26:09.639150+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:07:21.256438+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.a","version":"62'19","reported_seq":64,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.588617+0000","last_change":"2026-03-10T07:25:38.973814+0000","last_active":"2026-03-10T07:26:08.588617+0000","last_peered":"2026-03-10T07:26:08.588617+0000","last_clean":"2026-03-10T07:26:08.588617+0000","last_became_active":"2026-03-10T07:25:38.973682+0000","last_became_peered":"2026-03-10T07:25:38.973682+0000","last_unstale":"2026-03-10T07:26:08.588617+0000","last_undegraded":"2026-03-10T07:26:08.588617+0000","last_fullsized":"2026-03-10T07:26:08.588617+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:38:42.857194+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.b","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.036904+0000","last_change":"2026-03-10T07:25:36.954925+0000","last_active":"2026-03-10T07:26:09.036904+0000","last_peered":"2026-03-10T07:26:09.036904+0000","last_clean":"2026-03-10T07:26:09.036904+0000","last_became_active":"2026-03-10T07:25:36.954558+0000","last_became_peered":"2026-03-10T07:25:36.954558+0000","last_unstale":"2026-03-10T07:26:09.036904+0000","last_undegraded":"2026-03-10T07:26:09.036904+0000","last_fullsized":"2026-03-10T07:26:09.036904+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:44:38.495748+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,5],"acting":[7,4,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.644422+0000","last_change":"2026-03-10T07:25:40.965642+0000","last_active":"2026-03-10T07:26:08.644422+0000","last_peered":"2026-03-10T07:26:08.644422+0000","last_clean":"2026-03-10T07:26:08.644422+0000","last_became_active":"2026-03-10T07:25:40.965338+0000","last_became_peered":"2026-03-10T07:25:40.965338+0000","last_unstale":"2026-03-10T07:26:08.644422+0000","last_undegraded":"2026-03-10T07:26:08.644422+0000","last_fullsized":"2026-03-10T07:26:08.644422+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:20:10.509873+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.534068+0000","last_change":"2026-03-10T07:25:42.971467+0000","last_active":"2026-03-10T07:26:08.534068+0000","last_peered":"2026-03-10T07:26:08.534068+0000","last_clean":"2026-03-10T07:26:08.534068+0000","last_became_active":"2026-03-10T07:25:42.969267+0000","last_became_peered":"2026-03-10T07:25:42.969267+0000","last_unstale":"2026-03-10T07:26:08.534068+0000","last_undegraded":"2026-03-10T07:26:08.534068+0000","last_fullsized":"2026-03-10T07:26:08.534068+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:13:43.845735+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.b","version":"62'9","reported_seq":49,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536866+0000","last_change":"2026-03-10T07:25:38.955595+0000","last_active":"2026-03-10T07:26:08.536866+0000","last_peered":"2026-03-10T07:26:08.536866+0000","last_clean":"2026-03-10T07:26:08.536866+0000","last_became_active":"2026-03-10T07:25:38.955505+0000","last_became_peered":"2026-03-10T07:25:38.955505+0000","last_unstale":"2026-03-10T07:26:08.536866+0000","last_undegraded":"2026-03-10T07:26:08.536866+0000","last_fullsized":"2026-03-10T07:26:08.536866+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:45:40.697478+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.a","version":"55'1","reported_seq":45,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.643588+0000","last_change":"2026-03-10T07:25:36.944402+0000","last_active":"2026-03-10T07:26:08.643588+0000","last_peered":"2026-03-10T07:26:08.643588+0000","last_clean":"2026-03-10T07:26:08.643588+0000","last_became_active":"2026-03-10T07:25:36.944272+0000","last_became_peered":"2026-03-10T07:25:36.944272+0000","last_unstale":"2026-03-10T07:26:08.643588+0000","last_undegraded":"2026-03-10T07:26:08.643588+0000","last_fullsized":"2026-03-10T07:26:08.643588+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:55:37.483284+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":993,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,7],"acting":[1,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.d","version":"63'11","reported_seq":54,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:42.004122+0000","last_change":"2026-03-10T07:25:40.970998+0000","last_active":"2026-03-10T07:26:42.004122+0000","last_peered":"2026-03-10T07:26:42.004122+0000","last_clean":"2026-03-10T07:26:42.004122+0000","last_became_active":"2026-03-10T07:25:40.970797+0000","last_became_peered":"2026-03-10T07:25:40.970797+0000","last_unstale":"2026-03-10T07:26:42.004122+0000","last_undegraded":"2026-03-10T07:26:42.004122+0000","last_fullsized":"2026-03-10T07:26:42.004122+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:05:21.503300+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.669644+0000","last_change":"2026-03-10T07:25:42.966911+0000","last_active":"2026-03-10T07:26:08.669644+0000","last_peered":"2026-03-10T07:26:08.669644+0000","last_clean":"2026-03-10T07:26:08.669644+0000","last_became_active":"2026-03-10T07:25:42.966790+0000","last_became_peered":"2026-03-10T07:25:42.966790+0000","last_unstale":"2026-03-10T07:26:08.669644+0000","last_undegraded":"2026-03-10T07:26:08.669644+0000","last_fullsized":"2026-03-10T07:26:08.669644+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:01:01.647071+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.8","version":"62'15","reported_seq":58,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536906+0000","last_change":"2026-03-10T07:25:38.940123+0000","last_active":"2026-03-10T07:26:08.536906+0000","last_peered":"2026-03-10T07:26:08.536906+0000","last_clean":"2026-03-10T07:26:08.536906+0000","last_became_active":"2026-03-10T07:25:38.940054+0000","last_became_peered":"2026-03-10T07:25:38.940054+0000","last_unstale":"2026-03-10T07:26:08.536906+0000","last_undegraded":"2026-03-10T07:26:08.536906+0000","last_fullsized":"2026-03-10T07:26:08.536906+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:29:42.593084+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.9","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.643672+0000","last_change":"2026-03-10T07:25:36.943880+0000","last_active":"2026-03-10T07:26:08.643672+0000","last_peered":"2026-03-10T07:26:08.643672+0000","last_clean":"2026-03-10T07:26:08.643672+0000","last_became_active":"2026-03-10T07:25:36.943741+0000","last_became_peered":"2026-03-10T07:25:36.943741+0000","last_unstale":"2026-03-10T07:26:08.643672+0000","last_undegraded":"2026-03-10T07:26:08.643672+0000","last_fullsized":"2026-03-10T07:26:08.643672+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:23:44.430096+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,7,3],"acting":[1,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.e","version":"63'11","reported_seq":57,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:42.003548+0000","last_change":"2026-03-10T07:25:40.964279+0000","last_active":"2026-03-10T07:26:42.003548+0000","last_peered":"2026-03-10T07:26:42.003548+0000","last_clean":"2026-03-10T07:26:42.003548+0000","last_became_active":"2026-03-10T07:25:40.964088+0000","last_became_peered":"2026-03-10T07:25:40.964088+0000","last_unstale":"2026-03-10T07:26:42.003548+0000","last_undegraded":"2026-03-10T07:26:42.003548+0000","last_fullsized":"2026-03-10T07:26:42.003548+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:08:38.681787+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536116+0000","last_change":"2026-03-10T07:25:42.977842+0000","last_active":"2026-03-10T07:26:08.536116+0000","last_peered":"2026-03-10T07:26:08.536116+0000","last_clean":"2026-03-10T07:26:08.536116+0000","last_became_active":"2026-03-10T07:25:42.976504+0000","last_became_peered":"2026-03-10T07:25:42.976504+0000","last_unstale":"2026-03-10T07:26:08.536116+0000","last_undegraded":"2026-03-10T07:26:08.536116+0000","last_fullsized":"2026-03-10T07:26:08.536116+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:14:17.578558+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.9","version":"62'12","reported_seq":56,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.668606+0000","last_change":"2026-03-10T07:25:38.966196+0000","last_active":"2026-03-10T07:26:08.668606+0000","last_peered":"2026-03-10T07:26:08.668606+0000","last_clean":"2026-03-10T07:26:08.668606+0000","last_became_active":"2026-03-10T07:25:38.965999+0000","last_became_peered":"2026-03-10T07:25:38.965999+0000","last_unstale":"2026-03-10T07:26:08.668606+0000","last_undegraded":"2026-03-10T07:26:08.668606+0000","last_fullsized":"2026-03-10T07:26:08.668606+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:57:31.664387+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.8","version":"0'0","reported_seq":15,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.533770+0000","last_change":"2026-03-10T07:25:53.619324+0000","last_active":"2026-03-10T07:26:08.533770+0000","last_peered":"2026-03-10T07:26:08.533770+0000","last_clean":"2026-03-10T07:26:08.533770+0000","last_became_active":"2026-03-10T07:25:53.619200+0000","last_became_peered":"2026-03-10T07:25:53.619200+0000","last_unstale":"2026-03-10T07:26:08.533770+0000","last_undegraded":"2026-03-10T07:26:08.533770+0000","last_fullsized":"2026-03-10T07:26:08.533770+0000","mapping_epoch":65,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":66,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:25:24.107480+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537716+0000","last_change":"2026-03-10T07:25:40.951544+0000","last_active":"2026-03-10T07:26:08.537716+0000","last_peered":"2026-03-10T07:26:08.537716+0000","last_clean":"2026-03-10T07:26:08.537716+0000","last_became_active":"2026-03-10T07:25:40.951459+0000","last_became_peered":"2026-03-10T07:25:40.951459+0000","last_unstale":"2026-03-10T07:26:08.537716+0000","last_undegraded":"2026-03-10T07:26:08.537716+0000","last_fullsized":"2026-03-10T07:26:08.537716+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:07:27.746825+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536421+0000","last_change":"2026-03-10T07:25:43.202393+0000","last_active":"2026-03-10T07:26:08.536421+0000","last_peered":"2026-03-10T07:26:08.536421+0000","last_clean":"2026-03-10T07:26:08.536421+0000","last_became_active":"2026-03-10T07:25:43.201952+0000","last_became_peered":"2026-03-10T07:25:43.201952+0000","last_unstale":"2026-03-10T07:26:08.536421+0000","last_undegraded":"2026-03-10T07:26:08.536421+0000","last_fullsized":"2026-03-10T07:26:08.536421+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:43:55.701446+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.6","version":"62'12","reported_seq":51,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.640065+0000","last_change":"2026-03-10T07:25:38.961561+0000","last_active":"2026-03-10T07:26:09.640065+0000","last_peered":"2026-03-10T07:26:09.640065+0000","last_clean":"2026-03-10T07:26:09.640065+0000","last_became_active":"2026-03-10T07:25:38.961399+0000","last_became_peered":"2026-03-10T07:25:38.961399+0000","last_unstale":"2026-03-10T07:26:09.640065+0000","last_undegraded":"2026-03-10T07:26:09.640065+0000","last_fullsized":"2026-03-10T07:26:09.640065+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:54:56.962298+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.7","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.588752+0000","last_change":"2026-03-10T07:25:36.952085+0000","last_active":"2026-03-10T07:26:08.588752+0000","last_peered":"2026-03-10T07:26:08.588752+0000","last_clean":"2026-03-10T07:26:08.588752+0000","last_became_active":"2026-03-10T07:25:36.943264+0000","last_became_peered":"2026-03-10T07:25:36.943264+0000","last_unstale":"2026-03-10T07:26:08.588752+0000","last_undegraded":"2026-03-10T07:26:08.588752+0000","last_fullsized":"2026-03-10T07:26:08.588752+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:53:46.385863+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,7,2],"acting":[6,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.1","version":"62'1","reported_seq":39,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.669957+0000","last_change":"2026-03-10T07:25:46.009402+0000","last_active":"2026-03-10T07:26:08.669957+0000","last_peered":"2026-03-10T07:26:08.669957+0000","last_clean":"2026-03-10T07:26:08.669957+0000","last_became_active":"2026-03-10T07:25:39.941081+0000","last_became_peered":"2026-03-10T07:25:39.941081+0000","last_unstale":"2026-03-10T07:26:08.669957+0000","last_undegraded":"2026-03-10T07:26:08.669957+0000","last_fullsized":"2026-03-10T07:26:08.669957+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:38.919122+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:38.919122+0000","last_clean_scrub_stamp":"2026-03-10T07:25:38.919122+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:13:21.320933+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00027072000000000001,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.0","version":"63'11","reported_seq":54,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:42.004528+0000","last_change":"2026-03-10T07:25:40.942899+0000","last_active":"2026-03-10T07:26:42.004528+0000","last_peered":"2026-03-10T07:26:42.004528+0000","last_clean":"2026-03-10T07:26:42.004528+0000","last_became_active":"2026-03-10T07:25:40.942773+0000","last_became_peered":"2026-03-10T07:25:40.942773+0000","last_unstale":"2026-03-10T07:26:42.004528+0000","last_undegraded":"2026-03-10T07:26:42.004528+0000","last_fullsized":"2026-03-10T07:26:42.004528+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:25:17.256099+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.038914+0000","last_change":"2026-03-10T07:25:43.200990+0000","last_active":"2026-03-10T07:26:09.038914+0000","last_peered":"2026-03-10T07:26:09.038914+0000","last_clean":"2026-03-10T07:26:09.038914+0000","last_became_active":"2026-03-10T07:25:43.200846+0000","last_became_peered":"2026-03-10T07:25:43.200846+0000","last_unstale":"2026-03-10T07:26:09.038914+0000","last_undegraded":"2026-03-10T07:26:09.038914+0000","last_fullsized":"2026-03-10T07:26:09.038914+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:54:04.388963+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.7","version":"62'13","reported_seq":60,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536667+0000","last_change":"2026-03-10T07:25:38.933781+0000","last_active":"2026-03-10T07:26:08.536667+0000","last_peered":"2026-03-10T07:26:08.536667+0000","last_clean":"2026-03-10T07:26:08.536667+0000","last_became_active":"2026-03-10T07:25:38.933661+0000","last_became_peered":"2026-03-10T07:25:38.933661+0000","last_unstale":"2026-03-10T07:26:08.536667+0000","last_undegraded":"2026-03-10T07:26:08.536667+0000","last_fullsized":"2026-03-10T07:26:08.536667+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:48:37.834539+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.6","version":"55'1","reported_seq":38,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.643705+0000","last_change":"2026-03-10T07:25:36.931556+0000","last_active":"2026-03-10T07:26:08.643705+0000","last_peered":"2026-03-10T07:26:08.643705+0000","last_clean":"2026-03-10T07:26:08.643705+0000","last_became_active":"2026-03-10T07:25:36.931409+0000","last_became_peered":"2026-03-10T07:25:36.931409+0000","last_unstale":"2026-03-10T07:26:08.643705+0000","last_undegraded":"2026-03-10T07:26:08.643705+0000","last_fullsized":"2026-03-10T07:26:08.643705+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:09:12.898569+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,4],"acting":[1,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.0","version":"65'5","reported_seq":103,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:47.909246+0000","last_change":"2026-03-10T07:25:46.007814+0000","last_active":"2026-03-10T07:26:47.909246+0000","last_peered":"2026-03-10T07:26:47.909246+0000","last_clean":"2026-03-10T07:26:47.909246+0000","last_became_active":"2026-03-10T07:25:39.958901+0000","last_became_peered":"2026-03-10T07:25:39.958901+0000","last_unstale":"2026-03-10T07:26:47.909246+0000","last_undegraded":"2026-03-10T07:26:47.909246+0000","last_fullsized":"2026-03-10T07:26:47.909246+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:38.919122+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:38.919122+0000","last_clean_scrub_stamp":"2026-03-10T07:25:38.919122+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:50:10.105303+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.001194566,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":62,"num_read_kb":57,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.669180+0000","last_change":"2026-03-10T07:25:40.963847+0000","last_active":"2026-03-10T07:26:08.669180+0000","last_peered":"2026-03-10T07:26:08.669180+0000","last_clean":"2026-03-10T07:26:08.669180+0000","last_became_active":"2026-03-10T07:25:40.963603+0000","last_became_peered":"2026-03-10T07:25:40.963603+0000","last_unstale":"2026-03-10T07:26:08.669180+0000","last_undegraded":"2026-03-10T07:26:08.669180+0000","last_fullsized":"2026-03-10T07:26:08.669180+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:11:38.631378+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.669113+0000","last_change":"2026-03-10T07:25:42.965054+0000","last_active":"2026-03-10T07:26:08.669113+0000","last_peered":"2026-03-10T07:26:08.669113+0000","last_clean":"2026-03-10T07:26:08.669113+0000","last_became_active":"2026-03-10T07:25:42.964850+0000","last_became_peered":"2026-03-10T07:25:42.964850+0000","last_unstale":"2026-03-10T07:26:08.669113+0000","last_undegraded":"2026-03-10T07:26:08.669113+0000","last_fullsized":"2026-03-10T07:26:08.669113+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:03:12.928825+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.4","version":"63'30","reported_seq":98,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:42.004527+0000","last_change":"2026-03-10T07:25:38.960669+0000","last_active":"2026-03-10T07:26:42.004527+0000","last_peered":"2026-03-10T07:26:42.004527+0000","last_clean":"2026-03-10T07:26:42.004527+0000","last_became_active":"2026-03-10T07:25:38.960502+0000","last_became_peered":"2026-03-10T07:25:38.960502+0000","last_unstale":"2026-03-10T07:26:42.004527+0000","last_undegraded":"2026-03-10T07:26:42.004527+0000","last_fullsized":"2026-03-10T07:26:42.004527+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":30,"log_dups_size":0,"ondisk_log_size":30,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:30:30.665007+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":51,"num_read_kb":36,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.5","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.037002+0000","last_change":"2026-03-10T07:25:36.948081+0000","last_active":"2026-03-10T07:26:09.037002+0000","last_peered":"2026-03-10T07:26:09.037002+0000","last_clean":"2026-03-10T07:26:09.037002+0000","last_became_active":"2026-03-10T07:25:36.947999+0000","last_became_peered":"2026-03-10T07:25:36.947999+0000","last_unstale":"2026-03-10T07:26:09.037002+0000","last_undegraded":"2026-03-10T07:26:09.037002+0000","last_fullsized":"2026-03-10T07:26:09.037002+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:00:21.576970+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,4],"acting":[7,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.589081+0000","last_change":"2026-03-10T07:25:40.958648+0000","last_active":"2026-03-10T07:26:08.589081+0000","last_peered":"2026-03-10T07:26:08.589081+0000","last_clean":"2026-03-10T07:26:08.589081+0000","last_became_active":"2026-03-10T07:25:40.958332+0000","last_became_peered":"2026-03-10T07:25:40.958332+0000","last_unstale":"2026-03-10T07:26:08.589081+0000","last_undegraded":"2026-03-10T07:26:08.589081+0000","last_fullsized":"2026-03-10T07:26:08.589081+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:03:25.286525+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.644209+0000","last_change":"2026-03-10T07:25:43.200748+0000","last_active":"2026-03-10T07:26:08.644209+0000","last_peered":"2026-03-10T07:26:08.644209+0000","last_clean":"2026-03-10T07:26:08.644209+0000","last_became_active":"2026-03-10T07:25:43.200458+0000","last_became_peered":"2026-03-10T07:25:43.200458+0000","last_unstale":"2026-03-10T07:26:08.644209+0000","last_undegraded":"2026-03-10T07:26:08.644209+0000","last_fullsized":"2026-03-10T07:26:08.644209+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:02:22.146589+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","version":"62'16","reported_seq":70,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:42.004399+0000","last_change":"2026-03-10T07:25:38.963876+0000","last_active":"2026-03-10T07:26:42.004399+0000","last_peered":"2026-03-10T07:26:42.004399+0000","last_clean":"2026-03-10T07:26:42.004399+0000","last_became_active":"2026-03-10T07:25:38.959289+0000","last_became_peered":"2026-03-10T07:25:38.959289+0000","last_unstale":"2026-03-10T07:26:42.004399+0000","last_undegraded":"2026-03-10T07:26:42.004399+0000","last_fullsized":"2026-03-10T07:26:42.004399+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:49:14.638829+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.4","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.644046+0000","last_change":"2026-03-10T07:25:36.943600+0000","last_active":"2026-03-10T07:26:08.644046+0000","last_peered":"2026-03-10T07:26:08.644046+0000","last_clean":"2026-03-10T07:26:08.644046+0000","last_became_active":"2026-03-10T07:25:36.943506+0000","last_became_peered":"2026-03-10T07:25:36.943506+0000","last_unstale":"2026-03-10T07:26:08.644046+0000","last_undegraded":"2026-03-10T07:26:08.644046+0000","last_fullsized":"2026-03-10T07:26:08.644046+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:49:52.148563+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0,7],"acting":[1,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.2","version":"64'2","reported_seq":40,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.644098+0000","last_change":"2026-03-10T07:25:46.013902+0000","last_active":"2026-03-10T07:26:08.644098+0000","last_peered":"2026-03-10T07:26:08.644098+0000","last_clean":"2026-03-10T07:26:08.644098+0000","last_became_active":"2026-03-10T07:25:39.942792+0000","last_became_peered":"2026-03-10T07:25:39.942792+0000","last_unstale":"2026-03-10T07:26:08.644098+0000","last_undegraded":"2026-03-10T07:26:08.644098+0000","last_fullsized":"2026-03-10T07:26:08.644098+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:38.919122+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:38.919122+0000","last_clean_scrub_stamp":"2026-03-10T07:25:38.919122+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:08:45.303294+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00038671099999999998,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.3","version":"63'11","reported_seq":54,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:42.004122+0000","last_change":"2026-03-10T07:25:40.960641+0000","last_active":"2026-03-10T07:26:42.004122+0000","last_peered":"2026-03-10T07:26:42.004122+0000","last_clean":"2026-03-10T07:26:42.004122+0000","last_became_active":"2026-03-10T07:25:40.960493+0000","last_became_peered":"2026-03-10T07:25:40.960493+0000","last_unstale":"2026-03-10T07:26:42.004122+0000","last_undegraded":"2026-03-10T07:26:42.004122+0000","last_fullsized":"2026-03-10T07:26:42.004122+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:31:14.671878+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.639788+0000","last_change":"2026-03-10T07:25:42.965443+0000","last_active":"2026-03-10T07:26:09.639788+0000","last_peered":"2026-03-10T07:26:09.639788+0000","last_clean":"2026-03-10T07:26:09.639788+0000","last_became_active":"2026-03-10T07:25:42.965119+0000","last_became_peered":"2026-03-10T07:25:42.965119+0000","last_unstale":"2026-03-10T07:26:09.639788+0000","last_undegraded":"2026-03-10T07:26:09.639788+0000","last_fullsized":"2026-03-10T07:26:09.639788+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:48:03.889308+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.3","version":"62'19","reported_seq":69,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.668934+0000","last_change":"2026-03-10T07:25:38.965676+0000","last_active":"2026-03-10T07:26:08.668934+0000","last_peered":"2026-03-10T07:26:08.668934+0000","last_clean":"2026-03-10T07:26:08.668934+0000","last_became_active":"2026-03-10T07:25:38.965566+0000","last_became_peered":"2026-03-10T07:25:38.965566+0000","last_unstale":"2026-03-10T07:26:08.668934+0000","last_undegraded":"2026-03-10T07:26:08.668934+0000","last_fullsized":"2026-03-10T07:26:08.668934+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:49:42.698967+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.2","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537299+0000","last_change":"2026-03-10T07:25:36.953145+0000","last_active":"2026-03-10T07:26:08.537299+0000","last_peered":"2026-03-10T07:26:08.537299+0000","last_clean":"2026-03-10T07:26:08.537299+0000","last_became_active":"2026-03-10T07:25:36.952663+0000","last_became_peered":"2026-03-10T07:25:36.952663+0000","last_unstale":"2026-03-10T07:26:08.537299+0000","last_undegraded":"2026-03-10T07:26:08.537299+0000","last_fullsized":"2026-03-10T07:26:08.537299+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:54:36.535214+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.639707+0000","last_change":"2026-03-10T07:25:40.969529+0000","last_active":"2026-03-10T07:26:09.639707+0000","last_peered":"2026-03-10T07:26:09.639707+0000","last_clean":"2026-03-10T07:26:09.639707+0000","last_became_active":"2026-03-10T07:25:40.969355+0000","last_became_peered":"2026-03-10T07:25:40.969355+0000","last_unstale":"2026-03-10T07:26:09.639707+0000","last_undegraded":"2026-03-10T07:26:09.639707+0000","last_fullsized":"2026-03-10T07:26:09.639707+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:08:47.424379+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.6","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536492+0000","last_change":"2026-03-10T07:25:42.974239+0000","last_active":"2026-03-10T07:26:08.536492+0000","last_peered":"2026-03-10T07:26:08.536492+0000","last_clean":"2026-03-10T07:26:08.536492+0000","last_became_active":"2026-03-10T07:25:42.974128+0000","last_became_peered":"2026-03-10T07:25:42.974128+0000","last_unstale":"2026-03-10T07:26:08.536492+0000","last_undegraded":"2026-03-10T07:26:08.536492+0000","last_fullsized":"2026-03-10T07:26:08.536492+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:43:22.280218+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.0","version":"62'18","reported_seq":65,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.644168+0000","last_change":"2026-03-10T07:25:38.951573+0000","last_active":"2026-03-10T07:26:08.644168+0000","last_peered":"2026-03-10T07:26:08.644168+0000","last_clean":"2026-03-10T07:26:08.644168+0000","last_became_active":"2026-03-10T07:25:38.951483+0000","last_became_peered":"2026-03-10T07:25:38.951483+0000","last_unstale":"2026-03-10T07:26:08.644168+0000","last_undegraded":"2026-03-10T07:26:08.644168+0000","last_fullsized":"2026-03-10T07:26:08.644168+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:51:32.866863+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.1","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.533380+0000","last_change":"2026-03-10T07:25:36.932854+0000","last_active":"2026-03-10T07:26:08.533380+0000","last_peered":"2026-03-10T07:26:08.533380+0000","last_clean":"2026-03-10T07:26:08.533380+0000","last_became_active":"2026-03-10T07:25:36.932505+0000","last_became_peered":"2026-03-10T07:25:36.932505+0000","last_unstale":"2026-03-10T07:26:08.533380+0000","last_undegraded":"2026-03-10T07:26:08.533380+0000","last_fullsized":"2026-03-10T07:26:08.533380+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:26:59.726222+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.533449+0000","last_change":"2026-03-10T07:25:40.970950+0000","last_active":"2026-03-10T07:26:08.533449+0000","last_peered":"2026-03-10T07:26:08.533449+0000","last_clean":"2026-03-10T07:26:08.533449+0000","last_became_active":"2026-03-10T07:25:40.970698+0000","last_became_peered":"2026-03-10T07:25:40.970698+0000","last_unstale":"2026-03-10T07:26:08.533449+0000","last_undegraded":"2026-03-10T07:26:08.533449+0000","last_fullsized":"2026-03-10T07:26:08.533449+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:19:57.146422+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.5","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.039897+0000","last_change":"2026-03-10T07:25:43.200928+0000","last_active":"2026-03-10T07:26:09.039897+0000","last_peered":"2026-03-10T07:26:09.039897+0000","last_clean":"2026-03-10T07:26:09.039897+0000","last_became_active":"2026-03-10T07:25:43.200702+0000","last_became_peered":"2026-03-10T07:25:43.200702+0000","last_unstale":"2026-03-10T07:26:09.039897+0000","last_undegraded":"2026-03-10T07:26:09.039897+0000","last_fullsized":"2026-03-10T07:26:09.039897+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:24:17.176040+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1","version":"62'14","reported_seq":54,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.639921+0000","last_change":"2026-03-10T07:25:38.961264+0000","last_active":"2026-03-10T07:26:09.639921+0000","last_peered":"2026-03-10T07:26:09.639921+0000","last_clean":"2026-03-10T07:26:09.639921+0000","last_became_active":"2026-03-10T07:25:38.961139+0000","last_became_peered":"2026-03-10T07:25:38.961139+0000","last_unstale":"2026-03-10T07:26:09.639921+0000","last_undegraded":"2026-03-10T07:26:09.639921+0000","last_fullsized":"2026-03-10T07:26:09.639921+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:17:52.133465+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.037048+0000","last_change":"2026-03-10T07:25:36.949403+0000","last_active":"2026-03-10T07:26:09.037048+0000","last_peered":"2026-03-10T07:26:09.037048+0000","last_clean":"2026-03-10T07:26:09.037048+0000","last_became_active":"2026-03-10T07:25:36.949202+0000","last_became_peered":"2026-03-10T07:25:36.949202+0000","last_unstale":"2026-03-10T07:26:09.037048+0000","last_undegraded":"2026-03-10T07:26:09.037048+0000","last_fullsized":"2026-03-10T07:26:09.037048+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:28:17.458227+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.7","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537518+0000","last_change":"2026-03-10T07:25:40.956067+0000","last_active":"2026-03-10T07:26:08.537518+0000","last_peered":"2026-03-10T07:26:08.537518+0000","last_clean":"2026-03-10T07:26:08.537518+0000","last_became_active":"2026-03-10T07:25:40.955956+0000","last_became_peered":"2026-03-10T07:25:40.955956+0000","last_unstale":"2026-03-10T07:26:08.537518+0000","last_undegraded":"2026-03-10T07:26:08.537518+0000","last_fullsized":"2026-03-10T07:26:08.537518+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:58:58.918939+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.643951+0000","last_change":"2026-03-10T07:25:42.980305+0000","last_active":"2026-03-10T07:26:08.643951+0000","last_peered":"2026-03-10T07:26:08.643951+0000","last_clean":"2026-03-10T07:26:08.643951+0000","last_became_active":"2026-03-10T07:25:42.980211+0000","last_became_peered":"2026-03-10T07:25:42.980211+0000","last_unstale":"2026-03-10T07:26:08.643951+0000","last_undegraded":"2026-03-10T07:26:08.643951+0000","last_fullsized":"2026-03-10T07:26:08.643951+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:31:00.382876+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.2","version":"62'10","reported_seq":48,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537235+0000","last_change":"2026-03-10T07:25:38.958950+0000","last_active":"2026-03-10T07:26:08.537235+0000","last_peered":"2026-03-10T07:26:08.537235+0000","last_clean":"2026-03-10T07:26:08.537235+0000","last_became_active":"2026-03-10T07:25:38.955334+0000","last_became_peered":"2026-03-10T07:25:38.955334+0000","last_unstale":"2026-03-10T07:26:08.537235+0000","last_undegraded":"2026-03-10T07:26:08.537235+0000","last_fullsized":"2026-03-10T07:26:08.537235+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:00:09.253457+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.3","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536386+0000","last_change":"2026-03-10T07:25:36.947198+0000","last_active":"2026-03-10T07:26:08.536386+0000","last_peered":"2026-03-10T07:26:08.536386+0000","last_clean":"2026-03-10T07:26:08.536386+0000","last_became_active":"2026-03-10T07:25:36.946061+0000","last_became_peered":"2026-03-10T07:25:36.946061+0000","last_unstale":"2026-03-10T07:26:08.536386+0000","last_undegraded":"2026-03-10T07:26:08.536386+0000","last_fullsized":"2026-03-10T07:26:08.536386+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:01:05.581883+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2,7],"acting":[5,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"1.0","version":"68'39","reported_seq":72,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:10.646635+0000","last_change":"2026-03-10T07:25:16.846892+0000","last_active":"2026-03-10T07:26:10.646635+0000","last_peered":"2026-03-10T07:26:10.646635+0000","last_clean":"2026-03-10T07:26:10.646635+0000","last_became_active":"2026-03-10T07:25:16.841838+0000","last_became_peered":"2026-03-10T07:25:16.841838+0000","last_unstale":"2026-03-10T07:26:10.646635+0000","last_undegraded":"2026-03-10T07:26:10.646635+0000","last_fullsized":"2026-03-10T07:26:10.646635+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":20,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:22:28.664661+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:22:28.664661+0000","last_clean_scrub_stamp":"2026-03-10T07:22:28.664661+0000","objects_scrubbed":0,"log_size":39,"log_dups_size":0,"ondisk_log_size":39,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:40:47.538404+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.4","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.038864+0000","last_change":"2026-03-10T07:25:40.955630+0000","last_active":"2026-03-10T07:26:09.038864+0000","last_peered":"2026-03-10T07:26:09.038864+0000","last_clean":"2026-03-10T07:26:09.038864+0000","last_became_active":"2026-03-10T07:25:40.955539+0000","last_became_peered":"2026-03-10T07:25:40.955539+0000","last_unstale":"2026-03-10T07:26:09.038864+0000","last_undegraded":"2026-03-10T07:26:09.038864+0000","last_fullsized":"2026-03-10T07:26:09.038864+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:43:05.712356+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.7","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536341+0000","last_change":"2026-03-10T07:25:42.973285+0000","last_active":"2026-03-10T07:26:08.536341+0000","last_peered":"2026-03-10T07:26:08.536341+0000","last_clean":"2026-03-10T07:26:08.536341+0000","last_became_active":"2026-03-10T07:25:42.973174+0000","last_became_peered":"2026-03-10T07:25:42.973174+0000","last_unstale":"2026-03-10T07:26:08.536341+0000","last_undegraded":"2026-03-10T07:26:08.536341+0000","last_fullsized":"2026-03-10T07:26:08.536341+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:59:05.440102+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.d","version":"62'17","reported_seq":61,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.038025+0000","last_change":"2026-03-10T07:25:38.949174+0000","last_active":"2026-03-10T07:26:09.038025+0000","last_peered":"2026-03-10T07:26:09.038025+0000","last_clean":"2026-03-10T07:26:09.038025+0000","last_became_active":"2026-03-10T07:25:38.949040+0000","last_became_peered":"2026-03-10T07:25:38.949040+0000","last_unstale":"2026-03-10T07:26:09.038025+0000","last_undegraded":"2026-03-10T07:26:09.038025+0000","last_fullsized":"2026-03-10T07:26:09.038025+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:29:52.963218+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.c","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.533535+0000","last_change":"2026-03-10T07:25:36.936648+0000","last_active":"2026-03-10T07:26:08.533535+0000","last_peered":"2026-03-10T07:26:08.533535+0000","last_clean":"2026-03-10T07:26:08.533535+0000","last_became_active":"2026-03-10T07:25:36.936290+0000","last_became_peered":"2026-03-10T07:25:36.936290+0000","last_unstale":"2026-03-10T07:26:08.533535+0000","last_undegraded":"2026-03-10T07:26:08.533535+0000","last_fullsized":"2026-03-10T07:26:08.533535+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:46:33.718844+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,0],"acting":[2,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.b","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.533540+0000","last_change":"2026-03-10T07:25:40.970881+0000","last_active":"2026-03-10T07:26:08.533540+0000","last_peered":"2026-03-10T07:26:08.533540+0000","last_clean":"2026-03-10T07:26:08.533540+0000","last_became_active":"2026-03-10T07:25:40.970590+0000","last_became_peered":"2026-03-10T07:25:40.970590+0000","last_unstale":"2026-03-10T07:26:08.533540+0000","last_undegraded":"2026-03-10T07:26:08.533540+0000","last_fullsized":"2026-03-10T07:26:08.533540+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:28:46.961059+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8","version":"62'1","reported_seq":26,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.038095+0000","last_change":"2026-03-10T07:25:42.975336+0000","last_active":"2026-03-10T07:26:09.038095+0000","last_peered":"2026-03-10T07:26:09.038095+0000","last_clean":"2026-03-10T07:26:09.038095+0000","last_became_active":"2026-03-10T07:25:42.975159+0000","last_became_peered":"2026-03-10T07:25:42.975159+0000","last_unstale":"2026-03-10T07:26:09.038095+0000","last_undegraded":"2026-03-10T07:26:09.038095+0000","last_fullsized":"2026-03-10T07:26:09.038095+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:18:25.609708+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.c","version":"62'10","reported_seq":48,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537338+0000","last_change":"2026-03-10T07:25:38.949184+0000","last_active":"2026-03-10T07:26:08.537338+0000","last_peered":"2026-03-10T07:26:08.537338+0000","last_clean":"2026-03-10T07:26:08.537338+0000","last_became_active":"2026-03-10T07:25:38.949098+0000","last_became_peered":"2026-03-10T07:25:38.949098+0000","last_unstale":"2026-03-10T07:26:08.537338+0000","last_undegraded":"2026-03-10T07:26:08.537338+0000","last_fullsized":"2026-03-10T07:26:08.537338+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:28:15.649500+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.d","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.643955+0000","last_change":"2026-03-10T07:25:36.935418+0000","last_active":"2026-03-10T07:26:08.643955+0000","last_peered":"2026-03-10T07:26:08.643955+0000","last_clean":"2026-03-10T07:26:08.643955+0000","last_became_active":"2026-03-10T07:25:36.935317+0000","last_became_peered":"2026-03-10T07:25:36.935317+0000","last_unstale":"2026-03-10T07:26:08.643955+0000","last_undegraded":"2026-03-10T07:26:08.643955+0000","last_fullsized":"2026-03-10T07:26:08.643955+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:17:20.453634+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,3],"acting":[1,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.533932+0000","last_change":"2026-03-10T07:25:40.943934+0000","last_active":"2026-03-10T07:26:08.533932+0000","last_peered":"2026-03-10T07:26:08.533932+0000","last_clean":"2026-03-10T07:26:08.533932+0000","last_became_active":"2026-03-10T07:25:40.943855+0000","last_became_peered":"2026-03-10T07:25:40.943855+0000","last_unstale":"2026-03-10T07:26:08.533932+0000","last_undegraded":"2026-03-10T07:26:08.533932+0000","last_fullsized":"2026-03-10T07:26:08.533932+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:34:39.624347+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.639038+0000","last_change":"2026-03-10T07:25:42.972089+0000","last_active":"2026-03-10T07:26:09.639038+0000","last_peered":"2026-03-10T07:26:09.639038+0000","last_clean":"2026-03-10T07:26:09.639038+0000","last_became_active":"2026-03-10T07:25:42.971881+0000","last_became_peered":"2026-03-10T07:25:42.971881+0000","last_unstale":"2026-03-10T07:26:09.639038+0000","last_undegraded":"2026-03-10T07:26:09.639038+0000","last_fullsized":"2026-03-10T07:26:09.639038+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:32:11.777461+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.f","version":"62'15","reported_seq":58,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.038143+0000","last_change":"2026-03-10T07:25:38.961667+0000","last_active":"2026-03-10T07:26:09.038143+0000","last_peered":"2026-03-10T07:26:09.038143+0000","last_clean":"2026-03-10T07:26:09.038143+0000","last_became_active":"2026-03-10T07:25:38.961190+0000","last_became_peered":"2026-03-10T07:25:38.961190+0000","last_unstale":"2026-03-10T07:26:09.038143+0000","last_undegraded":"2026-03-10T07:26:09.038143+0000","last_fullsized":"2026-03-10T07:26:09.038143+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:48:26.032255+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.e","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.533338+0000","last_change":"2026-03-10T07:25:36.943492+0000","last_active":"2026-03-10T07:26:08.533338+0000","last_peered":"2026-03-10T07:26:08.533338+0000","last_clean":"2026-03-10T07:26:08.533338+0000","last_became_active":"2026-03-10T07:25:36.943414+0000","last_became_peered":"2026-03-10T07:25:36.943414+0000","last_unstale":"2026-03-10T07:26:08.533338+0000","last_undegraded":"2026-03-10T07:26:08.533338+0000","last_fullsized":"2026-03-10T07:26:08.533338+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:21:45.284904+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,7],"acting":[2,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.9","version":"63'11","reported_seq":54,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:42.003548+0000","last_change":"2026-03-10T07:25:40.967840+0000","last_active":"2026-03-10T07:26:42.003548+0000","last_peered":"2026-03-10T07:26:42.003548+0000","last_clean":"2026-03-10T07:26:42.003548+0000","last_became_active":"2026-03-10T07:25:40.966975+0000","last_became_peered":"2026-03-10T07:25:40.966975+0000","last_unstale":"2026-03-10T07:26:42.003548+0000","last_undegraded":"2026-03-10T07:26:42.003548+0000","last_fullsized":"2026-03-10T07:26:42.003548+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:08:03.362729+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.a","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536036+0000","last_change":"2026-03-10T07:25:43.203990+0000","last_active":"2026-03-10T07:26:08.536036+0000","last_peered":"2026-03-10T07:26:08.536036+0000","last_clean":"2026-03-10T07:26:08.536036+0000","last_became_active":"2026-03-10T07:25:43.203824+0000","last_became_peered":"2026-03-10T07:25:43.203824+0000","last_unstale":"2026-03-10T07:26:08.536036+0000","last_undegraded":"2026-03-10T07:26:08.536036+0000","last_fullsized":"2026-03-10T07:26:08.536036+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:31:33.451326+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.e","version":"62'11","reported_seq":52,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.039162+0000","last_change":"2026-03-10T07:25:38.961478+0000","last_active":"2026-03-10T07:26:09.039162+0000","last_peered":"2026-03-10T07:26:09.039162+0000","last_clean":"2026-03-10T07:26:09.039162+0000","last_became_active":"2026-03-10T07:25:38.960621+0000","last_became_peered":"2026-03-10T07:25:38.960621+0000","last_unstale":"2026-03-10T07:26:09.039162+0000","last_undegraded":"2026-03-10T07:26:09.039162+0000","last_fullsized":"2026-03-10T07:26:09.039162+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:08:58.671231+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.f","version":"55'2","reported_seq":53,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.669467+0000","last_change":"2026-03-10T07:25:36.944295+0000","last_active":"2026-03-10T07:26:08.669467+0000","last_peered":"2026-03-10T07:26:08.669467+0000","last_clean":"2026-03-10T07:26:08.669467+0000","last_became_active":"2026-03-10T07:25:36.943976+0000","last_became_peered":"2026-03-10T07:25:36.943976+0000","last_unstale":"2026-03-10T07:26:08.669467+0000","last_undegraded":"2026-03-10T07:26:08.669467+0000","last_fullsized":"2026-03-10T07:26:08.669467+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:33:28.098706+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":92,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":14,"num_read_kb":14,"num_write":4,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,7],"acting":[4,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.8","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.533891+0000","last_change":"2026-03-10T07:25:40.955113+0000","last_active":"2026-03-10T07:26:08.533891+0000","last_peered":"2026-03-10T07:26:08.533891+0000","last_clean":"2026-03-10T07:26:08.533891+0000","last_became_active":"2026-03-10T07:25:40.955022+0000","last_became_peered":"2026-03-10T07:25:40.955022+0000","last_unstale":"2026-03-10T07:26:08.533891+0000","last_undegraded":"2026-03-10T07:26:08.533891+0000","last_fullsized":"2026-03-10T07:26:08.533891+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:23:59.987139+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536484+0000","last_change":"2026-03-10T07:25:42.975466+0000","last_active":"2026-03-10T07:26:08.536484+0000","last_peered":"2026-03-10T07:26:08.536484+0000","last_clean":"2026-03-10T07:26:08.536484+0000","last_became_active":"2026-03-10T07:25:42.975100+0000","last_became_peered":"2026-03-10T07:25:42.975100+0000","last_unstale":"2026-03-10T07:26:08.536484+0000","last_undegraded":"2026-03-10T07:26:08.536484+0000","last_fullsized":"2026-03-10T07:26:08.536484+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:18:39.078295+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.11","version":"62'11","reported_seq":52,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.037906+0000","last_change":"2026-03-10T07:25:38.961581+0000","last_active":"2026-03-10T07:26:09.037906+0000","last_peered":"2026-03-10T07:26:09.037906+0000","last_clean":"2026-03-10T07:26:09.037906+0000","last_became_active":"2026-03-10T07:25:38.961037+0000","last_became_peered":"2026-03-10T07:25:38.961037+0000","last_unstale":"2026-03-10T07:26:09.037906+0000","last_undegraded":"2026-03-10T07:26:09.037906+0000","last_fullsized":"2026-03-10T07:26:09.037906+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:20:28.812410+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.10","version":"55'1","reported_seq":45,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.533602+0000","last_change":"2026-03-10T07:25:36.934795+0000","last_active":"2026-03-10T07:26:08.533602+0000","last_peered":"2026-03-10T07:26:08.533602+0000","last_clean":"2026-03-10T07:26:08.533602+0000","last_became_active":"2026-03-10T07:25:36.932716+0000","last_became_peered":"2026-03-10T07:25:36.932716+0000","last_unstale":"2026-03-10T07:26:08.533602+0000","last_undegraded":"2026-03-10T07:26:08.533602+0000","last_fullsized":"2026-03-10T07:26:08.533602+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:44:53.657616+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":436,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,0],"acting":[2,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.17","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537606+0000","last_change":"2026-03-10T07:25:40.949370+0000","last_active":"2026-03-10T07:26:08.537606+0000","last_peered":"2026-03-10T07:26:08.537606+0000","last_clean":"2026-03-10T07:26:08.537606+0000","last_became_active":"2026-03-10T07:25:40.949240+0000","last_became_peered":"2026-03-10T07:25:40.949240+0000","last_unstale":"2026-03-10T07:26:08.537606+0000","last_undegraded":"2026-03-10T07:26:08.537606+0000","last_fullsized":"2026-03-10T07:26:08.537606+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:23:09.954288+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.533634+0000","last_change":"2026-03-10T07:25:42.973305+0000","last_active":"2026-03-10T07:26:08.533634+0000","last_peered":"2026-03-10T07:26:08.533634+0000","last_clean":"2026-03-10T07:26:08.533634+0000","last_became_active":"2026-03-10T07:25:42.973188+0000","last_became_peered":"2026-03-10T07:25:42.973188+0000","last_unstale":"2026-03-10T07:26:08.533634+0000","last_undegraded":"2026-03-10T07:26:08.533634+0000","last_fullsized":"2026-03-10T07:26:08.533634+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:41:02.387664+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.10","version":"62'4","reported_seq":39,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.588703+0000","last_change":"2026-03-10T07:25:38.957721+0000","last_active":"2026-03-10T07:26:08.588703+0000","last_peered":"2026-03-10T07:26:08.588703+0000","last_clean":"2026-03-10T07:26:08.588703+0000","last_became_active":"2026-03-10T07:25:38.957530+0000","last_became_peered":"2026-03-10T07:25:38.957530+0000","last_unstale":"2026-03-10T07:26:08.588703+0000","last_undegraded":"2026-03-10T07:26:08.588703+0000","last_fullsized":"2026-03-10T07:26:08.588703+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:57:51.095488+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.11","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.588679+0000","last_change":"2026-03-10T07:25:36.933671+0000","last_active":"2026-03-10T07:26:08.588679+0000","last_peered":"2026-03-10T07:26:08.588679+0000","last_clean":"2026-03-10T07:26:08.588679+0000","last_became_active":"2026-03-10T07:25:36.933544+0000","last_became_peered":"2026-03-10T07:25:36.933544+0000","last_unstale":"2026-03-10T07:26:08.588679+0000","last_undegraded":"2026-03-10T07:26:08.588679+0000","last_fullsized":"2026-03-10T07:26:08.588679+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:36:03.268805+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537656+0000","last_change":"2026-03-10T07:25:40.952470+0000","last_active":"2026-03-10T07:26:08.537656+0000","last_peered":"2026-03-10T07:26:08.537656+0000","last_clean":"2026-03-10T07:26:08.537656+0000","last_became_active":"2026-03-10T07:25:40.950383+0000","last_became_peered":"2026-03-10T07:25:40.950383+0000","last_unstale":"2026-03-10T07:26:08.537656+0000","last_undegraded":"2026-03-10T07:26:08.537656+0000","last_fullsized":"2026-03-10T07:26:08.537656+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:58:11.138488+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.039967+0000","last_change":"2026-03-10T07:25:43.201671+0000","last_active":"2026-03-10T07:26:09.039967+0000","last_peered":"2026-03-10T07:26:09.039967+0000","last_clean":"2026-03-10T07:26:09.039967+0000","last_became_active":"2026-03-10T07:25:43.201330+0000","last_became_peered":"2026-03-10T07:25:43.201330+0000","last_unstale":"2026-03-10T07:26:09.039967+0000","last_undegraded":"2026-03-10T07:26:09.039967+0000","last_fullsized":"2026-03-10T07:26:09.039967+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:33:00.205062+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.13","version":"62'11","reported_seq":52,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.039039+0000","last_change":"2026-03-10T07:25:38.961975+0000","last_active":"2026-03-10T07:26:09.039039+0000","last_peered":"2026-03-10T07:26:09.039039+0000","last_clean":"2026-03-10T07:26:09.039039+0000","last_became_active":"2026-03-10T07:25:38.961334+0000","last_became_peered":"2026-03-10T07:25:38.961334+0000","last_unstale":"2026-03-10T07:26:09.039039+0000","last_undegraded":"2026-03-10T07:26:09.039039+0000","last_fullsized":"2026-03-10T07:26:09.039039+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:12:18.306521+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.12","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536714+0000","last_change":"2026-03-10T07:25:36.946853+0000","last_active":"2026-03-10T07:26:08.536714+0000","last_peered":"2026-03-10T07:26:08.536714+0000","last_clean":"2026-03-10T07:26:08.536714+0000","last_became_active":"2026-03-10T07:25:36.945627+0000","last_became_peered":"2026-03-10T07:25:36.945627+0000","last_unstale":"2026-03-10T07:26:08.536714+0000","last_undegraded":"2026-03-10T07:26:08.536714+0000","last_fullsized":"2026-03-10T07:26:08.536714+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:54:56.690638+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,7],"acting":[5,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.15","version":"63'11","reported_seq":54,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:42.004328+0000","last_change":"2026-03-10T07:25:40.956343+0000","last_active":"2026-03-10T07:26:42.004328+0000","last_peered":"2026-03-10T07:26:42.004328+0000","last_clean":"2026-03-10T07:26:42.004328+0000","last_became_active":"2026-03-10T07:25:40.955608+0000","last_became_peered":"2026-03-10T07:25:40.955608+0000","last_unstale":"2026-03-10T07:26:42.004328+0000","last_undegraded":"2026-03-10T07:26:42.004328+0000","last_fullsized":"2026-03-10T07:26:42.004328+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:50:28.121766+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.639301+0000","last_change":"2026-03-10T07:25:42.966964+0000","last_active":"2026-03-10T07:26:09.639301+0000","last_peered":"2026-03-10T07:26:09.639301+0000","last_clean":"2026-03-10T07:26:09.639301+0000","last_became_active":"2026-03-10T07:25:42.966831+0000","last_became_peered":"2026-03-10T07:25:42.966831+0000","last_unstale":"2026-03-10T07:26:09.639301+0000","last_undegraded":"2026-03-10T07:26:09.639301+0000","last_fullsized":"2026-03-10T07:26:09.639301+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:51:07.331594+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.12","version":"62'9","reported_seq":49,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.639524+0000","last_change":"2026-03-10T07:25:38.935508+0000","last_active":"2026-03-10T07:26:09.639524+0000","last_peered":"2026-03-10T07:26:09.639524+0000","last_clean":"2026-03-10T07:26:09.639524+0000","last_became_active":"2026-03-10T07:25:38.935404+0000","last_became_peered":"2026-03-10T07:25:38.935404+0000","last_unstale":"2026-03-10T07:26:09.639524+0000","last_undegraded":"2026-03-10T07:26:09.639524+0000","last_fullsized":"2026-03-10T07:26:09.639524+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:59:49.592651+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.13","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.638808+0000","last_change":"2026-03-10T07:25:36.935149+0000","last_active":"2026-03-10T07:26:09.638808+0000","last_peered":"2026-03-10T07:26:09.638808+0000","last_clean":"2026-03-10T07:26:09.638808+0000","last_became_active":"2026-03-10T07:25:36.935007+0000","last_became_peered":"2026-03-10T07:25:36.935007+0000","last_unstale":"2026-03-10T07:26:09.638808+0000","last_undegraded":"2026-03-10T07:26:09.638808+0000","last_fullsized":"2026-03-10T07:26:09.638808+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:54:49.458298+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.14","version":"63'11","reported_seq":54,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:42.004610+0000","last_change":"2026-03-10T07:25:40.949838+0000","last_active":"2026-03-10T07:26:42.004610+0000","last_peered":"2026-03-10T07:26:42.004610+0000","last_clean":"2026-03-10T07:26:42.004610+0000","last_became_active":"2026-03-10T07:25:40.949708+0000","last_became_peered":"2026-03-10T07:25:40.949708+0000","last_unstale":"2026-03-10T07:26:42.004610+0000","last_undegraded":"2026-03-10T07:26:42.004610+0000","last_fullsized":"2026-03-10T07:26:42.004610+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:07:59.191575+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.669047+0000","last_change":"2026-03-10T07:25:42.973849+0000","last_active":"2026-03-10T07:26:08.669047+0000","last_peered":"2026-03-10T07:26:08.669047+0000","last_clean":"2026-03-10T07:26:08.669047+0000","last_became_active":"2026-03-10T07:25:42.973676+0000","last_became_peered":"2026-03-10T07:25:42.973676+0000","last_unstale":"2026-03-10T07:26:08.669047+0000","last_undegraded":"2026-03-10T07:26:08.669047+0000","last_fullsized":"2026-03-10T07:26:08.669047+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:40:38.682029+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.15","version":"62'9","reported_seq":49,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.037847+0000","last_change":"2026-03-10T07:25:38.961755+0000","last_active":"2026-03-10T07:26:09.037847+0000","last_peered":"2026-03-10T07:26:09.037847+0000","last_clean":"2026-03-10T07:26:09.037847+0000","last_became_active":"2026-03-10T07:25:38.960755+0000","last_became_peered":"2026-03-10T07:25:38.960755+0000","last_unstale":"2026-03-10T07:26:09.037847+0000","last_undegraded":"2026-03-10T07:26:09.037847+0000","last_fullsized":"2026-03-10T07:26:09.037847+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:06:44.905997+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.14","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.588630+0000","last_change":"2026-03-10T07:25:36.937727+0000","last_active":"2026-03-10T07:26:08.588630+0000","last_peered":"2026-03-10T07:26:08.588630+0000","last_clean":"2026-03-10T07:26:08.588630+0000","last_became_active":"2026-03-10T07:25:36.936312+0000","last_became_peered":"2026-03-10T07:25:36.936312+0000","last_unstale":"2026-03-10T07:26:08.588630+0000","last_undegraded":"2026-03-10T07:26:08.588630+0000","last_fullsized":"2026-03-10T07:26:08.588630+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:11:40.468598+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,3,5],"acting":[6,3,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536825+0000","last_change":"2026-03-10T07:25:40.955794+0000","last_active":"2026-03-10T07:26:08.536825+0000","last_peered":"2026-03-10T07:26:08.536825+0000","last_clean":"2026-03-10T07:26:08.536825+0000","last_became_active":"2026-03-10T07:25:40.955671+0000","last_became_peered":"2026-03-10T07:25:40.955671+0000","last_unstale":"2026-03-10T07:26:08.536825+0000","last_undegraded":"2026-03-10T07:26:08.536825+0000","last_fullsized":"2026-03-10T07:26:08.536825+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:41:30.295536+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.10","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.639073+0000","last_change":"2026-03-10T07:25:42.971300+0000","last_active":"2026-03-10T07:26:09.639073+0000","last_peered":"2026-03-10T07:26:09.639073+0000","last_clean":"2026-03-10T07:26:09.639073+0000","last_became_active":"2026-03-10T07:25:42.971206+0000","last_became_peered":"2026-03-10T07:25:42.971206+0000","last_unstale":"2026-03-10T07:26:09.639073+0000","last_undegraded":"2026-03-10T07:26:09.639073+0000","last_fullsized":"2026-03-10T07:26:09.639073+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:45:57.050615+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.14","version":"62'10","reported_seq":48,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.668522+0000","last_change":"2026-03-10T07:25:38.972133+0000","last_active":"2026-03-10T07:26:08.668522+0000","last_peered":"2026-03-10T07:26:08.668522+0000","last_clean":"2026-03-10T07:26:08.668522+0000","last_became_active":"2026-03-10T07:25:38.965840+0000","last_became_peered":"2026-03-10T07:25:38.965840+0000","last_unstale":"2026-03-10T07:26:08.668522+0000","last_undegraded":"2026-03-10T07:26:08.668522+0000","last_fullsized":"2026-03-10T07:26:08.668522+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:22:40.667148+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.15","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.643792+0000","last_change":"2026-03-10T07:25:36.936409+0000","last_active":"2026-03-10T07:26:08.643792+0000","last_peered":"2026-03-10T07:26:08.643792+0000","last_clean":"2026-03-10T07:26:08.643792+0000","last_became_active":"2026-03-10T07:25:36.936327+0000","last_became_peered":"2026-03-10T07:25:36.936327+0000","last_unstale":"2026-03-10T07:26:08.643792+0000","last_undegraded":"2026-03-10T07:26:08.643792+0000","last_fullsized":"2026-03-10T07:26:08.643792+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:37:57.866689+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.12","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.643764+0000","last_change":"2026-03-10T07:25:40.949966+0000","last_active":"2026-03-10T07:26:08.643764+0000","last_peered":"2026-03-10T07:26:08.643764+0000","last_clean":"2026-03-10T07:26:08.643764+0000","last_became_active":"2026-03-10T07:25:40.949869+0000","last_became_peered":"2026-03-10T07:25:40.949869+0000","last_unstale":"2026-03-10T07:26:08.643764+0000","last_undegraded":"2026-03-10T07:26:08.643764+0000","last_fullsized":"2026-03-10T07:26:08.643764+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:54:54.626867+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.11","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536587+0000","last_change":"2026-03-10T07:25:42.971021+0000","last_active":"2026-03-10T07:26:08.536587+0000","last_peered":"2026-03-10T07:26:08.536587+0000","last_clean":"2026-03-10T07:26:08.536587+0000","last_became_active":"2026-03-10T07:25:42.970947+0000","last_became_peered":"2026-03-10T07:25:42.970947+0000","last_unstale":"2026-03-10T07:26:08.536587+0000","last_undegraded":"2026-03-10T07:26:08.536587+0000","last_fullsized":"2026-03-10T07:26:08.536587+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:52:41.859306+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"62'6","reported_seq":42,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.640112+0000","last_change":"2026-03-10T07:25:38.944574+0000","last_active":"2026-03-10T07:26:09.640112+0000","last_peered":"2026-03-10T07:26:09.640112+0000","last_clean":"2026-03-10T07:26:09.640112+0000","last_became_active":"2026-03-10T07:25:38.944280+0000","last_became_peered":"2026-03-10T07:25:38.944280+0000","last_unstale":"2026-03-10T07:26:09.640112+0000","last_undegraded":"2026-03-10T07:26:09.640112+0000","last_fullsized":"2026-03-10T07:26:09.640112+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:35:54.118521+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.16","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536663+0000","last_change":"2026-03-10T07:25:36.953116+0000","last_active":"2026-03-10T07:26:08.536663+0000","last_peered":"2026-03-10T07:26:08.536663+0000","last_clean":"2026-03-10T07:26:08.536663+0000","last_became_active":"2026-03-10T07:25:36.952709+0000","last_became_peered":"2026-03-10T07:25:36.952709+0000","last_unstale":"2026-03-10T07:26:08.536663+0000","last_undegraded":"2026-03-10T07:26:08.536663+0000","last_fullsized":"2026-03-10T07:26:08.536663+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:21:09.373601+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,2],"acting":[5,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.11","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.588191+0000","last_change":"2026-03-10T07:25:40.965383+0000","last_active":"2026-03-10T07:26:08.588191+0000","last_peered":"2026-03-10T07:26:08.588191+0000","last_clean":"2026-03-10T07:26:08.588191+0000","last_became_active":"2026-03-10T07:25:40.965097+0000","last_became_peered":"2026-03-10T07:25:40.965097+0000","last_unstale":"2026-03-10T07:26:08.588191+0000","last_undegraded":"2026-03-10T07:26:08.588191+0000","last_fullsized":"2026-03-10T07:26:08.588191+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:39:01.892358+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.039808+0000","last_change":"2026-03-10T07:25:42.973181+0000","last_active":"2026-03-10T07:26:09.039808+0000","last_peered":"2026-03-10T07:26:09.039808+0000","last_clean":"2026-03-10T07:26:09.039808+0000","last_became_active":"2026-03-10T07:25:42.973082+0000","last_became_peered":"2026-03-10T07:25:42.973082+0000","last_unstale":"2026-03-10T07:26:09.039808+0000","last_undegraded":"2026-03-10T07:26:09.039808+0000","last_fullsized":"2026-03-10T07:26:09.039808+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:41:35.388945+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.16","version":"62'9","reported_seq":49,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536575+0000","last_change":"2026-03-10T07:25:38.963414+0000","last_active":"2026-03-10T07:26:08.536575+0000","last_peered":"2026-03-10T07:26:08.536575+0000","last_clean":"2026-03-10T07:26:08.536575+0000","last_became_active":"2026-03-10T07:25:38.963049+0000","last_became_peered":"2026-03-10T07:25:38.963049+0000","last_unstale":"2026-03-10T07:26:08.536575+0000","last_undegraded":"2026-03-10T07:26:08.536575+0000","last_fullsized":"2026-03-10T07:26:08.536575+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:43:44.102685+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.17","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.588563+0000","last_change":"2026-03-10T07:25:36.939528+0000","last_active":"2026-03-10T07:26:08.588563+0000","last_peered":"2026-03-10T07:26:08.588563+0000","last_clean":"2026-03-10T07:26:08.588563+0000","last_became_active":"2026-03-10T07:25:36.937955+0000","last_became_peered":"2026-03-10T07:25:36.937955+0000","last_unstale":"2026-03-10T07:26:08.588563+0000","last_undegraded":"2026-03-10T07:26:08.588563+0000","last_fullsized":"2026-03-10T07:26:08.588563+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:24:35.327401+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,2],"acting":[6,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.039744+0000","last_change":"2026-03-10T07:25:40.967257+0000","last_active":"2026-03-10T07:26:09.039744+0000","last_peered":"2026-03-10T07:26:09.039744+0000","last_clean":"2026-03-10T07:26:09.039744+0000","last_became_active":"2026-03-10T07:25:40.966757+0000","last_became_peered":"2026-03-10T07:25:40.966757+0000","last_unstale":"2026-03-10T07:26:09.039744+0000","last_undegraded":"2026-03-10T07:26:09.039744+0000","last_fullsized":"2026-03-10T07:26:09.039744+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:43:41.363337+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.13","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536384+0000","last_change":"2026-03-10T07:25:43.202041+0000","last_active":"2026-03-10T07:26:08.536384+0000","last_peered":"2026-03-10T07:26:08.536384+0000","last_clean":"2026-03-10T07:26:08.536384+0000","last_became_active":"2026-03-10T07:25:43.201200+0000","last_became_peered":"2026-03-10T07:25:43.201200+0000","last_unstale":"2026-03-10T07:26:08.536384+0000","last_undegraded":"2026-03-10T07:26:08.536384+0000","last_fullsized":"2026-03-10T07:26:08.536384+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:51:50.787234+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.1c","version":"62'1","reported_seq":27,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.039268+0000","last_change":"2026-03-10T07:25:42.978185+0000","last_active":"2026-03-10T07:26:09.039268+0000","last_peered":"2026-03-10T07:26:09.039268+0000","last_clean":"2026-03-10T07:26:09.039268+0000","last_became_active":"2026-03-10T07:25:42.978084+0000","last_became_peered":"2026-03-10T07:25:42.978084+0000","last_unstale":"2026-03-10T07:26:09.039268+0000","last_undegraded":"2026-03-10T07:26:09.039268+0000","last_fullsized":"2026-03-10T07:26:09.039268+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:43:12.892532+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.19","version":"62'15","reported_seq":58,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.644282+0000","last_change":"2026-03-10T07:25:38.963537+0000","last_active":"2026-03-10T07:26:08.644282+0000","last_peered":"2026-03-10T07:26:08.644282+0000","last_clean":"2026-03-10T07:26:08.644282+0000","last_became_active":"2026-03-10T07:25:38.963373+0000","last_became_peered":"2026-03-10T07:25:38.963373+0000","last_unstale":"2026-03-10T07:26:08.644282+0000","last_undegraded":"2026-03-10T07:26:08.644282+0000","last_fullsized":"2026-03-10T07:26:08.644282+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:23:13.871434+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.18","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537259+0000","last_change":"2026-03-10T07:25:36.946640+0000","last_active":"2026-03-10T07:26:08.537259+0000","last_peered":"2026-03-10T07:26:08.537259+0000","last_clean":"2026-03-10T07:26:08.537259+0000","last_became_active":"2026-03-10T07:25:36.945831+0000","last_became_peered":"2026-03-10T07:25:36.945831+0000","last_unstale":"2026-03-10T07:26:08.537259+0000","last_undegraded":"2026-03-10T07:26:08.537259+0000","last_fullsized":"2026-03-10T07:26:08.537259+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:27:06.610514+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,7],"acting":[5,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1f","version":"63'11","reported_seq":57,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:42.004534+0000","last_change":"2026-03-10T07:25:40.965271+0000","last_active":"2026-03-10T07:26:42.004534+0000","last_peered":"2026-03-10T07:26:42.004534+0000","last_clean":"2026-03-10T07:26:42.004534+0000","last_became_active":"2026-03-10T07:25:40.964821+0000","last_became_peered":"2026-03-10T07:25:40.964821+0000","last_unstale":"2026-03-10T07:26:42.004534+0000","last_undegraded":"2026-03-10T07:26:42.004534+0000","last_fullsized":"2026-03-10T07:26:42.004534+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:17:15.429481+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.643401+0000","last_change":"2026-03-10T07:25:42.980060+0000","last_active":"2026-03-10T07:26:08.643401+0000","last_peered":"2026-03-10T07:26:08.643401+0000","last_clean":"2026-03-10T07:26:08.643401+0000","last_became_active":"2026-03-10T07:25:42.979985+0000","last_became_peered":"2026-03-10T07:25:42.979985+0000","last_unstale":"2026-03-10T07:26:08.643401+0000","last_undegraded":"2026-03-10T07:26:08.643401+0000","last_fullsized":"2026-03-10T07:26:08.643401+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:04:13.624981+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18","version":"62'9","reported_seq":49,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537312+0000","last_change":"2026-03-10T07:25:38.940303+0000","last_active":"2026-03-10T07:26:08.537312+0000","last_peered":"2026-03-10T07:26:08.537312+0000","last_clean":"2026-03-10T07:26:08.537312+0000","last_became_active":"2026-03-10T07:25:38.940123+0000","last_became_peered":"2026-03-10T07:25:38.940123+0000","last_unstale":"2026-03-10T07:26:08.537312+0000","last_undegraded":"2026-03-10T07:26:08.537312+0000","last_fullsized":"2026-03-10T07:26:08.537312+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:01:47.129134+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.19","version":"55'1","reported_seq":38,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537277+0000","last_change":"2026-03-10T07:25:36.931081+0000","last_active":"2026-03-10T07:26:08.537277+0000","last_peered":"2026-03-10T07:26:08.537277+0000","last_clean":"2026-03-10T07:26:08.537277+0000","last_became_active":"2026-03-10T07:25:36.930935+0000","last_became_peered":"2026-03-10T07:25:36.930935+0000","last_unstale":"2026-03-10T07:26:08.537277+0000","last_undegraded":"2026-03-10T07:26:08.537277+0000","last_fullsized":"2026-03-10T07:26:08.537277+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:43:38.866828+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,0],"acting":[3,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.639565+0000","last_change":"2026-03-10T07:25:40.958175+0000","last_active":"2026-03-10T07:26:09.639565+0000","last_peered":"2026-03-10T07:26:09.639565+0000","last_clean":"2026-03-10T07:26:09.639565+0000","last_became_active":"2026-03-10T07:25:40.958020+0000","last_became_peered":"2026-03-10T07:25:40.958020+0000","last_unstale":"2026-03-10T07:26:09.639565+0000","last_undegraded":"2026-03-10T07:26:09.639565+0000","last_fullsized":"2026-03-10T07:26:09.639565+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:12:06.304347+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.1e","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.669269+0000","last_change":"2026-03-10T07:25:43.200020+0000","last_active":"2026-03-10T07:26:08.669269+0000","last_peered":"2026-03-10T07:26:08.669269+0000","last_clean":"2026-03-10T07:26:08.669269+0000","last_became_active":"2026-03-10T07:25:43.199906+0000","last_became_peered":"2026-03-10T07:25:43.199906+0000","last_unstale":"2026-03-10T07:26:08.669269+0000","last_undegraded":"2026-03-10T07:26:08.669269+0000","last_fullsized":"2026-03-10T07:26:08.669269+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:27:58.887171+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1a","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.588592+0000","last_change":"2026-03-10T07:25:36.944990+0000","last_active":"2026-03-10T07:26:08.588592+0000","last_peered":"2026-03-10T07:26:08.588592+0000","last_clean":"2026-03-10T07:26:08.588592+0000","last_became_active":"2026-03-10T07:25:36.943138+0000","last_became_peered":"2026-03-10T07:25:36.943138+0000","last_unstale":"2026-03-10T07:26:08.588592+0000","last_undegraded":"2026-03-10T07:26:08.588592+0000","last_fullsized":"2026-03-10T07:26:08.588592+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:18:49.343181+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.1b","version":"62'5","reported_seq":43,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.640166+0000","last_change":"2026-03-10T07:25:38.961769+0000","last_active":"2026-03-10T07:26:09.640166+0000","last_peered":"2026-03-10T07:26:09.640166+0000","last_clean":"2026-03-10T07:26:09.640166+0000","last_became_active":"2026-03-10T07:25:38.961644+0000","last_became_peered":"2026-03-10T07:25:38.961644+0000","last_unstale":"2026-03-10T07:26:09.640166+0000","last_undegraded":"2026-03-10T07:26:09.640166+0000","last_fullsized":"2026-03-10T07:26:09.640166+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:42:41.376076+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.1d","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.643366+0000","last_change":"2026-03-10T07:25:40.965848+0000","last_active":"2026-03-10T07:26:08.643366+0000","last_peered":"2026-03-10T07:26:08.643366+0000","last_clean":"2026-03-10T07:26:08.643366+0000","last_became_active":"2026-03-10T07:25:40.965167+0000","last_became_peered":"2026-03-10T07:25:40.965167+0000","last_unstale":"2026-03-10T07:26:08.643366+0000","last_undegraded":"2026-03-10T07:26:08.643366+0000","last_fullsized":"2026-03-10T07:26:08.643366+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:32:03.354641+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537094+0000","last_change":"2026-03-10T07:25:43.202517+0000","last_active":"2026-03-10T07:26:08.537094+0000","last_peered":"2026-03-10T07:26:08.537094+0000","last_clean":"2026-03-10T07:26:08.537094+0000","last_became_active":"2026-03-10T07:25:43.202184+0000","last_became_peered":"2026-03-10T07:25:43.202184+0000","last_unstale":"2026-03-10T07:26:08.537094+0000","last_undegraded":"2026-03-10T07:26:08.537094+0000","last_fullsized":"2026-03-10T07:26:08.537094+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:01:09.617216+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1b","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537183+0000","last_change":"2026-03-10T07:25:36.942521+0000","last_active":"2026-03-10T07:26:08.537183+0000","last_peered":"2026-03-10T07:26:08.537183+0000","last_clean":"2026-03-10T07:26:08.537183+0000","last_became_active":"2026-03-10T07:25:36.942411+0000","last_became_peered":"2026-03-10T07:25:36.942411+0000","last_unstale":"2026-03-10T07:26:08.537183+0000","last_undegraded":"2026-03-10T07:26:08.537183+0000","last_fullsized":"2026-03-10T07:26:08.537183+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:47:04.381354+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1a","version":"62'9","reported_seq":49,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.669588+0000","last_change":"2026-03-10T07:25:38.957120+0000","last_active":"2026-03-10T07:26:08.669588+0000","last_peered":"2026-03-10T07:26:08.669588+0000","last_clean":"2026-03-10T07:26:08.669588+0000","last_became_active":"2026-03-10T07:25:38.957022+0000","last_became_peered":"2026-03-10T07:25:38.957022+0000","last_unstale":"2026-03-10T07:26:08.669588+0000","last_undegraded":"2026-03-10T07:26:08.669588+0000","last_fullsized":"2026-03-10T07:26:08.669588+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:37:02.020374+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.669546+0000","last_change":"2026-03-10T07:25:40.951728+0000","last_active":"2026-03-10T07:26:08.669546+0000","last_peered":"2026-03-10T07:26:08.669546+0000","last_clean":"2026-03-10T07:26:08.669546+0000","last_became_active":"2026-03-10T07:25:40.945177+0000","last_became_peered":"2026-03-10T07:25:40.945177+0000","last_unstale":"2026-03-10T07:26:08.669546+0000","last_undegraded":"2026-03-10T07:26:08.669546+0000","last_fullsized":"2026-03-10T07:26:08.669546+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:53:58.773385+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]}],"pool_stats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":88,"ondisk_log_size":88,"up":96,"acting":96,"num_store_stats":8},
{"poolid":4,"num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":62,"num_read_kb":57,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":701,"num_read_kb":458,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":395,"ondisk_log_size":395,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":34,"num_read_kb":34,"num_write":10,"num_write_kb":6,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"int
ernal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":39,"ondisk_log_size":39,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":51,"seq":219043332116,"num_pgs":59,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":28048,"kb_used_data":1216,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939376,"statfs":{"total":21470642176,"available":21441921024,"internally_reserved":0,"allocated":1245184,"data_stored":778866,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":44,"seq":188978561052,"num_pgs":42,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":28016,"kb_used_data":1180,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939408,"statfs":{"total":21470642176,"available":21441953792,"internally_reserved":0,"allocated":1208320,"data_stored":776518,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":38,"seq":163208757282,"num_pgs":53,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27572,"kb_used_data":732,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939852,"statfs":{"total":21470642176,"available":21442408448,"internally_reserved":0,"allocated":749568,"data_stored":317640,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up
_from":31,"seq":133143986217,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27608,"kb_used_data":768,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939816,"statfs":{"total":21470642176,"available":21442371584,"internally_reserved":0,"allocated":786432,"data_stored":318183,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":26,"seq":111669149743,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27576,"kb_used_data":736,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939848,"statfs":{"total":21470642176,"available":21442404352,"internally_reserved":0,"allocated":753664,"data_stored":318631,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":18,"seq":77309411382,"num_pgs":39,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27580,"kb_used_data":740,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939844,"statfs":{"total":21470642176,"available":21442400256,"internally_reserved":0,"allocated":757760,"data_stored":318180,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574909,"num_pgs":47,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27592,"kb_used_data":752,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939832,"statfs":{"total":21470642176,"available":21442387968,"internally_reserved":0,"allocated":770048,"data_stored":318998,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738436,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":28036,"kb_used_data":1200,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939388,"statfs":{"total":21470642176,"available":21441933312,"internally_reserved":0,"allocated":1228800,"data_stored":777609,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_s
tat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":574,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":1475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":436,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":1039,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":138,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":92,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":1085,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":57344,"data_stored":1458,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1282,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1144,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"inte
rnally_reserved":0,"allocated":73728,"data_stored":1980,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1172,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1100,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":61440,"data_stored":1650,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0,"
internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T07:26:56.098 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph pg dump --format=json 2026-03-10T07:26:57.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:56 vm03 bash[23382]: audit 2026-03-10T07:26:56.037585+0000 mgr.y (mgr.24407) 63 : audit [DBG] from='client.24527 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:26:57.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:56 vm03 bash[23382]: audit 2026-03-10T07:26:56.037585+0000 mgr.y (mgr.24407) 63 : audit [DBG] from='client.24527 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:26:57.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:56 vm00 bash[28005]: audit 2026-03-10T07:26:56.037585+0000 mgr.y (mgr.24407) 63 : audit [DBG] from='client.24527 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:26:57.132 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:56 vm00 bash[28005]: audit 2026-03-10T07:26:56.037585+0000 mgr.y (mgr.24407) 63 : audit [DBG] from='client.24527 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:26:57.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:56 vm00 bash[20701]: audit 2026-03-10T07:26:56.037585+0000 mgr.y (mgr.24407) 63 : audit [DBG] from='client.24527 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:26:57.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:56 vm00 bash[20701]: audit 2026-03-10T07:26:56.037585+0000 mgr.y (mgr.24407) 63 : audit [DBG] from='client.24527 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:26:58.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:57 vm03 bash[23382]: cluster 2026-03-10T07:26:56.559510+0000 mgr.y (mgr.24407) 64 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:58.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:57 vm03 bash[23382]: cluster 2026-03-10T07:26:56.559510+0000 mgr.y (mgr.24407) 64 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:58.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:57 vm00 bash[20701]: cluster 2026-03-10T07:26:56.559510+0000 mgr.y (mgr.24407) 64 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:58.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:57 vm00 bash[20701]: cluster 2026-03-10T07:26:56.559510+0000 mgr.y (mgr.24407) 64 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:57 vm00 bash[28005]: cluster 2026-03-10T07:26:56.559510+0000 mgr.y (mgr.24407) 64 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:57 vm00 bash[28005]: cluster 2026-03-10T07:26:56.559510+0000 mgr.y (mgr.24407) 64 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:59.816 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config 2026-03-10T07:26:59.917 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:59 vm00 bash[28005]: cluster 2026-03-10T07:26:58.559858+0000 mgr.y (mgr.24407) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:59.917 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:26:59 vm00 bash[28005]: cluster 2026-03-10T07:26:58.559858+0000 mgr.y (mgr.24407) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:59.917 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:59 vm00 bash[20701]: cluster 2026-03-10T07:26:58.559858+0000 mgr.y (mgr.24407) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 
B/s rd, 0 op/s 2026-03-10T07:26:59.917 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:26:59 vm00 bash[20701]: cluster 2026-03-10T07:26:58.559858+0000 mgr.y (mgr.24407) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:26:59.978 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f764d2e4640 1 -- 192.168.123.100:0/2758118151 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f764810f2d0 msgr2=0x7f7648111750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:59.978 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f764d2e4640 1 --2- 192.168.123.100:0/2758118151 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f764810f2d0 0x7f7648111750 secure :-1 s=READY pgs=178 cs=0 l=1 rev1=1 crypto rx=0x7f763800b0a0 tx=0x7f763802f450 comp rx=0 tx=0).stop 2026-03-10T07:26:59.978 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f764d2e4640 1 -- 192.168.123.100:0/2758118151 shutdown_connections 2026-03-10T07:26:59.978 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f764d2e4640 1 --2- 192.168.123.100:0/2758118151 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f764810f2d0 0x7f7648111750 unknown :-1 s=CLOSED pgs=178 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:59.978 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f764d2e4640 1 --2- 192.168.123.100:0/2758118151 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f7648101690 0x7f764810ec60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:59.978 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f764d2e4640 1 --2- 192.168.123.100:0/2758118151 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7648100ce0 0x7f76481010c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:59.978 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f764d2e4640 1 -- 192.168.123.100:0/2758118151 >> 192.168.123.100:0/2758118151 conn(0x7f76480fc910 msgr2=0x7f76480fed30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:26:59.978 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f764d2e4640 1 -- 192.168.123.100:0/2758118151 shutdown_connections 2026-03-10T07:26:59.978 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f764d2e4640 1 -- 192.168.123.100:0/2758118151 wait complete. 
2026-03-10T07:26:59.979 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f764d2e4640 1 Processor -- start 2026-03-10T07:26:59.979 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f764d2e4640 1 -- start start 2026-03-10T07:26:59.979 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f764d2e4640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7648100ce0 0x7f76481a2560 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:59.979 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f764d2e4640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7648101690 0x7f76481a2aa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:59.979 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f7646ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7648100ce0 0x7f76481a2560 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:59.979 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f7646ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7648100ce0 0x7f76481a2560 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:49038/0 (socket says 192.168.123.100:49038) 2026-03-10T07:26:59.980 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f764d2e4640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f764810f2d0 0x7f764819c630 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:59.980 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f764d2e4640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f76481143a0 con 0x7f7648100ce0 2026-03-10T07:26:59.980 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f764d2e4640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f7648114220 con 0x7f764810f2d0 2026-03-10T07:26:59.980 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f764d2e4640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f7648114520 con 0x7f7648101690 2026-03-10T07:26:59.980 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f7646ffd640 1 -- 192.168.123.100:0/1501296639 learned_addr learned my addr 192.168.123.100:0/1501296639 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:26:59.980 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f76477fe640 1 --2- 192.168.123.100:0/1501296639 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f764810f2d0 0x7f764819c630 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:59.980 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f7646ffd640 1 -- 192.168.123.100:0/1501296639 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7648101690 msgr2=0x7f76481a2aa0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:59.980 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f76467fc640 1 --2- 192.168.123.100:0/1501296639 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7648101690 0x7f76481a2aa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:59.980 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f7646ffd640 1 --2- 192.168.123.100:0/1501296639 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7648101690 0x7f76481a2aa0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:59.980 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f7646ffd640 1 -- 192.168.123.100:0/1501296639 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f764810f2d0 msgr2=0x7f764819c630 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:26:59.980 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f7646ffd640 1 --2- 192.168.123.100:0/1501296639 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f764810f2d0 0x7f764819c630 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:26:59.980 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f7646ffd640 1 -- 192.168.123.100:0/1501296639 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f764819cea0 con 0x7f7648100ce0 2026-03-10T07:26:59.980 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f76477fe640 1 --2- 192.168.123.100:0/1501296639 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f764810f2d0 0x7f764819c630 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
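
For anyone replaying this step by hand: the `ceph pg dump --format=json` query the task issues above (via `cephadm shell`) can be re-run and sanity-checked with a short script along these lines. This is a minimal illustrative sketch, not part of the run itself; the fsid and container image are copied from this log, while host access, sudo rights, and the check logic are assumptions.

    # Sketch only: re-issue the same pg dump query seen in this log and
    # verify every PG reports "active+clean". FSID and IMAGE are taken
    # verbatim from the run above; everything else is illustrative.
    import json
    import subprocess

    FSID = "534d9c8a-1c51-11f1-ac87-d1fb9a119953"
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"

    # cephadm shell writes "Inferring config ..." and messenger debug to
    # stderr (as captured above), so stdout carries only the JSON dump.
    out = subprocess.run(
        ["sudo", "cephadm", "--image", IMAGE, "shell", "--fsid", FSID,
         "--", "ceph", "pg", "dump", "--format=json"],
        check=True, capture_output=True, text=True,
    ).stdout

    pg_map = json.loads(out)["pg_map"]
    bad = [p["pgid"] for p in pg_map["pg_stats"]
           if p["state"] != "active+clean"]
    print(f"pgmap v{pg_map['version']}: {len(pg_map['pg_stats'])} pgs, "
          f"{len(bad)} not active+clean")
    assert not bad, bad

Against the cluster state captured below (pgmap v28, 132 PGs, all active+clean), such a check would report zero non-clean PGs.
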
2026-03-10T07:26:59.980 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f7646ffd640 1 --2- 192.168.123.100:0/1501296639 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7648100ce0 0x7f76481a2560 secure :-1 s=READY pgs=179 cs=0 l=1 rev1=1 crypto rx=0x7f76300149e0 tx=0x7f7630014eb0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:59.981 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f7627fff640 1 -- 192.168.123.100:0/1501296639 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f7630004270 con 0x7f7648100ce0 2026-03-10T07:26:59.981 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f764d2e4640 1 -- 192.168.123.100:0/1501296639 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f764819d130 con 0x7f7648100ce0 2026-03-10T07:26:59.981 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f764d2e4640 1 -- 192.168.123.100:0/1501296639 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f7648104710 con 0x7f7648100ce0 2026-03-10T07:26:59.982 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f7627fff640 1 -- 192.168.123.100:0/1501296639 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f7630019030 con 0x7f7648100ce0 2026-03-10T07:26:59.982 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.979+0000 7f7627fff640 1 -- 192.168.123.100:0/1501296639 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f7630002920 con 0x7f7648100ce0 2026-03-10T07:26:59.982 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.983+0000 7f7627fff640 1 -- 192.168.123.100:0/1501296639 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f7630026020 con 0x7f7648100ce0 2026-03-10T07:26:59.983 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.983+0000 7f7627fff640 1 --2- 192.168.123.100:0/1501296639 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f76100777d0 0x7f7610079c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:26:59.983 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.983+0000 7f7627fff640 1 -- 192.168.123.100:0/1501296639 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7f76300a1ff0 con 0x7f7648100ce0 2026-03-10T07:26:59.983 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.983+0000 7f76467fc640 1 --2- 192.168.123.100:0/1501296639 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f76100777d0 0x7f7610079c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:26:59.983 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.983+0000 7f76467fc640 1 --2- 192.168.123.100:0/1501296639 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f76100777d0 0x7f7610079c90 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f763c0096f0 tx=0x7f763c009290 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:26:59.984 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.983+0000 7f764d2e4640 1 -- 
192.168.123.100:0/1501296639 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f7614005180 con 0x7f7648100ce0 2026-03-10T07:26:59.986 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:26:59.987+0000 7f7627fff640 1 -- 192.168.123.100:0/1501296639 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f7630073820 con 0x7f7648100ce0 2026-03-10T07:27:00.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:59 vm03 bash[23382]: cluster 2026-03-10T07:26:58.559858+0000 mgr.y (mgr.24407) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:00.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:26:59 vm03 bash[23382]: cluster 2026-03-10T07:26:58.559858+0000 mgr.y (mgr.24407) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:00.084 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:00.083+0000 7f764d2e4640 1 -- 192.168.123.100:0/1501296639 --> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] -- mgr_command(tid 0: {"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}) -- 0x7f7614002bf0 con 0x7f76100777d0 2026-03-10T07:27:00.089 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:00.087+0000 7f7627fff640 1 -- 192.168.123.100:0/1501296639 <== mgr.24407 v2:192.168.123.100:6800/3339031114 1 ==== mgr_command_reply(tid 0: 0 dumped all) ==== 18+0+346482 (secure 0 0 0) 0x7f7614002bf0 con 0x7f76100777d0 2026-03-10T07:27:00.089 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T07:27:00.091 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-10T07:27:00.092 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:00.091+0000 7f764d2e4640 1 -- 192.168.123.100:0/1501296639 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f76100777d0 msgr2=0x7f7610079c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:27:00.092 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:00.091+0000 7f764d2e4640 1 --2- 192.168.123.100:0/1501296639 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f76100777d0 0x7f7610079c90 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f763c0096f0 tx=0x7f763c009290 comp rx=0 tx=0).stop 2026-03-10T07:27:00.092 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:00.091+0000 7f764d2e4640 1 -- 192.168.123.100:0/1501296639 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7648100ce0 msgr2=0x7f76481a2560 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:27:00.092 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:00.091+0000 7f764d2e4640 1 --2- 192.168.123.100:0/1501296639 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7648100ce0 0x7f76481a2560 secure :-1 s=READY pgs=179 cs=0 l=1 rev1=1 crypto rx=0x7f76300149e0 tx=0x7f7630014eb0 comp rx=0 tx=0).stop 2026-03-10T07:27:00.093 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:00.091+0000 7f764d2e4640 1 -- 192.168.123.100:0/1501296639 shutdown_connections 2026-03-10T07:27:00.093 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:00.091+0000 7f764d2e4640 1 --2- 192.168.123.100:0/1501296639 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] 
conn(0x7f76100777d0 0x7f7610079c90 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:27:00.093 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:00.091+0000 7f764d2e4640 1 --2- 192.168.123.100:0/1501296639 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f764810f2d0 0x7f764819c630 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:27:00.093 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:00.091+0000 7f764d2e4640 1 --2- 192.168.123.100:0/1501296639 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7648101690 0x7f76481a2aa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:27:00.093 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:00.091+0000 7f764d2e4640 1 --2- 192.168.123.100:0/1501296639 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7648100ce0 0x7f76481a2560 unknown :-1 s=CLOSED pgs=179 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:27:00.093 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:00.091+0000 7f764d2e4640 1 -- 192.168.123.100:0/1501296639 >> 192.168.123.100:0/1501296639 conn(0x7f76480fc910 msgr2=0x7f76480fed00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:27:00.093 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:00.091+0000 7f764d2e4640 1 -- 192.168.123.100:0/1501296639 shutdown_connections 2026-03-10T07:27:00.093 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:00.091+0000 7f764d2e4640 1 -- 192.168.123.100:0/1501296639 wait complete. 2026-03-10T07:27:00.148 INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":28,"stamp":"2026-03-10T07:26:58.559683+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":911,"num_read_kb":770,"num_write":505,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":538,"ondisk_log_size":538,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":396,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":222028,"kb_used_data":7324,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167517364,"statfs":{"total":171765137408,"available":171537780736,"internally_reserved":0,"allocated":7499776,"data_stored":3924625,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12711,"internal_metadata":219663961},"hb_peers":[],"snap_t
rim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.002513"},"pg_stats":[{"pgid":"6.1b","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537359+0000","last_change":"2026-03-10T07:25:43.202330+0000","last_active":"2026-03-10T07:26:08.537359+0000","last_peered":"2026-03-10T07:26:08.537359+0000","last_clean":"2026-03-10T07:26:08.537359+0000","last_became_active":"2026-03-10T07:25:43.201823+0000","last_became_peered":"2026-03-10T07:25:43.201823+0000","last_unstale":"2026-03-10T07:26:08.537359+0000","last_undegraded":"2026-03-10T07:26:08.537359+0000","last_fullsized":"2026-03-10T07:26:08.537359+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:15:41.777780+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1f","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.638866+0000","last_change":"2026-03-10T07:25:36.944849+0000","last_active":"2026-03-10T07:26:09.638866+0000","last_peered":"2026-03-10T07:26:09.638866+0000","last_clean":"2026-03-10T07:26:09.638866+0000","last_became_active":"2026-03-10T07:25:36.944724+0000","last_became_peered":"2026-03-10T07:25:36.944724+0000","last_unstale":"2026-03-10T07:26:09.638866+0000","last_undegraded":"2026-03-10T07:26:09.638866+0000","last_fullsized":"2026-03-10T07:26:09.638866+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:51:06.434127+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,4],"acting":[0,7,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1e","version":"62'10","reported_seq":48,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537400+0000","last_change":"2026-03-10T07:25:38.955716+0000","last_active":"2026-03-10T07:26:08.537400+0000","last_peered":"2026-03-10T07:26:08.537400+0000","last_clean":"2026-03-10T07:26:08.537400+0000","last_became_active":"2026-03-10T07:25:38.955307+0000","last_became_peered":"2026-03-10T07:25:38.955307+0000","last_unstale":"2026-03-10T07:26:08.537400+0000","last_undegraded":"2026-03-10T07:26:08.537400+0000","last_fullsized":"2026-03-10T07:26:08.537400+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:42:45.356494+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.668802+0000","last_change":"2026-03-10T07:25:40.962768+0000","last_active":"2026-03-10T07:26:08.668802+0000","last_peered":"2026-03-10T07:26:08.668802+0000","last_clean":"2026-03-10T07:26:08.668802+0000","last_became_active":"2026-03-10T07:25:40.962623+0000","last_became_peered":"2026-03-10T07:25:40.962623+0000","last_unstale":"2026-03-10T07:26:08.668802+0000","last_undegraded":"2026-03-10T07:26:08.668802+0000","last_fullsized":"2026-03-10T07:26:08.668802+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:40:46.353878+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1e","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537062+0000","last_change":"2026-03-10T07:25:36.937025+0000","last_active":"2026-03-10T07:26:08.537062+0000","last_peered":"2026-03-10T07:26:08.537062+0000","last_clean":"2026-03-10T07:26:08.537062+0000","last_became_active":"2026-03-10T07:25:36.936915+0000","last_became_peered":"2026-03-10T07:25:36.936915+0000","last_unstale":"2026-03-10T07:26:08.537062+0000","last_undegraded":"2026-03-10T07:26:08.537062+0000","last_fullsized":"2026-03-10T07:26:08.537062+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:29:30.586622+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1f","version":"62'11","reported_seq":52,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.639399+0000","last_change":"2026-03-10T07:25:38.942598+0000","last_active":"2026-03-10T07:26:09.639399+0000","last_peered":"2026-03-10T07:26:09.639399+0000","last_clean":"2026-03-10T07:26:09.639399+0000","last_became_active":"2026-03-10T07:25:38.942521+0000","last_became_peered":"2026-03-10T07:25:38.942521+0000","last_unstale":"2026-03-10T07:26:09.639399+0000","last_undegraded":"2026-03-10T07:26:09.639399+0000","last_fullsized":"2026-03-10T07:26:09.639399+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:12:12.132136+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.643354+0000","last_change":"2026-03-10T07:25:40.956195+0000","last_active":"2026-03-10T07:26:08.643354+0000","last_peered":"2026-03-10T07:26:08.643354+0000","last_clean":"2026-03-10T07:26:08.643354+0000","last_became_active":"2026-03-10T07:25:40.956106+0000","last_became_peered":"2026-03-10T07:25:40.956106+0000","last_unstale":"2026-03-10T07:26:08.643354+0000","last_undegraded":"2026-03-10T07:26:08.643354+0000","last_fullsized":"2026-03-10T07:26:08.643354+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:30:47.995875+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1a","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.668311+0000","last_change":"2026-03-10T07:25:42.973744+0000","last_active":"2026-03-10T07:26:08.668311+0000","last_peered":"2026-03-10T07:26:08.668311+0000","last_clean":"2026-03-10T07:26:08.668311+0000","last_became_active":"2026-03-10T07:25:42.973548+0000","last_became_peered":"2026-03-10T07:25:42.973548+0000","last_unstale":"2026-03-10T07:26:08.668311+0000","last_undegraded":"2026-03-10T07:26:08.668311+0000","last_fullsized":"2026-03-10T07:26:08.668311+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:10:53.823226+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1d","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.037097+0000","last_change":"2026-03-10T07:25:36.957998+0000","last_active":"2026-03-10T07:26:09.037097+0000","last_peered":"2026-03-10T07:26:09.037097+0000","last_clean":"2026-03-10T07:26:09.037097+0000","last_became_active":"2026-03-10T07:25:36.956931+0000","last_became_peered":"2026-03-10T07:25:36.956931+0000","last_unstale":"2026-03-10T07:26:09.037097+0000","last_undegraded":"2026-03-10T07:26:09.037097+0000","last_fullsized":"2026-03-10T07:26:09.037097+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:57:32.973481+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,0],"acting":[7,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1c","version":"62'15","reported_seq":58,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537438+0000","last_change":"2026-03-10T07:25:38.963336+0000","last_active":"2026-03-10T07:26:08.537438+0000","last_peered":"2026-03-10T07:26:08.537438+0000","last_clean":"2026-03-10T07:26:08.537438+0000","last_became_active":"2026-03-10T07:25:38.963168+0000","last_became_peered":"2026-03-10T07:25:38.963168+0000","last_unstale":"2026-03-10T07:26:08.537438+0000","last_undegraded":"2026-03-10T07:26:08.537438+0000","last_fullsized":"2026-03-10T07:26:08.537438+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:12:38.948372+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1a","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.039332+0000","last_change":"2026-03-10T07:25:40.967300+0000","last_active":"2026-03-10T07:26:09.039332+0000","last_peered":"2026-03-10T07:26:09.039332+0000","last_clean":"2026-03-10T07:26:09.039332+0000","last_became_active":"2026-03-10T07:25:40.967173+0000","last_became_peered":"2026-03-10T07:25:40.967173+0000","last_unstale":"2026-03-10T07:26:09.039332+0000","last_undegraded":"2026-03-10T07:26:09.039332+0000","last_fullsized":"2026-03-10T07:26:09.039332+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:18:52.767495+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537396+0000","last_change":"2026-03-10T07:25:42.976658+0000","last_active":"2026-03-10T07:26:08.537396+0000","last_peered":"2026-03-10T07:26:08.537396+0000","last_clean":"2026-03-10T07:26:08.537396+0000","last_became_active":"2026-03-10T07:25:42.976337+0000","last_became_peered":"2026-03-10T07:25:42.976337+0000","last_unstale":"2026-03-10T07:26:08.537396+0000","last_undegraded":"2026-03-10T07:26:08.537396+0000","last_fullsized":"2026-03-10T07:26:08.537396+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:35:20.695170+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.1c","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.037512+0000","last_change":"2026-03-10T07:25:36.954845+0000","last_active":"2026-03-10T07:26:09.037512+0000","last_peered":"2026-03-10T07:26:09.037512+0000","last_clean":"2026-03-10T07:26:09.037512+0000","last_became_active":"2026-03-10T07:25:36.951106+0000","last_became_peered":"2026-03-10T07:25:36.951106+0000","last_unstale":"2026-03-10T07:26:09.037512+0000","last_undegraded":"2026-03-10T07:26:09.037512+0000","last_fullsized":"2026-03-10T07:26:09.037512+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:48:45.687745+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1d","version":"62'12","reported_seq":56,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536303+0000","last_change":"2026-03-10T07:25:38.958439+0000","last_active":"2026-03-10T07:26:08.536303+0000","last_peered":"2026-03-10T07:26:08.536303+0000","last_clean":"2026-03-10T07:26:08.536303+0000","last_became_active":"2026-03-10T07:25:38.958251+0000","last_became_peered":"2026-03-10T07:25:38.958251+0000","last_unstale":"2026-03-10T07:26:08.536303+0000","last_undegraded":"2026-03-10T07:26:08.536303+0000","last_fullsized":"2026-03-10T07:26:08.536303+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:00:37.389406+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536260+0000","last_change":"2026-03-10T07:25:40.960067+0000","last_active":"2026-03-10T07:26:08.536260+0000","last_peered":"2026-03-10T07:26:08.536260+0000","last_clean":"2026-03-10T07:26:08.536260+0000","last_became_active":"2026-03-10T07:25:40.954903+0000","last_became_peered":"2026-03-10T07:25:40.954903+0000","last_unstale":"2026-03-10T07:26:08.536260+0000","last_undegraded":"2026-03-10T07:26:08.536260+0000","last_fullsized":"2026-03-10T07:26:08.536260+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:16:03.315943+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.639150+0000","last_change":"2026-03-10T07:25:42.971903+0000","last_active":"2026-03-10T07:26:09.639150+0000","last_peered":"2026-03-10T07:26:09.639150+0000","last_clean":"2026-03-10T07:26:09.639150+0000","last_became_active":"2026-03-10T07:25:42.971771+0000","last_became_peered":"2026-03-10T07:25:42.971771+0000","last_unstale":"2026-03-10T07:26:09.639150+0000","last_undegraded":"2026-03-10T07:26:09.639150+0000","last_fullsized":"2026-03-10T07:26:09.639150+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:07:21.256438+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.a","version":"62'19","reported_seq":64,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.588617+0000","last_change":"2026-03-10T07:25:38.973814+0000","last_active":"2026-03-10T07:26:08.588617+0000","last_peered":"2026-03-10T07:26:08.588617+0000","last_clean":"2026-03-10T07:26:08.588617+0000","last_became_active":"2026-03-10T07:25:38.973682+0000","last_became_peered":"2026-03-10T07:25:38.973682+0000","last_unstale":"2026-03-10T07:26:08.588617+0000","last_undegraded":"2026-03-10T07:26:08.588617+0000","last_fullsized":"2026-03-10T07:26:08.588617+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:38:42.857194+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.b","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.036904+0000","last_change":"2026-03-10T07:25:36.954925+0000","last_active":"2026-03-10T07:26:09.036904+0000","last_peered":"2026-03-10T07:26:09.036904+0000","last_clean":"2026-03-10T07:26:09.036904+0000","last_became_active":"2026-03-10T07:25:36.954558+0000","last_became_peered":"2026-03-10T07:25:36.954558+0000","last_unstale":"2026-03-10T07:26:09.036904+0000","last_undegraded":"2026-03-10T07:26:09.036904+0000","last_fullsized":"2026-03-10T07:26:09.036904+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:44:38.495748+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,5],"acting":[7,4,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.644422+0000","last_change":"2026-03-10T07:25:40.965642+0000","last_active":"2026-03-10T07:26:08.644422+0000","last_peered":"2026-03-10T07:26:08.644422+0000","last_clean":"2026-03-10T07:26:08.644422+0000","last_became_active":"2026-03-10T07:25:40.965338+0000","last_became_peered":"2026-03-10T07:25:40.965338+0000","last_unstale":"2026-03-10T07:26:08.644422+0000","last_undegraded":"2026-03-10T07:26:08.644422+0000","last_fullsized":"2026-03-10T07:26:08.644422+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:20:10.509873+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.534068+0000","last_change":"2026-03-10T07:25:42.971467+0000","last_active":"2026-03-10T07:26:08.534068+0000","last_peered":"2026-03-10T07:26:08.534068+0000","last_clean":"2026-03-10T07:26:08.534068+0000","last_became_active":"2026-03-10T07:25:42.969267+0000","last_became_peered":"2026-03-10T07:25:42.969267+0000","last_unstale":"2026-03-10T07:26:08.534068+0000","last_undegraded":"2026-03-10T07:26:08.534068+0000","last_fullsized":"2026-03-10T07:26:08.534068+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:13:43.845735+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.b","version":"62'9","reported_seq":49,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536866+0000","last_change":"2026-03-10T07:25:38.955595+0000","last_active":"2026-03-10T07:26:08.536866+0000","last_peered":"2026-03-10T07:26:08.536866+0000","last_clean":"2026-03-10T07:26:08.536866+0000","last_became_active":"2026-03-10T07:25:38.955505+0000","last_became_peered":"2026-03-10T07:25:38.955505+0000","last_unstale":"2026-03-10T07:26:08.536866+0000","last_undegraded":"2026-03-10T07:26:08.536866+0000","last_fullsized":"2026-03-10T07:26:08.536866+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:45:40.697478+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.a","version":"55'1","reported_seq":45,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.643588+0000","last_change":"2026-03-10T07:25:36.944402+0000","last_active":"2026-03-10T07:26:08.643588+0000","last_peered":"2026-03-10T07:26:08.643588+0000","last_clean":"2026-03-10T07:26:08.643588+0000","last_became_active":"2026-03-10T07:25:36.944272+0000","last_became_peered":"2026-03-10T07:25:36.944272+0000","last_unstale":"2026-03-10T07:26:08.643588+0000","last_undegraded":"2026-03-10T07:26:08.643588+0000","last_fullsized":"2026-03-10T07:26:08.643588+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:55:37.483284+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":993,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,7],"acting":[1,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.d","version":"63'11","reported_seq":55,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:47.004308+0000","last_change":"2026-03-10T07:25:40.970998+0000","last_active":"2026-03-10T07:26:47.004308+0000","last_peered":"2026-03-10T07:26:47.004308+0000","last_clean":"2026-03-10T07:26:47.004308+0000","last_became_active":"2026-03-10T07:25:40.970797+0000","last_became_peered":"2026-03-10T07:25:40.970797+0000","last_unstale":"2026-03-10T07:26:47.004308+0000","last_undegraded":"2026-03-10T07:26:47.004308+0000","last_fullsized":"2026-03-10T07:26:47.004308+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:05:21.503300+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.669644+0000","last_change":"2026-03-10T07:25:42.966911+0000","last_active":"2026-03-10T07:26:08.669644+0000","last_peered":"2026-03-10T07:26:08.669644+0000","last_clean":"2026-03-10T07:26:08.669644+0000","last_became_active":"2026-03-10T07:25:42.966790+0000","last_became_peered":"2026-03-10T07:25:42.966790+0000","last_unstale":"2026-03-10T07:26:08.669644+0000","last_undegraded":"2026-03-10T07:26:08.669644+0000","last_fullsized":"2026-03-10T07:26:08.669644+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:01:01.647071+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.8","version":"62'15","reported_seq":58,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536906+0000","last_change":"2026-03-10T07:25:38.940123+0000","last_active":"2026-03-10T07:26:08.536906+0000","last_peered":"2026-03-10T07:26:08.536906+0000","last_clean":"2026-03-10T07:26:08.536906+0000","last_became_active":"2026-03-10T07:25:38.940054+0000","last_became_peered":"2026-03-10T07:25:38.940054+0000","last_unstale":"2026-03-10T07:26:08.536906+0000","last_undegraded":"2026-03-10T07:26:08.536906+0000","last_fullsized":"2026-03-10T07:26:08.536906+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:29:42.593084+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.9","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.643672+0000","last_change":"2026-03-10T07:25:36.943880+0000","last_active":"2026-03-10T07:26:08.643672+0000","last_peered":"2026-03-10T07:26:08.643672+0000","last_clean":"2026-03-10T07:26:08.643672+0000","last_became_active":"2026-03-10T07:25:36.943741+0000","last_became_peered":"2026-03-10T07:25:36.943741+0000","last_unstale":"2026-03-10T07:26:08.643672+0000","last_undegraded":"2026-03-10T07:26:08.643672+0000","last_fullsized":"2026-03-10T07:26:08.643672+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:23:44.430096+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,7,3],"acting":[1,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.e","version":"63'11","reported_seq":58,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:47.003692+0000","last_change":"2026-03-10T07:25:40.964279+0000","last_active":"2026-03-10T07:26:47.003692+0000","last_peered":"2026-03-10T07:26:47.003692+0000","last_clean":"2026-03-10T07:26:47.003692+0000","last_became_active":"2026-03-10T07:25:40.964088+0000","last_became_peered":"2026-03-10T07:25:40.964088+0000","last_unstale":"2026-03-10T07:26:47.003692+0000","last_undegraded":"2026-03-10T07:26:47.003692+0000","last_fullsized":"2026-03-10T07:26:47.003692+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:08:38.681787+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536116+0000","last_change":"2026-03-10T07:25:42.977842+0000","last_active":"2026-03-10T07:26:08.536116+0000","last_peered":"2026-03-10T07:26:08.536116+0000","last_clean":"2026-03-10T07:26:08.536116+0000","last_became_active":"2026-03-10T07:25:42.976504+0000","last_became_peered":"2026-03-10T07:25:42.976504+0000","last_unstale":"2026-03-10T07:26:08.536116+0000","last_undegraded":"2026-03-10T07:26:08.536116+0000","last_fullsized":"2026-03-10T07:26:08.536116+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:14:17.578558+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.9","version":"62'12","reported_seq":56,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.668606+0000","last_change":"2026-03-10T07:25:38.966196+0000","last_active":"2026-03-10T07:26:08.668606+0000","last_peered":"2026-03-10T07:26:08.668606+0000","last_clean":"2026-03-10T07:26:08.668606+0000","last_became_active":"2026-03-10T07:25:38.965999+0000","last_became_peered":"2026-03-10T07:25:38.965999+0000","last_unstale":"2026-03-10T07:26:08.668606+0000","last_undegraded":"2026-03-10T07:26:08.668606+0000","last_fullsized":"2026-03-10T07:26:08.668606+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:57:31.664387+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.8","version":"0'0","reported_seq":15,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.533770+0000","last_change":"2026-03-10T07:25:53.619324+0000","last_active":"2026-03-10T07:26:08.533770+0000","last_peered":"2026-03-10T07:26:08.533770+0000","last_clean":"2026-03-10T07:26:08.533770+0000","last_became_active":"2026-03-10T07:25:53.619200+0000","last_became_peered":"2026-03-10T07:25:53.619200+0000","last_unstale":"2026-03-10T07:26:08.533770+0000","last_undegraded":"2026-03-10T07:26:08.533770+0000","last_fullsized":"2026-03-10T07:26:08.533770+0000","mapping_epoch":65,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":66,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:25:24.107480+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537716+0000","last_change":"2026-03-10T07:25:40.951544+0000","last_active":"2026-03-10T07:26:08.537716+0000","last_peered":"2026-03-10T07:26:08.537716+0000","last_clean":"2026-03-10T07:26:08.537716+0000","last_became_active":"2026-03-10T07:25:40.951459+0000","last_became_peered":"2026-03-10T07:25:40.951459+0000","last_unstale":"2026-03-10T07:26:08.537716+0000","last_undegraded":"2026-03-10T07:26:08.537716+0000","last_fullsized":"2026-03-10T07:26:08.537716+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:07:27.746825+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536421+0000","last_change":"2026-03-10T07:25:43.202393+0000","last_active":"2026-03-10T07:26:08.536421+0000","last_peered":"2026-03-10T07:26:08.536421+0000","last_clean":"2026-03-10T07:26:08.536421+0000","last_became_active":"2026-03-10T07:25:43.201952+0000","last_became_peered":"2026-03-10T07:25:43.201952+0000","last_unstale":"2026-03-10T07:26:08.536421+0000","last_undegraded":"2026-03-10T07:26:08.536421+0000","last_fullsized":"2026-03-10T07:26:08.536421+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:43:55.701446+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.6","version":"62'12","reported_seq":51,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.640065+0000","last_change":"2026-03-10T07:25:38.961561+0000","last_active":"2026-03-10T07:26:09.640065+0000","last_peered":"2026-03-10T07:26:09.640065+0000","last_clean":"2026-03-10T07:26:09.640065+0000","last_became_active":"2026-03-10T07:25:38.961399+0000","last_became_peered":"2026-03-10T07:25:38.961399+0000","last_unstale":"2026-03-10T07:26:09.640065+0000","last_undegraded":"2026-03-10T07:26:09.640065+0000","last_fullsized":"2026-03-10T07:26:09.640065+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:54:56.962298+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.7","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.588752+0000","last_change":"2026-03-10T07:25:36.952085+0000","last_active":"2026-03-10T07:26:08.588752+0000","last_peered":"2026-03-10T07:26:08.588752+0000","last_clean":"2026-03-10T07:26:08.588752+0000","last_became_active":"2026-03-10T07:25:36.943264+0000","last_became_peered":"2026-03-10T07:25:36.943264+0000","last_unstale":"2026-03-10T07:26:08.588752+0000","last_undegraded":"2026-03-10T07:26:08.588752+0000","last_fullsized":"2026-03-10T07:26:08.588752+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:53:46.385863+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,7,2],"acting":[6,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.1","version":"62'1","reported_seq":39,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.669957+0000","last_change":"2026-03-10T07:25:46.009402+0000","last_active":"2026-03-10T07:26:08.669957+0000","last_peered":"2026-03-10T07:26:08.669957+0000","last_clean":"2026-03-10T07:26:08.669957+0000","last_became_active":"2026-03-10T07:25:39.941081+0000","last_became_peered":"2026-03-10T07:25:39.941081+0000","last_unstale":"2026-03-10T07:26:08.669957+0000","last_undegraded":"2026-03-10T07:26:08.669957+0000","last_fullsized":"2026-03-10T07:26:08.669957+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:38.919122+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:38.919122+0000","last_clean_scrub_stamp":"2026-03-10T07:25:38.919122+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:13:21.320933+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00027072000000000001,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.0","version":"63'11","reported_seq":55,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:47.004722+0000","last_change":"2026-03-10T07:25:40.942899+0000","last_active":"2026-03-10T07:26:47.004722+0000","last_peered":"2026-03-10T07:26:47.004722+0000","last_clean":"2026-03-10T07:26:47.004722+0000","last_became_active":"2026-03-10T07:25:40.942773+0000","last_became_peered":"2026-03-10T07:25:40.942773+0000","last_unstale":"2026-03-10T07:26:47.004722+0000","last_undegraded":"2026-03-10T07:26:47.004722+0000","last_fullsized":"2026-03-10T07:26:47.004722+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:25:17.256099+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.038914+0000","last_change":"2026-03-10T07:25:43.200990+0000","last_active":"2026-03-10T07:26:09.038914+0000","last_peered":"2026-03-10T07:26:09.038914+0000","last_clean":"2026-03-10T07:26:09.038914+0000","last_became_active":"2026-03-10T07:25:43.200846+0000","last_became_peered":"2026-03-10T07:25:43.200846+0000","last_unstale":"2026-03-10T07:26:09.038914+0000","last_undegraded":"2026-03-10T07:26:09.038914+0000","last_fullsized":"2026-03-10T07:26:09.038914+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:54:04.388963+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.7","version":"62'13","reported_seq":60,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536667+0000","last_change":"2026-03-10T07:25:38.933781+0000","last_active":"2026-03-10T07:26:08.536667+0000","last_peered":"2026-03-10T07:26:08.536667+0000","last_clean":"2026-03-10T07:26:08.536667+0000","last_became_active":"2026-03-10T07:25:38.933661+0000","last_became_peered":"2026-03-10T07:25:38.933661+0000","last_unstale":"2026-03-10T07:26:08.536667+0000","last_undegraded":"2026-03-10T07:26:08.536667+0000","last_fullsized":"2026-03-10T07:26:08.536667+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:48:37.834539+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.6","version":"55'1","reported_seq":38,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.643705+0000","last_change":"2026-03-10T07:25:36.931556+0000","last_active":"2026-03-10T07:26:08.643705+0000","last_peered":"2026-03-10T07:26:08.643705+0000","last_clean":"2026-03-10T07:26:08.643705+0000","last_became_active":"2026-03-10T07:25:36.931409+0000","last_became_peered":"2026-03-10T07:25:36.931409+0000","last_unstale":"2026-03-10T07:26:08.643705+0000","last_undegraded":"2026-03-10T07:26:08.643705+0000","last_fullsized":"2026-03-10T07:26:08.643705+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:09:12.898569+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,4],"acting":[1,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.0","version":"65'5","reported_seq":108,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:52.916952+0000","last_change":"2026-03-10T07:25:46.007814+0000","last_active":"2026-03-10T07:26:52.916952+0000","last_peered":"2026-03-10T07:26:52.916952+0000","last_clean":"2026-03-10T07:26:52.916952+0000","last_became_active":"2026-03-10T07:25:39.958901+0000","last_became_peered":"2026-03-10T07:25:39.958901+0000","last_unstale":"2026-03-10T07:26:52.916952+0000","last_undegraded":"2026-03-10T07:26:52.916952+0000","last_fullsized":"2026-03-10T07:26:52.916952+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:38.919122+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:38.919122+0000","last_clean_scrub_stamp":"2026-03-10T07:25:38.919122+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:50:10.105303+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.001194566,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":67,"num_read_kb":62,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.669180+0000","last_change":"2026-03-10T07:25:40.963847+0000","last_active":"2026-03-10T07:26:08.669180+0000","last_peered":"2026-03-10T07:26:08.669180+0000","last_clean":"2026-03-10T07:26:08.669180+0000","last_became_active":"2026-03-10T07:25:40.963603+0000","last_became_peered":"2026-03-10T07:25:40.963603+0000","last_unstale":"2026-03-10T07:26:08.669180+0000","last_undegraded":"2026-03-10T07:26:08.669180+0000","last_fullsized":"2026-03-10T07:26:08.669180+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:11:38.631378+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.669113+0000","last_change":"2026-03-10T07:25:42.965054+0000","last_active":"2026-03-10T07:26:08.669113+0000","last_peered":"2026-03-10T07:26:08.669113+0000","last_clean":"2026-03-10T07:26:08.669113+0000","last_became_active":"2026-03-10T07:25:42.964850+0000","last_became_peered":"2026-03-10T07:25:42.964850+0000","last_unstale":"2026-03-10T07:26:08.669113+0000","last_undegraded":"2026-03-10T07:26:08.669113+0000","last_fullsized":"2026-03-10T07:26:08.669113+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:03:12.928825+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.4","version":"63'30","reported_seq":99,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:47.004721+0000","last_change":"2026-03-10T07:25:38.960669+0000","last_active":"2026-03-10T07:26:47.004721+0000","last_peered":"2026-03-10T07:26:47.004721+0000","last_clean":"2026-03-10T07:26:47.004721+0000","last_became_active":"2026-03-10T07:25:38.960502+0000","last_became_peered":"2026-03-10T07:25:38.960502+0000","last_unstale":"2026-03-10T07:26:47.004721+0000","last_undegraded":"2026-03-10T07:26:47.004721+0000","last_fullsized":"2026-03-10T07:26:47.004721+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":30,"log_dups_size":0,"ondisk_log_size":30,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:30:30.665007+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":51,"num_read_kb":36,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.5","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.037002+0000","last_change":"2026-03-10T07:25:36.948081+0000","last_active":"2026-03-10T07:26:09.037002+0000","last_peered":"2026-03-10T07:26:09.037002+0000","last_clean":"2026-03-10T07:26:09.037002+0000","last_became_active":"2026-03-10T07:25:36.947999+0000","last_became_peered":"2026-03-10T07:25:36.947999+0000","last_unstale":"2026-03-10T07:26:09.037002+0000","last_undegraded":"2026-03-10T07:26:09.037002+0000","last_fullsized":"2026-03-10T07:26:09.037002+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:00:21.576970+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,4],"acting":[7,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.589081+0000","last_change":"2026-03-10T07:25:40.958648+0000","last_active":"2026-03-10T07:26:08.589081+0000","last_peered":"2026-03-10T07:26:08.589081+0000","last_clean":"2026-03-10T07:26:08.589081+0000","last_became_active":"2026-03-10T07:25:40.958332+0000","last_became_peered":"2026-03-10T07:25:40.958332+0000","last_unstale":"2026-03-10T07:26:08.589081+0000","last_undegraded":"2026-03-10T07:26:08.589081+0000","last_fullsized":"2026-03-10T07:26:08.589081+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:03:25.286525+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.644209+0000","last_change":"2026-03-10T07:25:43.200748+0000","last_active":"2026-03-10T07:26:08.644209+0000","last_peered":"2026-03-10T07:26:08.644209+0000","last_clean":"2026-03-10T07:26:08.644209+0000","last_became_active":"2026-03-10T07:25:43.200458+0000","last_became_peered":"2026-03-10T07:25:43.200458+0000","last_unstale":"2026-03-10T07:26:08.644209+0000","last_undegraded":"2026-03-10T07:26:08.644209+0000","last_fullsized":"2026-03-10T07:26:08.644209+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:02:22.146589+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","version":"62'16","reported_seq":71,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:47.003969+0000","last_change":"2026-03-10T07:25:38.963876+0000","last_active":"2026-03-10T07:26:47.003969+0000","last_peered":"2026-03-10T07:26:47.003969+0000","last_clean":"2026-03-10T07:26:47.003969+0000","last_became_active":"2026-03-10T07:25:38.959289+0000","last_became_peered":"2026-03-10T07:25:38.959289+0000","last_unstale":"2026-03-10T07:26:47.003969+0000","last_undegraded":"2026-03-10T07:26:47.003969+0000","last_fullsized":"2026-03-10T07:26:47.003969+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:49:14.638829+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.4","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.644046+0000","last_change":"2026-03-10T07:25:36.943600+0000","last_active":"2026-03-10T07:26:08.644046+0000","last_peered":"2026-03-10T07:26:08.644046+0000","last_clean":"2026-03-10T07:26:08.644046+0000","last_became_active":"2026-03-10T07:25:36.943506+0000","last_became_peered":"2026-03-10T07:25:36.943506+0000","last_unstale":"2026-03-10T07:26:08.644046+0000","last_undegraded":"2026-03-10T07:26:08.644046+0000","last_fullsized":"2026-03-10T07:26:08.644046+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:49:52.148563+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0,7],"acting":[1,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.2","version":"64'2","reported_seq":40,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.644098+0000","last_change":"2026-03-10T07:25:46.013902+0000","last_active":"2026-03-10T07:26:08.644098+0000","last_peered":"2026-03-10T07:26:08.644098+0000","last_clean":"2026-03-10T07:26:08.644098+0000","last_became_active":"2026-03-10T07:25:39.942792+0000","last_became_peered":"2026-03-10T07:25:39.942792+0000","last_unstale":"2026-03-10T07:26:08.644098+0000","last_undegraded":"2026-03-10T07:26:08.644098+0000","last_fullsized":"2026-03-10T07:26:08.644098+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:38.919122+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:38.919122+0000","last_clean_scrub_stamp":"2026-03-10T07:25:38.919122+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:08:45.303294+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00038671099999999998,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.3","version":"63'11","reported_seq":55,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:47.004306+0000","last_change":"2026-03-10T07:25:40.960641+0000","last_active":"2026-03-10T07:26:47.004306+0000","last_peered":"2026-03-10T07:26:47.004306+0000","last_clean":"2026-03-10T07:26:47.004306+0000","last_became_active":"2026-03-10T07:25:40.960493+0000","last_became_peered":"2026-03-10T07:25:40.960493+0000","last_unstale":"2026-03-10T07:26:47.004306+0000","last_undegraded":"2026-03-10T07:26:47.004306+0000","last_fullsized":"2026-03-10T07:26:47.004306+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:31:14.671878+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.639788+0000","last_change":"2026-03-10T07:25:42.965443+0000","last_active":"2026-03-10T07:26:09.639788+0000","last_peered":"2026-03-10T07:26:09.639788+0000","last_clean":"2026-03-10T07:26:09.639788+0000","last_became_active":"2026-03-10T07:25:42.965119+0000","last_became_peered":"2026-03-10T07:25:42.965119+0000","last_unstale":"2026-03-10T07:26:09.639788+0000","last_undegraded":"2026-03-10T07:26:09.639788+0000","last_fullsized":"2026-03-10T07:26:09.639788+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:48:03.889308+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.3","version":"62'19","reported_seq":69,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.668934+0000","last_change":"2026-03-10T07:25:38.965676+0000","last_active":"2026-03-10T07:26:08.668934+0000","last_peered":"2026-03-10T07:26:08.668934+0000","last_clean":"2026-03-10T07:26:08.668934+0000","last_became_active":"2026-03-10T07:25:38.965566+0000","last_became_peered":"2026-03-10T07:25:38.965566+0000","last_unstale":"2026-03-10T07:26:08.668934+0000","last_undegraded":"2026-03-10T07:26:08.668934+0000","last_fullsized":"2026-03-10T07:26:08.668934+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:49:42.698967+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.2","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537299+0000","last_change":"2026-03-10T07:25:36.953145+0000","last_active":"2026-03-10T07:26:08.537299+0000","last_peered":"2026-03-10T07:26:08.537299+0000","last_clean":"2026-03-10T07:26:08.537299+0000","last_became_active":"2026-03-10T07:25:36.952663+0000","last_became_peered":"2026-03-10T07:25:36.952663+0000","last_unstale":"2026-03-10T07:26:08.537299+0000","last_undegraded":"2026-03-10T07:26:08.537299+0000","last_fullsized":"2026-03-10T07:26:08.537299+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:54:36.535214+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.639707+0000","last_change":"2026-03-10T07:25:40.969529+0000","last_active":"2026-03-10T07:26:09.639707+0000","last_peered":"2026-03-10T07:26:09.639707+0000","last_clean":"2026-03-10T07:26:09.639707+0000","last_became_active":"2026-03-10T07:25:40.969355+0000","last_became_peered":"2026-03-10T07:25:40.969355+0000","last_unstale":"2026-03-10T07:26:09.639707+0000","last_undegraded":"2026-03-10T07:26:09.639707+0000","last_fullsized":"2026-03-10T07:26:09.639707+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:08:47.424379+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.6","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536492+0000","last_change":"2026-03-10T07:25:42.974239+0000","last_active":"2026-03-10T07:26:08.536492+0000","last_peered":"2026-03-10T07:26:08.536492+0000","last_clean":"2026-03-10T07:26:08.536492+0000","last_became_active":"2026-03-10T07:25:42.974128+0000","last_became_peered":"2026-03-10T07:25:42.974128+0000","last_unstale":"2026-03-10T07:26:08.536492+0000","last_undegraded":"2026-03-10T07:26:08.536492+0000","last_fullsized":"2026-03-10T07:26:08.536492+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:43:22.280218+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.0","version":"62'18","reported_seq":65,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.644168+0000","last_change":"2026-03-10T07:25:38.951573+0000","last_active":"2026-03-10T07:26:08.644168+0000","last_peered":"2026-03-10T07:26:08.644168+0000","last_clean":"2026-03-10T07:26:08.644168+0000","last_became_active":"2026-03-10T07:25:38.951483+0000","last_became_peered":"2026-03-10T07:25:38.951483+0000","last_unstale":"2026-03-10T07:26:08.644168+0000","last_undegraded":"2026-03-10T07:26:08.644168+0000","last_fullsized":"2026-03-10T07:26:08.644168+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:51:32.866863+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.1","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.533380+0000","last_change":"2026-03-10T07:25:36.932854+0000","last_active":"2026-03-10T07:26:08.533380+0000","last_peered":"2026-03-10T07:26:08.533380+0000","last_clean":"2026-03-10T07:26:08.533380+0000","last_became_active":"2026-03-10T07:25:36.932505+0000","last_became_peered":"2026-03-10T07:25:36.932505+0000","last_unstale":"2026-03-10T07:26:08.533380+0000","last_undegraded":"2026-03-10T07:26:08.533380+0000","last_fullsized":"2026-03-10T07:26:08.533380+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:26:59.726222+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.533449+0000","last_change":"2026-03-10T07:25:40.970950+0000","last_active":"2026-03-10T07:26:08.533449+0000","last_peered":"2026-03-10T07:26:08.533449+0000","last_clean":"2026-03-10T07:26:08.533449+0000","last_became_active":"2026-03-10T07:25:40.970698+0000","last_became_peered":"2026-03-10T07:25:40.970698+0000","last_unstale":"2026-03-10T07:26:08.533449+0000","last_undegraded":"2026-03-10T07:26:08.533449+0000","last_fullsized":"2026-03-10T07:26:08.533449+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:19:57.146422+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.5","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.039897+0000","last_change":"2026-03-10T07:25:43.200928+0000","last_active":"2026-03-10T07:26:09.039897+0000","last_peered":"2026-03-10T07:26:09.039897+0000","last_clean":"2026-03-10T07:26:09.039897+0000","last_became_active":"2026-03-10T07:25:43.200702+0000","last_became_peered":"2026-03-10T07:25:43.200702+0000","last_unstale":"2026-03-10T07:26:09.039897+0000","last_undegraded":"2026-03-10T07:26:09.039897+0000","last_fullsized":"2026-03-10T07:26:09.039897+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:24:17.176040+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1","version":"62'14","reported_seq":54,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.639921+0000","last_change":"2026-03-10T07:25:38.961264+0000","last_active":"2026-03-10T07:26:09.639921+0000","last_peered":"2026-03-10T07:26:09.639921+0000","last_clean":"2026-03-10T07:26:09.639921+0000","last_became_active":"2026-03-10T07:25:38.961139+0000","last_became_peered":"2026-03-10T07:25:38.961139+0000","last_unstale":"2026-03-10T07:26:09.639921+0000","last_undegraded":"2026-03-10T07:26:09.639921+0000","last_fullsized":"2026-03-10T07:26:09.639921+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:17:52.133465+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.037048+0000","last_change":"2026-03-10T07:25:36.949403+0000","last_active":"2026-03-10T07:26:09.037048+0000","last_peered":"2026-03-10T07:26:09.037048+0000","last_clean":"2026-03-10T07:26:09.037048+0000","last_became_active":"2026-03-10T07:25:36.949202+0000","last_became_peered":"2026-03-10T07:25:36.949202+0000","last_unstale":"2026-03-10T07:26:09.037048+0000","last_undegraded":"2026-03-10T07:26:09.037048+0000","last_fullsized":"2026-03-10T07:26:09.037048+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:28:17.458227+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.7","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537518+0000","last_change":"2026-03-10T07:25:40.956067+0000","last_active":"2026-03-10T07:26:08.537518+0000","last_peered":"2026-03-10T07:26:08.537518+0000","last_clean":"2026-03-10T07:26:08.537518+0000","last_became_active":"2026-03-10T07:25:40.955956+0000","last_became_peered":"2026-03-10T07:25:40.955956+0000","last_unstale":"2026-03-10T07:26:08.537518+0000","last_undegraded":"2026-03-10T07:26:08.537518+0000","last_fullsized":"2026-03-10T07:26:08.537518+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:58:58.918939+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.643951+0000","last_change":"2026-03-10T07:25:42.980305+0000","last_active":"2026-03-10T07:26:08.643951+0000","last_peered":"2026-03-10T07:26:08.643951+0000","last_clean":"2026-03-10T07:26:08.643951+0000","last_became_active":"2026-03-10T07:25:42.980211+0000","last_became_peered":"2026-03-10T07:25:42.980211+0000","last_unstale":"2026-03-10T07:26:08.643951+0000","last_undegraded":"2026-03-10T07:26:08.643951+0000","last_fullsized":"2026-03-10T07:26:08.643951+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:31:00.382876+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.2","version":"62'10","reported_seq":48,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537235+0000","last_change":"2026-03-10T07:25:38.958950+0000","last_active":"2026-03-10T07:26:08.537235+0000","last_peered":"2026-03-10T07:26:08.537235+0000","last_clean":"2026-03-10T07:26:08.537235+0000","last_became_active":"2026-03-10T07:25:38.955334+0000","last_became_peered":"2026-03-10T07:25:38.955334+0000","last_unstale":"2026-03-10T07:26:08.537235+0000","last_undegraded":"2026-03-10T07:26:08.537235+0000","last_fullsized":"2026-03-10T07:26:08.537235+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:00:09.253457+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.3","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536386+0000","last_change":"2026-03-10T07:25:36.947198+0000","last_active":"2026-03-10T07:26:08.536386+0000","last_peered":"2026-03-10T07:26:08.536386+0000","last_clean":"2026-03-10T07:26:08.536386+0000","last_became_active":"2026-03-10T07:25:36.946061+0000","last_became_peered":"2026-03-10T07:25:36.946061+0000","last_unstale":"2026-03-10T07:26:08.536386+0000","last_undegraded":"2026-03-10T07:26:08.536386+0000","last_fullsized":"2026-03-10T07:26:08.536386+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:01:05.581883+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2,7],"acting":[5,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"1.0","version":"68'39","reported_seq":72,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:10.646635+0000","last_change":"2026-03-10T07:25:16.846892+0000","last_active":"2026-03-10T07:26:10.646635+0000","last_peered":"2026-03-10T07:26:10.646635+0000","last_clean":"2026-03-10T07:26:10.646635+0000","last_became_active":"2026-03-10T07:25:16.841838+0000","last_became_peered":"2026-03-10T07:25:16.841838+0000","last_unstale":"2026-03-10T07:26:10.646635+0000","last_undegraded":"2026-03-10T07:26:10.646635+0000","last_fullsized":"2026-03-10T07:26:10.646635+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":20,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:22:28.664661+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:22:28.664661+0000","last_clean_scrub_stamp":"2026-03-10T07:22:28.664661+0000","objects_scrubbed":0,"log_size":39,"log_dups_size":0,"ondisk_log_size":39,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:40:47.538404+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.4","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.038864+0000","last_change":"2026-03-10T07:25:40.955630+0000","last_active":"2026-03-10T07:26:09.038864+0000","last_peered":"2026-03-10T07:26:09.038864+0000","last_clean":"2026-03-10T07:26:09.038864+0000","last_became_active":"2026-03-10T07:25:40.955539+0000","last_became_peered":"2026-03-10T07:25:40.955539+0000","last_unstale":"2026-03-10T07:26:09.038864+0000","last_undegraded":"2026-03-10T07:26:09.038864+0000","last_fullsized":"2026-03-10T07:26:09.038864+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:43:05.712356+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.7","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536341+0000","last_change":"2026-03-10T07:25:42.973285+0000","last_active":"2026-03-10T07:26:08.536341+0000","last_peered":"2026-03-10T07:26:08.536341+0000","last_clean":"2026-03-10T07:26:08.536341+0000","last_became_active":"2026-03-10T07:25:42.973174+0000","last_became_peered":"2026-03-10T07:25:42.973174+0000","last_unstale":"2026-03-10T07:26:08.536341+0000","last_undegraded":"2026-03-10T07:26:08.536341+0000","last_fullsized":"2026-03-10T07:26:08.536341+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:59:05.440102+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.d","version":"62'17","reported_seq":61,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.038025+0000","last_change":"2026-03-10T07:25:38.949174+0000","last_active":"2026-03-10T07:26:09.038025+0000","last_peered":"2026-03-10T07:26:09.038025+0000","last_clean":"2026-03-10T07:26:09.038025+0000","last_became_active":"2026-03-10T07:25:38.949040+0000","last_became_peered":"2026-03-10T07:25:38.949040+0000","last_unstale":"2026-03-10T07:26:09.038025+0000","last_undegraded":"2026-03-10T07:26:09.038025+0000","last_fullsized":"2026-03-10T07:26:09.038025+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:29:52.963218+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.c","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.533535+0000","last_change":"2026-03-10T07:25:36.936648+0000","last_active":"2026-03-10T07:26:08.533535+0000","last_peered":"2026-03-10T07:26:08.533535+0000","last_clean":"2026-03-10T07:26:08.533535+0000","last_became_active":"2026-03-10T07:25:36.936290+0000","last_became_peered":"2026-03-10T07:25:36.936290+0000","last_unstale":"2026-03-10T07:26:08.533535+0000","last_undegraded":"2026-03-10T07:26:08.533535+0000","last_fullsized":"2026-03-10T07:26:08.533535+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:46:33.718844+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,0],"acting":[2,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.b","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.533540+0000","last_change":"2026-03-10T07:25:40.970881+0000","last_active":"2026-03-10T07:26:08.533540+0000","last_peered":"2026-03-10T07:26:08.533540+0000","last_clean":"2026-03-10T07:26:08.533540+0000","last_became_active":"2026-03-10T07:25:40.970590+0000","last_became_peered":"2026-03-10T07:25:40.970590+0000","last_unstale":"2026-03-10T07:26:08.533540+0000","last_undegraded":"2026-03-10T07:26:08.533540+0000","last_fullsized":"2026-03-10T07:26:08.533540+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:28:46.961059+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8","version":"62'1","reported_seq":26,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.038095+0000","last_change":"2026-03-10T07:25:42.975336+0000","last_active":"2026-03-10T07:26:09.038095+0000","last_peered":"2026-03-10T07:26:09.038095+0000","last_clean":"2026-03-10T07:26:09.038095+0000","last_became_active":"2026-03-10T07:25:42.975159+0000","last_became_peered":"2026-03-10T07:25:42.975159+0000","last_unstale":"2026-03-10T07:26:09.038095+0000","last_undegraded":"2026-03-10T07:26:09.038095+0000","last_fullsized":"2026-03-10T07:26:09.038095+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:18:25.609708+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.c","version":"62'10","reported_seq":48,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537338+0000","last_change":"2026-03-10T07:25:38.949184+0000","last_active":"2026-03-10T07:26:08.537338+0000","last_peered":"2026-03-10T07:26:08.537338+0000","last_clean":"2026-03-10T07:26:08.537338+0000","last_became_active":"2026-03-10T07:25:38.949098+0000","last_became_peered":"2026-03-10T07:25:38.949098+0000","last_unstale":"2026-03-10T07:26:08.537338+0000","last_undegraded":"2026-03-10T07:26:08.537338+0000","last_fullsized":"2026-03-10T07:26:08.537338+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:28:15.649500+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.d","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.643955+0000","last_change":"2026-03-10T07:25:36.935418+0000","last_active":"2026-03-10T07:26:08.643955+0000","last_peered":"2026-03-10T07:26:08.643955+0000","last_clean":"2026-03-10T07:26:08.643955+0000","last_became_active":"2026-03-10T07:25:36.935317+0000","last_became_peered":"2026-03-10T07:25:36.935317+0000","last_unstale":"2026-03-10T07:26:08.643955+0000","last_undegraded":"2026-03-10T07:26:08.643955+0000","last_fullsized":"2026-03-10T07:26:08.643955+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:17:20.453634+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,3],"acting":[1,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.533932+0000","last_change":"2026-03-10T07:25:40.943934+0000","last_active":"2026-03-10T07:26:08.533932+0000","last_peered":"2026-03-10T07:26:08.533932+0000","last_clean":"2026-03-10T07:26:08.533932+0000","last_became_active":"2026-03-10T07:25:40.943855+0000","last_became_peered":"2026-03-10T07:25:40.943855+0000","last_unstale":"2026-03-10T07:26:08.533932+0000","last_undegraded":"2026-03-10T07:26:08.533932+0000","last_fullsized":"2026-03-10T07:26:08.533932+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:34:39.624347+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.639038+0000","last_change":"2026-03-10T07:25:42.972089+0000","last_active":"2026-03-10T07:26:09.639038+0000","last_peered":"2026-03-10T07:26:09.639038+0000","last_clean":"2026-03-10T07:26:09.639038+0000","last_became_active":"2026-03-10T07:25:42.971881+0000","last_became_peered":"2026-03-10T07:25:42.971881+0000","last_unstale":"2026-03-10T07:26:09.639038+0000","last_undegraded":"2026-03-10T07:26:09.639038+0000","last_fullsized":"2026-03-10T07:26:09.639038+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:32:11.777461+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.f","version":"62'15","reported_seq":58,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.038143+0000","last_change":"2026-03-10T07:25:38.961667+0000","last_active":"2026-03-10T07:26:09.038143+0000","last_peered":"2026-03-10T07:26:09.038143+0000","last_clean":"2026-03-10T07:26:09.038143+0000","last_became_active":"2026-03-10T07:25:38.961190+0000","last_became_peered":"2026-03-10T07:25:38.961190+0000","last_unstale":"2026-03-10T07:26:09.038143+0000","last_undegraded":"2026-03-10T07:26:09.038143+0000","last_fullsized":"2026-03-10T07:26:09.038143+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:48:26.032255+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.e","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.533338+0000","last_change":"2026-03-10T07:25:36.943492+0000","last_active":"2026-03-10T07:26:08.533338+0000","last_peered":"2026-03-10T07:26:08.533338+0000","last_clean":"2026-03-10T07:26:08.533338+0000","last_became_active":"2026-03-10T07:25:36.943414+0000","last_became_peered":"2026-03-10T07:25:36.943414+0000","last_unstale":"2026-03-10T07:26:08.533338+0000","last_undegraded":"2026-03-10T07:26:08.533338+0000","last_fullsized":"2026-03-10T07:26:08.533338+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:21:45.284904+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,7],"acting":[2,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.9","version":"63'11","reported_seq":55,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:47.004306+0000","last_change":"2026-03-10T07:25:40.967840+0000","last_active":"2026-03-10T07:26:47.004306+0000","last_peered":"2026-03-10T07:26:47.004306+0000","last_clean":"2026-03-10T07:26:47.004306+0000","last_became_active":"2026-03-10T07:25:40.966975+0000","last_became_peered":"2026-03-10T07:25:40.966975+0000","last_unstale":"2026-03-10T07:26:47.004306+0000","last_undegraded":"2026-03-10T07:26:47.004306+0000","last_fullsized":"2026-03-10T07:26:47.004306+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:08:03.362729+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.a","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536036+0000","last_change":"2026-03-10T07:25:43.203990+0000","last_active":"2026-03-10T07:26:08.536036+0000","last_peered":"2026-03-10T07:26:08.536036+0000","last_clean":"2026-03-10T07:26:08.536036+0000","last_became_active":"2026-03-10T07:25:43.203824+0000","last_became_peered":"2026-03-10T07:25:43.203824+0000","last_unstale":"2026-03-10T07:26:08.536036+0000","last_undegraded":"2026-03-10T07:26:08.536036+0000","last_fullsized":"2026-03-10T07:26:08.536036+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:31:33.451326+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.e","version":"62'11","reported_seq":52,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.039162+0000","last_change":"2026-03-10T07:25:38.961478+0000","last_active":"2026-03-10T07:26:09.039162+0000","last_peered":"2026-03-10T07:26:09.039162+0000","last_clean":"2026-03-10T07:26:09.039162+0000","last_became_active":"2026-03-10T07:25:38.960621+0000","last_became_peered":"2026-03-10T07:25:38.960621+0000","last_unstale":"2026-03-10T07:26:09.039162+0000","last_undegraded":"2026-03-10T07:26:09.039162+0000","last_fullsized":"2026-03-10T07:26:09.039162+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:08:58.671231+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.f","version":"55'2","reported_seq":53,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.669467+0000","last_change":"2026-03-10T07:25:36.944295+0000","last_active":"2026-03-10T07:26:08.669467+0000","last_peered":"2026-03-10T07:26:08.669467+0000","last_clean":"2026-03-10T07:26:08.669467+0000","last_became_active":"2026-03-10T07:25:36.943976+0000","last_became_peered":"2026-03-10T07:25:36.943976+0000","last_unstale":"2026-03-10T07:26:08.669467+0000","last_undegraded":"2026-03-10T07:26:08.669467+0000","last_fullsized":"2026-03-10T07:26:08.669467+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:33:28.098706+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":92,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":14,"num_read_kb":14,"num_write":4,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,7],"acting":[4,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.8","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.533891+0000","last_change":"2026-03-10T07:25:40.955113+0000","last_active":"2026-03-10T07:26:08.533891+0000","last_peered":"2026-03-10T07:26:08.533891+0000","last_clean":"2026-03-10T07:26:08.533891+0000","last_became_active":"2026-03-10T07:25:40.955022+0000","last_became_peered":"2026-03-10T07:25:40.955022+0000","last_unstale":"2026-03-10T07:26:08.533891+0000","last_undegraded":"2026-03-10T07:26:08.533891+0000","last_fullsized":"2026-03-10T07:26:08.533891+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:23:59.987139+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536484+0000","last_change":"2026-03-10T07:25:42.975466+0000","last_active":"2026-03-10T07:26:08.536484+0000","last_peered":"2026-03-10T07:26:08.536484+0000","last_clean":"2026-03-10T07:26:08.536484+0000","last_became_active":"2026-03-10T07:25:42.975100+0000","last_became_peered":"2026-03-10T07:25:42.975100+0000","last_unstale":"2026-03-10T07:26:08.536484+0000","last_undegraded":"2026-03-10T07:26:08.536484+0000","last_fullsized":"2026-03-10T07:26:08.536484+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:18:39.078295+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.11","version":"62'11","reported_seq":52,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.037906+0000","last_change":"2026-03-10T07:25:38.961581+0000","last_active":"2026-03-10T07:26:09.037906+0000","last_peered":"2026-03-10T07:26:09.037906+0000","last_clean":"2026-03-10T07:26:09.037906+0000","last_became_active":"2026-03-10T07:25:38.961037+0000","last_became_peered":"2026-03-10T07:25:38.961037+0000","last_unstale":"2026-03-10T07:26:09.037906+0000","last_undegraded":"2026-03-10T07:26:09.037906+0000","last_fullsized":"2026-03-10T07:26:09.037906+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:20:28.812410+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.10","version":"55'1","reported_seq":45,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.533602+0000","last_change":"2026-03-10T07:25:36.934795+0000","last_active":"2026-03-10T07:26:08.533602+0000","last_peered":"2026-03-10T07:26:08.533602+0000","last_clean":"2026-03-10T07:26:08.533602+0000","last_became_active":"2026-03-10T07:25:36.932716+0000","last_became_peered":"2026-03-10T07:25:36.932716+0000","last_unstale":"2026-03-10T07:26:08.533602+0000","last_undegraded":"2026-03-10T07:26:08.533602+0000","last_fullsized":"2026-03-10T07:26:08.533602+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:44:53.657616+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":436,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,0],"acting":[2,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.17","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537606+0000","last_change":"2026-03-10T07:25:40.949370+0000","last_active":"2026-03-10T07:26:08.537606+0000","last_peered":"2026-03-10T07:26:08.537606+0000","last_clean":"2026-03-10T07:26:08.537606+0000","last_became_active":"2026-03-10T07:25:40.949240+0000","last_became_peered":"2026-03-10T07:25:40.949240+0000","last_unstale":"2026-03-10T07:26:08.537606+0000","last_undegraded":"2026-03-10T07:26:08.537606+0000","last_fullsized":"2026-03-10T07:26:08.537606+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:23:09.954288+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.533634+0000","last_change":"2026-03-10T07:25:42.973305+0000","last_active":"2026-03-10T07:26:08.533634+0000","last_peered":"2026-03-10T07:26:08.533634+0000","last_clean":"2026-03-10T07:26:08.533634+0000","last_became_active":"2026-03-10T07:25:42.973188+0000","last_became_peered":"2026-03-10T07:25:42.973188+0000","last_unstale":"2026-03-10T07:26:08.533634+0000","last_undegraded":"2026-03-10T07:26:08.533634+0000","last_fullsized":"2026-03-10T07:26:08.533634+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:41:02.387664+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.10","version":"62'4","reported_seq":39,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.588703+0000","last_change":"2026-03-10T07:25:38.957721+0000","last_active":"2026-03-10T07:26:08.588703+0000","last_peered":"2026-03-10T07:26:08.588703+0000","last_clean":"2026-03-10T07:26:08.588703+0000","last_became_active":"2026-03-10T07:25:38.957530+0000","last_became_peered":"2026-03-10T07:25:38.957530+0000","last_unstale":"2026-03-10T07:26:08.588703+0000","last_undegraded":"2026-03-10T07:26:08.588703+0000","last_fullsized":"2026-03-10T07:26:08.588703+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:57:51.095488+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.11","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.588679+0000","last_change":"2026-03-10T07:25:36.933671+0000","last_active":"2026-03-10T07:26:08.588679+0000","last_peered":"2026-03-10T07:26:08.588679+0000","last_clean":"2026-03-10T07:26:08.588679+0000","last_became_active":"2026-03-10T07:25:36.933544+0000","last_became_peered":"2026-03-10T07:25:36.933544+0000","last_unstale":"2026-03-10T07:26:08.588679+0000","last_undegraded":"2026-03-10T07:26:08.588679+0000","last_fullsized":"2026-03-10T07:26:08.588679+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:36:03.268805+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537656+0000","last_change":"2026-03-10T07:25:40.952470+0000","last_active":"2026-03-10T07:26:08.537656+0000","last_peered":"2026-03-10T07:26:08.537656+0000","last_clean":"2026-03-10T07:26:08.537656+0000","last_became_active":"2026-03-10T07:25:40.950383+0000","last_became_peered":"2026-03-10T07:25:40.950383+0000","last_unstale":"2026-03-10T07:26:08.537656+0000","last_undegraded":"2026-03-10T07:26:08.537656+0000","last_fullsized":"2026-03-10T07:26:08.537656+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:58:11.138488+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.039967+0000","last_change":"2026-03-10T07:25:43.201671+0000","last_active":"2026-03-10T07:26:09.039967+0000","last_peered":"2026-03-10T07:26:09.039967+0000","last_clean":"2026-03-10T07:26:09.039967+0000","last_became_active":"2026-03-10T07:25:43.201330+0000","last_became_peered":"2026-03-10T07:25:43.201330+0000","last_unstale":"2026-03-10T07:26:09.039967+0000","last_undegraded":"2026-03-10T07:26:09.039967+0000","last_fullsized":"2026-03-10T07:26:09.039967+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:33:00.205062+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.13","version":"62'11","reported_seq":52,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.039039+0000","last_change":"2026-03-10T07:25:38.961975+0000","last_active":"2026-03-10T07:26:09.039039+0000","last_peered":"2026-03-10T07:26:09.039039+0000","last_clean":"2026-03-10T07:26:09.039039+0000","last_became_active":"2026-03-10T07:25:38.961334+0000","last_became_peered":"2026-03-10T07:25:38.961334+0000","last_unstale":"2026-03-10T07:26:09.039039+0000","last_undegraded":"2026-03-10T07:26:09.039039+0000","last_fullsized":"2026-03-10T07:26:09.039039+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:12:18.306521+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.12","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536714+0000","last_change":"2026-03-10T07:25:36.946853+0000","last_active":"2026-03-10T07:26:08.536714+0000","last_peered":"2026-03-10T07:26:08.536714+0000","last_clean":"2026-03-10T07:26:08.536714+0000","last_became_active":"2026-03-10T07:25:36.945627+0000","last_became_peered":"2026-03-10T07:25:36.945627+0000","last_unstale":"2026-03-10T07:26:08.536714+0000","last_undegraded":"2026-03-10T07:26:08.536714+0000","last_fullsized":"2026-03-10T07:26:08.536714+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:54:56.690638+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,7],"acting":[5,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.15","version":"63'11","reported_seq":55,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:47.003882+0000","last_change":"2026-03-10T07:25:40.956343+0000","last_active":"2026-03-10T07:26:47.003882+0000","last_peered":"2026-03-10T07:26:47.003882+0000","last_clean":"2026-03-10T07:26:47.003882+0000","last_became_active":"2026-03-10T07:25:40.955608+0000","last_became_peered":"2026-03-10T07:25:40.955608+0000","last_unstale":"2026-03-10T07:26:47.003882+0000","last_undegraded":"2026-03-10T07:26:47.003882+0000","last_fullsized":"2026-03-10T07:26:47.003882+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:50:28.121766+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.639301+0000","last_change":"2026-03-10T07:25:42.966964+0000","last_active":"2026-03-10T07:26:09.639301+0000","last_peered":"2026-03-10T07:26:09.639301+0000","last_clean":"2026-03-10T07:26:09.639301+0000","last_became_active":"2026-03-10T07:25:42.966831+0000","last_became_peered":"2026-03-10T07:25:42.966831+0000","last_unstale":"2026-03-10T07:26:09.639301+0000","last_undegraded":"2026-03-10T07:26:09.639301+0000","last_fullsized":"2026-03-10T07:26:09.639301+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:51:07.331594+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.12","version":"62'9","reported_seq":49,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.639524+0000","last_change":"2026-03-10T07:25:38.935508+0000","last_active":"2026-03-10T07:26:09.639524+0000","last_peered":"2026-03-10T07:26:09.639524+0000","last_clean":"2026-03-10T07:26:09.639524+0000","last_became_active":"2026-03-10T07:25:38.935404+0000","last_became_peered":"2026-03-10T07:25:38.935404+0000","last_unstale":"2026-03-10T07:26:09.639524+0000","last_undegraded":"2026-03-10T07:26:09.639524+0000","last_fullsized":"2026-03-10T07:26:09.639524+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:59:49.592651+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.13","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.638808+0000","last_change":"2026-03-10T07:25:36.935149+0000","last_active":"2026-03-10T07:26:09.638808+0000","last_peered":"2026-03-10T07:26:09.638808+0000","last_clean":"2026-03-10T07:26:09.638808+0000","last_became_active":"2026-03-10T07:25:36.935007+0000","last_became_peered":"2026-03-10T07:25:36.935007+0000","last_unstale":"2026-03-10T07:26:09.638808+0000","last_undegraded":"2026-03-10T07:26:09.638808+0000","last_fullsized":"2026-03-10T07:26:09.638808+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:54:49.458298+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.14","version":"63'11","reported_seq":55,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:47.004816+0000","last_change":"2026-03-10T07:25:40.949838+0000","last_active":"2026-03-10T07:26:47.004816+0000","last_peered":"2026-03-10T07:26:47.004816+0000","last_clean":"2026-03-10T07:26:47.004816+0000","last_became_active":"2026-03-10T07:25:40.949708+0000","last_became_peered":"2026-03-10T07:25:40.949708+0000","last_unstale":"2026-03-10T07:26:47.004816+0000","last_undegraded":"2026-03-10T07:26:47.004816+0000","last_fullsized":"2026-03-10T07:26:47.004816+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:07:59.191575+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.669047+0000","last_change":"2026-03-10T07:25:42.973849+0000","last_active":"2026-03-10T07:26:08.669047+0000","last_peered":"2026-03-10T07:26:08.669047+0000","last_clean":"2026-03-10T07:26:08.669047+0000","last_became_active":"2026-03-10T07:25:42.973676+0000","last_became_peered":"2026-03-10T07:25:42.973676+0000","last_unstale":"2026-03-10T07:26:08.669047+0000","last_undegraded":"2026-03-10T07:26:08.669047+0000","last_fullsized":"2026-03-10T07:26:08.669047+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:40:38.682029+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.15","version":"62'9","reported_seq":49,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.037847+0000","last_change":"2026-03-10T07:25:38.961755+0000","last_active":"2026-03-10T07:26:09.037847+0000","last_peered":"2026-03-10T07:26:09.037847+0000","last_clean":"2026-03-10T07:26:09.037847+0000","last_became_active":"2026-03-10T07:25:38.960755+0000","last_became_peered":"2026-03-10T07:25:38.960755+0000","last_unstale":"2026-03-10T07:26:09.037847+0000","last_undegraded":"2026-03-10T07:26:09.037847+0000","last_fullsized":"2026-03-10T07:26:09.037847+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:06:44.905997+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.14","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.588630+0000","last_change":"2026-03-10T07:25:36.937727+0000","last_active":"2026-03-10T07:26:08.588630+0000","last_peered":"2026-03-10T07:26:08.588630+0000","last_clean":"2026-03-10T07:26:08.588630+0000","last_became_active":"2026-03-10T07:25:36.936312+0000","last_became_peered":"2026-03-10T07:25:36.936312+0000","last_unstale":"2026-03-10T07:26:08.588630+0000","last_undegraded":"2026-03-10T07:26:08.588630+0000","last_fullsized":"2026-03-10T07:26:08.588630+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:11:40.468598+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,3,5],"acting":[6,3,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536825+0000","last_change":"2026-03-10T07:25:40.955794+0000","last_active":"2026-03-10T07:26:08.536825+0000","last_peered":"2026-03-10T07:26:08.536825+0000","last_clean":"2026-03-10T07:26:08.536825+0000","last_became_active":"2026-03-10T07:25:40.955671+0000","last_became_peered":"2026-03-10T07:25:40.955671+0000","last_unstale":"2026-03-10T07:26:08.536825+0000","last_undegraded":"2026-03-10T07:26:08.536825+0000","last_fullsized":"2026-03-10T07:26:08.536825+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:41:30.295536+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.10","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.639073+0000","last_change":"2026-03-10T07:25:42.971300+0000","last_active":"2026-03-10T07:26:09.639073+0000","last_peered":"2026-03-10T07:26:09.639073+0000","last_clean":"2026-03-10T07:26:09.639073+0000","last_became_active":"2026-03-10T07:25:42.971206+0000","last_became_peered":"2026-03-10T07:25:42.971206+0000","last_unstale":"2026-03-10T07:26:09.639073+0000","last_undegraded":"2026-03-10T07:26:09.639073+0000","last_fullsized":"2026-03-10T07:26:09.639073+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:45:57.050615+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.14","version":"62'10","reported_seq":48,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.668522+0000","last_change":"2026-03-10T07:25:38.972133+0000","last_active":"2026-03-10T07:26:08.668522+0000","last_peered":"2026-03-10T07:26:08.668522+0000","last_clean":"2026-03-10T07:26:08.668522+0000","last_became_active":"2026-03-10T07:25:38.965840+0000","last_became_peered":"2026-03-10T07:25:38.965840+0000","last_unstale":"2026-03-10T07:26:08.668522+0000","last_undegraded":"2026-03-10T07:26:08.668522+0000","last_fullsized":"2026-03-10T07:26:08.668522+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:22:40.667148+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.15","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.643792+0000","last_change":"2026-03-10T07:25:36.936409+0000","last_active":"2026-03-10T07:26:08.643792+0000","last_peered":"2026-03-10T07:26:08.643792+0000","last_clean":"2026-03-10T07:26:08.643792+0000","last_became_active":"2026-03-10T07:25:36.936327+0000","last_became_peered":"2026-03-10T07:25:36.936327+0000","last_unstale":"2026-03-10T07:26:08.643792+0000","last_undegraded":"2026-03-10T07:26:08.643792+0000","last_fullsized":"2026-03-10T07:26:08.643792+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:37:57.866689+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.12","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.643764+0000","last_change":"2026-03-10T07:25:40.949966+0000","last_active":"2026-03-10T07:26:08.643764+0000","last_peered":"2026-03-10T07:26:08.643764+0000","last_clean":"2026-03-10T07:26:08.643764+0000","last_became_active":"2026-03-10T07:25:40.949869+0000","last_became_peered":"2026-03-10T07:25:40.949869+0000","last_unstale":"2026-03-10T07:26:08.643764+0000","last_undegraded":"2026-03-10T07:26:08.643764+0000","last_fullsized":"2026-03-10T07:26:08.643764+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:54:54.626867+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.11","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536587+0000","last_change":"2026-03-10T07:25:42.971021+0000","last_active":"2026-03-10T07:26:08.536587+0000","last_peered":"2026-03-10T07:26:08.536587+0000","last_clean":"2026-03-10T07:26:08.536587+0000","last_became_active":"2026-03-10T07:25:42.970947+0000","last_became_peered":"2026-03-10T07:25:42.970947+0000","last_unstale":"2026-03-10T07:26:08.536587+0000","last_undegraded":"2026-03-10T07:26:08.536587+0000","last_fullsized":"2026-03-10T07:26:08.536587+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:52:41.859306+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"62'6","reported_seq":42,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.640112+0000","last_change":"2026-03-10T07:25:38.944574+0000","last_active":"2026-03-10T07:26:09.640112+0000","last_peered":"2026-03-10T07:26:09.640112+0000","last_clean":"2026-03-10T07:26:09.640112+0000","last_became_active":"2026-03-10T07:25:38.944280+0000","last_became_peered":"2026-03-10T07:25:38.944280+0000","last_unstale":"2026-03-10T07:26:09.640112+0000","last_undegraded":"2026-03-10T07:26:09.640112+0000","last_fullsized":"2026-03-10T07:26:09.640112+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:35:54.118521+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.16","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536663+0000","last_change":"2026-03-10T07:25:36.953116+0000","last_active":"2026-03-10T07:26:08.536663+0000","last_peered":"2026-03-10T07:26:08.536663+0000","last_clean":"2026-03-10T07:26:08.536663+0000","last_became_active":"2026-03-10T07:25:36.952709+0000","last_became_peered":"2026-03-10T07:25:36.952709+0000","last_unstale":"2026-03-10T07:26:08.536663+0000","last_undegraded":"2026-03-10T07:26:08.536663+0000","last_fullsized":"2026-03-10T07:26:08.536663+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:21:09.373601+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,2],"acting":[5,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.11","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.588191+0000","last_change":"2026-03-10T07:25:40.965383+0000","last_active":"2026-03-10T07:26:08.588191+0000","last_peered":"2026-03-10T07:26:08.588191+0000","last_clean":"2026-03-10T07:26:08.588191+0000","last_became_active":"2026-03-10T07:25:40.965097+0000","last_became_peered":"2026-03-10T07:25:40.965097+0000","last_unstale":"2026-03-10T07:26:08.588191+0000","last_undegraded":"2026-03-10T07:26:08.588191+0000","last_fullsized":"2026-03-10T07:26:08.588191+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:39:01.892358+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.039808+0000","last_change":"2026-03-10T07:25:42.973181+0000","last_active":"2026-03-10T07:26:09.039808+0000","last_peered":"2026-03-10T07:26:09.039808+0000","last_clean":"2026-03-10T07:26:09.039808+0000","last_became_active":"2026-03-10T07:25:42.973082+0000","last_became_peered":"2026-03-10T07:25:42.973082+0000","last_unstale":"2026-03-10T07:26:09.039808+0000","last_undegraded":"2026-03-10T07:26:09.039808+0000","last_fullsized":"2026-03-10T07:26:09.039808+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:41:35.388945+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.16","version":"62'9","reported_seq":49,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536575+0000","last_change":"2026-03-10T07:25:38.963414+0000","last_active":"2026-03-10T07:26:08.536575+0000","last_peered":"2026-03-10T07:26:08.536575+0000","last_clean":"2026-03-10T07:26:08.536575+0000","last_became_active":"2026-03-10T07:25:38.963049+0000","last_became_peered":"2026-03-10T07:25:38.963049+0000","last_unstale":"2026-03-10T07:26:08.536575+0000","last_undegraded":"2026-03-10T07:26:08.536575+0000","last_fullsized":"2026-03-10T07:26:08.536575+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:43:44.102685+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.17","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.588563+0000","last_change":"2026-03-10T07:25:36.939528+0000","last_active":"2026-03-10T07:26:08.588563+0000","last_peered":"2026-03-10T07:26:08.588563+0000","last_clean":"2026-03-10T07:26:08.588563+0000","last_became_active":"2026-03-10T07:25:36.937955+0000","last_became_peered":"2026-03-10T07:25:36.937955+0000","last_unstale":"2026-03-10T07:26:08.588563+0000","last_undegraded":"2026-03-10T07:26:08.588563+0000","last_fullsized":"2026-03-10T07:26:08.588563+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:24:35.327401+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,2],"acting":[6,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.039744+0000","last_change":"2026-03-10T07:25:40.967257+0000","last_active":"2026-03-10T07:26:09.039744+0000","last_peered":"2026-03-10T07:26:09.039744+0000","last_clean":"2026-03-10T07:26:09.039744+0000","last_became_active":"2026-03-10T07:25:40.966757+0000","last_became_peered":"2026-03-10T07:25:40.966757+0000","last_unstale":"2026-03-10T07:26:09.039744+0000","last_undegraded":"2026-03-10T07:26:09.039744+0000","last_fullsized":"2026-03-10T07:26:09.039744+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:43:41.363337+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.13","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.536384+0000","last_change":"2026-03-10T07:25:43.202041+0000","last_active":"2026-03-10T07:26:08.536384+0000","last_peered":"2026-03-10T07:26:08.536384+0000","last_clean":"2026-03-10T07:26:08.536384+0000","last_became_active":"2026-03-10T07:25:43.201200+0000","last_became_peered":"2026-03-10T07:25:43.201200+0000","last_unstale":"2026-03-10T07:26:08.536384+0000","last_undegraded":"2026-03-10T07:26:08.536384+0000","last_fullsized":"2026-03-10T07:26:08.536384+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:51:50.787234+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.1c","version":"62'1","reported_seq":27,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.039268+0000","last_change":"2026-03-10T07:25:42.978185+0000","last_active":"2026-03-10T07:26:09.039268+0000","last_peered":"2026-03-10T07:26:09.039268+0000","last_clean":"2026-03-10T07:26:09.039268+0000","last_became_active":"2026-03-10T07:25:42.978084+0000","last_became_peered":"2026-03-10T07:25:42.978084+0000","last_unstale":"2026-03-10T07:26:09.039268+0000","last_undegraded":"2026-03-10T07:26:09.039268+0000","last_fullsized":"2026-03-10T07:26:09.039268+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:43:12.892532+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.19","version":"62'15","reported_seq":58,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.644282+0000","last_change":"2026-03-10T07:25:38.963537+0000","last_active":"2026-03-10T07:26:08.644282+0000","last_peered":"2026-03-10T07:26:08.644282+0000","last_clean":"2026-03-10T07:26:08.644282+0000","last_became_active":"2026-03-10T07:25:38.963373+0000","last_became_peered":"2026-03-10T07:25:38.963373+0000","last_unstale":"2026-03-10T07:26:08.644282+0000","last_undegraded":"2026-03-10T07:26:08.644282+0000","last_fullsized":"2026-03-10T07:26:08.644282+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:23:13.871434+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.18","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537259+0000","last_change":"2026-03-10T07:25:36.946640+0000","last_active":"2026-03-10T07:26:08.537259+0000","last_peered":"2026-03-10T07:26:08.537259+0000","last_clean":"2026-03-10T07:26:08.537259+0000","last_became_active":"2026-03-10T07:25:36.945831+0000","last_became_peered":"2026-03-10T07:25:36.945831+0000","last_unstale":"2026-03-10T07:26:08.537259+0000","last_undegraded":"2026-03-10T07:26:08.537259+0000","last_fullsized":"2026-03-10T07:26:08.537259+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:27:06.610514+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,7],"acting":[5,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1f","version":"63'11","reported_seq":58,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:47.003895+0000","last_change":"2026-03-10T07:25:40.965271+0000","last_active":"2026-03-10T07:26:47.003895+0000","last_peered":"2026-03-10T07:26:47.003895+0000","last_clean":"2026-03-10T07:26:47.003895+0000","last_became_active":"2026-03-10T07:25:40.964821+0000","last_became_peered":"2026-03-10T07:25:40.964821+0000","last_unstale":"2026-03-10T07:26:47.003895+0000","last_undegraded":"2026-03-10T07:26:47.003895+0000","last_fullsized":"2026-03-10T07:26:47.003895+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:17:15.429481+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.643401+0000","last_change":"2026-03-10T07:25:42.980060+0000","last_active":"2026-03-10T07:26:08.643401+0000","last_peered":"2026-03-10T07:26:08.643401+0000","last_clean":"2026-03-10T07:26:08.643401+0000","last_became_active":"2026-03-10T07:25:42.979985+0000","last_became_peered":"2026-03-10T07:25:42.979985+0000","last_unstale":"2026-03-10T07:26:08.643401+0000","last_undegraded":"2026-03-10T07:26:08.643401+0000","last_fullsized":"2026-03-10T07:26:08.643401+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:04:13.624981+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18","version":"62'9","reported_seq":49,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537312+0000","last_change":"2026-03-10T07:25:38.940303+0000","last_active":"2026-03-10T07:26:08.537312+0000","last_peered":"2026-03-10T07:26:08.537312+0000","last_clean":"2026-03-10T07:26:08.537312+0000","last_became_active":"2026-03-10T07:25:38.940123+0000","last_became_peered":"2026-03-10T07:25:38.940123+0000","last_unstale":"2026-03-10T07:26:08.537312+0000","last_undegraded":"2026-03-10T07:26:08.537312+0000","last_fullsized":"2026-03-10T07:26:08.537312+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:01:47.129134+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.19","version":"55'1","reported_seq":38,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537277+0000","last_change":"2026-03-10T07:25:36.931081+0000","last_active":"2026-03-10T07:26:08.537277+0000","last_peered":"2026-03-10T07:26:08.537277+0000","last_clean":"2026-03-10T07:26:08.537277+0000","last_became_active":"2026-03-10T07:25:36.930935+0000","last_became_peered":"2026-03-10T07:25:36.930935+0000","last_unstale":"2026-03-10T07:26:08.537277+0000","last_undegraded":"2026-03-10T07:26:08.537277+0000","last_fullsized":"2026-03-10T07:26:08.537277+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:43:38.866828+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,0],"acting":[3,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.639565+0000","last_change":"2026-03-10T07:25:40.958175+0000","last_active":"2026-03-10T07:26:09.639565+0000","last_peered":"2026-03-10T07:26:09.639565+0000","last_clean":"2026-03-10T07:26:09.639565+0000","last_became_active":"2026-03-10T07:25:40.958020+0000","last_became_peered":"2026-03-10T07:25:40.958020+0000","last_unstale":"2026-03-10T07:26:09.639565+0000","last_undegraded":"2026-03-10T07:26:09.639565+0000","last_fullsized":"2026-03-10T07:26:09.639565+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:12:06.304347+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.1e","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.669269+0000","last_change":"2026-03-10T07:25:43.200020+0000","last_active":"2026-03-10T07:26:08.669269+0000","last_peered":"2026-03-10T07:26:08.669269+0000","last_clean":"2026-03-10T07:26:08.669269+0000","last_became_active":"2026-03-10T07:25:43.199906+0000","last_became_peered":"2026-03-10T07:25:43.199906+0000","last_unstale":"2026-03-10T07:26:08.669269+0000","last_undegraded":"2026-03-10T07:26:08.669269+0000","last_fullsized":"2026-03-10T07:26:08.669269+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:27:58.887171+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1a","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.588592+0000","last_change":"2026-03-10T07:25:36.944990+0000","last_active":"2026-03-10T07:26:08.588592+0000","last_peered":"2026-03-10T07:26:08.588592+0000","last_clean":"2026-03-10T07:26:08.588592+0000","last_became_active":"2026-03-10T07:25:36.943138+0000","last_became_peered":"2026-03-10T07:25:36.943138+0000","last_unstale":"2026-03-10T07:26:08.588592+0000","last_undegraded":"2026-03-10T07:26:08.588592+0000","last_fullsized":"2026-03-10T07:26:08.588592+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:18:49.343181+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.1b","version":"62'5","reported_seq":43,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:09.640166+0000","last_change":"2026-03-10T07:25:38.961769+0000","last_active":"2026-03-10T07:26:09.640166+0000","last_peered":"2026-03-10T07:26:09.640166+0000","last_clean":"2026-03-10T07:26:09.640166+0000","last_became_active":"2026-03-10T07:25:38.961644+0000","last_became_peered":"2026-03-10T07:25:38.961644+0000","last_unstale":"2026-03-10T07:26:09.640166+0000","last_undegraded":"2026-03-10T07:26:09.640166+0000","last_fullsized":"2026-03-10T07:26:09.640166+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:42:41.376076+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.1d","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.643366+0000","last_change":"2026-03-10T07:25:40.965848+0000","last_active":"2026-03-10T07:26:08.643366+0000","last_peered":"2026-03-10T07:26:08.643366+0000","last_clean":"2026-03-10T07:26:08.643366+0000","last_became_active":"2026-03-10T07:25:40.965167+0000","last_became_peered":"2026-03-10T07:25:40.965167+0000","last_unstale":"2026-03-10T07:26:08.643366+0000","last_undegraded":"2026-03-10T07:26:08.643366+0000","last_fullsized":"2026-03-10T07:26:08.643366+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:32:03.354641+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":25,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537094+0000","last_change":"2026-03-10T07:25:43.202517+0000","last_active":"2026-03-10T07:26:08.537094+0000","last_peered":"2026-03-10T07:26:08.537094+0000","last_clean":"2026-03-10T07:26:08.537094+0000","last_became_active":"2026-03-10T07:25:43.202184+0000","last_became_peered":"2026-03-10T07:25:43.202184+0000","last_unstale":"2026-03-10T07:26:08.537094+0000","last_undegraded":"2026-03-10T07:26:08.537094+0000","last_fullsized":"2026-03-10T07:26:08.537094+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:41.928990+0000","last_clean_scrub_stamp":"2026-03-10T07:25:41.928990+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:01:09.617216+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1b","version":"0'0","reported_seq":37,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.537183+0000","last_change":"2026-03-10T07:25:36.942521+0000","last_active":"2026-03-10T07:26:08.537183+0000","last_peered":"2026-03-10T07:26:08.537183+0000","last_clean":"2026-03-10T07:26:08.537183+0000","last_became_active":"2026-03-10T07:25:36.942411+0000","last_became_peered":"2026-03-10T07:25:36.942411+0000","last_unstale":"2026-03-10T07:26:08.537183+0000","last_undegraded":"2026-03-10T07:26:08.537183+0000","last_fullsized":"2026-03-10T07:26:08.537183+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:35.908488+0000","last_clean_scrub_stamp":"2026-03-10T07:25:35.908488+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:47:04.381354+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1a","version":"62'9","reported_seq":49,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.669588+0000","last_change":"2026-03-10T07:25:38.957120+0000","last_active":"2026-03-10T07:26:08.669588+0000","last_peered":"2026-03-10T07:26:08.669588+0000","last_clean":"2026-03-10T07:26:08.669588+0000","last_became_active":"2026-03-10T07:25:38.957022+0000","last_became_peered":"2026-03-10T07:25:38.957022+0000","last_unstale":"2026-03-10T07:26:08.669588+0000","last_undegraded":"2026-03-10T07:26:08.669588+0000","last_fullsized":"2026-03-10T07:26:08.669588+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:37.916046+0000","last_clean_scrub_stamp":"2026-03-10T07:25:37.916046+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:37:02.020374+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":29,"reported_epoch":68,"state":"active+clean","last_fresh":"2026-03-10T07:26:08.669546+0000","last_change":"2026-03-10T07:25:40.951728+0000","last_active":"2026-03-10T07:26:08.669546+0000","last_peered":"2026-03-10T07:26:08.669546+0000","last_clean":"2026-03-10T07:26:08.669546+0000","last_became_active":"2026-03-10T07:25:40.945177+0000","last_became_peered":"2026-03-10T07:25:40.945177+0000","last_unstale":"2026-03-10T07:26:08.669546+0000","last_undegraded":"2026-03-10T07:26:08.669546+0000","last_fullsized":"2026-03-10T07:26:08.669546+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:25:39.922911+0000","last_clean_scrub_stamp":"2026-03-10T07:25:39.922911+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:53:58.773385+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]}],"pool_stats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":88,"ondisk_log_size":88,"up":96,"acting":96,"num_store_stats":8},
{"poolid":4,"num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":67,"num_read_kb":62,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":701,"num_read_kb":458,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":395,"ondisk_log_size":395,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":34,"num_read_kb":34,"num_write":10,"num_write_kb":6,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"int
ernal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":39,"ondisk_log_size":39,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":51,"seq":219043332117,"num_pgs":59,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":28048,"kb_used_data":1216,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939376,"statfs":{"total":21470642176,"available":21441921024,"internally_reserved":0,"allocated":1245184,"data_stored":778866,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":44,"seq":188978561053,"num_pgs":42,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":28016,"kb_used_data":1180,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939408,"statfs":{"total":21470642176,"available":21441953792,"internally_reserved":0,"allocated":1208320,"data_stored":776518,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":38,"seq":163208757283,"num_pgs":53,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27572,"kb_used_data":732,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939852,"statfs":{"total":21470642176,"available":21442408448,"internally_reserved":0,"allocated":749568,"data_stored":317640,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up
_from":31,"seq":133143986218,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27608,"kb_used_data":768,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939816,"statfs":{"total":21470642176,"available":21442371584,"internally_reserved":0,"allocated":786432,"data_stored":318183,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":26,"seq":111669149744,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27576,"kb_used_data":736,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939848,"statfs":{"total":21470642176,"available":21442404352,"internally_reserved":0,"allocated":753664,"data_stored":318631,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":18,"seq":77309411383,"num_pgs":39,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27580,"kb_used_data":740,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939844,"statfs":{"total":21470642176,"available":21442400256,"internally_reserved":0,"allocated":757760,"data_stored":318180,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574910,"num_pgs":47,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27592,"kb_used_data":752,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939832,"statfs":{"total":21470642176,"available":21442387968,"internally_reserved":0,"allocated":770048,"data_stored":318998,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738437,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":28036,"kb_used_data":1200,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939388,"statfs":{"total":21470642176,"available":21441933312,"internally_reserved":0,"allocated":1228800,"data_stored":777609,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_s
tat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":574,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":1475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":436,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":1039,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":138,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":92,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":1085,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":57344,"data_stored":1458,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1282,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1144,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"inte
rnally_reserved":0,"allocated":73728,"data_stored":1980,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1172,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1100,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":61440,"data_stored":1650,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0,"
internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T07:27:00.150 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-10T07:27:00.150 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-10T07:27:00.150 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-10T07:27:00.150 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph health --format=json 2026-03-10T07:27:01.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:00 vm03 bash[23382]: audit 2026-03-10T07:27:00.088073+0000 mgr.y (mgr.24407) 66 : audit [DBG] from='client.14655 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:27:01.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:00 vm03 bash[23382]: audit 2026-03-10T07:27:00.088073+0000 mgr.y (mgr.24407) 66 : audit [DBG] from='client.14655 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:27:01.081 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:00 vm00 bash[28005]: audit 2026-03-10T07:27:00.088073+0000 mgr.y (mgr.24407) 66 : audit [DBG] from='client.14655 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:27:01.081 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:00 vm00 bash[28005]: audit 2026-03-10T07:27:00.088073+0000 mgr.y (mgr.24407) 66 : audit [DBG] from='client.14655 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:27:01.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:00 vm00 bash[20701]: audit 2026-03-10T07:27:00.088073+0000 mgr.y (mgr.24407) 66 : audit [DBG] from='client.14655 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:27:01.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:00 vm00 bash[20701]: audit 2026-03-10T07:27:00.088073+0000 mgr.y (mgr.24407) 66 : audit [DBG] from='client.14655 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:27:01.383 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:27:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:27:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:27:02.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:01 vm03 bash[23382]: cluster 2026-03-10T07:27:00.560416+0000 mgr.y (mgr.24407) 67 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:27:02.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:01 vm03 bash[23382]: cluster 2026-03-10T07:27:00.560416+0000 mgr.y (mgr.24407) 67 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:27:02.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:01 vm00 bash[20701]: cluster 2026-03-10T07:27:00.560416+0000 mgr.y (mgr.24407) 67 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:27:02.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:01 vm00 bash[20701]: cluster 2026-03-10T07:27:00.560416+0000 mgr.y (mgr.24407) 67 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:27:02.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:01 vm00 bash[28005]: cluster 
2026-03-10T07:27:00.560416+0000 mgr.y (mgr.24407) 67 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:27:02.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:01 vm00 bash[28005]: cluster 2026-03-10T07:27:00.560416+0000 mgr.y (mgr.24407) 67 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:27:03.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:27:02 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:27:03.848 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config 2026-03-10T07:27:03.941 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:03 vm00 bash[28005]: cluster 2026-03-10T07:27:02.560752+0000 mgr.y (mgr.24407) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:03.941 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:03 vm00 bash[28005]: cluster 2026-03-10T07:27:02.560752+0000 mgr.y (mgr.24407) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:03.941 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:03 vm00 bash[28005]: audit 2026-03-10T07:27:02.888112+0000 mgr.y (mgr.24407) 69 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:27:03.941 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:03 vm00 bash[28005]: audit 2026-03-10T07:27:02.888112+0000 mgr.y (mgr.24407) 69 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:27:03.941 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:03 vm00 bash[20701]: cluster 2026-03-10T07:27:02.560752+0000 mgr.y (mgr.24407) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:03.941 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:03 vm00 bash[20701]: cluster 2026-03-10T07:27:02.560752+0000 mgr.y (mgr.24407) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:03.941 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:03 vm00 bash[20701]: audit 2026-03-10T07:27:02.888112+0000 mgr.y (mgr.24407) 69 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:27:03.941 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:03 vm00 bash[20701]: audit 2026-03-10T07:27:02.888112+0000 mgr.y (mgr.24407) 69 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:27:04.003 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.003+0000 7ff7f0ded640 1 -- 192.168.123.100:0/1657468754 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff7ec104f40 msgr2=0x7ff7ec105320 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:27:04.003 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.003+0000 7ff7f0ded640 1 --2- 192.168.123.100:0/1657468754 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] 
conn(0x7ff7ec104f40 0x7ff7ec105320 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7ff7e0009a80 tx=0x7ff7e002f270 comp rx=0 tx=0).stop 2026-03-10T07:27:04.003 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.003+0000 7ff7f0ded640 1 -- 192.168.123.100:0/1657468754 shutdown_connections 2026-03-10T07:27:04.003 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.003+0000 7ff7f0ded640 1 --2- 192.168.123.100:0/1657468754 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff7ec10a070 0x7ff7ec111bf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:27:04.003 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.003+0000 7ff7f0ded640 1 --2- 192.168.123.100:0/1657468754 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff7ec1058f0 0x7ff7ec109940 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:27:04.003 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.003+0000 7ff7f0ded640 1 --2- 192.168.123.100:0/1657468754 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff7ec104f40 0x7ff7ec105320 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:27:04.004 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.003+0000 7ff7f0ded640 1 -- 192.168.123.100:0/1657468754 >> 192.168.123.100:0/1657468754 conn(0x7ff7ec1009e0 msgr2=0x7ff7ec102e00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:27:04.004 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.003+0000 7ff7f0ded640 1 -- 192.168.123.100:0/1657468754 shutdown_connections 2026-03-10T07:27:04.004 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.003+0000 7ff7f0ded640 1 -- 192.168.123.100:0/1657468754 wait complete. 
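
The mark_down / shutdown_connections / "wait complete." records just above, followed immediately by a fresh "Processor -- start", show that each command run through the cephadm shell acts as a transient RADOS client: it brings up an async messenger, authenticates to the monitors, and tears every connection down again on exit. A small filter (a sketch, assuming the log is streamed on stdin in the format shown here) tallies that per-peer connection churn:

    import re
    import sys

    # Count msgr2 setups (".ready entity=") and teardowns (".mark_down")
    # per v2 peer address in a teuthology log read from stdin.
    peer = re.compile(r">> \[v2:([0-9.]+:\d+)/")
    ready, down = {}, {}
    for line in sys.stdin:
        m = peer.search(line)
        if not m:
            continue
        addr = m.group(1)
        if ").ready entity=" in line:
            ready[addr] = ready.get(addr, 0) + 1
        elif ").mark_down" in line:
            down[addr] = down.get(addr, 0) + 1
    for addr in sorted(set(ready) | set(down)):
        print("%s: ready=%d mark_down=%d"
              % (addr, ready.get(addr, 0), down.get(addr, 0)))
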
2026-03-10T07:27:04.004 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.003+0000 7ff7f0ded640 1 Processor -- start 2026-03-10T07:27:04.004 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.003+0000 7ff7f0ded640 1 -- start start 2026-03-10T07:27:04.005 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.003+0000 7ff7f0ded640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff7ec104f40 0x7ff7ec111bf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:27:04.005 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.003+0000 7ff7f0ded640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff7ec1058f0 0x7ff7ec10ecf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:27:04.005 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.003+0000 7ff7f0ded640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff7ec10a070 0x7ff7ec10f4c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:27:04.005 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.003+0000 7ff7f0ded640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7ff7ec114370 con 0x7ff7ec1058f0 2026-03-10T07:27:04.005 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.003+0000 7ff7f0ded640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7ff7ec1141f0 con 0x7ff7ec10a070 2026-03-10T07:27:04.005 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.003+0000 7ff7f0ded640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7ff7ec1144f0 con 0x7ff7ec104f40 2026-03-10T07:27:04.005 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.003+0000 7ff7ead76640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff7ec10a070 0x7ff7ec10f4c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:27:04.005 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.003+0000 7ff7ead76640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff7ec10a070 0x7ff7ec10f4c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.103:3300/0 says I am v2:192.168.123.100:39344/0 (socket says 192.168.123.100:39344) 2026-03-10T07:27:04.005 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.003+0000 7ff7ead76640 1 -- 192.168.123.100:0/3398847555 learned_addr learned my addr 192.168.123.100:0/3398847555 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:27:04.005 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.003+0000 7ff7e9d74640 1 --2- 192.168.123.100:0/3398847555 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff7ec1058f0 0x7ff7ec10ecf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:27:04.005 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.007+0000 7ff7ea575640 1 --2- 192.168.123.100:0/3398847555 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff7ec104f40 0x7ff7ec111bf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-10T07:27:04.006 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.007+0000 7ff7ead76640 1 -- 192.168.123.100:0/3398847555 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff7ec104f40 msgr2=0x7ff7ec111bf0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:27:04.006 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.007+0000 7ff7ead76640 1 --2- 192.168.123.100:0/3398847555 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff7ec104f40 0x7ff7ec111bf0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:27:04.006 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.007+0000 7ff7ead76640 1 -- 192.168.123.100:0/3398847555 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff7ec1058f0 msgr2=0x7ff7ec10ecf0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:27:04.006 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.007+0000 7ff7ead76640 1 --2- 192.168.123.100:0/3398847555 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff7ec1058f0 0x7ff7ec10ecf0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:27:04.006 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.007+0000 7ff7ead76640 1 -- 192.168.123.100:0/3398847555 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff7ec1a27e0 con 0x7ff7ec10a070 2026-03-10T07:27:04.006 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.007+0000 7ff7e9d74640 1 --2- 192.168.123.100:0/3398847555 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff7ec1058f0 0x7ff7ec10ecf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-10T07:27:04.006 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.007+0000 7ff7ea575640 1 --2- 192.168.123.100:0/3398847555 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff7ec104f40 0x7ff7ec111bf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
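
The records that follow trace the client bootstrap order once a connection reaches READY: the monitor pushes mon_map and config, the client subscribes to mgrmap and osdmap, fetches get_command_descriptions, and only then dispatches the "health" command itself. The python3-rados binding drives the same handshake behind two calls; a minimal sketch, assuming the binding is installed and the cephadm-written conf and keyring are readable:

    import json
    import rados  # python3-rados

    # connect() performs the banner/hello/auth handshake plus the map
    # subscriptions visible in the messenger lines below.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "health", "format": "json"}), b"")
    print(json.loads(out))  # e.g. {"status": "HEALTH_OK", ...}
    cluster.shutdown()  # the mark_down/shutdown_connections sequence
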
2026-03-10T07:27:04.006 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.007+0000 7ff7ead76640 1 --2- 192.168.123.100:0/3398847555 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff7ec10a070 0x7ff7ec10f4c0 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7ff7dc00e1c0 tx=0x7ff7dc00e690 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:27:04.007 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.007+0000 7ff7cf7fe640 1 -- 192.168.123.100:0/3398847555 <== mon.1 v2:192.168.123.103:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff7dc00ee50 con 0x7ff7ec10a070 2026-03-10T07:27:04.007 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.007+0000 7ff7f0ded640 1 -- 192.168.123.100:0/3398847555 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7ff7ec1a2a70 con 0x7ff7ec10a070 2026-03-10T07:27:04.007 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.007+0000 7ff7cf7fe640 1 -- 192.168.123.100:0/3398847555 <== mon.1 v2:192.168.123.103:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7ff7dc002c70 con 0x7ff7ec10a070 2026-03-10T07:27:04.008 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.007+0000 7ff7cf7fe640 1 -- 192.168.123.100:0/3398847555 <== mon.1 v2:192.168.123.103:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff7dc013640 con 0x7ff7ec10a070 2026-03-10T07:27:04.008 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.007+0000 7ff7f0ded640 1 -- 192.168.123.100:0/3398847555 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7ff7ec1a3000 con 0x7ff7ec10a070 2026-03-10T07:27:04.009 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.007+0000 7ff7f0ded640 1 -- 192.168.123.100:0/3398847555 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff7ec106420 con 0x7ff7ec10a070 2026-03-10T07:27:04.009 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.011+0000 7ff7cf7fe640 1 -- 192.168.123.100:0/3398847555 <== mon.1 v2:192.168.123.103:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7ff7dc004b00 con 0x7ff7ec10a070 2026-03-10T07:27:04.010 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.011+0000 7ff7cf7fe640 1 --2- 192.168.123.100:0/3398847555 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7ff7c4077670 0x7ff7c4079b30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:27:04.010 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.011+0000 7ff7cf7fe640 1 -- 192.168.123.100:0/3398847555 <== mon.1 v2:192.168.123.103:3300/0 5 ==== osd_map(68..68 src has 1..68) ==== 7430+0+0 (secure 0 0 0) 0x7ff7dc099420 con 0x7ff7ec10a070 2026-03-10T07:27:04.010 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.011+0000 7ff7ea575640 1 --2- 192.168.123.100:0/3398847555 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7ff7c4077670 0x7ff7c4079b30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:27:04.011 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.011+0000 7ff7ea575640 1 --2- 192.168.123.100:0/3398847555 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7ff7c4077670 0x7ff7c4079b30 
secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7ff7e00097c0 tx=0x7ff7e0005c90 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:27:04.013 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.011+0000 7ff7cf7fe640 1 -- 192.168.123.100:0/3398847555 <== mon.1 v2:192.168.123.103:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7ff7dc010040 con 0x7ff7ec10a070 2026-03-10T07:27:04.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:03 vm03 bash[23382]: cluster 2026-03-10T07:27:02.560752+0000 mgr.y (mgr.24407) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:04.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:03 vm03 bash[23382]: cluster 2026-03-10T07:27:02.560752+0000 mgr.y (mgr.24407) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:04.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:03 vm03 bash[23382]: audit 2026-03-10T07:27:02.888112+0000 mgr.y (mgr.24407) 69 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:27:04.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:03 vm03 bash[23382]: audit 2026-03-10T07:27:02.888112+0000 mgr.y (mgr.24407) 69 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:27:04.138 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.139+0000 7ff7f0ded640 1 -- 192.168.123.100:0/3398847555 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_command({"prefix": "health", "format": "json"} v 0) -- 0x7ff7ec105320 con 0x7ff7ec10a070 2026-03-10T07:27:04.139 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.139+0000 7ff7cf7fe640 1 -- 192.168.123.100:0/3398847555 <== mon.1 v2:192.168.123.103:3300/0 7 ==== mon_command_ack([{"prefix": "health", "format": "json"}]=0 v0) ==== 72+0+46 (secure 0 0 0) 0x7ff7dc065d20 con 0x7ff7ec10a070 2026-03-10T07:27:04.139 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T07:27:04.139 INFO:teuthology.orchestra.run.vm00.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-10T07:27:04.142 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.143+0000 7ff7f0ded640 1 -- 192.168.123.100:0/3398847555 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7ff7c4077670 msgr2=0x7ff7c4079b30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:27:04.142 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.143+0000 7ff7f0ded640 1 --2- 192.168.123.100:0/3398847555 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7ff7c4077670 0x7ff7c4079b30 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7ff7e00097c0 tx=0x7ff7e0005c90 comp rx=0 tx=0).stop 2026-03-10T07:27:04.142 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.143+0000 7ff7f0ded640 1 -- 192.168.123.100:0/3398847555 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff7ec10a070 msgr2=0x7ff7ec10f4c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:27:04.142 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.143+0000 7ff7f0ded640 1 --2- 192.168.123.100:0/3398847555 >> 
[v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff7ec10a070 0x7ff7ec10f4c0 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7ff7dc00e1c0 tx=0x7ff7dc00e690 comp rx=0 tx=0).stop 2026-03-10T07:27:04.142 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.143+0000 7ff7f0ded640 1 -- 192.168.123.100:0/3398847555 shutdown_connections 2026-03-10T07:27:04.142 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.143+0000 7ff7f0ded640 1 --2- 192.168.123.100:0/3398847555 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7ff7c4077670 0x7ff7c4079b30 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:27:04.142 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.143+0000 7ff7f0ded640 1 --2- 192.168.123.100:0/3398847555 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7ff7ec10a070 0x7ff7ec10f4c0 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:27:04.142 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.143+0000 7ff7f0ded640 1 --2- 192.168.123.100:0/3398847555 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff7ec1058f0 0x7ff7ec10ecf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:27:04.142 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.143+0000 7ff7f0ded640 1 --2- 192.168.123.100:0/3398847555 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7ff7ec104f40 0x7ff7ec111bf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:27:04.142 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.143+0000 7ff7f0ded640 1 -- 192.168.123.100:0/3398847555 >> 192.168.123.100:0/3398847555 conn(0x7ff7ec1009e0 msgr2=0x7ff7ec101e70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:27:04.142 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.143+0000 7ff7f0ded640 1 -- 192.168.123.100:0/3398847555 shutdown_connections 2026-03-10T07:27:04.142 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T07:27:04.143+0000 7ff7f0ded640 1 -- 192.168.123.100:0/3398847555 wait complete. 2026-03-10T07:27:04.198 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-10T07:27:04.198 INFO:tasks.cephadm:Setup complete, yielding 2026-03-10T07:27:04.198 INFO:teuthology.run_tasks:Running task workunit... 2026-03-10T07:27:04.202 INFO:tasks.workunit:Pulling workunits from ref 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b 2026-03-10T07:27:04.202 INFO:tasks.workunit:Making a separate scratch dir for every client... 
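
The wait_until_healthy step that just completed is, at bottom, a poll of "ceph health --format=json" until the status reads HEALTH_OK. A condensed sketch of that loop, reusing the image and fsid logged for this run (the 15-minute deadline is an assumption for illustration, not teuthology's configured value):

    import json
    import subprocess
    import time

    # Same invocation the harness logged above, via the cephadm shell.
    CMD = ["sudo", "cephadm", "--image",
           "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
           "shell", "--fsid", "534d9c8a-1c51-11f1-ac87-d1fb9a119953",
           "--", "ceph", "health", "--format=json"]
    deadline = time.time() + 15 * 60
    while True:
        health = json.loads(subprocess.check_output(CMD))
        if health["status"] == "HEALTH_OK":
            break  # matches the {"status":"HEALTH_OK",...} reply above
        if time.time() > deadline:
            raise TimeoutError("cluster still unhealthy: %s" % health)
        time.sleep(10)

With the cluster healthy, the run moves on to the workunit task: the commands that follow create an idempotent per-client scratch directory, then clone the suite repo and pin it to the exact SHA under test.
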
2026-03-10T07:27:04.203 DEBUG:teuthology.orchestra.run.vm00:> stat -- /home/ubuntu/cephtest/mnt.0 2026-03-10T07:27:04.206 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T07:27:04.207 INFO:teuthology.orchestra.run.vm00.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory 2026-03-10T07:27:04.207 DEBUG:teuthology.orchestra.run.vm00:> mkdir -- /home/ubuntu/cephtest/mnt.0 2026-03-10T07:27:04.252 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0 2026-03-10T07:27:04.252 DEBUG:teuthology.orchestra.run.vm00:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0 2026-03-10T07:27:04.297 INFO:tasks.workunit:timeout=3h 2026-03-10T07:27:04.297 INFO:tasks.workunit:cleanup=True 2026-03-10T07:27:04.297 DEBUG:teuthology.orchestra.run.vm00:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b 2026-03-10T07:27:04.341 INFO:tasks.workunit.client.0.vm00.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'... 2026-03-10T07:27:05.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:04 vm03 bash[23382]: audit 2026-03-10T07:27:04.142159+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 192.168.123.100:0/3398847555' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T07:27:05.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:04 vm03 bash[23382]: audit 2026-03-10T07:27:04.142159+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 192.168.123.100:0/3398847555' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T07:27:05.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:04 vm00 bash[20701]: audit 2026-03-10T07:27:04.142159+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 192.168.123.100:0/3398847555' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T07:27:05.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:04 vm00 bash[20701]: audit 2026-03-10T07:27:04.142159+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 192.168.123.100:0/3398847555' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T07:27:05.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:04 vm00 bash[28005]: audit 2026-03-10T07:27:04.142159+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 192.168.123.100:0/3398847555' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T07:27:05.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:04 vm00 bash[28005]: audit 2026-03-10T07:27:04.142159+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 
192.168.123.100:0/3398847555' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T07:27:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:05 vm03 bash[23382]: cluster 2026-03-10T07:27:04.561041+0000 mgr.y (mgr.24407) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:05 vm03 bash[23382]: cluster 2026-03-10T07:27:04.561041+0000 mgr.y (mgr.24407) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:06.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:05 vm00 bash[20701]: cluster 2026-03-10T07:27:04.561041+0000 mgr.y (mgr.24407) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:06.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:05 vm00 bash[20701]: cluster 2026-03-10T07:27:04.561041+0000 mgr.y (mgr.24407) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:06.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:05 vm00 bash[28005]: cluster 2026-03-10T07:27:04.561041+0000 mgr.y (mgr.24407) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:06.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:05 vm00 bash[28005]: cluster 2026-03-10T07:27:04.561041+0000 mgr.y (mgr.24407) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:08.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:07 vm03 bash[23382]: cluster 2026-03-10T07:27:06.561475+0000 mgr.y (mgr.24407) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:27:08.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:07 vm03 bash[23382]: cluster 2026-03-10T07:27:06.561475+0000 mgr.y (mgr.24407) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:27:08.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:07 vm00 bash[28005]: cluster 2026-03-10T07:27:06.561475+0000 mgr.y (mgr.24407) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:27:08.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:07 vm00 bash[28005]: cluster 2026-03-10T07:27:06.561475+0000 mgr.y (mgr.24407) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:27:08.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:07 vm00 bash[20701]: cluster 2026-03-10T07:27:06.561475+0000 mgr.y (mgr.24407) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:27:08.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:07 vm00 bash[20701]: cluster 2026-03-10T07:27:06.561475+0000 mgr.y (mgr.24407) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:27:09.016 
2026-03-10T07:27:09.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:08 vm03 bash[23382]: audit 2026-03-10T07:27:08.686199+0000 mon.c (mon.2) 71 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:27:10.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:09 vm00 bash[20701]: cluster 2026-03-10T07:27:08.561801+0000 mgr.y (mgr.24407) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
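The "osd blocklist ls" audit entries recurring through this section are the mgr's periodic blocklist poll. A sketch for running the same query by hand with the real ceph CLI, assuming client.admin access on the node; that the JSON output is a flat list is my assumption:

    # Replay the mgr's periodic blocklist poll (assumes a reachable cluster
    # and admin credentials; output shape assumed to be a JSON list).
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    entries = json.loads(out) if out.strip() else []
    print(f"{len(entries)} blocklist entries")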
2026-03-10T07:27:11.382 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:27:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:27:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:27:12.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:11 vm00 bash[20701]: cluster 2026-03-10T07:27:10.562351+0000 mgr.y (mgr.24407) 73 : cluster [DBG] pgmap v34: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:27:13.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:27:12 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:27:14.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:13 vm00 bash[20701]: cluster 2026-03-10T07:27:12.562707+0000 mgr.y (mgr.24407) 74 : cluster [DBG] pgmap v35: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:27:14.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:13 vm00 bash[20701]: audit 2026-03-10T07:27:12.898894+0000 mgr.y (mgr.24407) 75 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
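Note the mgr's /metrics endpoint answering Prometheus with 503 on every 10-second scrape: the exporter is up but not yet serving metrics. A sketch for probing it directly, where the host is hypothetical and 9283 is assumed to be the default mgr prometheus port:

    # Probe the ceph-mgr prometheus exporter seen returning 503 above.
    import urllib.error
    import urllib.request

    url = "http://vm00.local:9283/metrics"  # hypothetical host; 9283 assumed default
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(resp.status, len(resp.read()), "bytes of metrics")
    except urllib.error.HTTPError as err:
        # A 503 here matches the scrapes in this log.
        print("exporter answered", err.code, err.reason)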
2026-03-10T07:27:16.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:15 vm00 bash[20701]: cluster 2026-03-10T07:27:14.562978+0000 mgr.y (mgr.24407) 76 : cluster [DBG] pgmap v36: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:27:18.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:17 vm03 bash[23382]: cluster 2026-03-10T07:27:16.563549+0000 mgr.y (mgr.24407) 77 : cluster [DBG] pgmap v37: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:27:20.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:19 vm03 bash[23382]: cluster 2026-03-10T07:27:18.563879+0000 mgr.y (mgr.24407) 78 : cluster [DBG] pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
/metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:27:22.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:21 vm03 bash[23382]: cluster 2026-03-10T07:27:20.564330+0000 mgr.y (mgr.24407) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:27:22.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:21 vm03 bash[23382]: cluster 2026-03-10T07:27:20.564330+0000 mgr.y (mgr.24407) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:27:22.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:21 vm00 bash[28005]: cluster 2026-03-10T07:27:20.564330+0000 mgr.y (mgr.24407) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:27:22.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:21 vm00 bash[28005]: cluster 2026-03-10T07:27:20.564330+0000 mgr.y (mgr.24407) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:27:22.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:21 vm00 bash[20701]: cluster 2026-03-10T07:27:20.564330+0000 mgr.y (mgr.24407) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:27:22.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:21 vm00 bash[20701]: cluster 2026-03-10T07:27:20.564330+0000 mgr.y (mgr.24407) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:27:23.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:27:22 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:27:24.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:23 vm03 bash[23382]: cluster 2026-03-10T07:27:22.564661+0000 mgr.y (mgr.24407) 80 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:24.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:23 vm03 bash[23382]: cluster 2026-03-10T07:27:22.564661+0000 mgr.y (mgr.24407) 80 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:24.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:23 vm03 bash[23382]: audit 2026-03-10T07:27:22.909673+0000 mgr.y (mgr.24407) 81 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:27:24.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:23 vm03 bash[23382]: audit 2026-03-10T07:27:22.909673+0000 mgr.y (mgr.24407) 81 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:27:24.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:23 vm03 bash[23382]: audit 2026-03-10T07:27:23.697675+0000 mon.c (mon.2) 72 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:27:24.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:23 vm03 bash[23382]: audit 2026-03-10T07:27:23.697675+0000 mon.c (mon.2) 72 : audit [DBG] from='mgr.24407 
2026-03-10T07:27:26.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:25 vm03 bash[23382]: cluster 2026-03-10T07:27:24.564951+0000 mgr.y (mgr.24407) 82 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:27:28.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:28 vm00 bash[20701]: cluster 2026-03-10T07:27:26.565615+0000 mgr.y (mgr.24407) 83 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
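The pgmap records tick every two seconds and only the throughput figures change, so a quick way to spot real state changes is to parse the summary fields. A minimal sketch, matching the record format seen in this log:

    # Parse the pgmap summary fields from a cluster log record (format as logged here).
    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    record = ("cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, "
              "217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s")
    m = PGMAP_RE.search(record)
    if m:
        print(m.group("ver"), m.group("pgs"), m.group("states"))  # 41 132 132 active+clean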
2026-03-10T07:27:30.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:30 vm03 bash[23382]: cluster 2026-03-10T07:27:28.565930+0000 mgr.y (mgr.24407) 84 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:27:31.382 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:27:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:27:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:27:32.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:32 vm03 bash[23382]: cluster 2026-03-10T07:27:30.566413+0000 mgr.y (mgr.24407) 85 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:27:33.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:27:32 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:27:35.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:34 vm00 bash[20701]: cluster 2026-03-10T07:27:32.566734+0000 mgr.y (mgr.24407) 86 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:27:35.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:34 vm00 bash[20701]: audit 2026-03-10T07:27:32.920459+0000 mgr.y (mgr.24407) 87 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:27:36.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:35 vm03 bash[23382]: cluster 2026-03-10T07:27:34.567073+0000 mgr.y (mgr.24407) 88 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:27:38.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:37 vm03 bash[23382]: cluster 2026-03-10T07:27:36.567686+0000 mgr.y (mgr.24407) 89 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:27:39.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:38 vm03 bash[23382]: audit 2026-03-10T07:27:38.711257+0000 mon.c (mon.2) 73 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:27:40.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:39 vm03 bash[23382]: cluster 2026-03-10T07:27:38.568072+0000 mgr.y (mgr.24407) 90 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:27:41.382 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:27:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:27:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:27:42.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:42 vm03 bash[23382]: cluster 2026-03-10T07:27:40.568582+0000 mgr.y (mgr.24407) 91 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:27:43.265 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:27:42 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:27:44.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:44 vm03 bash[23382]: cluster 2026-03-10T07:27:42.568932+0000 mgr.y (mgr.24407) 92 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:27:44.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:44 vm03 bash[23382]: audit 2026-03-10T07:27:42.928586+0000 mgr.y (mgr.24407) 93 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:27:46.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:46 vm00 bash[20701]: cluster 2026-03-10T07:27:44.569285+0000 mgr.y (mgr.24407) 94 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:27:46.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:46 vm00 bash[20701]: audit 2026-03-10T07:27:45.465415+0000 mon.c (mon.2) 74 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:27:48.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:48 vm00 bash[20701]: cluster 2026-03-10T07:27:46.569748+0000 mgr.y (mgr.24407) 95 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:27:49.516 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:27:49 vm03 bash[51371]: logger=infra.usagestats t=2026-03-10T07:27:49.12909495Z level=info msg="Usage stats are ready to report"
2026-03-10T07:27:50.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:50 vm00 bash[20701]: cluster 2026-03-10T07:27:48.570030+0000 mgr.y (mgr.24407) 96 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:27:51.382 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:27:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:27:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:27:52.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:52 vm03 bash[23382]: cluster 2026-03-10T07:27:50.570526+0000 mgr.y (mgr.24407) 97 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:27:52.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:52 vm03 bash[23382]: audit 2026-03-10T07:27:50.879271+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:27:52.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:52 vm03 bash[23382]: audit 2026-03-10T07:27:50.888422+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:27:52.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:52 vm03 bash[23382]: audit 2026-03-10T07:27:51.498042+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:27:52.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:52 vm03 bash[23382]: audit 2026-03-10T07:27:51.505449+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:27:53.265 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:27:52 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:27:53.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:53 vm03 bash[23382]: audit 2026-03-10T07:27:52.009963+0000 mon.c (mon.2) 75 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:27:53.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:53 vm03 bash[23382]: audit 2026-03-10T07:27:52.022462+0000 mon.c (mon.2) 76 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:27:53.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:53 vm03 bash[23382]: audit 2026-03-10T07:27:52.032087+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24407 ' entity='mgr.y'
generate-minimal-conf"}]: dispatch 2026-03-10T07:27:53.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:53 vm00 bash[28005]: audit 2026-03-10T07:27:52.022462+0000 mon.c (mon.2) 76 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:27:53.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:53 vm00 bash[28005]: audit 2026-03-10T07:27:52.022462+0000 mon.c (mon.2) 76 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:27:53.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:53 vm00 bash[28005]: audit 2026-03-10T07:27:52.032087+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:27:53.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:53 vm00 bash[28005]: audit 2026-03-10T07:27:52.032087+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:27:53.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:53 vm00 bash[20701]: audit 2026-03-10T07:27:52.009963+0000 mon.c (mon.2) 75 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:27:53.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:53 vm00 bash[20701]: audit 2026-03-10T07:27:52.009963+0000 mon.c (mon.2) 75 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:27:53.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:53 vm00 bash[20701]: audit 2026-03-10T07:27:52.022462+0000 mon.c (mon.2) 76 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:27:53.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:53 vm00 bash[20701]: audit 2026-03-10T07:27:52.022462+0000 mon.c (mon.2) 76 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:27:53.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:53 vm00 bash[20701]: audit 2026-03-10T07:27:52.032087+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:27:53.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:53 vm00 bash[20701]: audit 2026-03-10T07:27:52.032087+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:27:54.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:54 vm00 bash[28005]: cluster 2026-03-10T07:27:52.570833+0000 mgr.y (mgr.24407) 98 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:54.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:54 vm00 bash[28005]: cluster 2026-03-10T07:27:52.570833+0000 mgr.y (mgr.24407) 98 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:54.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:54 vm00 bash[28005]: audit 2026-03-10T07:27:52.939159+0000 mgr.y (mgr.24407) 99 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:27:54.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:54 vm00 bash[28005]: audit 
2026-03-10T07:27:52.939159+0000 mgr.y (mgr.24407) 99 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:27:54.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:54 vm00 bash[28005]: audit 2026-03-10T07:27:53.718860+0000 mon.c (mon.2) 77 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:27:54.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:54 vm00 bash[28005]: audit 2026-03-10T07:27:53.718860+0000 mon.c (mon.2) 77 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:27:54.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:54 vm00 bash[20701]: cluster 2026-03-10T07:27:52.570833+0000 mgr.y (mgr.24407) 98 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:54.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:54 vm00 bash[20701]: cluster 2026-03-10T07:27:52.570833+0000 mgr.y (mgr.24407) 98 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:54.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:54 vm00 bash[20701]: audit 2026-03-10T07:27:52.939159+0000 mgr.y (mgr.24407) 99 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:27:54.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:54 vm00 bash[20701]: audit 2026-03-10T07:27:52.939159+0000 mgr.y (mgr.24407) 99 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:27:54.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:54 vm00 bash[20701]: audit 2026-03-10T07:27:53.718860+0000 mon.c (mon.2) 77 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:27:54.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:54 vm00 bash[20701]: audit 2026-03-10T07:27:53.718860+0000 mon.c (mon.2) 77 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:27:54.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:54 vm03 bash[23382]: cluster 2026-03-10T07:27:52.570833+0000 mgr.y (mgr.24407) 98 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:54.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:54 vm03 bash[23382]: cluster 2026-03-10T07:27:52.570833+0000 mgr.y (mgr.24407) 98 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:54.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:54 vm03 bash[23382]: audit 2026-03-10T07:27:52.939159+0000 mgr.y (mgr.24407) 99 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:27:54.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:54 vm03 bash[23382]: audit 2026-03-10T07:27:52.939159+0000 mgr.y (mgr.24407) 99 : audit [DBG] from='client.24373 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:27:54.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:54 vm03 bash[23382]: audit 2026-03-10T07:27:53.718860+0000 mon.c (mon.2) 77 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:27:54.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:54 vm03 bash[23382]: audit 2026-03-10T07:27:53.718860+0000 mon.c (mon.2) 77 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:27:56.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:56 vm00 bash[28005]: cluster 2026-03-10T07:27:54.571126+0000 mgr.y (mgr.24407) 100 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:56.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:56 vm00 bash[28005]: cluster 2026-03-10T07:27:54.571126+0000 mgr.y (mgr.24407) 100 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:56.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:56 vm00 bash[20701]: cluster 2026-03-10T07:27:54.571126+0000 mgr.y (mgr.24407) 100 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:56.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:56 vm00 bash[20701]: cluster 2026-03-10T07:27:54.571126+0000 mgr.y (mgr.24407) 100 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:56.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:56 vm03 bash[23382]: cluster 2026-03-10T07:27:54.571126+0000 mgr.y (mgr.24407) 100 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:56.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:56 vm03 bash[23382]: cluster 2026-03-10T07:27:54.571126+0000 mgr.y (mgr.24407) 100 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:27:57.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:57 vm00 bash[28005]: cluster 2026-03-10T07:27:56.571692+0000 mgr.y (mgr.24407) 101 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:27:57.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:57 vm00 bash[28005]: cluster 2026-03-10T07:27:56.571692+0000 mgr.y (mgr.24407) 101 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:27:57.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:57 vm00 bash[20701]: cluster 2026-03-10T07:27:56.571692+0000 mgr.y (mgr.24407) 101 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:27:57.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:57 vm00 bash[20701]: cluster 2026-03-10T07:27:56.571692+0000 mgr.y (mgr.24407) 101 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
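Every journalctl record above wraps the same Ceph central-log shape: a channel (cluster or audit), the source timestamp, the reporting entity with its rank, a sequence number, and the message. A minimal parsing sketch for triaging this traffic follows; the regex is inferred from the records above, not an official Ceph log grammar, so treat it as an assumption.

    import re

    # Pattern inferred from the journal records in this log (an assumption,
    # not a documented Ceph format):
    # "<chan> <ts> <entity> (<rank>) <seq> : <chan> [<LVL>] <msg>"
    CENTRAL_LOG = re.compile(
        r"(?P<channel>cluster|audit)\s+(?P<ts>\S+)\s+"
        r"(?P<entity>\S+)\s+\((?P<rank>[^)]+)\)\s+(?P<seq>\d+)\s+:\s+\S+\s+"
        r"\[(?P<level>[A-Z]+)\]\s+(?P<msg>.*)$"
    )

    def parse_record(line: str) -> dict | None:
        """Extract channel, entity, seq, level, and message from one record."""
        m = CENTRAL_LOG.search(line)
        return m.groupdict() if m else None

    rec = parse_record(
        "audit 2026-03-10T07:27:53.718860+0000 mon.c (mon.2) 77 : audit "
        "[DBG] from='mgr.24407' entity='mgr.y' "
        'cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch'
    )
    assert rec is not None and rec["entity"] == "mon.c" and rec["level"] == "DBG"

Grouping by (entity, seq) is what makes the repetition visible: each mon re-broadcasts the same mgr audit entries, so identical sequence numbers recur across mon.a, mon.b, and mon.c.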
2026-03-10T07:27:57.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:57 vm03 bash[23382]: cluster 2026-03-10T07:27:56.571692+0000 mgr.y (mgr.24407) 101 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:28:00.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:27:59 vm03 bash[23382]: cluster 2026-03-10T07:27:58.572033+0000 mgr.y (mgr.24407) 102 : cluster [DBG] pgmap v58: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:28:00.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:27:59 vm00 bash[28005]: cluster 2026-03-10T07:27:58.572033+0000 mgr.y (mgr.24407) 102 : cluster [DBG] pgmap v58: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:28:00.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:27:59 vm00 bash[20701]: cluster 2026-03-10T07:27:58.572033+0000 mgr.y (mgr.24407) 102 : cluster [DBG] pgmap v58: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:28:01.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:28:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:28:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:28:02.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:01 vm03 bash[23382]: cluster 2026-03-10T07:28:00.572526+0000 mgr.y (mgr.24407) 103 : cluster [DBG] pgmap v59: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:28:02.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:01 vm00 bash[20701]: cluster 2026-03-10T07:28:00.572526+0000 mgr.y (mgr.24407) 103 : cluster [DBG] pgmap v59: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:28:02.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:01 vm00 bash[28005]: cluster 2026-03-10T07:28:00.572526+0000 mgr.y (mgr.24407) 103 : cluster [DBG] pgmap v59: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:28:03.265 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:28:02 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:28:03.613 INFO:tasks.workunit.client.0.vm00.stderr:Note: switching to '75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b'.
2026-03-10T07:28:03.613 INFO:tasks.workunit.client.0.vm00.stderr:
2026-03-10T07:28:03.613 INFO:tasks.workunit.client.0.vm00.stderr:You are in 'detached HEAD' state. You can look around, make experimental
2026-03-10T07:28:03.613 INFO:tasks.workunit.client.0.vm00.stderr:changes and commit them, and you can discard any commits you make in this
2026-03-10T07:28:03.613 INFO:tasks.workunit.client.0.vm00.stderr:state without impacting any branches by switching back to a branch.
2026-03-10T07:28:03.613 INFO:tasks.workunit.client.0.vm00.stderr:
2026-03-10T07:28:03.613 INFO:tasks.workunit.client.0.vm00.stderr:If you want to create a new branch to retain commits you create, you may
2026-03-10T07:28:03.613 INFO:tasks.workunit.client.0.vm00.stderr:do so (now or later) by using -c with the switch command. Example:
2026-03-10T07:28:03.613 INFO:tasks.workunit.client.0.vm00.stderr:
2026-03-10T07:28:03.613 INFO:tasks.workunit.client.0.vm00.stderr:  git switch -c <new-branch-name>
2026-03-10T07:28:03.613 INFO:tasks.workunit.client.0.vm00.stderr:
2026-03-10T07:28:03.613 INFO:tasks.workunit.client.0.vm00.stderr:Or undo this operation with:
2026-03-10T07:28:03.613 INFO:tasks.workunit.client.0.vm00.stderr:
2026-03-10T07:28:03.613 INFO:tasks.workunit.client.0.vm00.stderr:  git switch -
2026-03-10T07:28:03.613 INFO:tasks.workunit.client.0.vm00.stderr:
2026-03-10T07:28:03.613 INFO:tasks.workunit.client.0.vm00.stderr:Turn off this advice by setting config variable advice.detachedHead to false
2026-03-10T07:28:03.613 INFO:tasks.workunit.client.0.vm00.stderr:
2026-03-10T07:28:03.613 INFO:tasks.workunit.client.0.vm00.stderr:HEAD is now at 75a68fd8ca3 qa/suites/orch/cephadm/osds: drop nvme_loop task
2026-03-10T07:28:03.619 DEBUG:teuthology.orchestra.run.vm00:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0
2026-03-10T07:28:03.666 INFO:tasks.workunit.client.0.vm00.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done
2026-03-10T07:28:03.668 INFO:tasks.workunit.client.0.vm00.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io'
2026-03-10T07:28:03.668 INFO:tasks.workunit.client.0.vm00.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test
2026-03-10T07:28:03.735 INFO:tasks.workunit.client.0.vm00.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io
2026-03-10T07:28:03.775 INFO:tasks.workunit.client.0.vm00.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read
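The teuthology.orchestra.run command above builds the workunit inventory: it runs make in qa/workunits, then writes every executable file to workunits.list.client.0 as NUL-delimited relative paths (find's -printf '%P\0'), which keeps arbitrary filenames unambiguous. A sketch of consuming such a list follows; the helper name is illustrative, not teuthology's own code.

    from pathlib import Path

    def read_workunit_list(path: str) -> list[str]:
        """Split a NUL-delimited file such as workunits.list.client.0 into paths."""
        raw = Path(path).read_bytes()
        # -printf '%P\0' terminates every entry, so drop the trailing empty field.
        return [entry.decode() for entry in raw.split(b"\0") if entry]

A requested workunit such as "rados/test.sh" is then matched against this list before being executed.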
2026-03-10T07:28:03.799 INFO:tasks.workunit.client.0.vm00.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io'
2026-03-10T07:28:03.800 INFO:tasks.workunit.client.0.vm00.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs'
2026-03-10T07:28:03.800 INFO:tasks.workunit.client.0.vm00.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc
2026-03-10T07:28:03.824 INFO:tasks.workunit.client.0.vm00.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs'
2026-03-10T07:28:03.856 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T07:28:03.856 DEBUG:teuthology.orchestra.run.vm00:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout
2026-03-10T07:28:03.899 INFO:tasks.workunit:Running workunits matching rados/test.sh on client.0...
2026-03-10T07:28:03.899 INFO:tasks.workunit:Running workunit rados/test.sh...
2026-03-10T07:28:03.899 DEBUG:teuthology.orchestra.run.vm00:workunit test rados/test.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh
2026-03-10T07:28:03.947 INFO:tasks.workunit.client.0.vm00.stderr:+ parallel=1
2026-03-10T07:28:03.986 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' '' = --serial ']'
2026-03-10T07:28:03.986 INFO:tasks.workunit.client.0.vm00.stderr:+ crimson=0
2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' '' = --crimson ']'
2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ color=
2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -t 1 ']'
2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ trap cleanup EXIT ERR HUP INT QUIT
2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ GTEST_OUTPUT_DIR=/home/ubuntu/cephtest/archive/unit_test_xml_report
2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ mkdir -p /home/ubuntu/cephtest/archive/unit_test_xml_report
2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ declare -A pids
2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_aio
2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_aio'
2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_aio
2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stdout:test api_aio on pid 59624
2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_aio
2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59624 2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_aio on pid 59624' 2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59624 2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_aio --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio.xml 2>&1 | tee ceph_test_rados_api_aio.log | sed "s/^/ api_aio: /"' 2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_aio_pp 2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_aio_pp' 2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_aio_pp 2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_aio: /' 2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_aio.log 2026-03-10T07:28:03.987 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_aio --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio.xml 2026-03-10T07:28:03.988 INFO:tasks.workunit.client.0.vm00.stdout:test api_aio_pp on pid 59632 2026-03-10T07:28:03.988 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_aio_pp 2026-03-10T07:28:03.988 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59632 2026-03-10T07:28:03.988 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_aio_pp on pid 59632' 2026-03-10T07:28:03.988 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59632 2026-03-10T07:28:03.988 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T07:28:03.988 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T07:28:03.988 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_io 2026-03-10T07:28:03.988 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_io' 2026-03-10T07:28:03.988 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_aio_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio_pp.xml 2>&1 | tee ceph_test_rados_api_aio_pp.log | sed "s/^/ api_aio_pp: /"' 2026-03-10T07:28:03.988 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-10T07:28:03.989 
INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_aio_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio_pp.xml 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_aio_pp: /' 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_aio_pp.log 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_io 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stdout:test api_io on pid 59640 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_io 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59640 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_io on pid 59640' 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59640 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_io --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io.xml 2>&1 | tee ceph_test_rados_api_io.log | sed "s/^/ api_io: /"' 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_io_pp 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_io_pp' 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_io_pp 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_io --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io.xml 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-10T07:28:03.989 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_io.log 2026-03-10T07:28:03.990 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_io: /' 2026-03-10T07:28:03.990 INFO:tasks.workunit.client.0.vm00.stdout:test api_io_pp on pid 59648 2026-03-10T07:28:03.990 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_io_pp 2026-03-10T07:28:03.990 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59648 2026-03-10T07:28:03.990 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_io_pp on pid 59648' 2026-03-10T07:28:03.990 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59648 2026-03-10T07:28:03.990 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp 
api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T07:28:03.990 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T07:28:03.990 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_io_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io_pp.xml 2>&1 | tee ceph_test_rados_api_io_pp.log | sed "s/^/ api_io_pp: /"' 2026-03-10T07:28:03.990 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-10T07:28:03.990 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:03.990 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-10T07:28:03.990 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:03.990 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_io_pp.log 2026-03-10T07:28:03.990 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_asio 2026-03-10T07:28:03.990 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_asio' 2026-03-10T07:28:03.990 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_io_pp: /' 2026-03-10T07:28:03.990 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_io_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io_pp.xml 2026-03-10T07:28:03.990 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-10T07:28:03.990 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_asio 2026-03-10T07:28:03.991 INFO:tasks.workunit.client.0.vm00.stdout:test api_asio on pid 59674 2026-03-10T07:28:03.992 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_asio 2026-03-10T07:28:03.992 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59674 2026-03-10T07:28:03.992 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_asio on pid 59674' 2026-03-10T07:28:03.992 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59674 2026-03-10T07:28:03.992 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T07:28:03.992 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T07:28:03.993 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_list 2026-03-10T07:28:03.993 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_list' 2026-03-10T07:28:03.993 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_asio --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_asio.xml 2>&1 | tee ceph_test_rados_api_asio.log | sed "s/^/ api_asio: /"' 2026-03-10T07:28:03.994 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-10T07:28:03.994 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:03.994 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-10T07:28:03.994 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:03.995 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_asio: /' 2026-03-10T07:28:03.995 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_asio.log 2026-03-10T07:28:03.996 INFO:tasks.workunit.client.0.vm00.stderr:++ awk 
'{print $1}' 2026-03-10T07:28:03.996 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_list 2026-03-10T07:28:04.000 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_asio --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_asio.xml 2026-03-10T07:28:04.005 INFO:tasks.workunit.client.0.vm00.stdout:test api_list on pid 59696 2026-03-10T07:28:04.031 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_list 2026-03-10T07:28:04.031 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59696 2026-03-10T07:28:04.031 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_list on pid 59696' 2026-03-10T07:28:04.031 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59696 2026-03-10T07:28:04.031 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T07:28:04.031 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T07:28:04.031 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_list --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_list.xml 2>&1 | tee ceph_test_rados_api_list.log | sed "s/^/ api_list: /"' 2026-03-10T07:28:04.031 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_lock 2026-03-10T07:28:04.031 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_lock' 2026-03-10T07:28:04.031 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-10T07:28:04.031 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:04.031 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-10T07:28:04.031 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-10T07:28:04.031 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_list --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_list.xml 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_lock 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stdout:test api_lock on pid 59709 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_lock 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59709 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_lock on pid 59709' 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59709 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_lock --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock.xml 2>&1 | tee ceph_test_rados_api_lock.log | sed "s/^/ api_lock: /"' 2026-03-10T07:28:04.032 
INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_lock_pp 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_lock_pp' 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_list.log 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_list: /' 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_lock --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock.xml 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_lock.log 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_lock: /' 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_lock_pp 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-10T07:28:04.032 INFO:tasks.workunit.client.0.vm00.stdout:test api_lock_pp on pid 59729 2026-03-10T07:28:04.033 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_lock_pp 2026-03-10T07:28:04.033 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59729 2026-03-10T07:28:04.033 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_lock_pp on pid 59729' 2026-03-10T07:28:04.033 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59729 2026-03-10T07:28:04.033 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T07:28:04.033 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T07:28:04.033 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_lock_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock_pp.xml 2>&1 | tee ceph_test_rados_api_lock_pp.log | sed "s/^/ api_lock_pp: /"' 2026-03-10T07:28:04.033 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-10T07:28:04.033 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:04.033 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-10T07:28:04.033 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:04.033 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_lock_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock_pp.xml 2026-03-10T07:28:04.033 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_misc 2026-03-10T07:28:04.033 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_misc' 2026-03-10T07:28:04.035 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_lock_pp.log 2026-03-10T07:28:04.035 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_lock_pp: /' 2026-03-10T07:28:04.042 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_misc 2026-03-10T07:28:04.042 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-10T07:28:04.044 INFO:tasks.workunit.client.0.vm00.stdout:test api_misc on pid 59754 2026-03-10T07:28:04.044 
INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_misc 2026-03-10T07:28:04.044 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59754 2026-03-10T07:28:04.044 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_misc on pid 59754' 2026-03-10T07:28:04.044 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59754 2026-03-10T07:28:04.044 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T07:28:04.044 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T07:28:04.045 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_misc --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc.xml 2>&1 | tee ceph_test_rados_api_misc.log | sed "s/^/ api_misc: /"' 2026-03-10T07:28:04.045 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_misc_pp 2026-03-10T07:28:04.045 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_misc_pp' 2026-03-10T07:28:04.045 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-10T07:28:04.045 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:04.045 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-10T07:28:04.045 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:04.046 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-10T07:28:04.048 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_misc --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc.xml 2026-03-10T07:28:04.049 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_misc_pp 2026-03-10T07:28:04.049 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_misc.log 2026-03-10T07:28:04.050 INFO:tasks.workunit.client.0.vm00.stdout:test api_misc_pp on pid 59769 2026-03-10T07:28:04.050 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_misc_pp 2026-03-10T07:28:04.050 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59769 2026-03-10T07:28:04.050 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_misc_pp on pid 59769' 2026-03-10T07:28:04.050 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59769 2026-03-10T07:28:04.050 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T07:28:04.050 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T07:28:04.050 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_misc: /' 2026-03-10T07:28:04.051 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_tier_pp 2026-03-10T07:28:04.051 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_tier_pp' 2026-03-10T07:28:04.051 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-10T07:28:04.052 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_misc_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc_pp.xml 2>&1 | tee 
ceph_test_rados_api_misc_pp.log | sed "s/^/ api_misc_pp: /"' 2026-03-10T07:28:04.052 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_tier_pp 2026-03-10T07:28:04.053 INFO:tasks.workunit.client.0.vm00.stdout:test api_tier_pp on pid 59774 2026-03-10T07:28:04.053 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_tier_pp 2026-03-10T07:28:04.053 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59774 2026-03-10T07:28:04.053 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-10T07:28:04.053 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_tier_pp on pid 59774' 2026-03-10T07:28:04.053 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:04.053 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59774 2026-03-10T07:28:04.053 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T07:28:04.053 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T07:28:04.053 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-10T07:28:04.053 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:04.053 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_pool 2026-03-10T07:28:04.053 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_pool' 2026-03-10T07:28:04.054 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_tier_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_tier_pp.xml 2>&1 | tee ceph_test_rados_api_tier_pp.log | sed "s/^/ api_tier_pp: /"' 2026-03-10T07:28:04.054 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_misc_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc_pp.xml 2026-03-10T07:28:04.054 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_misc_pp.log 2026-03-10T07:28:04.055 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-10T07:28:04.057 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:04.057 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-10T07:28:04.057 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:04.060 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_tier_pp: /' 2026-03-10T07:28:04.060 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_tier_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_tier_pp.xml 2026-03-10T07:28:04.061 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_tier_pp.log 2026-03-10T07:28:04.061 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_misc_pp: /' 2026-03-10T07:28:04.064 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_pool 2026-03-10T07:28:04.065 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-10T07:28:04.066 INFO:tasks.workunit.client.0.vm00.stdout:test api_pool on pid 59787 2026-03-10T07:28:04.066 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_pool 2026-03-10T07:28:04.066 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59787 2026-03-10T07:28:04.066 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_pool on pid 59787' 2026-03-10T07:28:04.066 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59787 2026-03-10T07:28:04.066 
INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T07:28:04.066 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T07:28:04.091 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_snapshots 2026-03-10T07:28:04.091 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_snapshots' 2026-03-10T07:28:04.092 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_pool --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_pool.xml 2>&1 | tee ceph_test_rados_api_pool.log | sed "s/^/ api_pool: /"' 2026-03-10T07:28:04.094 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-10T07:28:04.094 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:04.094 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-10T07:28:04.094 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:04.103 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_pool --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_pool.xml 2026-03-10T07:28:04.104 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_pool: /' 2026-03-10T07:28:04.105 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_pool.log 2026-03-10T07:28:04.112 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-10T07:28:04.114 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_snapshots 2026-03-10T07:28:04.125 INFO:tasks.workunit.client.0.vm00.stdout:test api_snapshots on pid 59863 2026-03-10T07:28:04.125 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_snapshots 2026-03-10T07:28:04.125 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59863 2026-03-10T07:28:04.125 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_snapshots on pid 59863' 2026-03-10T07:28:04.125 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59863 2026-03-10T07:28:04.125 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T07:28:04.125 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T07:28:04.131 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_snapshots --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots.xml 2>&1 | tee ceph_test_rados_api_snapshots.log | sed "s/^/ api_snapshots: /"' 2026-03-10T07:28:04.132 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_snapshots_pp 2026-03-10T07:28:04.132 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_snapshots_pp' 2026-03-10T07:28:04.134 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-10T07:28:04.134 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:04.134 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-10T07:28:04.134 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:04.134 
INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_snapshots --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots.xml 2026-03-10T07:28:04.144 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_snapshots_pp 2026-03-10T07:28:04.171 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-10T07:28:04.171 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_snapshots: /' 2026-03-10T07:28:04.171 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_snapshots.log 2026-03-10T07:28:04.265 INFO:tasks.workunit.client.0.vm00.stdout:test api_snapshots_pp on pid 59951 2026-03-10T07:28:04.265 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_snapshots_pp 2026-03-10T07:28:04.265 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59951 2026-03-10T07:28:04.265 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_snapshots_pp on pid 59951' 2026-03-10T07:28:04.265 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59951 2026-03-10T07:28:04.265 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T07:28:04.265 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T07:28:04.265 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_snapshots_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots_pp.xml 2>&1 | tee ceph_test_rados_api_snapshots_pp.log | sed "s/^/ api_snapshots_pp: /"' 2026-03-10T07:28:04.265 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_stat 2026-03-10T07:28:04.265 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_stat' 2026-03-10T07:28:04.265 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_stat 2026-03-10T07:28:04.266 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}' 2026-03-10T07:28:04.266 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']' 2026-03-10T07:28:04.266 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:04.266 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in 2026-03-10T07:28:04.266 INFO:tasks.workunit.client.0.vm00.stderr:+ return 2026-03-10T07:28:04.267 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_snapshots_pp.log 2026-03-10T07:28:04.276 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_snapshots_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots_pp.xml 2026-03-10T07:28:04.279 INFO:tasks.workunit.client.0.vm00.stdout:test api_stat on pid 59959 2026-03-10T07:28:04.279 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_stat 2026-03-10T07:28:04.279 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59959 2026-03-10T07:28:04.279 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_stat on pid 59959' 2026-03-10T07:28:04.279 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59959 2026-03-10T07:28:04.279 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations 
list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T07:28:04.279 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.279 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_snapshots_pp: /'
2026-03-10T07:28:04.281 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_stat_pp
2026-03-10T07:28:04.285 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_stat_pp'
2026-03-10T07:28:04.285 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_stat --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat.xml 2>&1 | tee ceph_test_rados_api_stat.log | sed "s/^/ api_stat: /"'
2026-03-10T07:28:04.288 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.291 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.291 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.291 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.291 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_stat: /'
2026-03-10T07:28:04.291 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_stat --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat.xml
2026-03-10T07:28:04.292 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_stat_pp
2026-03-10T07:28:04.292 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.293 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_stat.log
2026-03-10T07:28:04.295 INFO:tasks.workunit.client.0.vm00.stdout:test api_stat_pp on pid 59967
2026-03-10T07:28:04.295 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_stat_pp
2026-03-10T07:28:04.295 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59967
2026-03-10T07:28:04.295 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_stat_pp on pid 59967'
2026-03-10T07:28:04.295 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59967
2026-03-10T07:28:04.295 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T07:28:04.295 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.296 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_stat_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat_pp.xml 2>&1 | tee ceph_test_rados_api_stat_pp.log | sed "s/^/ api_stat_pp: /"'
2026-03-10T07:28:04.297 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.297 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.297 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.297 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.297 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_stat_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat_pp.xml
2026-03-10T07:28:04.300 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_watch_notify
2026-03-10T07:28:04.300 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_watch_notify'
2026-03-10T07:28:04.307 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_stat_pp.log
2026-03-10T07:28:04.307 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_stat_pp: /'
2026-03-10T07:28:04.312 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_watch_notify
2026-03-10T07:28:04.312 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.314 INFO:tasks.workunit.client.0.vm00.stdout:test api_watch_notify on pid 59996
2026-03-10T07:28:04.314 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_watch_notify
2026-03-10T07:28:04.315 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59996
2026-03-10T07:28:04.315 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_watch_notify on pid 59996'
2026-03-10T07:28:04.315 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=59996
2026-03-10T07:28:04.315 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T07:28:04.315 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.315 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_watch_notify_pp
2026-03-10T07:28:04.315 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_watch_notify_pp'
2026-03-10T07:28:04.315 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_watch_notify --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify.xml 2>&1 | tee ceph_test_rados_api_watch_notify.log | sed "s/^/ api_watch_notify: /"'
2026-03-10T07:28:04.316 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.316 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.316 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.316 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.316 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_watch_notify: /'
2026-03-10T07:28:04.317 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_watch_notify.log
2026-03-10T07:28:04.317 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.318 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_watch_notify --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify.xml
2026-03-10T07:28:04.322 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_watch_notify_pp
2026-03-10T07:28:04.322 INFO:tasks.workunit.client.0.vm00.stdout:test api_watch_notify_pp on pid 60004
2026-03-10T07:28:04.322 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_watch_notify_pp
2026-03-10T07:28:04.322 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60004
2026-03-10T07:28:04.322 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_watch_notify_pp on pid 60004'
2026-03-10T07:28:04.322 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60004
2026-03-10T07:28:04.322 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T07:28:04.322 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.322 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_watch_notify_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify_pp.xml 2>&1 | tee ceph_test_rados_api_watch_notify_pp.log | sed "s/^/ api_watch_notify_pp: /"'
2026-03-10T07:28:04.322 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.322 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.322 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.322 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.322 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_cmd
2026-03-10T07:28:04.323 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_cmd'
2026-03-10T07:28:04.323 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_watch_notify_pp: /'
2026-03-10T07:28:04.324 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_watch_notify_pp.log
2026-03-10T07:28:04.325 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_watch_notify_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify_pp.xml
2026-03-10T07:28:04.334 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.334 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_cmd
2026-03-10T07:28:04.336 INFO:tasks.workunit.client.0.vm00.stdout:test api_cmd on pid 60012
2026-03-10T07:28:04.336 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_cmd
2026-03-10T07:28:04.336 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60012
2026-03-10T07:28:04.336 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_cmd on pid 60012'
2026-03-10T07:28:04.336 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60012
2026-03-10T07:28:04.336 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T07:28:04.336 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.338 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_cmd_pp
2026-03-10T07:28:04.338 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_cmd_pp'
2026-03-10T07:28:04.341 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.342 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_cmd_pp
2026-03-10T07:28:04.344 INFO:tasks.workunit.client.0.vm00.stdout:test api_cmd_pp on pid 60024
2026-03-10T07:28:04.344 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_cmd_pp
2026-03-10T07:28:04.344 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60024
2026-03-10T07:28:04.344 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_cmd_pp on pid 60024'
2026-03-10T07:28:04.344 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60024
2026-03-10T07:28:04.344 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T07:28:04.344 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.344 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_service
2026-03-10T07:28:04.344 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_service'
2026-03-10T07:28:04.344 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_cmd_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd_pp.xml 2>&1 | tee ceph_test_rados_api_cmd_pp.log | sed "s/^/ api_cmd_pp: /"'
2026-03-10T07:28:04.345 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_cmd --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd.xml 2>&1 | tee ceph_test_rados_api_cmd.log | sed "s/^/ api_cmd: /"'
2026-03-10T07:28:04.346 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.346 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.346 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.346 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.346 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.346 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.346 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.346 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.347 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_cmd_pp: /'
2026-03-10T07:28:04.347 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.348 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_cmd_pp.log
2026-03-10T07:28:04.349 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_cmd_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd_pp.xml
2026-03-10T07:28:04.349 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_cmd --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd.xml
2026-03-10T07:28:04.350 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_cmd.log
2026-03-10T07:28:04.351 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_service
2026-03-10T07:28:04.352 INFO:tasks.workunit.client.0.vm00.stdout:test api_service on pid 60035
2026-03-10T07:28:04.352 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_service
2026-03-10T07:28:04.352 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60035
2026-03-10T07:28:04.352 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_service on pid 60035'
2026-03-10T07:28:04.352 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60035
2026-03-10T07:28:04.352 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T07:28:04.352 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.352 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_cmd: /'
2026-03-10T07:28:04.354 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_service --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service.xml 2>&1 | tee ceph_test_rados_api_service.log | sed "s/^/ api_service: /"'
2026-03-10T07:28:04.359 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_service_pp
2026-03-10T07:28:04.359 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_service_pp'
2026-03-10T07:28:04.360 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.360 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.360 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.360 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.362 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_service_pp
2026-03-10T07:28:04.364 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_service --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service.xml
2026-03-10T07:28:04.365 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_service.log
2026-03-10T07:28:04.365 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_service: /'
2026-03-10T07:28:04.388 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.401 INFO:tasks.workunit.client.0.vm00.stdout:test api_service_pp on pid 60086
2026-03-10T07:28:04.402 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_service_pp
2026-03-10T07:28:04.402 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60086
2026-03-10T07:28:04.402 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_service_pp on pid 60086'
2026-03-10T07:28:04.402 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60086
2026-03-10T07:28:04.402 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T07:28:04.402 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.405 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_service_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service_pp.xml 2>&1 | tee ceph_test_rados_api_service_pp.log | sed "s/^/ api_service_pp: /"'
2026-03-10T07:28:04.406 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_c_write_operations
2026-03-10T07:28:04.406 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_c_write_operations'
2026-03-10T07:28:04.409 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.409 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.409 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.409 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.412 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.414 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_service_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service_pp.xml
2026-03-10T07:28:04.414 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_service_pp: /'
2026-03-10T07:28:04.418 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_service_pp.log
2026-03-10T07:28:04.418 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_c_write_operations
2026-03-10T07:28:04.418 INFO:tasks.workunit.client.0.vm00.stdout:test api_c_write_operations on pid 60111
2026-03-10T07:28:04.418 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_c_write_operations
2026-03-10T07:28:04.419 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60111
2026-03-10T07:28:04.419 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_c_write_operations on pid 60111'
2026-03-10T07:28:04.419 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60111
2026-03-10T07:28:04.419 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T07:28:04.419 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.425 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_c_write_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_write_operations.xml 2>&1 | tee ceph_test_rados_api_c_write_operations.log | sed "s/^/ api_c_write_operations: /"'
2026-03-10T07:28:04.425 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s api_c_read_operations
2026-03-10T07:28:04.426 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' api_c_read_operations'
2026-03-10T07:28:04.431 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.431 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.431 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.431 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.436 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_c_write_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_write_operations.xml
2026-03-10T07:28:04.440 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_c_write_operations.log
2026-03-10T07:28:04.441 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_c_write_operations: /'
2026-03-10T07:28:04.442 INFO:tasks.workunit.client.0.vm00.stderr:++ echo api_c_read_operations
2026-03-10T07:28:04.442 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.444 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=api_c_read_operations
2026-03-10T07:28:04.444 INFO:tasks.workunit.client.0.vm00.stdout:test api_c_read_operations on pid 60152
2026-03-10T07:28:04.444 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60152
2026-03-10T07:28:04.444 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test api_c_read_operations on pid 60152'
2026-03-10T07:28:04.444 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60152
2026-03-10T07:28:04.444 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T07:28:04.444 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.445 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_c_read_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_read_operations.xml 2>&1 | tee ceph_test_rados_api_c_read_operations.log | sed "s/^/ api_c_read_operations: /"'
2026-03-10T07:28:04.445 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s list_parallel
2026-03-10T07:28:04.445 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' list_parallel'
2026-03-10T07:28:04.446 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.448 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.448 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.448 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.450 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ api_c_read_operations: /'
2026-03-10T07:28:04.453 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_api_c_read_operations.log
2026-03-10T07:28:04.453 INFO:tasks.workunit.client.0.vm00.stderr:++ echo list_parallel
2026-03-10T07:28:04.453 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_api_c_read_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_read_operations.xml
2026-03-10T07:28:04.455 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.458 INFO:tasks.workunit.client.0.vm00.stdout:test list_parallel on pid 60166
2026-03-10T07:28:04.458 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=list_parallel
2026-03-10T07:28:04.458 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60166
2026-03-10T07:28:04.458 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test list_parallel on pid 60166'
2026-03-10T07:28:04.458 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60166
2026-03-10T07:28:04.458 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T07:28:04.458 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.475 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_list_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/list_parallel.xml 2>&1 | tee ceph_test_rados_list_parallel.log | sed "s/^/ list_parallel: /"'
2026-03-10T07:28:04.475 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.475 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.475 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.475 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.477 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_list_parallel.log
2026-03-10T07:28:04.477 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ list_parallel: /'
2026-03-10T07:28:04.477 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s open_pools_parallel
2026-03-10T07:28:04.477 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' open_pools_parallel'
2026-03-10T07:28:04.478 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_list_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/list_parallel.xml
2026-03-10T07:28:04.504 INFO:tasks.workunit.client.0.vm00.stderr:++ echo open_pools_parallel
2026-03-10T07:28:04.505 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.512 INFO:tasks.workunit.client.0.vm00.stdout:test open_pools_parallel on pid 60200
2026-03-10T07:28:04.519 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=open_pools_parallel
2026-03-10T07:28:04.519 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60200
2026-03-10T07:28:04.519 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test open_pools_parallel on pid 60200'
2026-03-10T07:28:04.519 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60200
2026-03-10T07:28:04.519 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T07:28:04.519 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.527 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_open_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/open_pools_parallel.xml 2>&1 | tee ceph_test_rados_open_pools_parallel.log | sed "s/^/ open_pools_parallel: /"'
2026-03-10T07:28:04.527 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s delete_pools_parallel
2026-03-10T07:28:04.527 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' delete_pools_parallel'
2026-03-10T07:28:04.543 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.544 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.544 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.544 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.566 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_open_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/open_pools_parallel.xml
2026-03-10T07:28:04.572 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_open_pools_parallel.log
2026-03-10T07:28:04.572 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ open_pools_parallel: /'
2026-03-10T07:28:04.578 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.581 INFO:tasks.workunit.client.0.vm00.stderr:++ echo delete_pools_parallel
2026-03-10T07:28:04.593 INFO:tasks.workunit.client.0.vm00.stdout:test delete_pools_parallel on pid 60262
2026-03-10T07:28:04.594 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=delete_pools_parallel
2026-03-10T07:28:04.594 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60262
2026-03-10T07:28:04.594 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test delete_pools_parallel on pid 60262'
2026-03-10T07:28:04.594 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60262
2026-03-10T07:28:04.594 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T07:28:04.594 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.606 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_rados_delete_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/delete_pools_parallel.xml 2>&1 | tee ceph_test_rados_delete_pools_parallel.log | sed "s/^/ delete_pools_parallel: /"'
2026-03-10T07:28:04.607 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.607 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.607 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.607 INFO:tasks.workunit.client.0.vm00.stderr:+ return
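NOTE: The interleaved '+' lines above are bash xtrace output from the workunit's parallel launcher. For each suite name in the for-f-in list it forks a subshell that pipes the test binary through tee (a full copy into ceph_test_rados_<name>.log) and sed (every line prefixed with the right-padded suite name, so interleaved output stays attributable), then records the child PID in pids[$f]; the recurring '[' 1 -eq 1 ']' trace is a parallel-mode gate evaluating true. A minimal sketch of that pattern, with a shortened run list and illustrative variable handling (not the verbatim qa/workunits script):

    #!/usr/bin/env bash
    declare -A pids                          # suite name -> child pid
    parallel=1
    for f in api_aio api_io api_stat; do     # the real run list is the one in the trace
        if [ "$parallel" -eq 1 ]; then
            r=$(printf '%25s' "$f")          # right-align the name for the sed prefix
            bash -o pipefail -exc "ceph_test_rados_$f 2>&1 | tee ceph_test_rados_$f.log | sed \"s/^/$r: /\"" &
            pid=$!
            echo "test $f on pid $pid"
            pids[$f]=$pid
        fi
    done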
2026-03-10T07:28:04.607 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s cls
2026-03-10T07:28:04.607 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' cls'
2026-03-10T07:28:04.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:04 vm00 bash[28005]: cluster 2026-03-10T07:28:02.572789+0000 mgr.y (mgr.24407) 104 : cluster [DBG] pgmap v60: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:28:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:04 vm00 bash[28005]: audit 2026-03-10T07:28:02.947797+0000 mgr.y (mgr.24407) 105 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:28:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:04 vm00 bash[20701]: cluster 2026-03-10T07:28:02.572789+0000 mgr.y (mgr.24407) 104 : cluster [DBG] pgmap v60: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:28:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:04 vm00 bash[20701]: audit 2026-03-10T07:28:02.947797+0000 mgr.y (mgr.24407) 105 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:28:04.638 INFO:tasks.workunit.client.0.vm00.stderr:++ echo cls
2026-03-10T07:28:04.640 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ delete_pools_parallel: /'
2026-03-10T07:28:04.641 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.642 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_rados_delete_pools_parallel.log
2026-03-10T07:28:04.644 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_rados_delete_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/delete_pools_parallel.xml
2026-03-10T07:28:04.655 INFO:tasks.workunit.client.0.vm00.stdout:test cls on pid 60354
2026-03-10T07:28:04.655 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=cls
2026-03-10T07:28:04.655 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60354
2026-03-10T07:28:04.655 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test cls on pid 60354'
2026-03-10T07:28:04.655 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60354
2026-03-10T07:28:04.655 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T07:28:04.655 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.662 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_cls 2>&1 | tee ceph_test_neorados_cls.log | sed "s/^/ cls: /"'
2026-03-10T07:28:04.669 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s cmd
2026-03-10T07:28:04.669 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' cmd'
2026-03-10T07:28:04.671 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.676 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.676 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.676 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.677 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_cls
2026-03-10T07:28:04.677 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_cls.log
2026-03-10T07:28:04.677 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ cls: /'
2026-03-10T07:28:04.687 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.687 INFO:tasks.workunit.client.0.vm00.stderr:++ echo cmd
2026-03-10T07:28:04.690 INFO:tasks.workunit.client.0.vm00.stdout:test cmd on pid 60395
2026-03-10T07:28:04.690 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=cmd
2026-03-10T07:28:04.696 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60395
2026-03-10T07:28:04.696 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test cmd on pid 60395'
2026-03-10T07:28:04.696 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60395
2026-03-10T07:28:04.696 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T07:28:04.696 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.696 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_cmd 2>&1 | tee ceph_test_neorados_cmd.log | sed "s/^/ cmd: /"'
2026-03-10T07:28:04.696 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.696 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.696 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.696 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.696 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ cmd: /'
2026-03-10T07:28:04.696 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_cmd.log
2026-03-10T07:28:04.697 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_cmd
2026-03-10T07:28:04.697 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s handler_error
2026-03-10T07:28:04.697 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' handler_error'
2026-03-10T07:28:04.722 INFO:tasks.workunit.client.0.vm00.stderr:++ echo handler_error
2026-03-10T07:28:04.723 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.726 INFO:tasks.workunit.client.0.vm00.stdout:test handler_error on pid 60437
2026-03-10T07:28:04.726 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=handler_error
2026-03-10T07:28:04.726 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60437
2026-03-10T07:28:04.726 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test handler_error on pid 60437'
2026-03-10T07:28:04.726 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60437
2026-03-10T07:28:04.726 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T07:28:04.726 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.726 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s io
2026-03-10T07:28:04.727 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' io'
2026-03-10T07:28:04.728 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_handler_error 2>&1 | tee ceph_test_neorados_handler_error.log | sed "s/^/ handler_error: /"'
2026-03-10T07:28:04.732 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.734 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.735 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.735 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.735 INFO:tasks.workunit.client.0.vm00.stderr:++ echo io
2026-03-10T07:28:04.735 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_handler_error.log
2026-03-10T07:28:04.735 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ handler_error: /'
2026-03-10T07:28:04.738 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_handler_error
2026-03-10T07:28:04.739 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.742 INFO:tasks.workunit.client.0.vm00.stdout:test io on pid 60469
2026-03-10T07:28:04.742 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=io
2026-03-10T07:28:04.742 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60469
2026-03-10T07:28:04.742 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test io on pid 60469'
2026-03-10T07:28:04.742 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60469
2026-03-10T07:28:04.742 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T07:28:04.742 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.752 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_io 2>&1 | tee ceph_test_neorados_io.log | sed "s/^/ io: /"'
2026-03-10T07:28:04.752 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s ec_io
2026-03-10T07:28:04.754 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' ec_io'
2026-03-10T07:28:04.755 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.755 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.755 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.755 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.755 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ io: /'
2026-03-10T07:28:04.756 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_io.log
2026-03-10T07:28:04.757 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.759 INFO:tasks.workunit.client.0.vm00.stderr:++ echo ec_io
2026-03-10T07:28:04.759 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_io
2026-03-10T07:28:04.759 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=ec_io
2026-03-10T07:28:04.761 INFO:tasks.workunit.client.0.vm00.stdout:test ec_io on pid 60488
2026-03-10T07:28:04.761 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60488
2026-03-10T07:28:04.761 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test ec_io on pid 60488'
2026-03-10T07:28:04.761 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60488
2026-03-10T07:28:04.761 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T07:28:04.761 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.761 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_ec_io 2>&1 | tee ceph_test_neorados_ec_io.log | sed "s/^/ ec_io: /"'
2026-03-10T07:28:04.762 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.766 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:04 vm03 bash[23382]: cluster 2026-03-10T07:28:02.572789+0000 mgr.y (mgr.24407) 104 : cluster [DBG] pgmap v60: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:28:04.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:04 vm03 bash[23382]: audit 2026-03-10T07:28:02.947797+0000 mgr.y (mgr.24407) 105 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:28:04.766 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.766 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.766 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_ec_io
2026-03-10T07:28:04.767 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s list
2026-03-10T07:28:04.767 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' list'
2026-03-10T07:28:04.768 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_ec_io.log
2026-03-10T07:28:04.769 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ ec_io: /'
2026-03-10T07:28:04.775 INFO:tasks.workunit.client.0.vm00.stderr:++ echo list
2026-03-10T07:28:04.776 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.778 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=list
2026-03-10T07:28:04.778 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60513
2026-03-10T07:28:04.778 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test list on pid 60513'
2026-03-10T07:28:04.778 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60513
2026-03-10T07:28:04.778 INFO:tasks.workunit.client.0.vm00.stdout:test list on pid 60513
2026-03-10T07:28:04.778 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T07:28:04.778 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.781 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_list 2>&1 | tee ceph_test_neorados_list.log | sed "s/^/ list: /"'
2026-03-10T07:28:04.781 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s ec_list
2026-03-10T07:28:04.785 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' ec_list'
2026-03-10T07:28:04.788 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.789 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.789 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.789 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.790 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.791 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ list: /'
2026-03-10T07:28:04.791 INFO:tasks.workunit.client.0.vm00.stderr:++ echo ec_list
2026-03-10T07:28:04.791 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_list.log
2026-03-10T07:28:04.791 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=ec_list
2026-03-10T07:28:04.792 INFO:tasks.workunit.client.0.vm00.stdout:test ec_list on pid 60535
2026-03-10T07:28:04.792 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60535
2026-03-10T07:28:04.792 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test ec_list on pid 60535'
2026-03-10T07:28:04.792 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60535
2026-03-10T07:28:04.792 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T07:28:04.792 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.792 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_ec_list 2>&1 | tee ceph_test_neorados_ec_list.log | sed "s/^/ ec_list: /"'
2026-03-10T07:28:04.792 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_list
2026-03-10T07:28:04.793 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.793 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.793 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.793 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.793 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_ec_list
2026-03-10T07:28:04.806 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s misc
2026-03-10T07:28:04.806 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' misc'
2026-03-10T07:28:04.806 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_ec_list.log
2026-03-10T07:28:04.807 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ ec_list: /'
2026-03-10T07:28:04.810 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.810 INFO:tasks.workunit.client.0.vm00.stderr:++ echo misc
2026-03-10T07:28:04.821 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=misc
2026-03-10T07:28:04.821 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60567
2026-03-10T07:28:04.821 INFO:tasks.workunit.client.0.vm00.stdout:test misc on pid 60567
2026-03-10T07:28:04.821 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test misc on pid 60567'
2026-03-10T07:28:04.821 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60567
2026-03-10T07:28:04.821 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T07:28:04.821 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.821 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_misc 2>&1 | tee ceph_test_neorados_misc.log | sed "s/^/ misc: /"'
2026-03-10T07:28:04.822 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.822 INFO:tasks.workunit.client.0.vm00.stderr:+ return
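NOTE: Because every subshell tees through ceph_test_<name>.log in the workunit's working directory, each suite's complete, un-interleaved output survives even though the prefixed copies in this log are shuffled by scheduling. To review one suite after the run, something like the following works (file name illustrative; the gtest verdict markers are matched loosely since their column padding can vary):

    grep -nE '\[ +(RUN|OK|FAILED) +\]' ceph_test_rados_api_io.log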
2026-03-10T07:28:04.822 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.822 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.822 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s pool
2026-03-10T07:28:04.823 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' pool'
2026-03-10T07:28:04.823 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_misc
2026-03-10T07:28:04.825 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_misc.log
2026-03-10T07:28:04.826 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ misc: /'
2026-03-10T07:28:04.836 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.836 INFO:tasks.workunit.client.0.vm00.stderr:++ echo pool
2026-03-10T07:28:04.839 INFO:tasks.workunit.client.0.vm00.stdout:test pool on pid 60591
2026-03-10T07:28:04.839 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=pool
2026-03-10T07:28:04.839 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60591
2026-03-10T07:28:04.839 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test pool on pid 60591'
2026-03-10T07:28:04.839 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60591
2026-03-10T07:28:04.839 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T07:28:04.839 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.839 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s read_operations
2026-03-10T07:28:04.839 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' read_operations'
2026-03-10T07:28:04.839 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_pool 2>&1 | tee ceph_test_neorados_pool.log | sed "s/^/ pool: /"'
2026-03-10T07:28:04.839 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.839 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.839 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.839 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.840 INFO:tasks.workunit.client.0.vm00.stderr:++ echo read_operations
2026-03-10T07:28:04.840 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_pool
2026-03-10T07:28:04.846 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_pool.log
2026-03-10T07:28:04.846 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ pool: /'
2026-03-10T07:28:04.846 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.848 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=read_operations
2026-03-10T07:28:04.848 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60599
2026-03-10T07:28:04.848 INFO:tasks.workunit.client.0.vm00.stdout:test read_operations on pid 60599
2026-03-10T07:28:04.848 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test read_operations on pid 60599'
2026-03-10T07:28:04.848 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60599
2026-03-10T07:28:04.848 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T07:28:04.848 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.853 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_read_operations 2>&1 | tee ceph_test_neorados_read_operations.log | sed "s/^/ read_operations: /"'
2026-03-10T07:28:04.853 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s snapshots
2026-03-10T07:28:04.853 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' snapshots'
2026-03-10T07:28:04.853 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.854 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.854 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.854 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.854 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.854 INFO:tasks.workunit.client.0.vm00.stderr:++ echo snapshots
2026-03-10T07:28:04.856 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_read_operations
2026-03-10T07:28:04.860 INFO:tasks.workunit.client.0.vm00.stdout:test snapshots on pid 60619
2026-03-10T07:28:04.860 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=snapshots
2026-03-10T07:28:04.860 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60619
2026-03-10T07:28:04.860 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test snapshots on pid 60619'
2026-03-10T07:28:04.860 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60619
2026-03-10T07:28:04.860 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T07:28:04.860 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.861 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_read_operations.log
2026-03-10T07:28:04.861 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ read_operations: /'
2026-03-10T07:28:04.866 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s watch_notify
2026-03-10T07:28:04.870 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_snapshots 2>&1 | tee ceph_test_neorados_snapshots.log | sed "s/^/ snapshots: /"'
2026-03-10T07:28:04.871 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' watch_notify'
2026-03-10T07:28:04.871 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.872 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.872 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.872 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.872 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ snapshots: /'
2026-03-10T07:28:04.872 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_snapshots.log
2026-03-10T07:28:04.874 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.878 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_snapshots
2026-03-10T07:28:04.878 INFO:tasks.workunit.client.0.vm00.stderr:++ echo watch_notify
2026-03-10T07:28:04.880 INFO:tasks.workunit.client.0.vm00.stdout:test watch_notify on pid 60645
2026-03-10T07:28:04.880 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=watch_notify
2026-03-10T07:28:04.880 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60645
2026-03-10T07:28:04.880 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test watch_notify on pid 60645'
2026-03-10T07:28:04.880 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60645
2026-03-10T07:28:04.880 INFO:tasks.workunit.client.0.vm00.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T07:28:04.880 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.882 INFO:tasks.workunit.client.0.vm00.stderr:++ printf %25s write_operations
2026-03-10T07:28:04.883 INFO:tasks.workunit.client.0.vm00.stderr:+ r=' write_operations'
2026-03-10T07:28:04.883 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_watch_notify 2>&1 | tee ceph_test_neorados_watch_notify.log | sed "s/^/ watch_notify: /"'
2026-03-10T07:28:04.883 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.883 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.883 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.884 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.887 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_watch_notify
2026-03-10T07:28:04.887 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_watch_notify.log
2026-03-10T07:28:04.887 INFO:tasks.workunit.client.0.vm00.stderr:++ awk '{print $1}'
2026-03-10T07:28:04.887 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ watch_notify: /'
2026-03-10T07:28:04.890 INFO:tasks.workunit.client.0.vm00.stderr:++ echo write_operations
2026-03-10T07:28:04.894 INFO:tasks.workunit.client.0.vm00.stderr:+ ff=write_operations
2026-03-10T07:28:04.894 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60671
2026-03-10T07:28:04.895 INFO:tasks.workunit.client.0.vm00.stdout:test write_operations on pid 60671
2026-03-10T07:28:04.895 INFO:tasks.workunit.client.0.vm00.stderr:+ echo 'test write_operations on pid 60671'
2026-03-10T07:28:04.895 INFO:tasks.workunit.client.0.vm00.stderr:+ pids[$f]=60671
2026-03-10T07:28:04.895 INFO:tasks.workunit.client.0.vm00.stderr:+ ret=0
2026-03-10T07:28:04.895 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T07:28:04.895 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}"
2026-03-10T07:28:04.895 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59774
2026-03-10T07:28:04.895 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59774
2026-03-10T07:28:04.899 INFO:tasks.workunit.client.0.vm00.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_write_operations 2>&1 | tee ceph_test_neorados_write_operations.log | sed "s/^/ write_operations: /"'
2026-03-10T07:28:04.902 INFO:tasks.workunit.client.0.vm00.stderr:+ '[' -z '' ']'
2026-03-10T07:28:04.902 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.902 INFO:tasks.workunit.client.0.vm00.stderr:+ case $- in
2026-03-10T07:28:04.902 INFO:tasks.workunit.client.0.vm00.stderr:+ return
2026-03-10T07:28:04.902 INFO:tasks.workunit.client.0.vm00.stderr:+ sed 's/^/ write_operations: /'
2026-03-10T07:28:04.903 INFO:tasks.workunit.client.0.vm00.stderr:+ tee ceph_test_neorados_write_operations.log
2026-03-10T07:28:04.904 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph_test_neorados_write_operations
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [==========] Running 12 tests from 1 test suite.
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [----------] Global test environment set-up.
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [----------] 12 tests from AsioRados
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncReadCallback
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncReadCallback (1 ms)
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncReadFuture
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncReadFuture (1 ms)
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncReadYield
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncReadYield (1 ms)
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteCallback
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncWriteCallback (13 ms)
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteFuture
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncWriteFuture (15 ms)
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteYield
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncWriteYield (17 ms)
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncReadOperationCallback
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncReadOperationCallback (0 ms)
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncReadOperationFuture
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncReadOperationFuture (1 ms)
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncReadOperationYield
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncReadOperationYield (0 ms)
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteOperationCallback
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncWriteOperationCallback (14 ms)
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteOperationFuture
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncWriteOperationFuture (16 ms)
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteOperationYield
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ OK ] AsioRados.AsyncWriteOperationYield (14 ms)
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [----------] 12 tests from AsioRados (93 ms total)
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio:
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [----------] Global test environment tear-down
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [==========] 12 tests from 1 test suite ran. (1776 ms total)
2026-03-10T07:28:05.868 INFO:tasks.workunit.client.0.vm00.stdout: api_asio: [ PASSED ] 12 tests.
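NOTE: The api_asio block above is the first suite to print a complete gtest summary; the other suites are still running, which is why their output continues in the journal capture below. Meanwhile the launcher has already entered its join phase (the ret=0, for t in "${!pids[@]}", pid=59774, wait 59774 lines in the trace): it waits on each recorded PID and folds the exit statuses into one overall result. A minimal sketch of that join loop, matching the variable names visible in the trace (the error reporting is illustrative, not the verbatim script):

    ret=0
    for t in "${!pids[@]}"; do
        pid=${pids[$t]}
        if ! wait "$pid"; then              # wait returns the child's exit status
            echo "error in suite $t (pid $pid)"
            ret=1
        fi
    done
    exit $ret

Note also the --gtest_output=xml: argument on every ceph_test_rados_* spawn line: each of those suites additionally writes a machine-readable report under /home/ubuntu/cephtest/archive/unit_test_xml_report/, which ends up in the job archive.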
2026-03-10T07:28:05.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:05 vm00 bash[28005]: audit 2026-03-10T07:28:04.428576+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch
2026-03-10T07:28:05.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:05 vm00 bash[28005]: audit 2026-03-10T07:28:04.464130+0000 mon.b (mon.1) 30 : audit [DBG] from='client.? 192.168.123.100:0/1004993114' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch
2026-03-10T07:28:05.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:05 vm00 bash[28005]: cluster 2026-03-10T07:28:04.532710+0000 mon.a (mon.0) 804 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in
2026-03-10T07:28:05.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:05 vm00 bash[28005]: audit 2026-03-10T07:28:04.568677+0000 mon.a (mon.0) 805 : audit [INF] from='client.? 192.168.123.100:0/2808442191' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59629-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:05.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:05 vm00 bash[28005]: cluster 2026-03-10T07:28:04.573200+0000 mgr.y (mgr.24407) 106 : cluster [DBG] pgmap v62: 260 pgs: 128 unknown, 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:28:05.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:05 vm00 bash[28005]: audit 2026-03-10T07:28:04.576679+0000 mon.a (mon.0) 806 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-59704-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:05.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:05 vm00 bash[28005]: audit 2026-03-10T07:28:04.576766+0000 mon.a (mon.0) 807 : audit [INF] from='client.? 192.168.123.100:0/698788281' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-59782-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:05.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:05 vm00 bash[28005]: audit 2026-03-10T07:28:04.608866+0000 mon.a (mon.0) 808 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch
2026-03-10T07:28:05.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:05 vm00 bash[28005]: audit 2026-03-10T07:28:04.610356+0000 mon.c (mon.2) 79 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch
2026-03-10T07:28:05.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:05 vm00 bash[28005]: audit 2026-03-10T07:28:04.612001+0000 mon.a (mon.0) 809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch
2026-03-10T07:28:05.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:05 vm00 bash[28005]: audit 2026-03-10T07:28:04.615083+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:05.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:05 vm00 bash[28005]: audit 2026-03-10T07:28:04.616864+0000 mon.a (mon.0) 810 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:05.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:05 vm00 bash[28005]: audit 2026-03-10T07:28:04.791575+0000 mon.c (mon.2) 81 : audit [INF] from='client.? 192.168.123.100:0/1785602416' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60490-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:05.888 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:05 vm00 bash[28005]: audit 2026-03-10T07:28:04.792233+0000 mon.a (mon.0) 811 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60490-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:05 vm00 bash[28005]: audit 2026-03-10T07:28:04.838595+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.100:0/2132039165' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60537-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:05 vm00 bash[28005]: audit 2026-03-10T07:28:04.839281+0000 mon.a (mon.0) 812 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60537-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.428576+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch
2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.464130+0000 mon.b (mon.1) 30 : audit [DBG] from='client.? 192.168.123.100:0/1004993114' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch
2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: cluster 2026-03-10T07:28:04.532710+0000 mon.a (mon.0) 804 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in
2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.568677+0000 mon.a (mon.0) 805 : audit [INF] from='client.? 192.168.123.100:0/2808442191' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59629-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
192.168.123.100:0/2808442191' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59629-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: cluster 2026-03-10T07:28:04.573200+0000 mgr.y (mgr.24407) 106 : cluster [DBG] pgmap v62: 260 pgs: 128 unknown, 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: cluster 2026-03-10T07:28:04.573200+0000 mgr.y (mgr.24407) 106 : cluster [DBG] pgmap v62: 260 pgs: 128 unknown, 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.576679+0000 mon.a (mon.0) 806 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-59704-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.576679+0000 mon.a (mon.0) 806 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-59704-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.576766+0000 mon.a (mon.0) 807 : audit [INF] from='client.? 192.168.123.100:0/698788281' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-59782-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.576766+0000 mon.a (mon.0) 807 : audit [INF] from='client.? 192.168.123.100:0/698788281' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-59782-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.608866+0000 mon.a (mon.0) 808 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.608866+0000 mon.a (mon.0) 808 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.610356+0000 mon.c (mon.2) 79 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.610356+0000 mon.c (mon.2) 79 : audit [INF] from='client.? 
192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.612001+0000 mon.a (mon.0) 809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.612001+0000 mon.a (mon.0) 809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.615083+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.615083+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.616864+0000 mon.a (mon.0) 810 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.616864+0000 mon.a (mon.0) 810 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.791575+0000 mon.c (mon.2) 81 : audit [INF] from='client.? 192.168.123.100:0/1785602416' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60490-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.791575+0000 mon.c (mon.2) 81 : audit [INF] from='client.? 192.168.123.100:0/1785602416' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60490-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.792233+0000 mon.a (mon.0) 811 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60490-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.792233+0000 mon.a (mon.0) 811 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60490-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.838595+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.100:0/2132039165' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60537-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.838595+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.100:0/2132039165' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60537-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.839281+0000 mon.a (mon.0) 812 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60537-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:05.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:05 vm00 bash[20701]: audit 2026-03-10T07:28:04.839281+0000 mon.a (mon.0) 812 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60537-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:06.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.428576+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.428576+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.464130+0000 mon.b (mon.1) 30 : audit [DBG] from='client.? 192.168.123.100:0/1004993114' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.464130+0000 mon.b (mon.1) 30 : audit [DBG] from='client.? 
192.168.123.100:0/1004993114' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: cluster 2026-03-10T07:28:04.532710+0000 mon.a (mon.0) 804 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: cluster 2026-03-10T07:28:04.532710+0000 mon.a (mon.0) 804 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.568677+0000 mon.a (mon.0) 805 : audit [INF] from='client.? 192.168.123.100:0/2808442191' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59629-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.568677+0000 mon.a (mon.0) 805 : audit [INF] from='client.? 192.168.123.100:0/2808442191' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59629-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: cluster 2026-03-10T07:28:04.573200+0000 mgr.y (mgr.24407) 106 : cluster [DBG] pgmap v62: 260 pgs: 128 unknown, 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: cluster 2026-03-10T07:28:04.573200+0000 mgr.y (mgr.24407) 106 : cluster [DBG] pgmap v62: 260 pgs: 128 unknown, 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.576679+0000 mon.a (mon.0) 806 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-59704-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.576679+0000 mon.a (mon.0) 806 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-59704-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.576766+0000 mon.a (mon.0) 807 : audit [INF] from='client.? 192.168.123.100:0/698788281' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-59782-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.576766+0000 mon.a (mon.0) 807 : audit [INF] from='client.? 192.168.123.100:0/698788281' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-59782-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.608866+0000 mon.a (mon.0) 808 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.608866+0000 mon.a (mon.0) 808 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.610356+0000 mon.c (mon.2) 79 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.610356+0000 mon.c (mon.2) 79 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.612001+0000 mon.a (mon.0) 809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.612001+0000 mon.a (mon.0) 809 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.615083+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.615083+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.616864+0000 mon.a (mon.0) 810 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.616864+0000 mon.a (mon.0) 810 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.791575+0000 mon.c (mon.2) 81 : audit [INF] from='client.? 
192.168.123.100:0/1785602416' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60490-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.791575+0000 mon.c (mon.2) 81 : audit [INF] from='client.? 192.168.123.100:0/1785602416' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60490-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.792233+0000 mon.a (mon.0) 811 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60490-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.792233+0000 mon.a (mon.0) 811 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60490-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.838595+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.100:0/2132039165' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60537-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.838595+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.100:0/2132039165' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60537-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.839281+0000 mon.a (mon.0) 812 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60537-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:06.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:05 vm03 bash[23382]: audit 2026-03-10T07:28:04.839281+0000 mon.a (mon.0) 812 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60537-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [==========] Running 11 tests from 3 test suites. 2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [----------] Global test environment set-up. 
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [----------] 7 tests from LibRadosList
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN      ] LibRadosList.ListObjects
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [       OK ] LibRadosList.ListObjects (176 ms)
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN      ] LibRadosList.ListObjectsZeroInName
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [       OK ] LibRadosList.ListObjectsZeroInName (45 ms)
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN      ] LibRadosList.ListObjectsNS
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: myset foo1,foo2,foo3
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo1
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo2
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo3
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: myset foo1,foo4,foo5
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo4
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo5
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo1
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: myset foo6,foo7
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo7
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo6
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: myset :foo1,:foo2,:foo3,ns1:foo1,ns1:foo4,ns1:foo5,ns2:foo6,ns2:foo7
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: ns1:foo4
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: ns1:foo5
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: ns2:foo7
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: ns2:foo6
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: ns1:foo1
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: :foo1
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: :foo2
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: :foo3
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [       OK ] LibRadosList.ListObjectsNS (137 ms)
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN      ] LibRadosList.ListObjectsStart
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 1 0
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 10 0
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 13 0
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 7 0
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 14 0
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 0 0
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 15 0
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 11 0
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 5 0
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 8 0
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 6 0
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 3 0
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 4 0
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 12 0
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 9 0
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 2 0
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: have 1 expect one of 0,1,10,11,12,13,14,15,2,3,4,5,6,7,8,9
2026-03-10T07:28:06.281 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [       OK ] LibRadosList.ListObjectsStart (151 ms)
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN      ] LibRadosList.ListObjectsCursor
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: x cursor=MIN
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=1 cursor=7:02547ec2:::1:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=10 cursor=7:52ea6a34:::10:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=13 cursor=7:566253c9:::13:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=7 cursor=7:5c6b0b28:::7:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=14 cursor=7:62a1935d:::14:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=0 cursor=7:6cac518f:::0:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=15 cursor=7:863748b0:::15:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=11 cursor=7:89d3ae78:::11:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=5 cursor=7:b29083e3:::5:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=8 cursor=7:bd63b0f1:::8:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=6 cursor=7:c4fdafeb:::6:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=3 cursor=7:cfc208b3:::3:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=4 cursor=7:d83876eb:::4:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=12 cursor=7:de5d7c5f:::12:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=9 cursor=7:e960b815:::9:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > oid=2 cursor=7:f905c69b:::2:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: FIRST> seek to MIN oid=1
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=1 cursor=7:02547ec2:::1:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:02547ec2:::1:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:02547ec2:::1:head -> 1
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=10 cursor=7:52ea6a34:::10:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:52ea6a34:::10:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:52ea6a34:::10:head -> 10
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=13 cursor=7:566253c9:::13:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:566253c9:::13:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:566253c9:::13:head -> 13
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=7 cursor=7:5c6b0b28:::7:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:5c6b0b28:::7:head
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:5c6b0b28:::7:head -> 7
2026-03-10T07:28:06.282 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=14 cursor=7:62a1935d:::14:head
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:62a1935d:::14:head
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:62a1935d:::14:head -> 14
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=0 cursor=7:6cac518f:::0:head
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:6cac518f:::0:head
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:6cac518f:::0:head -> 0
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=15 cursor=7:863748b0:::15:head
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:863748b0:::15:head
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:863748b0:::15:head -> 15
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=11 cursor=7:89d3ae78:::11:head
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:89d3ae78:::11:head
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:89d3ae78:::11:head -> 11
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=5 cursor=7:b29083e3:::5:head
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:b29083e3:::5:head
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:b29083e3:::5:head -> 5
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=8 cursor=7:bd63b0f1:::8:head
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:bd63b0f1:::8:head
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:bd63b0f1:::8:head -> 8
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=6 cursor=7:c4fdafeb:::6:head
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:c4fdafeb:::6:head
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:c4fdafeb:::6:head -> 6
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=3 cursor=7:cfc208b3:::3:head
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:cfc208b3:::3:head
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:cfc208b3:::3:head -> 3
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=4 cursor=7:d83876eb:::4:head
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:d83876eb:::4:head
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:d83876eb:::4:head -> 4
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=12 cursor=7:de5d7c5f:::12:head
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:de5d7c5f:::12:head
2026-03-10T07:28:06.322 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:de5d7c5f:::12:head -> 12
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=9 cursor=7:e960b815:::9:head
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:e960b815:::9:head
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:e960b815:::9:head -> 9
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : oid=2 cursor=7:f905c69b:::2:head
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:f905c69b:::2:head
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:f905c69b:::2:head -> 2
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:566253c9:::13:head
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=7:566253c9:::13:head expected=7:566253c9:::13:head
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:566253c9:::13:head -> 13
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=13 expected=13
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:52ea6a34:::10:head
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=7:52ea6a34:::10:head expected=7:52ea6a34:::10:head
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:52ea6a34:::10:head -> 10
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=10 expected=10
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:b29083e3:::5:head
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=7:b29083e3:::5:head expected=7:b29083e3:::5:head
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:b29083e3:::5:head -> 5
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=5 expected=5
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:bd63b0f1:::8:head
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=7:bd63b0f1:::8:head expected=7:bd63b0f1:::8:head
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:bd63b0f1:::8:head -> 8
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=8 expected=8
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:f905c69b:::2:head
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=7:f905c69b:::2:head expected=7:f905c69b:::2:head
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:f905c69b:::2:head -> 2
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=2 expected=2
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:e960b815:::9:head
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=7:e960b815:::9:head expected=7:e960b815:::9:head
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:e960b815:::9:head -> 9
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=9 expected=9
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:863748b0:::15:head
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=7:863748b0:::15:head expected=7:863748b0:::15:head
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:863748b0:::15:head -> 15
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=15 expected=15
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:6cac518f:::0:head
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=7:6cac518f:::0:head expected=7:6cac518f:::0:head
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:6cac518f:::0:head -> 0
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=0 expected=0
2026-03-10T07:28:06.323 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:62a1935d:::14:head
2026-03-10T07:28:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: cluster 2026-03-10T07:28:05.435833+0000 mon.a (mon.0) 813 : cluster [WRN] Health check failed: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:28:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.512137+0000 mon.a (mon.0) 814 : audit [INF] from='client.? 192.168.123.100:0/2808442191' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59629-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:07.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.512204+0000 mon.a (mon.0) 815 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-59704-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:07.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.512492+0000 mon.a (mon.0) 816 : audit [INF] from='client.? 192.168.123.100:0/698788281' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-59782-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:07.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.512583+0000 mon.a (mon.0) 817 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:28:07.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.512676+0000 mon.a (mon.0) 818 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60490-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:28:07.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.512813+0000 mon.a (mon.0) 819 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60537-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:28:07.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.568433+0000 mon.b (mon.1) 31 : audit [INF] from='client.? 192.168.123.100:0/3812604880' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59637-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.569009+0000 mon.b (mon.1) 32 : audit [INF] from='client.? 192.168.123.100:0/504955467' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: cluster 2026-03-10T07:28:05.614904+0000 mon.a (mon.0) 820 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in
2026-03-10T07:28:07.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.654635+0000 mon.c (mon.2) 83 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60006-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch
2026-03-10T07:28:07.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.656988+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 192.168.123.100:0/1785602416' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60490-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60490-1"}]: dispatch
2026-03-10T07:28:07.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.657160+0000 mon.c (mon.2) 85 : audit [INF] from='client.? 192.168.123.100:0/2132039165' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60537-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60537-1"}]: dispatch
2026-03-10T07:28:07.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.671293+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.100:0/2974621785' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59645-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.676719+0000 mon.c (mon.2) 87 : audit [INF] from='client.? 192.168.123.100:0/2663216387' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59650-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.676846+0000 mon.c (mon.2) 88 : audit [INF] from='client.? 192.168.123.100:0/1808518326' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-59738-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.676935+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 192.168.123.100:0/4063648944' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-59712-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.677015+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 192.168.123.100:0/3128733323' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-59879-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.677085+0000 mon.c (mon.2) 91 : audit [INF] from='client.? 192.168.123.100:0/202164564' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.677174+0000 mon.c (mon.2) 92 : audit [INF] from='client.? 192.168.123.100:0/430241567' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-59956-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.677382+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.100:0/2829140741' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-59964-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.677538+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.100:0/3250380112' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60028-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.677621+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 192.168.123.100:0/495247001' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60027-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.684044+0000 mon.a (mon.0) 821 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60006-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch
2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.684353+0000 mon.a (mon.0) 822 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60490-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60490-1"}]: dispatch
2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.684501+0000 mon.a (mon.0) 823 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60537-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60537-1"}]: dispatch
2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.691043+0000 mon.a (mon.0) 824 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.691102+0000 mon.a (mon.0) 825 : audit [INF] from='client.? 192.168.123.100:0/2611126073' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60001-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.691143+0000 mon.a (mon.0) 826 : audit [INF] from='client.? 192.168.123.100:0/1571320800' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm00-60155-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.691196+0000 mon.a (mon.0) 827 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59637-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.691310+0000 mon.a (mon.0) 828 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.691637+0000 mon.a (mon.0) 829 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59645-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.691637+0000 mon.a (mon.0) 829 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59645-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.691739+0000 mon.a (mon.0) 830 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59650-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.691739+0000 mon.a (mon.0) 830 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59650-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.691836+0000 mon.a (mon.0) 831 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-59738-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.691836+0000 mon.a (mon.0) 831 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-59738-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.691928+0000 mon.a (mon.0) 832 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-59712-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.691928+0000 mon.a (mon.0) 832 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-59712-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.692043+0000 mon.a (mon.0) 833 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-59879-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.692043+0000 mon.a (mon.0) 833 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-59879-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.692130+0000 mon.a (mon.0) 834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.692130+0000 mon.a (mon.0) 834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.692216+0000 mon.a (mon.0) 835 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-59956-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.692216+0000 mon.a (mon.0) 835 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-59956-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.704841+0000 mon.a (mon.0) 836 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-59964-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.704841+0000 mon.a (mon.0) 836 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-59964-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.704963+0000 mon.a (mon.0) 837 : audit [INF] from='client.? 192.168.123.100:0/1550929222' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-59973-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.704963+0000 mon.a (mon.0) 837 : audit [INF] from='client.? 192.168.123.100:0/1550929222' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-59973-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.705723+0000 mon.a (mon.0) 838 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60028-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.705723+0000 mon.a (mon.0) 838 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60028-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.705900+0000 mon.a (mon.0) 839 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60027-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:05.705900+0000 mon.a (mon.0) 839 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60027-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:06.421073+0000 mon.a (mon.0) 840 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:06 vm00 bash[28005]: audit 2026-03-10T07:28:06.421073+0000 mon.a (mon.0) 840 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: cluster 2026-03-10T07:28:05.435833+0000 mon.a (mon.0) 813 : cluster [WRN] Health check failed: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: cluster 2026-03-10T07:28:05.435833+0000 mon.a (mon.0) 813 : cluster [WRN] Health check failed: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.512137+0000 mon.a (mon.0) 814 : audit [INF] from='client.? 192.168.123.100:0/2808442191' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59629-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.512137+0000 mon.a (mon.0) 814 : audit [INF] from='client.? 192.168.123.100:0/2808442191' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59629-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.512204+0000 mon.a (mon.0) 815 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-59704-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.512204+0000 mon.a (mon.0) 815 : audit [INF] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-59704-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.512492+0000 mon.a (mon.0) 816 : audit [INF] from='client.? 192.168.123.100:0/698788281' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-59782-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.512492+0000 mon.a (mon.0) 816 : audit [INF] from='client.? 192.168.123.100:0/698788281' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-59782-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.512583+0000 mon.a (mon.0) 817 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.512583+0000 mon.a (mon.0) 817 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.512676+0000 mon.a (mon.0) 818 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60490-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.512676+0000 mon.a (mon.0) 818 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60490-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.512813+0000 mon.a (mon.0) 819 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60537-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.512813+0000 mon.a (mon.0) 819 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60537-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.568433+0000 mon.b (mon.1) 31 : audit [INF] from='client.? 
192.168.123.100:0/3812604880' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59637-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.568433+0000 mon.b (mon.1) 31 : audit [INF] from='client.? 192.168.123.100:0/3812604880' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59637-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.569009+0000 mon.b (mon.1) 32 : audit [INF] from='client.? 192.168.123.100:0/504955467' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.569009+0000 mon.b (mon.1) 32 : audit [INF] from='client.? 192.168.123.100:0/504955467' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: cluster 2026-03-10T07:28:05.614904+0000 mon.a (mon.0) 820 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: cluster 2026-03-10T07:28:05.614904+0000 mon.a (mon.0) 820 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.654635+0000 mon.c (mon.2) 83 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60006-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.654635+0000 mon.c (mon.2) 83 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60006-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.656988+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 192.168.123.100:0/1785602416' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60490-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60490-1"}]: dispatch 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.656988+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 
192.168.123.100:0/1785602416' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60490-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60490-1"}]: dispatch 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.657160+0000 mon.c (mon.2) 85 : audit [INF] from='client.? 192.168.123.100:0/2132039165' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60537-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60537-1"}]: dispatch 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.657160+0000 mon.c (mon.2) 85 : audit [INF] from='client.? 192.168.123.100:0/2132039165' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60537-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60537-1"}]: dispatch 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.671293+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.100:0/2974621785' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59645-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.671293+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.100:0/2974621785' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59645-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.676719+0000 mon.c (mon.2) 87 : audit [INF] from='client.? 192.168.123.100:0/2663216387' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59650-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.676719+0000 mon.c (mon.2) 87 : audit [INF] from='client.? 192.168.123.100:0/2663216387' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59650-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.676846+0000 mon.c (mon.2) 88 : audit [INF] from='client.? 192.168.123.100:0/1808518326' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-59738-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.676846+0000 mon.c (mon.2) 88 : audit [INF] from='client.? 192.168.123.100:0/1808518326' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-59738-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.676935+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 
192.168.123.100:0/4063648944' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-59712-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.676935+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 192.168.123.100:0/4063648944' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-59712-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.677015+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 192.168.123.100:0/3128733323' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-59879-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.677015+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 192.168.123.100:0/3128733323' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-59879-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.677085+0000 mon.c (mon.2) 91 : audit [INF] from='client.? 192.168.123.100:0/202164564' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.677085+0000 mon.c (mon.2) 91 : audit [INF] from='client.? 192.168.123.100:0/202164564' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.677174+0000 mon.c (mon.2) 92 : audit [INF] from='client.? 192.168.123.100:0/430241567' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-59956-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.677174+0000 mon.c (mon.2) 92 : audit [INF] from='client.? 192.168.123.100:0/430241567' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-59956-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.677382+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.100:0/2829140741' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-59964-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.677382+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 
192.168.123.100:0/2829140741' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-59964-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.677538+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.100:0/3250380112' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60028-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.677538+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.100:0/3250380112' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60028-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.677621+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 192.168.123.100:0/495247001' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60027-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.677621+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 192.168.123.100:0/495247001' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60027-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.684044+0000 mon.a (mon.0) 821 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60006-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.684044+0000 mon.a (mon.0) 821 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60006-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.684353+0000 mon.a (mon.0) 822 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60490-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60490-1"}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.684353+0000 mon.a (mon.0) 822 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60490-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60490-1"}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.684501+0000 mon.a (mon.0) 823 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60537-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60537-1"}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.684501+0000 mon.a (mon.0) 823 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60537-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60537-1"}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.691043+0000 mon.a (mon.0) 824 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.691043+0000 mon.a (mon.0) 824 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.691102+0000 mon.a (mon.0) 825 : audit [INF] from='client.? 192.168.123.100:0/2611126073' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60001-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.691102+0000 mon.a (mon.0) 825 : audit [INF] from='client.? 192.168.123.100:0/2611126073' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60001-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.691143+0000 mon.a (mon.0) 826 : audit [INF] from='client.? 192.168.123.100:0/1571320800' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm00-60155-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.691143+0000 mon.a (mon.0) 826 : audit [INF] from='client.? 192.168.123.100:0/1571320800' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm00-60155-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.691196+0000 mon.a (mon.0) 827 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59637-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.691196+0000 mon.a (mon.0) 827 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59637-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.691310+0000 mon.a (mon.0) 828 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.691310+0000 mon.a (mon.0) 828 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.691637+0000 mon.a (mon.0) 829 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59645-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.691637+0000 mon.a (mon.0) 829 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59645-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.691739+0000 mon.a (mon.0) 830 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59650-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.691739+0000 mon.a (mon.0) 830 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59650-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.691836+0000 mon.a (mon.0) 831 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-59738-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.691836+0000 mon.a (mon.0) 831 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-59738-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.691928+0000 mon.a (mon.0) 832 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-59712-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.691928+0000 mon.a (mon.0) 832 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-59712-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.692043+0000 mon.a (mon.0) 833 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-59879-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.692043+0000 mon.a (mon.0) 833 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-59879-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.692130+0000 mon.a (mon.0) 834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.692130+0000 mon.a (mon.0) 834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.692216+0000 mon.a (mon.0) 835 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-59956-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.692216+0000 mon.a (mon.0) 835 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-59956-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.704841+0000 mon.a (mon.0) 836 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-59964-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.704841+0000 mon.a (mon.0) 836 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-59964-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.704963+0000 mon.a (mon.0) 837 : audit [INF] from='client.? 192.168.123.100:0/1550929222' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-59973-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.704963+0000 mon.a (mon.0) 837 : audit [INF] from='client.? 
192.168.123.100:0/1550929222' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-59973-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.705723+0000 mon.a (mon.0) 838 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60028-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.705723+0000 mon.a (mon.0) 838 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60028-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.705900+0000 mon.a (mon.0) 839 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60027-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:05.705900+0000 mon.a (mon.0) 839 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60027-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:07.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:06.421073+0000 mon.a (mon.0) 840 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:07.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:06 vm00 bash[20701]: audit 2026-03-10T07:28:06.421073+0000 mon.a (mon.0) 840 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:07.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: cluster 2026-03-10T07:28:05.435833+0000 mon.a (mon.0) 813 : cluster [WRN] Health check failed: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:07.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: cluster 2026-03-10T07:28:05.435833+0000 mon.a (mon.0) 813 : cluster [WRN] Health check failed: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:07.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.512137+0000 mon.a (mon.0) 814 : audit [INF] from='client.? 192.168.123.100:0/2808442191' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59629-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:07.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.512137+0000 mon.a (mon.0) 814 : audit [INF] from='client.? 
192.168.123.100:0/2808442191' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm00-59629-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:07.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.512204+0000 mon.a (mon.0) 815 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-59704-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:07.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.512204+0000 mon.a (mon.0) 815 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm00-59704-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:07.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.512492+0000 mon.a (mon.0) 816 : audit [INF] from='client.? 192.168.123.100:0/698788281' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-59782-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:07.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.512492+0000 mon.a (mon.0) 816 : audit [INF] from='client.? 192.168.123.100:0/698788281' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm00-59782-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:07.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.512583+0000 mon.a (mon.0) 817 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:07.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.512583+0000 mon.a (mon.0) 817 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:07.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.512676+0000 mon.a (mon.0) 818 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60490-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:07.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.512676+0000 mon.a (mon.0) 818 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm00-60490-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:07.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.512813+0000 mon.a (mon.0) 819 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm00-60537-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:28:07.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.568433+0000 mon.b (mon.1) 31 : audit [INF] from='client.? 192.168.123.100:0/3812604880' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59637-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.569009+0000 mon.b (mon.1) 32 : audit [INF] from='client.? 192.168.123.100:0/504955467' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: cluster 2026-03-10T07:28:05.614904+0000 mon.a (mon.0) 820 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.654635+0000 mon.c (mon.2) 83 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60006-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.656988+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 192.168.123.100:0/1785602416' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60490-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60490-1"}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.657160+0000 mon.c (mon.2) 85 : audit [INF] from='client.? 192.168.123.100:0/2132039165' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60537-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60537-1"}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.671293+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.100:0/2974621785' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59645-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.676719+0000 mon.c (mon.2) 87 : audit [INF] from='client.? 192.168.123.100:0/2663216387' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59650-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.676846+0000 mon.c (mon.2) 88 : audit [INF] from='client.? 192.168.123.100:0/1808518326' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-59738-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.676935+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 192.168.123.100:0/4063648944' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-59712-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.677015+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 192.168.123.100:0/3128733323' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-59879-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.677085+0000 mon.c (mon.2) 91 : audit [INF] from='client.? 192.168.123.100:0/202164564' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.677174+0000 mon.c (mon.2) 92 : audit [INF] from='client.? 192.168.123.100:0/430241567' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-59956-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.677382+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.100:0/2829140741' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-59964-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.677538+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.100:0/3250380112' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60028-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.677621+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 192.168.123.100:0/495247001' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60027-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.684044+0000 mon.a (mon.0) 821 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60006-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.684353+0000 mon.a (mon.0) 822 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60490-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60490-1"}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.684501+0000 mon.a (mon.0) 823 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60537-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60537-1"}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.691043+0000 mon.a (mon.0) 824 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.691102+0000 mon.a (mon.0) 825 : audit [INF] from='client.? 192.168.123.100:0/2611126073' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60001-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.691143+0000 mon.a (mon.0) 826 : audit [INF] from='client.? 192.168.123.100:0/1571320800' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm00-60155-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.691196+0000 mon.a (mon.0) 827 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59637-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.691310+0000 mon.a (mon.0) 828 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.691637+0000 mon.a (mon.0) 829 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59645-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.691739+0000 mon.a (mon.0) 830 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59650-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.691836+0000 mon.a (mon.0) 831 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-59738-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.691928+0000 mon.a (mon.0) 832 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-59712-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.692043+0000 mon.a (mon.0) 833 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-59879-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.692130+0000 mon.a (mon.0) 834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.692216+0000 mon.a (mon.0) 835 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-59956-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.704841+0000 mon.a (mon.0) 836 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-59964-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
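The pool-setup commands being audited above are issued by the test binaries through the librados mon-command interface. For reference only (not output captured in this run), the same operations expressed as ceph CLI calls would look roughly like the following; the names are copied from the entries above, and the trailing flag mirrors the "yes_i_really_mean_it": true field in the command JSON:
  # 2+1 erasure-code profile with per-OSD failure domain
  ceph osd erasure-code-profile set testprofile-ListObjectsvm00-60537-1 k=2 m=1 crush-failure-domain=osd
  # erasure-coded pool with pg_num=8 and pgp_num=8 backed by that profile
  ceph osd pool create ListObjectsvm00-60537-1 8 8 erasure testprofile-ListObjectsvm00-60537-1
  # tag the pool with the 'rados' application before use
  ceph osd pool application enable ListObjectsvm00-60537-1 rados --yes-i-really-mean-it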
2026-03-10T07:28:07.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.704963+0000 mon.a (mon.0) 837 : audit [INF] from='client.? 192.168.123.100:0/1550929222' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-59973-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.705723+0000 mon.a (mon.0) 838 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60028-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:05.705900+0000 mon.a (mon.0) 839 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60027-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:07.268 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:06 vm03 bash[23382]: audit 2026-03-10T07:28:06.421073+0000 mon.a (mon.0) 840 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:07.927 INFO:tasks.workunit.client.0.vm00.stdout: cls: Running main() from gmock_main.cc
2026-03-10T07:28:07.928 INFO:tasks.workunit.client.0.vm00.stdout: cls: [==========] Running 1 test from 1 test suite.
2026-03-10T07:28:07.928 INFO:tasks.workunit.client.0.vm00.stdout: cls: [----------] Global test environment set-up.
2026-03-10T07:28:07.928 INFO:tasks.workunit.client.0.vm00.stdout: cls: [----------] 1 test from NeoRadosCls
2026-03-10T07:28:07.928 INFO:tasks.workunit.client.0.vm00.stdout: cls: [ RUN ] NeoRadosCls.DNE
2026-03-10T07:28:07.928 INFO:tasks.workunit.client.0.vm00.stdout: cls: [ OK ] NeoRadosCls.DNE (3235 ms)
2026-03-10T07:28:07.928 INFO:tasks.workunit.client.0.vm00.stdout: cls: [----------] 1 test from NeoRadosCls (3235 ms total)
2026-03-10T07:28:07.928 INFO:tasks.workunit.client.0.vm00.stdout: cls:
2026-03-10T07:28:07.928 INFO:tasks.workunit.client.0.vm00.stdout: cls: [----------] Global test environment tear-down
2026-03-10T07:28:07.928 INFO:tasks.workunit.client.0.vm00.stdout: cls: [==========] 1 test from 1 test suite ran. (3235 ms total)
2026-03-10T07:28:07.928 INFO:tasks.workunit.client.0.vm00.stdout: cls: [ PASSED ] 1 test.
2026-03-10T07:28:07.951 INFO:tasks.workunit.client.0.vm00.stdout: handler_error: Running main() from gmock_main.cc
2026-03-10T07:28:07.951 INFO:tasks.workunit.client.0.vm00.stdout: handler_error: [==========] Running 1 test from 1 test suite.
2026-03-10T07:28:07.951 INFO:tasks.workunit.client.0.vm00.stdout: handler_error: [----------] Global test environment set-up.
2026-03-10T07:28:07.951 INFO:tasks.workunit.client.0.vm00.stdout: handler_error: [----------] 1 test from neocls_handler_error
2026-03-10T07:28:07.951 INFO:tasks.workunit.client.0.vm00.stdout: handler_error: [ RUN ] neocls_handler_error.test_handler_error
2026-03-10T07:28:07.951 INFO:tasks.workunit.client.0.vm00.stdout: handler_error: [ OK ] neocls_handler_error.test_handler_error (3180 ms)
2026-03-10T07:28:07.951 INFO:tasks.workunit.client.0.vm00.stdout: handler_error: [----------] 1 test from neocls_handler_error (3180 ms total)
2026-03-10T07:28:07.951 INFO:tasks.workunit.client.0.vm00.stdout: handler_error:
2026-03-10T07:28:07.951 INFO:tasks.workunit.client.0.vm00.stdout: handler_error: [----------] Global test environment tear-down
2026-03-10T07:28:07.951 INFO:tasks.workunit.client.0.vm00.stdout: handler_error: [==========] 1 test from 1 test suite ran. (3181 ms total)
2026-03-10T07:28:07.951 INFO:tasks.workunit.client.0.vm00.stdout: handler_error: [ PASSED ] 1 test.
2026-03-10T07:28:07.985 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: Running main() from gmock_main.cc
2026-03-10T07:28:07.986 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [==========] Running 3 tests from 1 test suite.
2026-03-10T07:28:07.986 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [----------] Global test environment set-up.
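The prefixed streams above (cls:, handler_error:, api_cmd_pp:, api_cmd:, ...) come from the rados/test.sh workunit, which runs the gtest-based rados API binaries in parallel and tags each output line with a short test name. Assuming the usual mapping of those names onto the ceph_test_rados_api_* binaries in the squid qa tree, a single suite could be re-run by hand with standard gtest flags, e.g.:
  # list the cases in one of the parallel-run binaries, then run one case (illustrative)
  ceph_test_rados_api_cmd --gtest_list_tests
  ceph_test_rados_api_cmd --gtest_filter='LibRadosCmd.MonDescribe'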
2026-03-10T07:28:07.986 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [----------] 3 tests from LibRadosCmd
2026-03-10T07:28:07.986 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [ RUN ] LibRadosCmd.MonDescribePP
2026-03-10T07:28:07.986 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [ OK ] LibRadosCmd.MonDescribePP (233 ms)
2026-03-10T07:28:07.986 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [ RUN ] LibRadosCmd.OSDCmdPP
2026-03-10T07:28:07.986 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [ OK ] LibRadosCmd.OSDCmdPP (79 ms)
2026-03-10T07:28:07.986 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [ RUN ] LibRadosCmd.PGCmdPP
2026-03-10T07:28:07.986 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [ OK ] LibRadosCmd.PGCmdPP (3300 ms)
2026-03-10T07:28:07.986 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [----------] 3 tests from LibRadosCmd (3612 ms total)
2026-03-10T07:28:07.986 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp:
2026-03-10T07:28:07.986 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [----------] Global test environment tear-down
2026-03-10T07:28:07.986 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [==========] 3 tests from 1 test suite ran. (3612 ms total)
2026-03-10T07:28:07.986 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd_pp: [ PASSED ] 3 tests.
2026-03-10T07:28:08.041 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: Running main() from gmock_main.cc
2026-03-10T07:28:08.041 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [==========] Running 4 tests from 1 test suite.
2026-03-10T07:28:08.041 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [----------] Global test environment set-up.
2026-03-10T07:28:08.041 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [----------] 4 tests from LibRadosCmd
2026-03-10T07:28:08.041 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [ RUN ] LibRadosCmd.MonDescribe
2026-03-10T07:28:08.041 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [ OK ] LibRadosCmd.MonDescribe (64 ms)
2026-03-10T07:28:08.041 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [ RUN ] LibRadosCmd.OSDCmd
2026-03-10T07:28:08.041 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [ OK ] LibRadosCmd.OSDCmd (156 ms)
2026-03-10T07:28:08.041 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [ RUN ] LibRadosCmd.PGCmd
2026-03-10T07:28:08.041 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [ OK ] LibRadosCmd.PGCmd (3292 ms)
2026-03-10T07:28:08.042 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [ RUN ] LibRadosCmd.WatchLog
2026-03-10T07:28:08.042 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:06.616718+0000 mon.a [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.042 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:06.616765+0000 mon.a [INF] from='client.? 192.168.123.100:0/2611126073' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60001-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.042 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:06.616832+0000 mon.a [INF] from='client.? 192.168.123.100:0/1571320800' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm00-60155-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.042 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:06.616880+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59637-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.042 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:06.617674+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.042 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:06.617709+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59645-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.042 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:06.617747+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59650-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.042 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:06.617782+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-59738-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.042 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:06.617812+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-59712-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.042 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:06.617843+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-59879-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.042 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:06.617872+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.042 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:06.617904+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-59956-1","app": "rados","yes_i_really_mean_it": true}]': finished
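The api_cmd: got: lines above are LibRadosCmd.WatchLog receiving monitor log entries through a librados log watch; the payloads match the audit lines the mon journals record below. A comparable view from the CLI would be a monitor log watch, roughly (channel selection via -W assumed available, as in Nautilus and later):
  # follow the default cluster log channel
  ceph -w
  # follow a specific channel, e.g. the audit log
  ceph -W audit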
2026-03-10T07:28:08.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:07 vm03 bash[23382]: cluster 2026-03-10T07:28:06.574023+0000 mgr.y (mgr.24407) 107 : cluster [DBG] pgmap v64: 1220 pgs: 960 unknown, 128 creating+peering, 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:28:08.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:07 vm03 bash[23382]: audit 2026-03-10T07:28:06.616718+0000 mon.a (mon.0) 841 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:07 vm03 bash[23382]: audit 2026-03-10T07:28:06.616765+0000 mon.a (mon.0) 842 : audit [INF] from='client.? 192.168.123.100:0/2611126073' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60001-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:07 vm03 bash[23382]: audit 2026-03-10T07:28:06.616832+0000 mon.a (mon.0) 843 : audit [INF] from='client.? 192.168.123.100:0/1571320800' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm00-60155-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:07 vm03 bash[23382]: audit 2026-03-10T07:28:06.616880+0000 mon.a (mon.0) 844 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59637-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:07 vm03 bash[23382]: audit 2026-03-10T07:28:06.617674+0000 mon.a (mon.0) 845 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:07 vm03 bash[23382]: audit 2026-03-10T07:28:06.617709+0000 mon.a (mon.0) 846 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59645-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:07 vm03 bash[23382]: audit 2026-03-10T07:28:06.617747+0000 mon.a (mon.0) 847 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59650-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:07 vm03 bash[23382]: audit 2026-03-10T07:28:06.617782+0000 mon.a (mon.0) 848 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-59738-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:07 vm03 bash[23382]: audit 2026-03-10T07:28:06.617812+0000 mon.a (mon.0) 849 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-59712-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:07 vm03 bash[23382]: audit 2026-03-10T07:28:06.617843+0000 mon.a (mon.0) 850 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-59879-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:07 vm03 bash[23382]: audit 2026-03-10T07:28:06.617872+0000 mon.a (mon.0) 851 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:07 vm03 bash[23382]: audit 2026-03-10T07:28:06.617904+0000 mon.a (mon.0) 852 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-59956-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:07 vm03 bash[23382]: audit 2026-03-10T07:28:06.617935+0000 mon.a (mon.0) 853 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-59964-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:07 vm03 bash[23382]: audit 2026-03-10T07:28:06.617970+0000 mon.a (mon.0) 854 : audit [INF] from='client.? 192.168.123.100:0/1550929222' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-59973-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:07 vm03 bash[23382]: audit 2026-03-10T07:28:06.618000+0000 mon.a (mon.0) 855 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60028-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:07 vm03 bash[23382]: audit 2026-03-10T07:28:06.618029+0000 mon.a (mon.0) 856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60027-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:07 vm03 bash[23382]: cluster 2026-03-10T07:28:06.730650+0000 mon.a (mon.0) 857 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in
2026-03-10T07:28:08.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:07 vm03 bash[23382]: audit 2026-03-10T07:28:06.754493+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
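The once-per-second status dispatches from client.? 192.168.123.100:0/2386836633 (entries 840 above and 859 below) appear to be a watcher polling cluster state; the command JSON is what the CLI sends for a one-shot query like:
  # same mon command the poller issues, as a single CLI call
  ceph status --format json
The audit entry alone does not identify which process is polling; the client nonce only shows it is the same client instance each time.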
2026-03-10T07:28:08.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:07 vm03 bash[23382]: audit 2026-03-10T07:28:06.786941+0000 mon.a (mon.0) 858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:08.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:07 vm03 bash[23382]: audit 2026-03-10T07:28:07.424687+0000 mon.a (mon.0) 859 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:08.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:07 vm00 bash[28005]: cluster 2026-03-10T07:28:06.574023+0000 mgr.y (mgr.24407) 107 : cluster [DBG] pgmap v64: 1220 pgs: 960 unknown, 128 creating+peering, 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:28:08.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:07 vm00 bash[28005]: audit 2026-03-10T07:28:06.616718+0000 mon.a (mon.0) 841 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:07 vm00 bash[28005]: audit 2026-03-10T07:28:06.616765+0000 mon.a (mon.0) 842 : audit [INF] from='client.? 192.168.123.100:0/2611126073' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60001-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:07 vm00 bash[28005]: audit 2026-03-10T07:28:06.616832+0000 mon.a (mon.0) 843 : audit [INF] from='client.? 192.168.123.100:0/1571320800' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm00-60155-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:07 vm00 bash[28005]: audit 2026-03-10T07:28:06.616880+0000 mon.a (mon.0) 844 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59637-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:07 vm00 bash[28005]: audit 2026-03-10T07:28:06.617674+0000 mon.a (mon.0) 845 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:07 vm00 bash[28005]: audit 2026-03-10T07:28:06.617709+0000 mon.a (mon.0) 846 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59645-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:07 vm00 bash[28005]: audit 2026-03-10T07:28:06.617747+0000 mon.a (mon.0) 847 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59650-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:07 vm00 bash[28005]: audit 2026-03-10T07:28:06.617782+0000 mon.a (mon.0) 848 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-59738-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:07 vm00 bash[28005]: audit 2026-03-10T07:28:06.617812+0000 mon.a (mon.0) 849 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-59712-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:07 vm00 bash[28005]: audit 2026-03-10T07:28:06.617843+0000 mon.a (mon.0) 850 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-59879-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:07 vm00 bash[28005]: audit 2026-03-10T07:28:06.617872+0000 mon.a (mon.0) 851 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:07 vm00 bash[28005]: audit 2026-03-10T07:28:06.617904+0000 mon.a (mon.0) 852 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-59956-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:07 vm00 bash[28005]: audit 2026-03-10T07:28:06.617935+0000 mon.a (mon.0) 853 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-59964-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:07 vm00 bash[28005]: audit 2026-03-10T07:28:06.617970+0000 mon.a (mon.0) 854 : audit [INF] from='client.? 192.168.123.100:0/1550929222' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-59973-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:07 vm00 bash[28005]: audit 2026-03-10T07:28:06.618000+0000 mon.a (mon.0) 855 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60028-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:07 vm00 bash[28005]: audit 2026-03-10T07:28:06.618029+0000 mon.a (mon.0) 856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60027-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:07 vm00 bash[28005]: cluster 2026-03-10T07:28:06.730650+0000 mon.a (mon.0) 857 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in
2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:07 vm00 bash[28005]: audit 2026-03-10T07:28:06.754493+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:07 vm00 bash[28005]: audit 2026-03-10T07:28:06.786941+0000 mon.a (mon.0) 858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:07 vm00 bash[28005]: audit 2026-03-10T07:28:07.424687+0000 mon.a (mon.0) 859 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: cluster 2026-03-10T07:28:06.574023+0000 mgr.y (mgr.24407) 107 : cluster [DBG] pgmap v64: 1220 pgs: 960 unknown, 128 creating+peering, 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.616718+0000 mon.a (mon.0) 841 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.616765+0000 mon.a (mon.0) 842 : audit [INF] from='client.? 192.168.123.100:0/2611126073' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm00-60001-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.616832+0000 mon.a (mon.0) 843 : audit [INF] from='client.? 192.168.123.100:0/1571320800' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm00-60155-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.616880+0000 mon.a (mon.0) 844 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm00-59637-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.617674+0000 mon.a (mon.0) 845 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.617709+0000 mon.a (mon.0) 846 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm00-59645-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.617747+0000 mon.a (mon.0) 847 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm00-59650-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.617782+0000 mon.a (mon.0) 848 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm00-59738-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.617812+0000 mon.a (mon.0) 849 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm00-59712-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.617843+0000 mon.a (mon.0) 850 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm00-59879-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.617872+0000 mon.a (mon.0) 851 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.617904+0000 mon.a (mon.0) 852 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-59956-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.617904+0000 mon.a (mon.0) 852 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm00-59956-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.617935+0000 mon.a (mon.0) 853 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-59964-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.617935+0000 mon.a (mon.0) 853 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-59964-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.617970+0000 mon.a (mon.0) 854 : audit [INF] from='client.? 192.168.123.100:0/1550929222' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-59973-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.617970+0000 mon.a (mon.0) 854 : audit [INF] from='client.? 192.168.123.100:0/1550929222' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-59973-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.618000+0000 mon.a (mon.0) 855 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60028-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.618000+0000 mon.a (mon.0) 855 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60028-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.618029+0000 mon.a (mon.0) 856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60027-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.618029+0000 mon.a (mon.0) 856 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60027-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: cluster 2026-03-10T07:28:06.730650+0000 mon.a (mon.0) 857 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: cluster 2026-03-10T07:28:06.730650+0000 mon.a (mon.0) 857 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.754493+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.754493+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.786941+0000 mon.a (mon.0) 858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:06.786941+0000 mon.a (mon.0) 858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:08.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:07.424687+0000 mon.a (mon.0) 859 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:08.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:07 vm00 bash[20701]: audit 2026-03-10T07:28:07.424687+0000 mon.a (mon.0) 859 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:06.617935+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm00-59964-1","app": "ra api_c_read_operations: Running main() from gmock_main.cc 2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [==========] Running 17 tests from 1 test suite. 2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [----------] Global test environment set-up. 
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [----------] 17 tests from CReadOpsTest
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.NewDelete
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.NewDelete (0 ms)
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.SetOpFlags
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.SetOpFlags (550 ms)
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.AssertExists
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.AssertExists (65 ms)
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.AssertVersion
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.AssertVersion (67 ms)
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.CmpXattr
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.CmpXattr (58 ms)
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Read
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Read (39 ms)
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Checksum
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Checksum (44 ms)
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.RWOrderedRead
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.RWOrderedRead (143 ms)
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.ShortRead
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.ShortRead (49 ms)
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Exec
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Exec (50 ms)
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.ExecUserBuf
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.ExecUserBuf (424 ms)
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Stat
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Stat (65 ms)
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Stat2
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Stat2 (79 ms)
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Omap
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Omap (31 ms)
2026-03-10T07:28:08.874 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.OmapNuls
2026-03-10T07:28:08.875 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.OmapNuls (27 ms)
2026-03-10T07:28:08.875 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.GetXattrs
2026-03-10T07:28:08.875 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.GetXattrs (39 ms)
2026-03-10T07:28:08.875 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.CmpExt
2026-03-10T07:28:08.875 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ OK ] CReadOpsTest.CmpExt (6 ms)
2026-03-10T07:28:08.875 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [----------] 17 tests from CReadOpsTest (1736 ms total)
2026-03-10T07:28:08.875 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations:
2026-03-10T07:28:08.875 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [----------] Global test environment tear-down
2026-03-10T07:28:08.875 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [==========] 17 tests from 1 test suite ran. (4365 ms total)
2026-03-10T07:28:08.875 INFO:tasks.workunit.client.0.vm00.stdout: api_c_read_operations: [ PASSED ] 17 tests.
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout:dos","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:06.617970+0000 mon.a [INF] from='client.? 192.168.123.100:0/1550929222' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm00-59973-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:06.618000+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60028-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:06.618029+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60027-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:06.754493+0000 mon.c [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:06.786941+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:07.849896+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60006-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]': finished
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:07.849987+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60490-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60490-1"}]': finished
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:07.850027+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60537-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60537-1"}]': finished
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:07.850062+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:07.870737+0000 mon.b [INF] from='client.? 192.168.123.100:0/3868149432' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59629-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:07.874278+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59629-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:07.951668+0000 mon.b [INF] from='client.? 192.168.123.100:0/2132153704' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:07.958302+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:07.958715+0000 mon.b [INF] from='client.? 192.168.123.100:0/2132153704' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-59973-7"}]: dispatch
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:07.976222+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-59973-7"}]: dispatch
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:07.995021+0000 mon.b [INF] from='client.? 192.168.123.100:0/2132153704' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-59973-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:08.010466+0000 mon.b [INF] from='client.? 192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:08.032186+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-59973-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:08.032501+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:08.041660+0000 mon.b [INF] from='client.? 192.168.123.100:0/2250161017' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:08.046616+0000 mon.b [INF] from='client.? 192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-59964-7"}]: dispatch
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:08.046946+0000 client.admin [INF] onexx
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:08.117933+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:08.118021+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-59964-7"}]: dispatch
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:08.120179+0000 mon.b [INF] from='client.? 192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-59964-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:08.135760+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-59964-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:08.794982+0000 mon.a [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:08.856694+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59629-2","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:09.031 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:08.856733+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-59973-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:28:09.032 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:08.856755+0000 mon.a [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-59964-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:28:09.032 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:08.884465+0000 mon.b [INF] from='client.? 192.168.123.100:0/2132153704' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-59973-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch
2026-03-10T07:28:09.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:07.849896+0000 mon.a (mon.0) 860 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60006-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]': finished
2026-03-10T07:28:09.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:07.849987+0000 mon.a (mon.0) 861 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60490-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60490-1"}]': finished
2026-03-10T07:28:09.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:07.850027+0000 mon.a (mon.0) 862 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60537-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60537-1"}]': finished
2026-03-10T07:28:09.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:07.850062+0000 mon.a (mon.0) 863 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:09.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: cluster 2026-03-10T07:28:07.856994+0000 mon.a (mon.0) 864 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in
2026-03-10T07:28:09.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:07.870737+0000 mon.b (mon.1) 33 : audit [INF] from='client.? 192.168.123.100:0/3868149432' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59629-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:09.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:07.874278+0000 mon.a (mon.0) 865 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59629-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:09.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:07.951668+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.100:0/2132153704' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch
2026-03-10T07:28:09.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:07.958302+0000 mon.a (mon.0) 866 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch
2026-03-10T07:28:09.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:07.958715+0000 mon.b (mon.1) 35 : audit [INF] from='client.? 192.168.123.100:0/2132153704' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-59973-7"}]: dispatch
2026-03-10T07:28:09.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:07.976222+0000 mon.a (mon.0) 867 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-59973-7"}]: dispatch
2026-03-10T07:28:09.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:07.995021+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.100:0/2132153704' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-59973-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:09.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:08.010466+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch
2026-03-10T07:28:09.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:08.032186+0000 mon.a (mon.0) 868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-59973-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:09.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:08.032501+0000 mon.a (mon.0) 869 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch
2026-03-10T07:28:09.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:08.041660+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 192.168.123.100:0/2250161017' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch
2026-03-10T07:28:09.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:08.046616+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-59964-7"}]: dispatch
2026-03-10T07:28:09.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: cluster 2026-03-10T07:28:08.046946+0000 client.admin (client.?) 0 : cluster [INF] onexx
2026-03-10T07:28:09.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:08.117933+0000 mon.a (mon.0) 870 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch
2026-03-10T07:28:09.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:08.118021+0000 mon.a (mon.0) 871 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-59964-7"}]: dispatch
2026-03-10T07:28:09.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:08.120179+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-59964-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:09.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:08.135760+0000 mon.a (mon.0) 872 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-59964-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:09.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:08.426562+0000 mon.a (mon.0) 873 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:09.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:08.794982+0000 mon.a (mon.0) 874 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:28:09.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:08.800503+0000 mon.c (mon.2) 97 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:28:09.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:08.856694+0000 mon.a (mon.0) 875 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59629-2","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:09.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:08.856733+0000 mon.a (mon.0) 876 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-59973-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:28:09.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:08.856755+0000 mon.a (mon.0) 877 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-59964-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:28:09.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: cluster 2026-03-10T07:28:08.869854+0000 mon.a (mon.0) 878 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in
2026-03-10T07:28:09.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:08.884465+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.100:0/2132153704' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-59973-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch
2026-03-10T07:28:09.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:08.884856+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-59964-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch
2026-03-10T07:28:09.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:08.886992+0000 mon.a (mon.0) 879 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-59973-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch
2026-03-10T07:28:09.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:08.887335+0000 mon.a (mon.0) 880 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-59964-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch
2026-03-10T07:28:09.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:08.897940+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.100:0/4094726258' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:09.267 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:09 vm03 bash[23382]: audit 2026-03-10T07:28:08.906359+0000 mon.a (mon.0) 881 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:09.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:07.849896+0000 mon.a (mon.0) 860 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm00-60006-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]': finished
2026-03-10T07:28:09.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:07.849987+0000 mon.a (mon.0) 861 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm00-60490-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm00-60490-1"}]': finished
2026-03-10T07:28:09.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:07.850027+0000 mon.a (mon.0) 862 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm00-60537-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm00-60537-1"}]': finished
2026-03-10T07:28:09.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:07.850062+0000 mon.a (mon.0) 863 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:09.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: cluster 2026-03-10T07:28:07.856994+0000 mon.a (mon.0) 864 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in
2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:07.870737+0000 mon.b (mon.1) 33 : audit [INF] from='client.? 192.168.123.100:0/3868149432' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59629-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:07.874278+0000 mon.a (mon.0) 865 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59629-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:07.951668+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.100:0/2132153704' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch
2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:07.958302+0000 mon.a (mon.0) 866 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch
2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:07.958715+0000 mon.b (mon.1) 35 : audit [INF] from='client.? 192.168.123.100:0/2132153704' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-59973-7"}]: dispatch
2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:07.976222+0000 mon.a (mon.0) 867 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-59973-7"}]: dispatch
2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:07.995021+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.100:0/2132153704' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-59973-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.010466+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch
2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.032186+0000 mon.a (mon.0) 868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-59973-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.032501+0000 mon.a (mon.0) 869 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch
2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.041660+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 192.168.123.100:0/2250161017' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch
2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.046616+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-59964-7"}]: dispatch
2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: cluster 2026-03-10T07:28:08.046946+0000 client.admin (client.?) 0 : cluster [INF] onexx
2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.117933+0000 mon.a (mon.0) 870 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch
2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.118021+0000 mon.a (mon.0) 871 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-59964-7"}]: dispatch
2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.120179+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 
192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-59964-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.120179+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-59964-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.135760+0000 mon.a (mon.0) 872 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-59964-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.135760+0000 mon.a (mon.0) 872 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-59964-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.426562+0000 mon.a (mon.0) 873 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.426562+0000 mon.a (mon.0) 873 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.794982+0000 mon.a (mon.0) 874 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.794982+0000 mon.a (mon.0) 874 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.800503+0000 mon.c (mon.2) 97 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.800503+0000 mon.c (mon.2) 97 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.856694+0000 mon.a (mon.0) 875 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59629-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:09.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.856694+0000 mon.a (mon.0) 875 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm00-59629-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:09.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.856733+0000 mon.a (mon.0) 876 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-59973-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:09.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.856733+0000 mon.a (mon.0) 876 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm00-59973-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:09.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.856755+0000 mon.a (mon.0) 877 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-59964-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:09.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.856755+0000 mon.a (mon.0) 877 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm00-59964-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:09.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: cluster 2026-03-10T07:28:08.869854+0000 mon.a (mon.0) 878 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-10T07:28:09.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: cluster 2026-03-10T07:28:08.869854+0000 mon.a (mon.0) 878 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-10T07:28:09.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.884465+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.100:0/2132153704' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-59973-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch 2026-03-10T07:28:09.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.884465+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.100:0/2132153704' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-59973-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch 2026-03-10T07:28:09.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.884856+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-59964-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch 2026-03-10T07:28:09.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.884856+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 
192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-59964-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch 2026-03-10T07:28:09.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.886992+0000 mon.a (mon.0) 879 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-59973-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch 2026-03-10T07:28:09.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.886992+0000 mon.a (mon.0) 879 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-59973-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch 2026-03-10T07:28:09.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.887335+0000 mon.a (mon.0) 880 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-59964-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch 2026-03-10T07:28:09.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.887335+0000 mon.a (mon.0) 880 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-59964-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch 2026-03-10T07:28:09.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.897940+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.100:0/4094726258' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:09.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.897940+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.100:0/4094726258' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:09.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.906359+0000 mon.a (mon.0) 881 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:09.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:09 vm00 bash[28005]: audit 2026-03-10T07:28:08.906359+0000 mon.a (mon.0) 881 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:09.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:09 vm00 bash[20701]: audit 2026-03-10T07:28:07.849896+0000 mon.a (mon.0) 860 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:10.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: cluster 2026-03-10T07:28:08.574553+0000 mgr.y (mgr.24407) 108 : cluster [DBG] pgmap v67: 892 pgs: 664 unknown, 96 creating+peering, 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: cluster 2026-03-10T07:28:08.574553+0000 mgr.y (mgr.24407) 108 : cluster [DBG] pgmap v67: 892 pgs: 664 unknown, 96 creating+peering, 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:08.983265+0000 mon.c (mon.2) 98 : audit [INF] from='client.? 192.168.123.100:0/135026206' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:08.983265+0000 mon.c (mon.2) 98 : audit [INF] from='client.? 192.168.123.100:0/135026206' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:08.983718+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:08.983718+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.004217+0000 mon.c (mon.2) 99 : audit [INF] from='client.? 192.168.123.100:0/3676534907' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.004217+0000 mon.c (mon.2) 99 : audit [INF] from='client.? 192.168.123.100:0/3676534907' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.004530+0000 mon.c (mon.2) 100 : audit [INF] from='client.? 192.168.123.100:0/1785602416' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60490-1"}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.004530+0000 mon.c (mon.2) 100 : audit [INF] from='client.? 
192.168.123.100:0/1785602416' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60490-1"}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.004692+0000 mon.c (mon.2) 101 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.004692+0000 mon.c (mon.2) 101 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.015600+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.015600+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.041001+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.100:0/2250161017' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.041001+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.100:0/2250161017' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: cluster 2026-03-10T07:28:09.086881+0000 client.admin (client.?) 0 : cluster [INF] twoxx 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: cluster 2026-03-10T07:28:09.086881+0000 client.admin (client.?) 0 : cluster [INF] twoxx 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.093273+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.093273+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.118752+0000 mon.a (mon.0) 884 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.118752+0000 mon.a (mon.0) 884 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.144693+0000 mon.a (mon.0) 885 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.144693+0000 mon.a (mon.0) 885 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.149899+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.149899+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.165139+0000 mon.a (mon.0) 886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.165139+0000 mon.a (mon.0) 886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.166322+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.166322+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.170369+0000 mon.a (mon.0) 887 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.170369+0000 mon.a (mon.0) 887 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.228961+0000 mon.a (mon.0) 888 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.228961+0000 mon.a (mon.0) 888 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.229129+0000 mon.a (mon.0) 889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60490-1"}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.229129+0000 mon.a (mon.0) 889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60490-1"}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.229537+0000 mon.a (mon.0) 890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.229537+0000 mon.a (mon.0) 890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.238272+0000 mon.c (mon.2) 102 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.238272+0000 mon.c (mon.2) 102 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:10.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.240967+0000 mon.a (mon.0) 891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.240967+0000 mon.a (mon.0) 891 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.248690+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.248690+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.249907+0000 mon.a (mon.0) 892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.249907+0000 mon.a (mon.0) 892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.252876+0000 mon.c (mon.2) 104 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59645-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.252876+0000 mon.c (mon.2) 104 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59645-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.253920+0000 mon.a (mon.0) 893 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59645-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.253920+0000 mon.a (mon.0) 893 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59645-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.428208+0000 mon.a (mon.0) 894 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.428208+0000 mon.a (mon.0) 894 : audit [DBG] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.816046+0000 mon.c (mon.2) 105 : audit [DBG] from='client.? 192.168.123.100:0/1050437885' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.816046+0000 mon.c (mon.2) 105 : audit [DBG] from='client.? 192.168.123.100:0/1050437885' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.906931+0000 mon.a (mon.0) 895 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.906931+0000 mon.a (mon.0) 895 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.906982+0000 mon.a (mon.0) 896 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.906982+0000 mon.a (mon.0) 896 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.907013+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.907013+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.907034+0000 mon.a (mon.0) 898 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.907034+0000 mon.a (mon.0) 898 : audit [INF] from='client.? 
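Each monitor command surfaces in the audit channel twice over its lifetime: a 'dispatch' entry when a mon accepts it (peons such as mon.b and mon.c log the client address and forward to the leader, here mon.a, which logs its own dispatch with an empty address), and a 'finished' entry once the leader has committed it, as in the run of seq 895-897 above. The JSON payloads are the wire form of ordinary CLI calls; a minimal sketch of the equivalents, with <pool> standing in for the test's generated pool names:

    # tag a pool with an application, as the rados API tests do here
    ceph osd pool application enable <pool> rados --yes-i-really-mean-it
    # the [DBG] status queries issued by the test harness
    ceph status --format json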
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.907057+0000 mon.a (mon.0) 899 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60490-1"}]': finished 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.907057+0000 mon.a (mon.0) 899 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60490-1"}]': finished 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.907076+0000 mon.a (mon.0) 900 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.907076+0000 mon.a (mon.0) 900 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.907099+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59645-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.907099+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59645-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.919745+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.919745+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.934601+0000 mon.c (mon.2) 106 : audit [INF] from='client.? 
192.168.123.100:0/1785602416' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60490-1"}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.934601+0000 mon.c (mon.2) 106 : audit [INF] from='client.? 192.168.123.100:0/1785602416' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60490-1"}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.935796+0000 mon.c (mon.2) 107 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59645-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.935796+0000 mon.c (mon.2) 107 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59645-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.943615+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-4", "tierpool": "test-rados-api-vm00-59782-4-cache"}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.943615+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-4", "tierpool": "test-rados-api-vm00-59782-4-cache"}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: cluster 2026-03-10T07:28:09.949204+0000 mon.a (mon.0) 902 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: cluster 2026-03-10T07:28:09.949204+0000 mon.a (mon.0) 902 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.954357+0000 mon.a (mon.0) 903 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.954357+0000 mon.a (mon.0) 903 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.964106+0000 mon.c (mon.2) 109 : audit [INF] from='client.? 
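The 'osd tier add' dispatches (seq 108 above) attach the '-cache' pool as a tier in front of its base pool for the cache-pool API tests. A minimal sketch of the usual CLI sequence with placeholder pool names; only the first command actually appears in this excerpt, the other two are the typical follow-ups in a cache-tier setup:

    ceph osd tier add base-pool cache-pool           # what seq 108 dispatches
    ceph osd tier cache-mode cache-pool writeback    # choose a caching mode
    ceph osd tier set-overlay base-pool cache-pool   # route client I/O through the cache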
2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.964106+0000 mon.c (mon.2) 109 : audit [INF] from='client.? 192.168.123.100:0/2132039165' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60537-1"}]: dispatch
2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.964362+0000 mon.c (mon.2) 110 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch
2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:09.973357+0000 mon.a (mon.0) 904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60490-1"}]: dispatch
2026-03-10T07:28:10.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:10.011203+0000 mon.a (mon.0) 905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59645-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch
2026-03-10T07:28:10.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:10.025835+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-4", "tierpool": "test-rados-api-vm00-59782-4-cache"}]: dispatch
2026-03-10T07:28:10.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:10.042390+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch
2026-03-10T07:28:10.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:10.044639+0000 mon.c (mon.2) 111 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch
2026-03-10T07:28:10.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:10.062805+0000 mon.a (mon.0) 907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60537-1"}]: dispatch
2026-03-10T07:28:10.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:10.062946+0000 mon.a (mon.0) 908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch
2026-03-10T07:28:10.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:10 vm00 bash[28005]: audit 2026-03-10T07:28:10.078919+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch
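Each librados EC suite tears down any stale per-test erasure-code profile and CRUSH rule before recreating the profile it will build its pool from, which is why every 'erasure-code-profile set' in this stream is preceded by matching 'rm' calls. A sketch of that sequence; <test> is a placeholder for the per-test name (the profiles here are named testprofile-<test>, the rules just <test>):

    ceph osd erasure-code-profile rm testprofile-<test>   # drop leftovers from earlier runs
    ceph osd crush rule rm <test>                         # and the rule named after the test
    ceph osd erasure-code-profile set testprofile-<test> k=2 m=1 crush-failure-domain=osd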
192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-59712-10"}]: dispatch
2026-03-10T07:28:10.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: cluster 2026-03-10T07:28:08.574553+0000 mgr.y (mgr.24407) 108 : cluster [DBG] pgmap v67: 892 pgs: 664 unknown, 96 creating+peering, 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:28:10.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:08.983265+0000 mon.c (mon.2) 98 : audit [INF] from='client.? 192.168.123.100:0/135026206' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:08.983718+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.004217+0000 mon.c (mon.2) 99 : audit [INF] from='client.? 192.168.123.100:0/3676534907' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.004530+0000 mon.c (mon.2) 100 : audit [INF] from='client.? 192.168.123.100:0/1785602416' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60490-1"}]: dispatch
2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.004692+0000 mon.c (mon.2) 101 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.015600+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished
2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.041001+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.100:0/2250161017' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch
2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: cluster 2026-03-10T07:28:09.086881+0000 client.admin (client.?) 0 : cluster [INF] twoxx
2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.093273+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.118752+0000 mon.a (mon.0) 884 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch
2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.144693+0000 mon.a (mon.0) 885 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.149899+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.165139+0000 mon.a (mon.0) 886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.166322+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
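The 'onexx'/'twoxx' entries in this stream are the cluster-log round-trip test: the client injects a marker string into the central cluster log, and it surfaces both as an audit record for the log command (dispatch on the receiving mon, finished on the leader) and as a 'cluster [INF]' line. The equivalent CLI call, using the marker seen above:

    ceph log twoxx   # injects 'twoxx' into the central cluster log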
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.170369+0000 mon.a (mon.0) 887 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.228961+0000 mon.a (mon.0) 888 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.228961+0000 mon.a (mon.0) 888 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.229129+0000 mon.a (mon.0) 889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60490-1"}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.229129+0000 mon.a (mon.0) 889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60490-1"}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.229537+0000 mon.a (mon.0) 890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.229537+0000 mon.a (mon.0) 890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.238272+0000 mon.c (mon.2) 102 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.238272+0000 mon.c (mon.2) 102 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.240967+0000 mon.a (mon.0) 891 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.240967+0000 mon.a (mon.0) 891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.248690+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.248690+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.249907+0000 mon.a (mon.0) 892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.249907+0000 mon.a (mon.0) 892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.252876+0000 mon.c (mon.2) 104 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59645-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.252876+0000 mon.c (mon.2) 104 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59645-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.253920+0000 mon.a (mon.0) 893 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59645-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.253920+0000 mon.a (mon.0) 893 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59645-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.428208+0000 mon.a (mon.0) 894 : audit [DBG] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.428208+0000 mon.a (mon.0) 894 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.816046+0000 mon.c (mon.2) 105 : audit [DBG] from='client.? 192.168.123.100:0/1050437885' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.816046+0000 mon.c (mon.2) 105 : audit [DBG] from='client.? 192.168.123.100:0/1050437885' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.906931+0000 mon.a (mon.0) 895 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:10.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.906931+0000 mon.a (mon.0) 895 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.906982+0000 mon.a (mon.0) 896 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.906982+0000 mon.a (mon.0) 896 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.907013+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.907013+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.907034+0000 mon.a (mon.0) 898 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.907034+0000 mon.a (mon.0) 898 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.907057+0000 mon.a (mon.0) 899 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60490-1"}]': finished 2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.907057+0000 mon.a (mon.0) 899 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60490-1"}]': finished 2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.907076+0000 mon.a (mon.0) 900 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.907076+0000 mon.a (mon.0) 900 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.907099+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59645-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.907099+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59645-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.919745+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.919745+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 
2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.934601+0000 mon.c (mon.2) 106 : audit [INF] from='client.? 192.168.123.100:0/1785602416' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60490-1"}]: dispatch
2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.935796+0000 mon.c (mon.2) 107 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59645-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch
2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.943615+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-4", "tierpool": "test-rados-api-vm00-59782-4-cache"}]: dispatch
2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: cluster 2026-03-10T07:28:09.949204+0000 mon.a (mon.0) 902 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in
2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.954357+0000 mon.a (mon.0) 903 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.964106+0000 mon.c (mon.2) 109 : audit [INF] from='client.? 192.168.123.100:0/2132039165' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60537-1"}]: dispatch
2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.964362+0000 mon.c (mon.2) 110 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch
2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:09.973357+0000 mon.a (mon.0) 904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60490-1"}]: dispatch
2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:10.011203+0000 mon.a (mon.0) 905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59645-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch
2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:10.025835+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-4", "tierpool": "test-rados-api-vm00-59782-4-cache"}]: dispatch
2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:10.042390+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch
2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:10.044639+0000 mon.c (mon.2) 111 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch
2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:10.062805+0000 mon.a (mon.0) 907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60537-1"}]: dispatch
2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:10.062946+0000 mon.a (mon.0) 908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch
2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:10.078919+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch
2026-03-10T07:28:10.387 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:10 vm00 bash[20701]: audit 2026-03-10T07:28:10.084401+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-59712-10"}]: dispatch
2026-03-10T07:28:10.401 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:08.884856+0000 mon.b [INF] from='client.? 192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-59964-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-59964-7"}]: dis list_parallel: process_1_[60198]: starting.
2026-03-10T07:28:10.401 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_1_[60198]: creating pool ceph_test_rados_list_parallel.vm00-60176
2026-03-10T07:28:10.401 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_1_[60198]: created object 0...
2026-03-10T07:28:10.401 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_1_[60198]: created object 25...
2026-03-10T07:28:10.401 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_1_[60198]: created object 49...
2026-03-10T07:28:10.401 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_1_[60198]: finishing.
2026-03-10T07:28:10.401 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_1_[60198]: shutting down.
2026-03-10T07:28:10.401 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_2_[60199]: starting.
2026-03-10T07:28:10.401 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_2_[60199]: listing objects.
2026-03-10T07:28:10.401 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_2_[60199]: listed object 0...
2026-03-10T07:28:10.401 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_2_[60199]: listed object 25...
2026-03-10T07:28:10.401 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_2_[60199]: saw 50 objects
2026-03-10T07:28:10.401 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_2_[60199]: shutting down.
2026-03-10T07:28:10.401 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.401 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_3_[60964]: starting.
2026-03-10T07:28:10.401 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_3_[60964]: creating pool ceph_test_rados_list_parallel.vm00-60176
2026-03-10T07:28:10.401 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_3_[60964]: created object 0...
2026-03-10T07:28:10.401 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_3_[60964]: created object 25...
2026-03-10T07:28:10.401 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_3_[60964]: created object 49...
2026-03-10T07:28:10.401 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_3_[60964]: finishing.
2026-03-10T07:28:10.401 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_3_[60964]: shutting down. 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_4_[60965]: starting. 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_4_[60965]: listing objects. 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_4_[60965]: listed object 0... 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_4_[60965]: listed object 25... 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_4_[60965]: saw 45 objects 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_4_[60965]: shutting down. 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_5_[60966]: starting. 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_5_[60966]: removed 25 objects... 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_5_[60966]: removed half of the objects 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_5_[60966]: removed 50 objects... 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_5_[60966]: removed 50 objects 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_5_[60966]: shutting down. 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_6_[61063]: starting. 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_6_[61063]: creating pool ceph_test_rados_list_parallel.vm00-60176 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_6_[61063]: created object 0... 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_6_[61063]: created object 25... 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_6_[61063]: created object 49... 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_6_[61063]: finishing. 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_6_[61063]: shutting down. 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******************************* 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_7_[61064]: starting. 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_7_[61064]: listing objects. 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_7_[61064]: listed object 0... 2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_7_[61064]: listed object 25... 
2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_7_[61064]: listed object 50...
2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_7_[61064]: saw 53 objects
2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_7_[61064]: shutting down.
2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_8_[61065]: starting.
2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_8_[61065]: added 25 objects...
2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_8_[61065]: added half of the objects
2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_8_[61065]: added 50 objects...
2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_8_[61065]: added 50 objects
2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_8_[61065]: shutting down.
2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.402 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: cluster 2026-03-10T07:28:08.574553+0000 mgr.y (mgr.24407) 108 : cluster [DBG] pgmap v67: 892 pgs: 664 unknown, 96 creating+peering, 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:28:10.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:08.983265+0000 mon.c (mon.2) 98 : audit [INF] from='client.? 192.168.123.100:0/135026206' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:10.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:08.983718+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:10.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.004217+0000 mon.c (mon.2) 99 : audit [INF] from='client.? 192.168.123.100:0/3676534907' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:10.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.004530+0000 mon.c (mon.2) 100 : audit [INF] from='client.? 192.168.123.100:0/1785602416' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60490-1"}]: dispatch
2026-03-10T07:28:10.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.004692+0000 mon.c (mon.2) 101 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:10.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.015600+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished
2026-03-10T07:28:10.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.041001+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.100:0/2250161017' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch
2026-03-10T07:28:10.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: cluster 2026-03-10T07:28:09.086881+0000 client.admin (client.?) 0 : cluster [INF] twoxx
2026-03-10T07:28:10.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.093273+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:10.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.118752+0000 mon.a (mon.0) 884 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch
2026-03-10T07:28:10.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.144693+0000 mon.a (mon.0) 885 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:10.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.149899+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:10.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.165139+0000 mon.a (mon.0) 886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:10.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.166322+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:10.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.170369+0000 mon.a (mon.0) 887 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:10.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.228961+0000 mon.a (mon.0) 888 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:10.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.229129+0000 mon.a (mon.0) 889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60490-1"}]: dispatch
2026-03-10T07:28:10.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.229537+0000 mon.a (mon.0) 890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:10.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.238272+0000 mon.c (mon.2) 102 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch
2026-03-10T07:28:10.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.240967+0000 mon.a (mon.0) 891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch
2026-03-10T07:28:10.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.248690+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59645-16"}]: dispatch
2026-03-10T07:28:10.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.249907+0000 mon.a (mon.0) 892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59645-16"}]: dispatch
2026-03-10T07:28:10.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.252876+0000 mon.c (mon.2) 104 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59645-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:10.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.253920+0000 mon.a (mon.0) 893 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59645-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:10.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.428208+0000 mon.a (mon.0) 894 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:10.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.816046+0000 mon.c (mon.2) 105 : audit [DBG] from='client.? 192.168.123.100:0/1050437885' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch
2026-03-10T07:28:10.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.906931+0000 mon.a (mon.0) 895 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-2","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:10.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.906982+0000 mon.a (mon.0) 896 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-2","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:10.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.907013+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:28:10.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.907034+0000 mon.a (mon.0) 898 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-2","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:10.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.907057+0000 mon.a (mon.0) 899 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm00-60490-1"}]': finished
2026-03-10T07:28:10.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.907076+0000 mon.a (mon.0) 900 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:10.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.907099+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm00-59645-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:28:10.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.919745+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:10.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.934601+0000 mon.c (mon.2) 106 : audit [INF] from='client.? 192.168.123.100:0/1785602416' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60490-1"}]: dispatch
2026-03-10T07:28:10.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.935796+0000 mon.c (mon.2) 107 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59645-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch
2026-03-10T07:28:10.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.943615+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-4", "tierpool": "test-rados-api-vm00-59782-4-cache"}]: dispatch
2026-03-10T07:28:10.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: cluster 2026-03-10T07:28:09.949204+0000 mon.a (mon.0) 902 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in
2026-03-10T07:28:10.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.954357+0000 mon.a (mon.0) 903 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:10.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.964106+0000 mon.c (mon.2) 109 : audit [INF] from='client.? 192.168.123.100:0/2132039165' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60537-1"}]: dispatch
2026-03-10T07:28:10.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.964362+0000 mon.c (mon.2) 110 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch
2026-03-10T07:28:10.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:09.973357+0000 mon.a (mon.0) 904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60490-1"}]: dispatch
2026-03-10T07:28:10.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:10.011203+0000 mon.a (mon.0) 905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59645-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch
2026-03-10T07:28:10.518 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:10.025835+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-4", "tierpool": "test-rados-api-vm00-59782-4-cache"}]: dispatch
2026-03-10T07:28:10.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:10.042390+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch
2026-03-10T07:28:10.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:10.044639+0000 mon.c (mon.2) 111 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch
2026-03-10T07:28:10.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:10.062805+0000 mon.a (mon.0) 907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60537-1"}]: dispatch
2026-03-10T07:28:10.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:10.062946+0000 mon.a (mon.0) 908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch
2026-03-10T07:28:10.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:10.078919+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch
2026-03-10T07:28:10.519 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:10 vm03 bash[23382]: audit 2026-03-10T07:28:10.084401+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-59712-10"}]: dispatch
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_9_[61531]: starting.
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_9_[61531]: creating pool ceph_test_rados_list_parallel.vm00-60176
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_9_[61531]: created object 0...
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_9_[61531]: created object 25...
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_9_[61531]: created object 49...
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_9_[61531]: finishing.
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_9_[61531]: shutting down.
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_10_[61532]: starting.
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_10_[61532]: listing objects.
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_10_[61532]: listed object 0...
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_10_[61532]: listed object 25...
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_10_[61532]: listed object 50...
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_10_[61532]: listed object 75...
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_10_[61532]: saw 100 objects
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_10_[61532]: shutting down.
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_12_[61534]: starting.
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_12_[61534]: added 25 objects...
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_12_[61534]: added half of the objects
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_12_[61534]: added 50 objects...
2026-03-10T07:28:10.726 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_12_[61534]: added 50 objects
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_12_[61534]: shutting down.
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_13_[61535]: starting.
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_13_[61535]: removed 25 objects...
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_13_[61535]: removed half of the objects
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_13_[61535]: removed 50 objects...
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_13_[61535]: removed 50 objects
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_13_[61535]: shutting down.
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_11_[61533]: starting.
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_11_[61533]: added 25 objects...
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_11_[61533]: added half of the objects
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_11_[61533]: added 50 objects...
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_11_[61533]: added 50 objects
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_11_[61533]: shutting down.
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_14_[61730]: starting.
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_14_[61730]: creating pool ceph_test_rados_list_parallel.vm00-60176
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_14_[61730]: created object 0...
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_14_[61730]: created object 25...
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_14_[61730]: created object 49...
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_14_[61730]: finishing.
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_14_[61730]: shutting down.
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_15_[61731]: starting.
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_15_[61731]: listing objects.
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_15_[61731]: listed object 0...
2026-03-10T07:28:10.727 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_15_[61731]: listed object 25...
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_15_[61731]: listed object 50...
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_15_[61731]: listed object 75...
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_15_[61731]: listed object 100...
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_15_[61731]: listed object 125...
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_15_[61731]: saw 150 objects
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_15_[61731]: shutting down.
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_16_[61732]: starting.
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_16_[61732]: added 25 objects...
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_16_[61732]: added half of the objects
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_16_[61732]: added 50 objects...
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_16_[61732]: added 50 objects
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: process_16_[61732]: shutting down.
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: *******************************
2026-03-10T07:28:11.048 INFO:tasks.workunit.client.0.vm00.stdout: list_parallel: ******* SUCCESS **********
2026-03-10T07:28:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:10.101181+0000 mon.a (mon.0) 910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-59712-10"}]: dispatch
2026-03-10T07:28:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:10.101277+0000 mon.a (mon.0) 911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch
2026-03-10T07:28:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:10.102127+0000 mon.c (mon.2) 112 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]: dispatch
2026-03-10T07:28:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:10.102715+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-59712-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:10.112151+0000 mon.a (mon.0) 912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]: dispatch
2026-03-10T07:28:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:10.112306+0000 mon.a (mon.0) 913 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-59712-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:10.113114+0000 mon.c (mon.2) 113 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-59738-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:10.119422+0000 mon.a (mon.0) 914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished
2026-03-10T07:28:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:10.199515+0000 mon.a (mon.0) 915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-59738-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:10.430168+0000 mon.a (mon.0) 916 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: cluster 2026-03-10T07:28:10.876577+0000 mon.a (mon.0) 917 : cluster [WRN] Health check update: 14 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:28:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:10.968537+0000 mon.a (mon.0) 918 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-59973-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-59973-7"}]': finished
2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:10.968606+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-59964-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-59964-7"}]': finished
2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:10.968667+0000 mon.a (mon.0) 920 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60490-1"}]': finished
2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:10.968696+0000 mon.a (mon.0) 921 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-4", "tierpool": "test-rados-api-vm00-59782-4-cache"}]': finished
2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:10.968759+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60537-1"}]': finished
2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:10.968787+0000 mon.a (mon.0) 923 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]': finished
2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:10.968826+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-59712-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:10.968859+0000 mon.a (mon.0) 925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-59738-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:10.977209+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-59712-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch
2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.014239+0000 mon.c (mon.2) 114 : audit [INF] from='client.? 192.168.123.100:0/2132039165' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60537-1"}]: dispatch
2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.014718+0000 mon.c (mon.2) 115 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-4", "overlaypool": "test-rados-api-vm00-59782-4-cache"}]: dispatch
2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.015074+0000 mon.c (mon.2) 116 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-59738-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch
2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.019531+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch
2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.023425+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.100:0/4094726258' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: cluster 2026-03-10T07:28:11.030603+0000 mon.a (mon.0) 926 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in
2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.033710+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.100:0/3203008128' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59629-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.053217+0000 mon.a (mon.0) 927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-59712-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch
2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.053217+0000 mon.a (mon.0) 927 : audit [INF] from='client.?
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-59712-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch 2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.058647+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60537-1"}]: dispatch 2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.058647+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60537-1"}]: dispatch 2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.058825+0000 mon.a (mon.0) 929 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-4", "overlaypool": "test-rados-api-vm00-59782-4-cache"}]: dispatch 2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.058825+0000 mon.a (mon.0) 929 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-4", "overlaypool": "test-rados-api-vm00-59782-4-cache"}]: dispatch 2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.058959+0000 mon.a (mon.0) 930 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-59738-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.058959+0000 mon.a (mon.0) 930 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-59738-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.065657+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.065657+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.065794+0000 mon.a (mon.0) 932 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.065794+0000 mon.a (mon.0) 932 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.065911+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59629-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.065911+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59629-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.102106+0000 mon.a (mon.0) 934 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60490-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:11.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:11 vm00 bash[28005]: audit 2026-03-10T07:28:11.102106+0000 mon.a (mon.0) 934 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60490-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:11.383 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:28:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:28:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.101181+0000 mon.a (mon.0) 910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-59712-10"}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.101181+0000 mon.a (mon.0) 910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-59712-10"}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.101277+0000 mon.a (mon.0) 911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.101277+0000 mon.a (mon.0) 911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.102127+0000 mon.c (mon.2) 112 : audit [INF] from='client.? 
192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.102127+0000 mon.c (mon.2) 112 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.102715+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-59712-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.102715+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-59712-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.112151+0000 mon.a (mon.0) 912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.112151+0000 mon.a (mon.0) 912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.112306+0000 mon.a (mon.0) 913 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-59712-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.112306+0000 mon.a (mon.0) 913 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-59712-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.113114+0000 mon.c (mon.2) 113 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-59738-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.113114+0000 mon.c (mon.2) 113 : audit [INF] from='client.? 
192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-59738-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.119422+0000 mon.a (mon.0) 914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.119422+0000 mon.a (mon.0) 914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.199515+0000 mon.a (mon.0) 915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-59738-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.199515+0000 mon.a (mon.0) 915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-59738-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.430168+0000 mon.a (mon.0) 916 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.430168+0000 mon.a (mon.0) 916 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: cluster 2026-03-10T07:28:10.876577+0000 mon.a (mon.0) 917 : cluster [WRN] Health check update: 14 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: cluster 2026-03-10T07:28:10.876577+0000 mon.a (mon.0) 917 : cluster [WRN] Health check update: 14 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.968537+0000 mon.a (mon.0) 918 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-59973-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-59973-7"}]': finished 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.968537+0000 mon.a (mon.0) 918 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-59973-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-59973-7"}]': finished 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.968606+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-59964-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-59964-7"}]': finished 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.968606+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-59964-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-59964-7"}]': finished 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.968667+0000 mon.a (mon.0) 920 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60490-1"}]': finished 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.968667+0000 mon.a (mon.0) 920 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60490-1"}]': finished 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.968696+0000 mon.a (mon.0) 921 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-4", "tierpool": "test-rados-api-vm00-59782-4-cache"}]': finished 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.968696+0000 mon.a (mon.0) 921 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-4", "tierpool": "test-rados-api-vm00-59782-4-cache"}]': finished 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.968759+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60537-1"}]': finished 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.968759+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60537-1"}]': finished 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.968787+0000 mon.a (mon.0) 923 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]': finished 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.968787+0000 mon.a (mon.0) 923 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]': finished 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.968826+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-59712-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.968826+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-59712-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.968859+0000 mon.a (mon.0) 925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-59738-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.968859+0000 mon.a (mon.0) 925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-59738-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.977209+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-59712-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:10.977209+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-59712-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.014239+0000 mon.c (mon.2) 114 : audit [INF] from='client.? 192.168.123.100:0/2132039165' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60537-1"}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.014239+0000 mon.c (mon.2) 114 : audit [INF] from='client.? 192.168.123.100:0/2132039165' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60537-1"}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.014718+0000 mon.c (mon.2) 115 : audit [INF] from='client.? 
192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-4", "overlaypool": "test-rados-api-vm00-59782-4-cache"}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.014718+0000 mon.c (mon.2) 115 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-4", "overlaypool": "test-rados-api-vm00-59782-4-cache"}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.015074+0000 mon.c (mon.2) 116 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-59738-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.015074+0000 mon.c (mon.2) 116 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-59738-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.019531+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.019531+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:11.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.023425+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.100:0/4094726258' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:11.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.023425+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.100:0/4094726258' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:11.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: cluster 2026-03-10T07:28:11.030603+0000 mon.a (mon.0) 926 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-10T07:28:11.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: cluster 2026-03-10T07:28:11.030603+0000 mon.a (mon.0) 926 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-10T07:28:11.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.033710+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 
192.168.123.100:0/3203008128' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59629-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:11.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.033710+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.100:0/3203008128' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59629-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:11.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.053217+0000 mon.a (mon.0) 927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-59712-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch 2026-03-10T07:28:11.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.053217+0000 mon.a (mon.0) 927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-59712-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch 2026-03-10T07:28:11.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.058647+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60537-1"}]: dispatch 2026-03-10T07:28:11.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.058647+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60537-1"}]: dispatch 2026-03-10T07:28:11.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.058825+0000 mon.a (mon.0) 929 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-4", "overlaypool": "test-rados-api-vm00-59782-4-cache"}]: dispatch 2026-03-10T07:28:11.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.058825+0000 mon.a (mon.0) 929 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-4", "overlaypool": "test-rados-api-vm00-59782-4-cache"}]: dispatch 2026-03-10T07:28:11.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.058959+0000 mon.a (mon.0) 930 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-59738-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.058959+0000 mon.a (mon.0) 930 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-59738-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.065657+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:11.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.065657+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:11.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.065794+0000 mon.a (mon.0) 932 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:11.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.065794+0000 mon.a (mon.0) 932 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:11.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.065911+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59629-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:11.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.065911+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59629-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:11.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.102106+0000 mon.a (mon.0) 934 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60490-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:11.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:11 vm00 bash[20701]: audit 2026-03-10T07:28:11.102106+0000 mon.a (mon.0) 934 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60490-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.101181+0000 mon.a (mon.0) 910 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-59712-10"}]: dispatch 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.101181+0000 mon.a (mon.0) 910 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-59712-10"}]: dispatch 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.101277+0000 mon.a (mon.0) 911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.101277+0000 mon.a (mon.0) 911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.102127+0000 mon.c (mon.2) 112 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.102127+0000 mon.c (mon.2) 112 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.102715+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-59712-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.102715+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-59712-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.112151+0000 mon.a (mon.0) 912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.112151+0000 mon.a (mon.0) 912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.112306+0000 mon.a (mon.0) 913 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-59712-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.112306+0000 mon.a (mon.0) 913 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-59712-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.113114+0000 mon.c (mon.2) 113 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-59738-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.113114+0000 mon.c (mon.2) 113 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-59738-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.119422+0000 mon.a (mon.0) 914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.119422+0000 mon.a (mon.0) 914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.199515+0000 mon.a (mon.0) 915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-59738-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.199515+0000 mon.a (mon.0) 915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-59738-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.430168+0000 mon.a (mon.0) 916 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.430168+0000 mon.a (mon.0) 916 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: cluster 2026-03-10T07:28:10.876577+0000 mon.a (mon.0) 917 : cluster [WRN] Health check update: 14 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: cluster 2026-03-10T07:28:10.876577+0000 mon.a (mon.0) 917 : cluster [WRN] Health check update: 14 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.968537+0000 mon.a (mon.0) 918 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-59973-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-59973-7"}]': finished 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.968537+0000 mon.a (mon.0) 918 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-59973-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-59973-7"}]': finished 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.968606+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-59964-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-59964-7"}]': finished 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.968606+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-59964-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-59964-7"}]': finished 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.968667+0000 mon.a (mon.0) 920 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60490-1"}]': finished 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.968667+0000 mon.a (mon.0) 920 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60490-1"}]': finished 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.968696+0000 mon.a (mon.0) 921 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-4", "tierpool": "test-rados-api-vm00-59782-4-cache"}]': finished 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.968696+0000 mon.a (mon.0) 921 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-4", "tierpool": "test-rados-api-vm00-59782-4-cache"}]': finished 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.968759+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60537-1"}]': finished 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.968759+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60537-1"}]': finished 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.968787+0000 mon.a (mon.0) 923 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]': finished 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.968787+0000 mon.a (mon.0) 923 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]': finished 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.968826+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-59712-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.968826+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-59712-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.968859+0000 mon.a (mon.0) 925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-59738-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.968859+0000 mon.a (mon.0) 925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-59738-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.977209+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-59712-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:10.977209+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-59712-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch 2026-03-10T07:28:11.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.014239+0000 mon.c (mon.2) 114 : audit [INF] from='client.? 192.168.123.100:0/2132039165' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60537-1"}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.014239+0000 mon.c (mon.2) 114 : audit [INF] from='client.? 
192.168.123.100:0/2132039165' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60537-1"}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.014718+0000 mon.c (mon.2) 115 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-4", "overlaypool": "test-rados-api-vm00-59782-4-cache"}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.014718+0000 mon.c (mon.2) 115 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-4", "overlaypool": "test-rados-api-vm00-59782-4-cache"}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.015074+0000 mon.c (mon.2) 116 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-59738-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.015074+0000 mon.c (mon.2) 116 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-59738-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.019531+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.019531+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.023425+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.100:0/4094726258' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.023425+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 
192.168.123.100:0/4094726258' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: cluster 2026-03-10T07:28:11.030603+0000 mon.a (mon.0) 926 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: cluster 2026-03-10T07:28:11.030603+0000 mon.a (mon.0) 926 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.033710+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.100:0/3203008128' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59629-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.033710+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.100:0/3203008128' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59629-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.053217+0000 mon.a (mon.0) 927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-59712-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.053217+0000 mon.a (mon.0) 927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-59712-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.058647+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60537-1"}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.058647+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60537-1"}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.058825+0000 mon.a (mon.0) 929 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-4", "overlaypool": "test-rados-api-vm00-59782-4-cache"}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.058825+0000 mon.a (mon.0) 929 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-4", "overlaypool": "test-rados-api-vm00-59782-4-cache"}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.058959+0000 mon.a (mon.0) 930 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-59738-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.058959+0000 mon.a (mon.0) 930 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-59738-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.065657+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.065657+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.065794+0000 mon.a (mon.0) 932 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.065794+0000 mon.a (mon.0) 932 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.065911+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59629-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.065911+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59629-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.102106+0000 mon.a (mon.0) 934 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60490-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:11.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:11 vm03 bash[23382]: audit 2026-03-10T07:28:11.102106+0000 mon.a (mon.0) 934 : audit [INF] from='client.? 
192.168.123.100:0/4067061545' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60490-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:12.089 INFO:tasks.workunit.client.0.vm00.stdout: cmd: Running main() from gmock_main.cc 2026-03-10T07:28:12.089 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [==========] Running 3 tests from 1 test suite. 2026-03-10T07:28:12.089 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [----------] Global test environment set-up. 2026-03-10T07:28:12.089 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [----------] 3 tests from NeoRadosCmd 2026-03-10T07:28:12.089 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [ RUN ] NeoRadosCmd.MonDescribe 2026-03-10T07:28:12.089 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [ OK ] NeoRadosCmd.MonDescribe (2020 ms) 2026-03-10T07:28:12.089 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [ RUN ] NeoRadosCmd.OSDCmd 2026-03-10T07:28:12.089 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [ OK ] NeoRadosCmd.OSDCmd (2159 ms) 2026-03-10T07:28:12.089 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [ RUN ] NeoRadosCmd.PGCmd 2026-03-10T07:28:12.089 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [ OK ] NeoRadosCmd.PGCmd (3179 ms) 2026-03-10T07:28:12.089 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [----------] 3 tests from NeoRadosCmd (7358 ms total) 2026-03-10T07:28:12.089 INFO:tasks.workunit.client.0.vm00.stdout: cmd: 2026-03-10T07:28:12.089 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [----------] Global test environment tear-down 2026-03-10T07:28:12.089 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [==========] 3 tests from 1 test suite ran. (7359 ms total) 2026-03-10T07:28:12.089 INFO:tasks.workunit.client.0.vm00.stdout: cmd: [ PASSED ] 3 tests. 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_1_[60387]: starting. 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_1_[60387]: creating pool ceph_test_rados_delete_pools_parallel.vm00-60278 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_1_[60387]: created object 0... 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_1_[60387]: created object 25... 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_1_[60387]: created object 49... 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_1_[60387]: finishing. 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_1_[60387]: shutting down. 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_2_[60388]: starting. 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_2_[60388]: deleting pool ceph_test_rados_delete_pools_parallel.vm00-60278 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_2_[60388]: shutting down. 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: ******************************* 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_3_[61021]: starting. 
2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_3_[61021]: creating pool ceph_test_rados_delete_pools_parallel.vm00-60278 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_3_[61021]: created object 0... 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_3_[61021]: created object 25... 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_3_[61021]: created object 49... 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_3_[61021]: finishing. 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_3_[61021]: shutting down. 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: ******************************* 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_5_[61023]: starting. 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_5_[61023]: listing objects. 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_5_[61023]: listed object 0... 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_5_[61023]: listed object 25... 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_5_[61023]: saw 50 objects 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_5_[61023]: shutting down. 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: ******************************* 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_4_[61022]: starting. 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_4_[61022]: deleting pool ceph_test_rados_delete_pools_parallel.vm00-60278 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: process_4_[61022]: shutting down. 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: ******************************* 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: ******************************* 2026-03-10T07:28:12.099 INFO:tasks.workunit.client.0.vm00.stdout: delete_pools_parallel: ******* SUCCESS ********** 2026-03-10T07:28:12.102 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_1_[60266]: starting. 2026-03-10T07:28:12.102 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_1_[60266]: creating pool ceph_test_rados_open_pools_parallel.vm00-60224 2026-03-10T07:28:12.102 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_1_[60266]: created object 0... 2026-03-10T07:28:12.102 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_1_[60266]: created object 25... 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_1_[60266]: created object 49... 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_1_[60266]: finishing. 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_1_[60266]: shutting down. 
2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_2_[60267]: starting. 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_2_[60267]: rados_pool_create. 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_2_[60267]: rados_ioctx_create. 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_2_[60267]: shutting down. 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: ******************************* 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_3_[61048]: starting. 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_3_[61048]: creating pool ceph_test_rados_open_pools_parallel.vm00-60224 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_3_[61048]: created object 0... 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_3_[61048]: created object 25... 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_3_[61048]: created object 49... 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_3_[61048]: finishing. 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_3_[61048]: shutting down. 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: ******************************* 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_4_[61049]: starting. 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_4_[61049]: rados_pool_create. 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_4_[61049]: rados_ioctx_create. 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: process_4_[61049]: shutting down. 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: ******************************* 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: ******************************* 2026-03-10T07:28:12.103 INFO:tasks.workunit.client.0.vm00.stdout: open_pools_parallel: ******* SUCCESS ********** 2026-03-10T07:28:12.132 INFO:tasks.workunit.client.0.vm00.stdout:patch 2026-03-10T07:28:12.132 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:08.886992+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-59973-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch 2026-03-10T07:28:12.132 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:08.887335+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-59964-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch 2026-03-10T07:28:12.132 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:08.897940+0000 mon.b [INF] from='client.? 
192.168.123.100:0/4094726258' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:08.906359+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:10.101181+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-59712-10"}]: dispatch 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:10.101277+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:10.102127+0000 mon.c [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:10.102715+0000 mon.b [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-59712-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:10.112151+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:10.112306+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-59712-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:10.113114+0000 mon.c [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-59738-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:10.119422+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:10.199515+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-59738-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:10.876577+0000 mon.a [WRN] Health check update: 14 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:10.968537+0000 mon.a [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm00-59973-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm00-59973-7"}]': finished 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:10.968606+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm00-59964-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm00-59964-7"}]': finished 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:10.968667+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm00-60490-1"}]': finished 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:10.968696+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-4", "tierpool": "test-rados-api-vm00-59782-4-cache"}]': finished 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:10.968759+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm00-60537-1"}]': finished 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:10.968787+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm00-60006-1"}]': finished 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:10.968826+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm00-59712-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:10.968859+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm00-59738-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:10.977209+0000 mon.b [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-59712-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:11.014239+0000 mon.c [INF] from='client.? 192.168.123.100:0/2132039165' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60537-1"}]: dispatch 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:11.014718+0000 mon.c [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-4", "overlaypool": "test-rados-api-vm00-59782-4-cache"}]: dispatch 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:11.015074+0000 mon.c [INF] from='client.? 
192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-59738-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:11.019531+0000 mon.c [INF] from='client.? 192.168.123.100:0/3620224513' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:11.023425+0000 mon.b [INF] from='client.? 192.168.123.100:0/4094726258' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:11.033710+0000 mon.b [INF] from='client.? 192.168.123.100:0/3203008128' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59629-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:11.053217+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-59712-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch 2026-03-10T07:28:12.133 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:11.058647+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60537-1"}]: dispatch 2026-03-10T07:28:12.248 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:11.058825+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-4", "overlaypool": "test-rados-api-vm00-59782-4-cache"}]: dispatch 2026-03-10T07:28:12.248 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:11.058959+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-59738-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:12.248 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:11.065657+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]: dispatch 2026-03-10T07:28:12.248 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:11.065794+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:12.248 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:11.065911+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59629-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:12.248 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:11.102106+0000 mon.a [INF] from='client.? 
192.168.123.100:0/4067061545' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60490-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:12.248 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:12.037509+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:12.248 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:12.037583+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59645-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59645-16"}]': finished 2026-03-10T07:28:12.248 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:12.037631+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60537-1"}]': finished 2026-03-10T07:28:12.248 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:12.037686+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-4", "overlaypool": "test-rados-api-vm00-59782-4-cache"}]': finished 2026-03-10T07:28:12.248 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:12.037717+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]': finished 2026-03-10T07:28:12.248 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:12.037751+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:12.248 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:12.037791+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59629-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:12.248 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:12.037825+0000 mon.a [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60490-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:12.248 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:12.052776+0000 mon.b [INF] from='client.? 
192.168.123.100:0/4094726258' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59637-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T07:28:12.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: cluster 2026-03-10T07:28:10.575143+0000 mgr.y (mgr.24407) 109 : cluster [DBG] pgmap v70: 900 pgs: 224 creating+peering, 192 unknown, 484 active+clean; 470 KiB data, 312 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 68 KiB/s wr, 173 op/s 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:11.431605+0000 mon.a (mon.0) 935 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:11.841092+0000 mon.b (mon.1) 55 : audit [DBG] from='client.? 192.168.123.100:0/3356601691' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:12.037509+0000 mon.a (mon.0) 936 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:12.037583+0000 mon.a (mon.0) 937 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59645-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59645-16"}]': finished 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:12.037631+0000 mon.a (mon.0) 938 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60537-1"}]': finished 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:12.037686+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-4", "overlaypool": "test-rados-api-vm00-59782-4-cache"}]': finished 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:12.037717+0000 mon.a (mon.0) 940 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]': finished 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:12.037751+0000 mon.a (mon.0) 941 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:12.037791+0000 mon.a (mon.0) 942 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59629-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:12.037825+0000 mon.a (mon.0) 943 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60490-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:12.052776+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.100:0/4094726258' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59637-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: cluster 2026-03-10T07:28:12.064366+0000 mon.a (mon.0) 944 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:12.070534+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.100:0/2167105743' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:12.075277+0000 mon.c (mon.2) 118 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-4-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:12.103030+0000 mon.a (mon.0) 945 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm00-60490-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60490-2"}]: dispatch 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:12.104168+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-4-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:12.104542+0000 mon.a (mon.0) 947 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59637-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:12.105096+0000 mon.a (mon.0) 948 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:12.108157+0000 mon.a (mon.0) 949 : audit [INF] from='client.? 192.168.123.100:0/2876161174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:12.130503+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.100:0/2250161017' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:12.137845+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 192.168.123.100:0/751840027' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60537-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: cluster 2026-03-10T07:28:12.142074+0000 client.admin (client.?) 0 : cluster [INF] threexx 2026-03-10T07:28:12.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:12.169936+0000 mon.a (mon.0) 950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T07:28:12.517 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:12 vm03 bash[23382]: audit 2026-03-10T07:28:12.170044+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60537-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: cluster 2026-03-10T07:28:10.575143+0000 mgr.y (mgr.24407) 109 : cluster [DBG] pgmap v70: 900 pgs: 224 creating+peering, 192 unknown, 484 active+clean; 470 KiB data, 312 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 68 KiB/s wr, 173 op/s 2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:11.431605+0000 mon.a (mon.0) 935 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:11.841092+0000 mon.b (mon.1) 55 : audit [DBG] from='client.? 192.168.123.100:0/3356601691' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:12.037509+0000 mon.a (mon.0) 936 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:12.037583+0000 mon.a (mon.0) 937 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59645-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59645-16"}]': finished 2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:12.037631+0000 mon.a (mon.0) 938 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60537-1"}]': finished 2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:12.037686+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-4", "overlaypool": "test-rados-api-vm00-59782-4-cache"}]': finished 2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:12.037717+0000 mon.a (mon.0) 940 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]': finished 2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:12.037751+0000 mon.a (mon.0) 941 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:12.037791+0000 mon.a (mon.0) 942 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59629-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:12.037825+0000 mon.a (mon.0) 943 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60490-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:12.052776+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.100:0/4094726258' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59637-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: cluster 2026-03-10T07:28:12.064366+0000 mon.a (mon.0) 944 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:12.070534+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.100:0/2167105743' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:12.075277+0000 mon.c (mon.2) 118 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-4-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:12.103030+0000 mon.a (mon.0) 945 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm00-60490-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60490-2"}]: dispatch 2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:12.104168+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-4-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:12.104542+0000 mon.a (mon.0) 947 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59637-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:12.105096+0000 mon.a (mon.0) 948 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:12.108157+0000 mon.a (mon.0) 949 : audit [INF] from='client.? 192.168.123.100:0/2876161174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:12.130503+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.100:0/2250161017' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T07:28:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:12.137845+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 192.168.123.100:0/751840027' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60537-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: cluster 2026-03-10T07:28:12.142074+0000 client.admin (client.?) 0 : cluster [INF] threexx 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:12.169936+0000 mon.a (mon.0) 950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:12 vm00 bash[28005]: audit 2026-03-10T07:28:12.170044+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60537-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: cluster 2026-03-10T07:28:10.575143+0000 mgr.y (mgr.24407) 109 : cluster [DBG] pgmap v70: 900 pgs: 224 creating+peering, 192 unknown, 484 active+clean; 470 KiB data, 312 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 68 KiB/s wr, 173 op/s 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:11.431605+0000 mon.a (mon.0) 935 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:11.841092+0000 mon.b (mon.1) 55 : audit [DBG] from='client.? 192.168.123.100:0/3356601691' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.037509+0000 mon.a (mon.0) 936 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.037583+0000 mon.a (mon.0) 937 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm00-59645-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm00-59645-16"}]': finished 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.037631+0000 mon.a (mon.0) 938 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm00-60537-1"}]': finished 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.037686+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-4", "overlaypool": "test-rados-api-vm00-59782-4-cache"}]': finished 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.037717+0000 mon.a (mon.0) 940 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm00-60006-1"}]': finished 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.037751+0000 mon.a (mon.0) 941 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm00-59637-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.037791+0000 mon.a (mon.0) 942 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm00-59629-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.037825+0000 mon.a (mon.0) 943 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm00-60490-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.052776+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.100:0/4094726258' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59637-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: cluster 2026-03-10T07:28:12.064366+0000 mon.a (mon.0) 944 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.070534+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.100:0/2167105743' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.075277+0000 mon.c (mon.2) 118 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-4-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.075277+0000 mon.c (mon.2) 118 : audit [INF] from='client.? 
192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-4-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.103030+0000 mon.a (mon.0) 945 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm00-60490-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60490-2"}]: dispatch 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.103030+0000 mon.a (mon.0) 945 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm00-60490-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60490-2"}]: dispatch 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.104168+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-4-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.104168+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-4-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.104542+0000 mon.a (mon.0) 947 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59637-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.104542+0000 mon.a (mon.0) 947 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59637-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.105096+0000 mon.a (mon.0) 948 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.105096+0000 mon.a (mon.0) 948 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.108157+0000 mon.a (mon.0) 949 : audit [INF] from='client.? 192.168.123.100:0/2876161174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.108157+0000 mon.a (mon.0) 949 : audit [INF] from='client.? 
192.168.123.100:0/2876161174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.130503+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.100:0/2250161017' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.130503+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.100:0/2250161017' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T07:28:12.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.137845+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 192.168.123.100:0/751840027' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60537-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:12.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.137845+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 192.168.123.100:0/751840027' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60537-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:12.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: cluster 2026-03-10T07:28:12.142074+0000 client.admin (client.?) 0 : cluster [INF] threexx 2026-03-10T07:28:12.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: cluster 2026-03-10T07:28:12.142074+0000 client.admin (client.?) 0 : cluster [INF] threexx 2026-03-10T07:28:12.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.169936+0000 mon.a (mon.0) 950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T07:28:12.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.169936+0000 mon.a (mon.0) 950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T07:28:12.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.170044+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60537-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:12.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:12 vm00 bash[20701]: audit 2026-03-10T07:28:12.170044+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60537-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:13.265 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:28:12 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:12.248632+0000 mon.a (mon.0) 952 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:12.248632+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:12.256329+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.100:0/2250161017' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:12.256329+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.100:0/2250161017' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: cluster 2026-03-10T07:28:12.265814+0000 client.admin (client.?) 0 : cluster [INF] fourxx 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: cluster 2026-03-10T07:28:12.265814+0000 client.admin (client.?) 0 : cluster [INF] fourxx 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:12.266518+0000 mon.a (mon.0) 953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:12.266518+0000 mon.a (mon.0) 953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:12.432920+0000 mon.a (mon.0) 954 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:12.432920+0000 mon.a (mon.0) 954 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: cluster 2026-03-10T07:28:13.038722+0000 mon.a (mon.0) 955 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: cluster 2026-03-10T07:28:13.038722+0000 mon.a (mon.0) 955 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.053139+0000 mon.a (mon.0) 956 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-59712-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-59712-10"}]': finished 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.053139+0000 mon.a (mon.0) 956 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-59712-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-59712-10"}]': finished 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.053241+0000 mon.a (mon.0) 957 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-59738-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-59738-10"}]': finished 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.053241+0000 mon.a (mon.0) 957 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-59738-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-59738-10"}]': finished 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.053334+0000 mon.a (mon.0) 958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-4-cache", "mode": "writeback"}]': finished 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.053334+0000 mon.a (mon.0) 958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-4-cache", "mode": "writeback"}]': finished 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.053407+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59637-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.053407+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59637-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.053451+0000 mon.a (mon.0) 960 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.053451+0000 mon.a (mon.0) 960 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.053514+0000 mon.a (mon.0) 961 : audit [INF] from='client.? 
192.168.123.100:0/2876161174' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.053514+0000 mon.a (mon.0) 961 : audit [INF] from='client.? 192.168.123.100:0/2876161174' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.054536+0000 mon.a (mon.0) 962 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60537-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.054536+0000 mon.a (mon.0) 962 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60537-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: cluster 2026-03-10T07:28:13.112053+0000 mon.a (mon.0) 963 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: cluster 2026-03-10T07:28:13.112053+0000 mon.a (mon.0) 963 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.112721+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.100:0/2132153704' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.112721+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.100:0/2132153704' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.112921+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.112921+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.132864+0000 mon.c (mon.2) 120 : audit [INF] from='client.? 
192.168.123.100:0/751840027' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60537-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60537-2"}]: dispatch 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.132864+0000 mon.c (mon.2) 120 : audit [INF] from='client.? 192.168.123.100:0/751840027' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60537-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60537-2"}]: dispatch 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.142286+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60537-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60537-2"}]: dispatch 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.142286+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60537-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60537-2"}]: dispatch 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.142808+0000 mon.a (mon.0) 965 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.142808+0000 mon.a (mon.0) 965 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.142886+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.142886+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.142963+0000 mon.a (mon.0) 967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch 2026-03-10T07:28:13.298 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:13 vm03 bash[23382]: audit 2026-03-10T07:28:13.142963+0000 mon.a (mon.0) 967 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:12.070534+0000 mon.b [INF] from='client.? 192.168.123.100:0/2167105743' entity='client api_io_pp: Running main() from gmock_main.cc
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [==========] Running 39 tests from 2 test suites.
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [----------] Global test environment set-up.
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [----------] 21 tests from LibRadosIoPP
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: seed 59650
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.TooBigPP
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.TooBigPP (0 ms)
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.SimpleWritePP
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.SimpleWritePP (715 ms)
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.ReadOpPP
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.ReadOpPP (49 ms)
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.SparseReadOpPP
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.SparseReadOpPP (34 ms)
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RoundTripPP
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.RoundTripPP (26 ms)
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RoundTripPP2
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.RoundTripPP2 (17 ms)
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.Checksum
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.Checksum (7 ms)
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.ReadIntoBufferlist
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.ReadIntoBufferlist (18 ms)
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.OverlappingWriteRoundTripPP
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.OverlappingWriteRoundTripPP (36 ms)
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.WriteFullRoundTripPP
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.WriteFullRoundTripPP (31 ms)
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.WriteFullRoundTripPP2
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.WriteFullRoundTripPP2 (15 ms)
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.AppendRoundTripPP
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.AppendRoundTripPP (24 ms)
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.TruncTestPP
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.TruncTestPP (138 ms)
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RemoveTestPP
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.RemoveTestPP (57 ms)
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.XattrsRoundTripPP
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.XattrsRoundTripPP (54 ms)
2026-03-10T07:28:13.364 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RmXattrPP
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.RmXattrPP (559 ms)
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.XattrListPP
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.XattrListPP (14 ms)
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CrcZeroWrite
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.CrcZeroWrite (13 ms)
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CmpExtPP
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.CmpExtPP (8 ms)
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CmpExtDNEPP
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.CmpExtDNEPP (11 ms)
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CmpExtMismatchPP
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoPP.CmpExtMismatchPP (19 ms)
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [----------] 21 tests from LibRadosIoPP (1847 ms total)
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp:
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [----------] 18 tests from LibRadosIoECPP
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.SimpleWritePP
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.SimpleWritePP (1247 ms)
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.ReadOpPP
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.ReadOpPP (49 ms)
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.SparseReadOpPP
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.SparseReadOpPP (13 ms)
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.RoundTripPP
2026-03-10T07:28:13.365 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RoundTripPP (7 ms)
2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:12.248632+0000
mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:12.248632+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:12.256329+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.100:0/2250161017' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:12.256329+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.100:0/2250161017' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: cluster 2026-03-10T07:28:12.265814+0000 client.admin (client.?) 0 : cluster [INF] fourxx 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: cluster 2026-03-10T07:28:12.265814+0000 client.admin (client.?) 0 : cluster [INF] fourxx 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:12.266518+0000 mon.a (mon.0) 953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:12.266518+0000 mon.a (mon.0) 953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:12.432920+0000 mon.a (mon.0) 954 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:12.432920+0000 mon.a (mon.0) 954 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: cluster 2026-03-10T07:28:13.038722+0000 mon.a (mon.0) 955 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: cluster 2026-03-10T07:28:13.038722+0000 mon.a (mon.0) 955 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.053139+0000 mon.a (mon.0) 956 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-59712-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-59712-10"}]': finished 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.053139+0000 mon.a (mon.0) 956 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-59712-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-59712-10"}]': finished 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.053241+0000 mon.a (mon.0) 957 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-59738-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-59738-10"}]': finished 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.053241+0000 mon.a (mon.0) 957 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-59738-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-59738-10"}]': finished 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.053334+0000 mon.a (mon.0) 958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-4-cache", "mode": "writeback"}]': finished 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.053334+0000 mon.a (mon.0) 958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-4-cache", "mode": "writeback"}]': finished 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.053407+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59637-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.053407+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59637-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.053451+0000 mon.a (mon.0) 960 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.053451+0000 mon.a (mon.0) 960 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.053514+0000 mon.a (mon.0) 961 : audit [INF] from='client.? 
192.168.123.100:0/2876161174' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.053514+0000 mon.a (mon.0) 961 : audit [INF] from='client.? 192.168.123.100:0/2876161174' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.054536+0000 mon.a (mon.0) 962 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60537-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.054536+0000 mon.a (mon.0) 962 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60537-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: cluster 2026-03-10T07:28:13.112053+0000 mon.a (mon.0) 963 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: cluster 2026-03-10T07:28:13.112053+0000 mon.a (mon.0) 963 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.112721+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.100:0/2132153704' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.112721+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.100:0/2132153704' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.112921+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.112921+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch 2026-03-10T07:28:13.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.132864+0000 mon.c (mon.2) 120 : audit [INF] from='client.? 
192.168.123.100:0/751840027' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60537-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60537-2"}]: dispatch 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.132864+0000 mon.c (mon.2) 120 : audit [INF] from='client.? 192.168.123.100:0/751840027' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60537-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60537-2"}]: dispatch 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.142286+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60537-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60537-2"}]: dispatch 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.142286+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60537-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60537-2"}]: dispatch 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.142808+0000 mon.a (mon.0) 965 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.142808+0000 mon.a (mon.0) 965 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.142886+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.142886+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.142963+0000 mon.a (mon.0) 967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:13 vm00 bash[28005]: audit 2026-03-10T07:28:13.142963+0000 mon.a (mon.0) 967 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:12.248632+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:12.248632+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:12.256329+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.100:0/2250161017' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:12.256329+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.100:0/2250161017' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: cluster 2026-03-10T07:28:12.265814+0000 client.admin (client.?) 0 : cluster [INF] fourxx 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: cluster 2026-03-10T07:28:12.265814+0000 client.admin (client.?) 0 : cluster [INF] fourxx 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:12.266518+0000 mon.a (mon.0) 953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:12.266518+0000 mon.a (mon.0) 953 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:12.432920+0000 mon.a (mon.0) 954 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:12.432920+0000 mon.a (mon.0) 954 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: cluster 2026-03-10T07:28:13.038722+0000 mon.a (mon.0) 955 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: cluster 2026-03-10T07:28:13.038722+0000 mon.a (mon.0) 955 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.053139+0000 mon.a (mon.0) 956 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-59712-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-59712-10"}]': finished 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.053139+0000 mon.a (mon.0) 956 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm00-59712-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm00-59712-10"}]': finished 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.053241+0000 mon.a (mon.0) 957 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-59738-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-59738-10"}]': finished 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.053241+0000 mon.a (mon.0) 957 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm00-59738-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm00-59738-10"}]': finished 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.053334+0000 mon.a (mon.0) 958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-4-cache", "mode": "writeback"}]': finished 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.053334+0000 mon.a (mon.0) 958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-4-cache", "mode": "writeback"}]': finished 2026-03-10T07:28:13.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.053407+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59637-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.053407+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59637-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.053451+0000 mon.a (mon.0) 960 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.053451+0000 mon.a (mon.0) 960 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.053514+0000 mon.a (mon.0) 961 : audit [INF] from='client.? 192.168.123.100:0/2876161174' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.053514+0000 mon.a (mon.0) 961 : audit [INF] from='client.? 192.168.123.100:0/2876161174' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.054536+0000 mon.a (mon.0) 962 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60537-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.054536+0000 mon.a (mon.0) 962 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60537-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: cluster 2026-03-10T07:28:13.112053+0000 mon.a (mon.0) 963 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: cluster 2026-03-10T07:28:13.112053+0000 mon.a (mon.0) 963 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.112721+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.100:0/2132153704' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.112721+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.100:0/2132153704' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.112921+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.112921+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 
192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.132864+0000 mon.c (mon.2) 120 : audit [INF] from='client.? 192.168.123.100:0/751840027' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60537-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60537-2"}]: dispatch 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.132864+0000 mon.c (mon.2) 120 : audit [INF] from='client.? 192.168.123.100:0/751840027' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60537-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60537-2"}]: dispatch 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.142286+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60537-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60537-2"}]: dispatch 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.142286+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60537-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60537-2"}]: dispatch 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.142808+0000 mon.a (mon.0) 965 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.142808+0000 mon.a (mon.0) 965 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.142886+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.142886+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-59973-7"}]: dispatch 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.142963+0000 mon.a (mon.0) 967 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch 2026-03-10T07:28:13.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:13 vm00 bash[20701]: audit 2026-03-10T07:28:13.142963+0000 mon.a (mon.0) 967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-59964-7"}]: dispatch 2026-03-10T07:28:14.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:14 vm03 bash[23382]: cluster 2026-03-10T07:28:12.576171+0000 mgr.y (mgr.24407) 110 : cluster [DBG] pgmap v73: 900 pgs: 64 creating+peering, 384 unknown, 452 active+clean; 459 KiB data, 312 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 23 KiB/s wr, 116 op/s 2026-03-10T07:28:14.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:14 vm03 bash[23382]: cluster 2026-03-10T07:28:12.576171+0000 mgr.y (mgr.24407) 110 : cluster [DBG] pgmap v73: 900 pgs: 64 creating+peering, 384 unknown, 452 active+clean; 459 KiB data, 312 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 23 KiB/s wr, 116 op/s 2026-03-10T07:28:14.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:14 vm03 bash[23382]: audit 2026-03-10T07:28:12.955912+0000 mgr.y (mgr.24407) 111 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:28:14.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:14 vm03 bash[23382]: audit 2026-03-10T07:28:12.955912+0000 mgr.y (mgr.24407) 111 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:28:14.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:14 vm03 bash[23382]: audit 2026-03-10T07:28:13.291324+0000 mon.a (mon.0) 968 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-10T07:28:14.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:14 vm03 bash[23382]: audit 2026-03-10T07:28:13.291324+0000 mon.a (mon.0) 968 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-10T07:28:14.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:14 vm03 bash[23382]: audit 2026-03-10T07:28:13.359431+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-4"}]: dispatch 2026-03-10T07:28:14.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:14 vm03 bash[23382]: audit 2026-03-10T07:28:13.359431+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-4"}]: dispatch 2026-03-10T07:28:14.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:14 vm03 bash[23382]: audit 2026-03-10T07:28:13.360137+0000 mon.a (mon.0) 969 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-4"}]: dispatch 2026-03-10T07:28:14.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:14 vm03 bash[23382]: audit 2026-03-10T07:28:13.360137+0000 mon.a (mon.0) 969 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-4"}]: dispatch 2026-03-10T07:28:14.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:14 vm03 bash[23382]: audit 2026-03-10T07:28:13.434013+0000 mon.a (mon.0) 970 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:14.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:14 vm03 bash[23382]: audit 2026-03-10T07:28:13.434013+0000 mon.a (mon.0) 970 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:14.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:14 vm03 bash[23382]: audit 2026-03-10T07:28:13.527398+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59650-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-10T07:28:14.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:14 vm03 bash[23382]: audit 2026-03-10T07:28:13.527398+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59650-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-10T07:28:14.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:14 vm03 bash[23382]: audit 2026-03-10T07:28:13.528983+0000 mon.a (mon.0) 971 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59650-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-10T07:28:14.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:14 vm03 bash[23382]: audit 2026-03-10T07:28:13.528983+0000 mon.a (mon.0) 971 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59650-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-10T07:28:14.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:14 vm00 bash[28005]: cluster 2026-03-10T07:28:12.576171+0000 mgr.y (mgr.24407) 110 : cluster [DBG] pgmap v73: 900 pgs: 64 creating+peering, 384 unknown, 452 active+clean; 459 KiB data, 312 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 23 KiB/s wr, 116 op/s 2026-03-10T07:28:14.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:14 vm00 bash[28005]: cluster 2026-03-10T07:28:12.576171+0000 mgr.y (mgr.24407) 110 : cluster [DBG] pgmap v73: 900 pgs: 64 creating+peering, 384 unknown, 452 active+clean; 459 KiB data, 312 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 23 KiB/s wr, 116 op/s 2026-03-10T07:28:14.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:14 vm00 bash[28005]: audit 2026-03-10T07:28:12.955912+0000 mgr.y (mgr.24407) 111 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:28:14.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:14 vm00 bash[28005]: audit 2026-03-10T07:28:12.955912+0000 mgr.y (mgr.24407) 111 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:28:14.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:14 vm00 bash[28005]: audit 2026-03-10T07:28:13.291324+0000 mon.a (mon.0) 968 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-10T07:28:14.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:14 vm00 bash[28005]: audit 2026-03-10T07:28:13.291324+0000 mon.a (mon.0) 968 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-10T07:28:14.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:14 vm00 bash[28005]: audit 2026-03-10T07:28:13.359431+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-4"}]: dispatch 2026-03-10T07:28:14.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:14 vm00 bash[28005]: audit 2026-03-10T07:28:13.359431+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-4"}]: dispatch 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:14 vm00 bash[28005]: audit 2026-03-10T07:28:13.360137+0000 mon.a (mon.0) 969 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-4"}]: dispatch 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:14 vm00 bash[28005]: audit 2026-03-10T07:28:13.360137+0000 mon.a (mon.0) 969 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-4"}]: dispatch 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:14 vm00 bash[28005]: audit 2026-03-10T07:28:13.434013+0000 mon.a (mon.0) 970 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:14 vm00 bash[28005]: audit 2026-03-10T07:28:13.434013+0000 mon.a (mon.0) 970 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:14 vm00 bash[28005]: audit 2026-03-10T07:28:13.527398+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59650-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:14 vm00 bash[28005]: audit 2026-03-10T07:28:13.527398+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59650-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:14 vm00 bash[28005]: audit 2026-03-10T07:28:13.528983+0000 mon.a (mon.0) 971 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59650-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:14 vm00 bash[28005]: audit 2026-03-10T07:28:13.528983+0000 mon.a (mon.0) 971 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59650-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:14 vm00 bash[20701]: cluster 2026-03-10T07:28:12.576171+0000 mgr.y (mgr.24407) 110 : cluster [DBG] pgmap v73: 900 pgs: 64 creating+peering, 384 unknown, 452 active+clean; 459 KiB data, 312 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 23 KiB/s wr, 116 op/s 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:14 vm00 bash[20701]: cluster 2026-03-10T07:28:12.576171+0000 mgr.y (mgr.24407) 110 : cluster [DBG] pgmap v73: 900 pgs: 64 creating+peering, 384 unknown, 452 active+clean; 459 KiB data, 312 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 23 KiB/s wr, 116 op/s 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:14 vm00 bash[20701]: audit 2026-03-10T07:28:12.955912+0000 mgr.y (mgr.24407) 111 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:14 vm00 bash[20701]: audit 2026-03-10T07:28:12.955912+0000 mgr.y (mgr.24407) 111 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:14 vm00 bash[20701]: audit 2026-03-10T07:28:13.291324+0000 mon.a (mon.0) 968 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:14 vm00 bash[20701]: audit 2026-03-10T07:28:13.291324+0000 mon.a (mon.0) 968 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:14 vm00 bash[20701]: audit 2026-03-10T07:28:13.359431+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-4"}]: dispatch 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:14 vm00 bash[20701]: audit 2026-03-10T07:28:13.359431+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-4"}]: dispatch 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:14 vm00 bash[20701]: audit 2026-03-10T07:28:13.360137+0000 mon.a (mon.0) 969 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-4"}]: dispatch 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:14 vm00 bash[20701]: audit 2026-03-10T07:28:13.360137+0000 mon.a (mon.0) 969 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-4"}]: dispatch 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:14 vm00 bash[20701]: audit 2026-03-10T07:28:13.434013+0000 mon.a (mon.0) 970 : audit [DBG] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:14 vm00 bash[20701]: audit 2026-03-10T07:28:13.434013+0000 mon.a (mon.0) 970 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:14 vm00 bash[20701]: audit 2026-03-10T07:28:13.527398+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59650-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:14 vm00 bash[20701]: audit 2026-03-10T07:28:13.527398+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59650-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:14 vm00 bash[20701]: audit 2026-03-10T07:28:13.528983+0000 mon.a (mon.0) 971 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59650-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-10T07:28:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:14 vm00 bash[20701]: audit 2026-03-10T07:28:13.528983+0000 mon.a (mon.0) 971 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59650-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-10T07:28:15.302 INFO:tasks.workunit.client.0.vm00.stdout: api_io_p.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:15.303 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:12.075277+0000 mon.c [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-4-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:28:15.303 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:12.103030+0000 mon.a [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm00-60490-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60490-2"}]: dispatch 2026-03-10T07:28:15.303 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:12.104168+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-4-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:28:15.303 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:12.104542+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm00-59637-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T07:28:15.303 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:12.105096+0000 mon.a [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:15.303 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:12.108157+0000 mon.a [INF] from='client.? 192.168.123.100:0/2876161174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:15.303 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:12.130503+0000 mon.b [INF] from='client.? 192.168.123.100:0/2250161017' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T07:28:15.303 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:12.137845+0000 mon.c [INF] from='client.? 192.168.123.100:0/751840027' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60537-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:15.303 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:12.142074+0000 client.admin [INF] threexx 2026-03-10T07:28:15.303 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:12.169936+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T07:28:15.303 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: got: 2026-03-10T07:28:12.170044+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm00-60537-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:15.303 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [ OK ] LibRadosCmd.WatchLog (7391 ms) 2026-03-10T07:28:15.303 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [----------] 4 tests from LibRadosCmd (10913 ms total) 2026-03-10T07:28:15.303 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: 2026-03-10T07:28:15.303 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [----------] Global test environment tear-down 2026-03-10T07:28:15.303 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [==========] 4 tests from 1 test suite ran. (10913 ms total) 2026-03-10T07:28:15.303 INFO:tasks.workunit.client.0.vm00.stdout: api_cmd: [ PASSED ] 4 tests. 2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: Running main() from gmock_main.cc 2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [==========] Running 9 tests from 2 test suites. 2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [----------] Global test environment set-up. 
2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [----------] 5 tests from LibRadosStat 2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ RUN ] LibRadosStat.Stat 2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ OK ] LibRadosStat.Stat (415 ms) 2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ RUN ] LibRadosStat.Stat2 2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ OK ] LibRadosStat.Stat2 (298 ms) 2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ RUN ] LibRadosStat.StatNS 2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ OK ] LibRadosStat.StatNS (63 ms) 2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ RUN ] LibRadosStat.ClusterStat 2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ OK ] LibRadosStat.ClusterStat (0 ms) 2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ RUN ] LibRadosStat.PoolStat 2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ OK ] LibRadosStat.PoolStat (20 ms) 2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [----------] 5 tests from LibRadosStat (796 ms total) 2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: 2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [----------] 4 tests from LibRadosStatEC 2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ RUN ] LibRadosStatEC.Stat 2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ OK ] LibRadosStatEC.Stat (1283 ms) 2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ RUN ] LibRadosStatEC.StatNS 2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ OK ] LibRadosStatEC.StatNS (74 ms) 2026-03-10T07:28:15.406 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ RUN ] LibRadosStatEC.ClusterStat 2026-03-10T07:28:15.407 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ OK ] LibRadosStatEC.ClusterStat (0 ms) 2026-03-10T07:28:15.407 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ RUN ] LibRadosStatEC.PoolStat 2026-03-10T07:28:15.407 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ OK ] LibRadosStatEC.PoolStat (2 ms) 2026-03-10T07:28:15.407 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [----------] 4 tests from LibRadosStatEC (1359 ms total) 2026-03-10T07:28:15.407 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: 2026-03-10T07:28:15.407 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [----------] Global test environment tear-down 2026-03-10T07:28:15.407 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [==========] 9 tests from 2 test suites ran. (11099 ms total) 2026-03-10T07:28:15.407 INFO:tasks.workunit.client.0.vm00.stdout: api_stat: [ PASSED ] 9 tests. 2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: Running main() from gmock_main.cc 2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [==========] Running 9 tests from 2 test suites. 2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [----------] Global test environment set-up. 
2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [----------] 5 tests from LibRadosStatPP 2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: seed 59973 2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.StatPP 2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ OK ] LibRadosStatPP.StatPP (512 ms) 2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.Stat2Mtime2PP 2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ OK ] LibRadosStatPP.Stat2Mtime2PP (84 ms) 2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.ClusterStatPP 2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ OK ] LibRadosStatPP.ClusterStatPP (1 ms) 2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.PoolStatPP 2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ OK ] LibRadosStatPP.PoolStatPP (25 ms) 2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.StatPPNS 2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ OK ] LibRadosStatPP.StatPPNS (60 ms) 2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [----------] 5 tests from LibRadosStatPP (683 ms total) 2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: 2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [----------] 4 tests from LibRadosStatECPP 2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.StatPP 2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.StatPP (1346 ms) 2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.ClusterStatPP 2026-03-10T07:28:15.411 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.ClusterStatPP (1 ms) 2026-03-10T07:28:15.412 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.PoolStatPP 2026-03-10T07:28:15.412 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.PoolStatPP (9 ms) 2026-03-10T07:28:15.412 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.StatPPNS 2026-03-10T07:28:15.412 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.StatPPNS (11 ms) 2026-03-10T07:28:15.412 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [----------] 4 tests from LibRadosStatECPP (1367 ms total) 2026-03-10T07:28:15.412 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: 2026-03-10T07:28:15.412 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [----------] Global test environment tear-down 2026-03-10T07:28:15.412 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [==========] 9 tests from 2 test suites ran. (11101 ms total) 2026-03-10T07:28:15.412 INFO:tasks.workunit.client.0.vm00.stdout: api_stat_pp: [ PASSED ] 9 tests. 2026-03-10T07:28:15.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:14.328831+0000 mon.a (mon.0) 972 : audit [INF] from='client.? 
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:14.328914+0000 mon.a (mon.0) 973 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:14.328955+0000 mon.a (mon.0) 974 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-59973-7"}]': finished
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:14.328996+0000 mon.a (mon.0) 975 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-59964-7"}]': finished
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:14.329160+0000 mon.a (mon.0) 976 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-4"}]': finished
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:14.329209+0000 mon.a (mon.0) 977 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59650-23", "var": "allow_ec_overwrites", "val": "true"}]': finished
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:14.337756+0000 mon.c (mon.2) 122 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-4", "tierpool": "test-rados-api-vm00-59782-4-cache"}]: dispatch
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: cluster 2026-03-10T07:28:14.357808+0000 mon.a (mon.0) 978 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:14.365353+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:14.367642+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.100:0/2132153704' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-59973-7"}]: dispatch
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:14.367909+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-59964-7"}]: dispatch
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:14.395087+0000 mon.a (mon.0) 979 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-4", "tierpool": "test-rados-api-vm00-59782-4-cache"}]: dispatch
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:14.395269+0000 mon.a (mon.0) 980 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-59973-7"}]: dispatch
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:14.395904+0000 mon.a (mon.0) 981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-59964-7"}]: dispatch
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:14.396782+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 192.168.123.100:0/686224101' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm00-59629-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:14.483581+0000 mon.a (mon.0) 982 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:14.483784+0000 mon.a (mon.0) 983 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:14.499839+0000 mon.a (mon.0) 984 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm00-59629-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: cluster 2026-03-10T07:28:14.576839+0000 mgr.y (mgr.24407) 112 : cluster [DBG] pgmap v76: 772 pgs: 64 creating+peering, 256 unknown, 452 active+clean; 459 KiB data, 312 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:14.646206+0000 mon.a (mon.0) 985 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60006-6", "pg_num": 4}]: dispatch
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: cluster 2026-03-10T07:28:15.330017+0000 mon.a (mon.0) 986 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:15.343440+0000 mon.a (mon.0) 987 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60537-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60537-2"}]': finished
2026-03-10T07:28:15.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:15.343493+0000 mon.a (mon.0) 988 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-4", "tierpool": "test-rados-api-vm00-59782-4-cache"}]': finished
2026-03-10T07:28:15.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:15.343543+0000 mon.a (mon.0) 989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-59973-7"}]': finished
2026-03-10T07:28:15.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:15.343569+0000 mon.a (mon.0) 990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-59964-7"}]': finished
2026-03-10T07:28:15.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:15.343592+0000 mon.a (mon.0) 991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59645-16"}]': finished
2026-03-10T07:28:15.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:15.343618+0000 mon.a (mon.0) 992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm00-59629-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:15.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:15.343643+0000 mon.a (mon.0) 993 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60006-6", "pg_num": 4}]': finished
2026-03-10T07:28:15.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: cluster 2026-03-10T07:28:15.353642+0000 mon.a (mon.0) 994 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in
2026-03-10T07:28:15.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:15.367023+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59645-16"}]: dispatch
2026-03-10T07:28:15.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:15.376468+0000 mon.a (mon.0) 995 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "tierpool": "test-rados-api-vm00-60006-6", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:28:15.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:15.376623+0000 mon.a (mon.0) 996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59645-16"}]: dispatch
2026-03-10T07:28:15.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:15.385294+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:15.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:15.387678+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.100:0/2931939624' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:15.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:15.391931+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:15.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:15.392457+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.100:0/1228821655' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:15.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:15.394797+0000 mon.a (mon.0) 997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:15.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:15.394900+0000 mon.a (mon.0) 998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:15.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:15.394985+0000 mon.a (mon.0) 999 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:15.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:15 vm03 bash[23382]: audit 2026-03-10T07:28:15.395050+0000 mon.a (mon.0) 1000 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:15.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.328831+0000 mon.a (mon.0) 972 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm00-60490-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60490-2"}]': finished
2026-03-10T07:28:15.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.328914+0000 mon.a (mon.0) 973 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:15.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.328955+0000 mon.a (mon.0) 974 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm00-59973-7"}]': finished
2026-03-10T07:28:15.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.328996+0000 mon.a (mon.0) 975 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm00-59964-7"}]': finished
2026-03-10T07:28:15.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.329160+0000 mon.a (mon.0) 976 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-4"}]': finished
2026-03-10T07:28:15.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.329209+0000 mon.a (mon.0) 977 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm00-59650-23", "var": "allow_ec_overwrites", "val": "true"}]': finished
2026-03-10T07:28:15.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.337756+0000 mon.c (mon.2) 122 : audit [INF] from='client.? 192.168.123.100:0/3386357376' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-4", "tierpool": "test-rados-api-vm00-59782-4-cache"}]: dispatch
2026-03-10T07:28:15.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: cluster 2026-03-10T07:28:14.357808+0000 mon.a (mon.0) 978 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in
2026-03-10T07:28:15.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.365353+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch
2026-03-10T07:28:15.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.367642+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.100:0/2132153704' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-59973-7"}]: dispatch
2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.367909+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.100:0/178234551' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-59964-7"}]: dispatch
2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.395087+0000 mon.a (mon.0) 979 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-4", "tierpool": "test-rados-api-vm00-59782-4-cache"}]: dispatch
2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.395087+0000 mon.a (mon.0) 979 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-4", "tierpool": "test-rados-api-vm00-59782-4-cache"}]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.395269+0000 mon.a (mon.0) 980 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-59973-7"}]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.395269+0000 mon.a (mon.0) 980 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-59973-7"}]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.395904+0000 mon.a (mon.0) 981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-59964-7"}]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.395904+0000 mon.a (mon.0) 981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-59964-7"}]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.396782+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 192.168.123.100:0/686224101' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm00-59629-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.396782+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 192.168.123.100:0/686224101' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm00-59629-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.483581+0000 mon.a (mon.0) 982 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.483581+0000 mon.a (mon.0) 982 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.483784+0000 mon.a (mon.0) 983 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.483784+0000 mon.a (mon.0) 983 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.499839+0000 mon.a (mon.0) 984 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm00-59629-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.499839+0000 mon.a (mon.0) 984 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm00-59629-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: cluster 2026-03-10T07:28:14.576839+0000 mgr.y (mgr.24407) 112 : cluster [DBG] pgmap v76: 772 pgs: 64 creating+peering, 256 unknown, 452 active+clean; 459 KiB data, 312 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: cluster 2026-03-10T07:28:14.576839+0000 mgr.y (mgr.24407) 112 : cluster [DBG] pgmap v76: 772 pgs: 64 creating+peering, 256 unknown, 452 active+clean; 459 KiB data, 312 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.646206+0000 mon.a (mon.0) 985 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60006-6", "pg_num": 4}]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:14.646206+0000 mon.a (mon.0) 985 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60006-6", "pg_num": 4}]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: cluster 2026-03-10T07:28:15.330017+0000 mon.a (mon.0) 986 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: cluster 2026-03-10T07:28:15.330017+0000 mon.a (mon.0) 986 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.343440+0000 mon.a (mon.0) 987 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60537-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60537-2"}]': finished 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.343440+0000 mon.a (mon.0) 987 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm00-60537-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm00-60537-2"}]': finished 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.343493+0000 mon.a (mon.0) 988 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-4", "tierpool": "test-rados-api-vm00-59782-4-cache"}]': finished 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.343493+0000 mon.a (mon.0) 988 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-4", "tierpool": "test-rados-api-vm00-59782-4-cache"}]': finished 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.343543+0000 mon.a (mon.0) 989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-59973-7"}]': finished 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.343543+0000 mon.a (mon.0) 989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm00-59973-7"}]': finished 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.343569+0000 mon.a (mon.0) 990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-59964-7"}]': finished 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.343569+0000 mon.a (mon.0) 990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm00-59964-7"}]': finished 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.343592+0000 mon.a (mon.0) 991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59645-16"}]': finished 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.343592+0000 mon.a (mon.0) 991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm00-59645-16"}]': finished 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.343618+0000 mon.a (mon.0) 992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm00-59629-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.343618+0000 mon.a (mon.0) 992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm00-59629-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.343643+0000 mon.a (mon.0) 993 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60006-6", "pg_num": 4}]': finished 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.343643+0000 mon.a (mon.0) 993 : audit [INF] from='client.? 
192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-60006-6", "pg_num": 4}]': finished 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: cluster 2026-03-10T07:28:15.353642+0000 mon.a (mon.0) 994 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: cluster 2026-03-10T07:28:15.353642+0000 mon.a (mon.0) 994 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.367023+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.367023+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 192.168.123.100:0/602914706' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.376468+0000 mon.a (mon.0) 995 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "tierpool": "test-rados-api-vm00-60006-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.376468+0000 mon.a (mon.0) 995 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "tierpool": "test-rados-api-vm00-60006-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.376623+0000 mon.a (mon.0) 996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.376623+0000 mon.a (mon.0) 996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59645-16"}]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.385294+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:15.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.385294+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 
192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:15.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.387678+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.100:0/2931939624' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:15.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.387678+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.100:0/2931939624' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:15.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.391931+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:15.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.391931+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:15.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.392457+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.100:0/1228821655' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:15.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.392457+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.100:0/1228821655' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:15.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.394797+0000 mon.a (mon.0) 997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:15.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.394797+0000 mon.a (mon.0) 997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:15.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.394900+0000 mon.a (mon.0) 998 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:15.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.394900+0000 mon.a (mon.0) 998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:15.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.394985+0000 mon.a (mon.0) 999 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:15.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.394985+0000 mon.a (mon.0) 999 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:15.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.395050+0000 mon.a (mon.0) 1000 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:15.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:15 vm00 bash[28005]: audit 2026-03-10T07:28:15.395050+0000 mon.a (mon.0) 1000 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:15.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:15 vm00 bash[20701]: audit 2026-03-10T07:28:14.328831+0000 mon.a (mon.0) 972 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm00-60490-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60490-2"}]': finished 2026-03-10T07:28:15.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:15 vm00 bash[20701]: audit 2026-03-10T07:28:14.328831+0000 mon.a (mon.0) 972 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm00-60490-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm00-60490-2"}]': finished 2026-03-10T07:28:15.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:15 vm00 bash[20701]: audit 2026-03-10T07:28:14.328914+0000 mon.a (mon.0) 973 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:15.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:15 vm00 bash[20701]: audit 2026-03-10T07:28:14.328914+0000 mon.a (mon.0) 973 : audit [INF] from='client.? 
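The audit entries above are the monitors dispatching JSON mon commands issued by the librados test clients: pool creates with erasure-code profiles, cache-tier setup and teardown, `osd pool application enable`, and the matching crush-rule and profile cleanup. A minimal sketch of driving the same command path through the python-rados bindings; the pool name is taken from the log, while the conffile path and a reachable admin keyring are assumptions:

```python
import json
import rados

# Assumption: default conf/keyring locations and a reachable cluster.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # Same JSON shape as the audit entries above, e.g.
    # cmd=[{"prefix": "osd pool application enable", ...}]: dispatch
    cmd = json.dumps({
        'prefix': 'osd pool application enable',
        'pool': 'test-rados-api-vm00-59837-4',  # pool name from the log
        'app': 'rados',
        'yes_i_really_mean_it': True,
    })
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    print(ret, outs)  # ret == 0 on success
finally:
    cluster.shutdown()
```

Each command is logged as `dispatch` when a monitor receives or forwards it and as `finished` once it commits, and every monitor relays the shared cluster log into its own journal, which is consistent with the same numbered entries appearing under more than one mon journal above.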
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: Running main() from gmock_main.cc
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [==========] Running 24 tests from 2 test suites.
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [----------] Global test environment set-up.
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [----------] 14 tests from LibRadosIo
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.SimpleWrite
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.SimpleWrite (740 ms)
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.TooBig
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.TooBig (0 ms)
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.ReadTimeout
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: no timeout :/
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: no timeout :/
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: no timeout :/
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: no timeout :/
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: no timeout :/
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.ReadTimeout (50 ms)
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.RoundTrip
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.RoundTrip (24 ms)
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.Checksum
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.Checksum (20 ms)
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.OverlappingWriteRoundTrip
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.OverlappingWriteRoundTrip (32 ms)
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.WriteFullRoundTrip
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.WriteFullRoundTrip (26 ms)
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.AppendRoundTrip
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.AppendRoundTrip (25 ms)
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.ZeroLenZero
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.ZeroLenZero (5 ms)
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.TruncTest
2026-03-10T07:28:15.938 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.TruncTest (27 ms)
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.RemoveTest
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.RemoveTest (24 ms)
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.XattrsRoundTrip
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.XattrsRoundTrip (143 ms)
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.RmXattr
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.RmXattr (532 ms)
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIo.XattrIter
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIo.XattrIter (118 ms)
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [----------] 14 tests from LibRadosIo (1766 ms total)
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io:
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [----------] 10 tests from LibRadosIoEC
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIoEC.SimpleWrite
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIoEC.SimpleWrite (1262 ms)
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIoEC.RoundTrip
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIoEC.RoundTrip (23 ms)
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIoEC.OverlappingWriteRoundTrip
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIoEC.OverlappingWriteRoundTrip (22 ms)
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIoEC.WriteFullRoundTrip
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIoEC.WriteFullRoundTrip (17 ms)
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIoEC.AppendRoundTrip
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIoEC.AppendRoundTrip (41 ms)
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIoEC.TruncTest
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIoEC.TruncTest (20 ms)
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIoEC.RemoveTest
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIoEC.RemoveTest (25 ms)
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIoEC.XattrsRoundTrip
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIoEC.XattrsRoundTrip (17 ms)
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIoEC.RmXattr
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIoEC.RmXattr (33 ms)
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ RUN ] LibRadosIoEC.XattrIter
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ OK ] LibRadosIoEC.XattrIter (10 ms)
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [----------] 10 tests from LibRadosIoEC (1470 ms total)
2026-03-10T07:28:15.939 INFO:tasks.workunit.client.0.vm00.stdout: api_io:
2026-03-10T07:28:15.948 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [----------] Global test environment tear-down
2026-03-10T07:28:15.949 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [==========] 24 tests from 2 test suites ran. (11956 ms total)
2026-03-10T07:28:15.949 INFO:tasks.workunit.client.0.vm00.stdout: api_io: [ PASSED ] 24 tests.
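The api_io workunit above is the gtest binary behind the LibRadosIo and LibRadosIoEC suites (ceph_test_rados_api_io, run by rados/test.sh). A write-then-read round trip in the spirit of LibRadosIo.RoundTrip can be reproduced from the python-rados bindings; a minimal sketch, where the pool name is hypothetical rather than one created by this run:

```python
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed conf path
cluster.connect()

pool = 'roundtrip-demo'  # hypothetical; the suite uses per-test pools like RoundTrip_vm00-59629-4
if pool not in cluster.list_pools():
    cluster.create_pool(pool)

ioctx = cluster.open_ioctx(pool)
try:
    payload = b'ceph' * 4
    ioctx.write_full('foo', payload)                    # replace the object contents in one shot
    assert ioctx.read('foo', len(payload)) == payload   # read back and compare, like the RoundTrip check
finally:
    ioctx.close()
    cluster.delete_pool(pool)
    cluster.shutdown()
```

The erasure-coded variants in LibRadosIoEC exercise the same I/O paths against the EC pools whose creation (pool_type erasure, pg_num 8, a testprofile-* erasure-code profile, allow_ec_overwrites) appears in the audit entries earlier in the log.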
2026-03-10T07:28:16.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.504586+0000 mon.a (mon.0) 1001 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:16.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.756523+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.0"}]: dispatch
2026-03-10T07:28:16.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.756727+0000 mgr.y (mgr.24407) 113 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.0"}]: dispatch
2026-03-10T07:28:16.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.758028+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.1"}]: dispatch
2026-03-10T07:28:16.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.758209+0000 mgr.y (mgr.24407) 114 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.1"}]: dispatch
2026-03-10T07:28:16.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.758553+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.2"}]: dispatch
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.758683+0000 mgr.y (mgr.24407) 115 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.2"}]: dispatch
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.759128+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.3"}]: dispatch
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.759264+0000 mgr.y (mgr.24407) 116 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.3"}]: dispatch
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.759588+0000 mon.a (mon.0) 1006 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.4"}]: dispatch
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.759721+0000 mgr.y (mgr.24407) 117 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.4"}]: dispatch
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.759991+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.5"}]: dispatch
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.760109+0000 mgr.y (mgr.24407) 118 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.5"}]: dispatch
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.760353+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.6"}]: dispatch
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.760470+0000 mgr.y (mgr.24407) 119 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.6"}]: dispatch
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.760940+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.7"}]: dispatch
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.761059+0000 mgr.y (mgr.24407) 120 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.7"}]: dispatch
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.761331+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.8"}]: dispatch
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.761444+0000 mgr.y (mgr.24407) 121 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.8"}]: dispatch
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.761775+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.9"}]: dispatch
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.761893+0000 mgr.y (mgr.24407) 122 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.9"}]: dispatch
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: cluster 2026-03-10T07:28:15.877380+0000 mon.a (mon.0) 1012 : cluster [WRN] Health check update: 12 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.884371+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "tierpool": "test-rados-api-vm00-60006-6", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.884413+0000 mon.a (mon.0) 1014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59645-16"}]': finished
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.884444+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.884782+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.884814+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]': finished
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.884840+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: cluster 2026-03-10T07:28:15.892335+0000 mon.a (mon.0) 1019 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.901532+0000 mon.a (mon.0) 1020 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "overlaypool": "test-rados-api-vm00-60006-6"}]: dispatch
2026-03-10T07:28:16.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.906559+0000 mon.a (mon.0) 1021 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60490-2"}]: dispatch
2026-03-10T07:28:16.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.912829+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:16.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:15.922351+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:16.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:16.129047+0000 mon.b (mon.1) 71 : audit [DBG] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch
2026-03-10T07:28:16.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:16.131489+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app1","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:16.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: audit 2026-03-10T07:28:16.132918+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app1","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:16.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: cluster 2026-03-10T07:28:16.224939+0000 osd.5 (osd.5) 3 : cluster [DBG] 11.7 deep-scrub starts
2026-03-10T07:28:16.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:16 vm03 bash[23382]: cluster 2026-03-10T07:28:16.226393+0000 osd.5 (osd.5) 4 : cluster [DBG] 11.7 deep-scrub ok
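The audit entries above record a client dispatching "pg deep-scrub" for each PG of pool 11 (pgids 11.0 through 11.9), with mgr.y relaying each command. A minimal sketch of issuing the same sweep from a client via the python3-rados mon_command interface; the command JSON matches the audit records, while the conffile path is an assumption:

```python
# Issue "pg deep-scrub" for every PG of pool 11, mirroring the
# mon.a audit entries above. Conffile path is an assumption; the
# command JSON is the same shape the audit log records.
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed path
cluster.connect()
try:
    for pg in range(10):  # pgids 11.0 .. 11.9, as in the log
        cmd = json.dumps({"prefix": "pg deep-scrub", "pgid": f"11.{pg}"})
        ret, outbuf, outs = cluster.mon_command(cmd, b'')
        if ret != 0:
            raise RuntimeError(f"deep-scrub 11.{pg} failed: {outs}")
finally:
    cluster.shutdown()
```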
cmd=[{"prefix": "pg deep-scrub", "pgid": "11.0"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.756727+0000 mgr.y (mgr.24407) 113 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.0"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.758028+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.1"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.758028+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.1"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.758209+0000 mgr.y (mgr.24407) 114 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.1"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.758209+0000 mgr.y (mgr.24407) 114 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.1"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.758553+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.2"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.758553+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.2"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.758683+0000 mgr.y (mgr.24407) 115 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.2"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.758683+0000 mgr.y (mgr.24407) 115 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.2"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.759128+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.3"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.759128+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.3"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.759264+0000 mgr.y (mgr.24407) 116 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "11.3"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.759264+0000 mgr.y (mgr.24407) 116 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.3"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.759588+0000 mon.a (mon.0) 1006 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.4"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.759588+0000 mon.a (mon.0) 1006 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.4"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.759721+0000 mgr.y (mgr.24407) 117 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.4"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.759721+0000 mgr.y (mgr.24407) 117 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.4"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.759991+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.5"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.759991+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.5"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.760109+0000 mgr.y (mgr.24407) 118 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.5"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.760109+0000 mgr.y (mgr.24407) 118 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.5"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.760353+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.6"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.760353+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.6"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.760470+0000 mgr.y (mgr.24407) 119 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "11.6"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.760470+0000 mgr.y (mgr.24407) 119 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.6"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.760940+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.7"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.760940+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.7"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.761059+0000 mgr.y (mgr.24407) 120 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.7"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.761059+0000 mgr.y (mgr.24407) 120 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.7"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.761331+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.8"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.761331+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.8"}]: dispatch 2026-03-10T07:28:16.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.761444+0000 mgr.y (mgr.24407) 121 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.8"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.761444+0000 mgr.y (mgr.24407) 121 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.8"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.761775+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.9"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.761775+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.9"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.761893+0000 mgr.y (mgr.24407) 122 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "11.9"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.761893+0000 mgr.y (mgr.24407) 122 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.9"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: cluster 2026-03-10T07:28:15.877380+0000 mon.a (mon.0) 1012 : cluster [WRN] Health check update: 12 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: cluster 2026-03-10T07:28:15.877380+0000 mon.a (mon.0) 1012 : cluster [WRN] Health check update: 12 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.884371+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "tierpool": "test-rados-api-vm00-60006-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.884371+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "tierpool": "test-rados-api-vm00-60006-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.884413+0000 mon.a (mon.0) 1014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59645-16"}]': finished 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.884413+0000 mon.a (mon.0) 1014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59645-16"}]': finished 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.884444+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.884444+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.884782+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.884782+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.884814+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.884814+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.884840+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.884840+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: cluster 2026-03-10T07:28:15.892335+0000 mon.a (mon.0) 1019 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: cluster 2026-03-10T07:28:15.892335+0000 mon.a (mon.0) 1019 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.901532+0000 mon.a (mon.0) 1020 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "overlaypool": "test-rados-api-vm00-60006-6"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.901532+0000 mon.a (mon.0) 1020 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "overlaypool": "test-rados-api-vm00-60006-6"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.906559+0000 mon.a (mon.0) 1021 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60490-2"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.906559+0000 mon.a (mon.0) 1021 : audit [INF] from='client.? 
192.168.123.100:0/4067061545' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60490-2"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.912829+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.912829+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.922351+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:15.922351+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:16.129047+0000 mon.b (mon.1) 71 : audit [DBG] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:16.129047+0000 mon.b (mon.1) 71 : audit [DBG] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:16.131489+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:16.131489+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:16.132918+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: audit 2026-03-10T07:28:16.132918+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: cluster 2026-03-10T07:28:16.224939+0000 osd.5 (osd.5) 3 : cluster [DBG] 11.7 deep-scrub starts 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: cluster 2026-03-10T07:28:16.224939+0000 osd.5 (osd.5) 3 : cluster [DBG] 11.7 deep-scrub starts 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: cluster 2026-03-10T07:28:16.226393+0000 osd.5 (osd.5) 4 : cluster [DBG] 11.7 deep-scrub ok 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:16 vm00 bash[20701]: cluster 2026-03-10T07:28:16.226393+0000 osd.5 (osd.5) 4 : cluster [DBG] 11.7 deep-scrub ok 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.504586+0000 mon.a (mon.0) 1001 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.504586+0000 mon.a (mon.0) 1001 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.756523+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.0"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.756523+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.0"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.756727+0000 mgr.y (mgr.24407) 113 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.0"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.756727+0000 mgr.y (mgr.24407) 113 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.0"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.758028+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.1"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.758028+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.1"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.758209+0000 mgr.y (mgr.24407) 114 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "11.1"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.758209+0000 mgr.y (mgr.24407) 114 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.1"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.758553+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.2"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.758553+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.2"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.758683+0000 mgr.y (mgr.24407) 115 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.2"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.758683+0000 mgr.y (mgr.24407) 115 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.2"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.759128+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.3"}]: dispatch 2026-03-10T07:28:16.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.759128+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.3"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.759264+0000 mgr.y (mgr.24407) 116 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.3"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.759264+0000 mgr.y (mgr.24407) 116 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.3"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.759588+0000 mon.a (mon.0) 1006 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.4"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.759588+0000 mon.a (mon.0) 1006 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.4"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.759721+0000 mgr.y (mgr.24407) 117 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "11.4"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.759721+0000 mgr.y (mgr.24407) 117 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.4"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.759991+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.5"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.759991+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.5"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.760109+0000 mgr.y (mgr.24407) 118 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.5"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.760109+0000 mgr.y (mgr.24407) 118 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.5"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.760353+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.6"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.760353+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.6"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.760470+0000 mgr.y (mgr.24407) 119 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.6"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.760470+0000 mgr.y (mgr.24407) 119 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.6"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.760940+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.7"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.760940+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.7"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.761059+0000 mgr.y (mgr.24407) 120 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "11.7"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.761059+0000 mgr.y (mgr.24407) 120 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.7"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.761331+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.8"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.761331+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.8"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.761444+0000 mgr.y (mgr.24407) 121 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.8"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.761444+0000 mgr.y (mgr.24407) 121 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.8"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.761775+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.9"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.761775+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.9"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.761893+0000 mgr.y (mgr.24407) 122 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.9"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.761893+0000 mgr.y (mgr.24407) 122 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.9"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: cluster 2026-03-10T07:28:15.877380+0000 mon.a (mon.0) 1012 : cluster [WRN] Health check update: 12 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: cluster 2026-03-10T07:28:15.877380+0000 mon.a (mon.0) 1012 : cluster [WRN] Health check update: 12 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.884371+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? 
192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "tierpool": "test-rados-api-vm00-60006-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.884371+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "tierpool": "test-rados-api-vm00-60006-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.884413+0000 mon.a (mon.0) 1014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59645-16"}]': finished 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.884413+0000 mon.a (mon.0) 1014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm00-59645-16"}]': finished 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.884444+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.884444+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.884782+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.884782+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.884814+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.884814+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.884840+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.884840+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: cluster 2026-03-10T07:28:15.892335+0000 mon.a (mon.0) 1019 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: cluster 2026-03-10T07:28:15.892335+0000 mon.a (mon.0) 1019 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.901532+0000 mon.a (mon.0) 1020 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "overlaypool": "test-rados-api-vm00-60006-6"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.901532+0000 mon.a (mon.0) 1020 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "overlaypool": "test-rados-api-vm00-60006-6"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.906559+0000 mon.a (mon.0) 1021 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60490-2"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.906559+0000 mon.a (mon.0) 1021 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60490-2"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.912829+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.912829+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.100:0/33750537' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.922351+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:15.922351+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:16.129047+0000 mon.b (mon.1) 71 : audit [DBG] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:16.129047+0000 mon.b (mon.1) 71 : audit [DBG] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:16.131489+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:16.131489+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:16.132918+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: audit 2026-03-10T07:28:16.132918+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: cluster 2026-03-10T07:28:16.224939+0000 osd.5 (osd.5) 3 : cluster [DBG] 11.7 deep-scrub starts 2026-03-10T07:28:16.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: cluster 2026-03-10T07:28:16.224939+0000 osd.5 (osd.5) 3 : cluster [DBG] 11.7 deep-scrub starts 2026-03-10T07:28:16.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: cluster 2026-03-10T07:28:16.226393+0000 osd.5 (osd.5) 4 : cluster [DBG] 11.7 deep-scrub ok 2026-03-10T07:28:16.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:16 vm00 bash[28005]: cluster 2026-03-10T07:28:16.226393+0000 osd.5 (osd.5) 4 : cluster [DBG] 11.7 deep-scrub ok 2026-03-10T07:28:16.978 INFO:tasks.workunit.client.0.vm00.stdout: list: Running main() from gmock_main.cc 2026-03-10T07:28:16.978 INFO:tasks.workunit.client.0.vm00.stdout: list: [==========] Running 3 tests from 1 test suite. 2026-03-10T07:28:16.978 INFO:tasks.workunit.client.0.vm00.stdout: list: [----------] Global test environment set-up. 
2026-03-10T07:28:16.978 INFO:tasks.workunit.client.0.vm00.stdout: list: [----------] 3 tests from NeoradosList
2026-03-10T07:28:16.978 INFO:tasks.workunit.client.0.vm00.stdout: list: [ RUN ] NeoradosList.ListObjects
2026-03-10T07:28:16.978 INFO:tasks.workunit.client.0.vm00.stdout: list: [ OK ] NeoradosList.ListObjects (3084 ms)
2026-03-10T07:28:16.978 INFO:tasks.workunit.client.0.vm00.stdout: list: [ RUN ] NeoradosList.ListObjectsNS
2026-03-10T07:28:16.978 INFO:tasks.workunit.client.0.vm00.stdout: list: [ OK ] NeoradosList.ListObjectsNS (3123 ms)
2026-03-10T07:28:16.978 INFO:tasks.workunit.client.0.vm00.stdout: list: [ RUN ] NeoradosList.ListObjectsMany
2026-03-10T07:28:16.978 INFO:tasks.workunit.client.0.vm00.stdout: list: [ OK ] NeoradosList.ListObjectsMany (5895 ms)
2026-03-10T07:28:16.978 INFO:tasks.workunit.client.0.vm00.stdout: list: [----------] 3 tests from NeoradosList (12102 ms total)
2026-03-10T07:28:16.978 INFO:tasks.workunit.client.0.vm00.stdout: list:
2026-03-10T07:28:16.978 INFO:tasks.workunit.client.0.vm00.stdout: list: [----------] Global test environment tear-down
2026-03-10T07:28:16.978 INFO:tasks.workunit.client.0.vm00.stdout: list: [==========] 3 tests from 1 test suite ran. (12102 ms total)
2026-03-10T07:28:16.978 INFO:tasks.workunit.client.0.vm00.stdout: list: [ PASSED ] 3 tests.
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: cluster 2026-03-10T07:28:15.763068+0000 osd.3 (osd.3) 3 : cluster [DBG] 11.8 deep-scrub starts
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: cluster 2026-03-10T07:28:15.788686+0000 osd.3 (osd.3) 4 : cluster [DBG] 11.8 deep-scrub ok
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: cluster 2026-03-10T07:28:16.235153+0000 osd.1 (osd.1) 3 : cluster [DBG] 11.0 deep-scrub starts
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: cluster 2026-03-10T07:28:16.238743+0000 osd.1 (osd.1) 4 : cluster [DBG] 11.0 deep-scrub ok
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: cluster 2026-03-10T07:28:16.368306+0000 osd.7 (osd.7) 3 : cluster [DBG] 11.5 deep-scrub starts
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: cluster 2026-03-10T07:28:16.369698+0000 osd.7 (osd.7) 4 : cluster [DBG] 11.5 deep-scrub ok
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:16.506082+0000 mon.a (mon.0) 1024 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: cluster 2026-03-10T07:28:16.577350+0000 mgr.y (mgr.24407) 123 : cluster [DBG] pgmap v79: 832 pgs: 1 active, 3 creating+activating, 268 unknown, 560 active+clean; 72 MiB data, 513 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 18 MiB/s wr, 433 op/s
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: cluster 2026-03-10T07:28:16.752934+0000 osd.3 (osd.3) 5 : cluster [DBG] 11.9 deep-scrub starts
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:16.889841+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "overlaypool": "test-rados-api-vm00-60006-6"}]': finished
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:16.889897+0000 mon.a (mon.0) 1026 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60490-2"}]': finished
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:16.889925+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]': finished
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:16.889956+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app1","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: cluster 2026-03-10T07:28:16.911787+0000 mon.a (mon.0) 1029 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:16.920173+0000 mon.a (mon.0) 1030 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60490-2"}]: dispatch
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:16.921640+0000 mon.a (mon.0) 1031 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60006-6", "mode": "writeback"}]: dispatch
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:16.923768+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app2"}]: dispatch
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:16.928005+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:16.928972+0000 mon.c (mon.2) 125 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:16.929216+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.100:0/751840027' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60537-2"}]: dispatch
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:16.932895+0000 mon.a (mon.0) 1032 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:16.933016+0000 mon.a (mon.0) 1033 : audit [INF] from='client.? 192.168.123.100:0/3598865356' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59629-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:16.933140+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60537-2"}]: dispatch
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:16.933270+0000 mon.a (mon.0) 1035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:16.944404+0000 mon.b (mon.1) 75 : audit [INF] from='client.? 192.168.123.100:0/767241137' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-59879-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:17.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:16.944892+0000 mon.b (mon.1) 76 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch
2026-03-10T07:28:17.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:16.957630+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-59879-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:17.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:16.969242+0000 mon.a (mon.0) 1037 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch
2026-03-10T07:28:17.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:17.032626+0000 mon.b (mon.1) 77 : audit [INF] from='client.? 192.168.123.100:0/3878142846' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:17.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:17.036145+0000 mon.a (mon.0) 1038 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:17.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:17.038904+0000 mon.b (mon.1) 78 : audit [INF] from='client.? 192.168.123.100:0/3878142846' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:17.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:17.040993+0000 mon.a (mon.0) 1039 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:17.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:17.042147+0000 mon.b (mon.1) 79 : audit [INF] from='client.? 192.168.123.100:0/3878142846' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:17.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:17 vm03 bash[23382]: audit 2026-03-10T07:28:17.044387+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: cluster 2026-03-10T07:28:15.763068+0000 osd.3 (osd.3) 3 : cluster [DBG] 11.8 deep-scrub starts
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: cluster 2026-03-10T07:28:15.788686+0000 osd.3 (osd.3) 4 : cluster [DBG] 11.8 deep-scrub ok
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: cluster 2026-03-10T07:28:16.235153+0000 osd.1 (osd.1) 3 : cluster [DBG] 11.0 deep-scrub starts
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: cluster 2026-03-10T07:28:16.238743+0000 osd.1 (osd.1) 4 : cluster [DBG] 11.0 deep-scrub ok
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: cluster 2026-03-10T07:28:16.368306+0000 osd.7 (osd.7) 3 : cluster [DBG] 11.5 deep-scrub starts
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: cluster 2026-03-10T07:28:16.369698+0000 osd.7 (osd.7) 4 : cluster [DBG] 11.5 deep-scrub ok
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:16.506082+0000 mon.a (mon.0) 1024 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: cluster 2026-03-10T07:28:16.577350+0000 mgr.y (mgr.24407) 123 : cluster [DBG] pgmap v79: 832 pgs: 1 active, 3 creating+activating, 268 unknown, 560 active+clean; 72 MiB data, 513 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 18 MiB/s wr, 433 op/s
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: cluster 2026-03-10T07:28:16.752934+0000 osd.3 (osd.3) 5 : cluster [DBG] 11.9 deep-scrub starts
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:16.889841+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "overlaypool": "test-rados-api-vm00-60006-6"}]': finished
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:16.889897+0000 mon.a (mon.0) 1026 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60490-2"}]': finished
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:16.889925+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]': finished
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:16.889956+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app1","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: cluster 2026-03-10T07:28:16.911787+0000 mon.a (mon.0) 1029 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:16.920173+0000 mon.a (mon.0) 1030 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60490-2"}]: dispatch
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:16.921640+0000 mon.a (mon.0) 1031 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60006-6", "mode": "writeback"}]: dispatch
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:16.923768+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app2"}]: dispatch
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:16.928005+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:16.928972+0000 mon.c (mon.2) 125 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:16.929216+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.100:0/751840027' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60537-2"}]: dispatch
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:16.932895+0000 mon.a (mon.0) 1032 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:16.933016+0000 mon.a (mon.0) 1033 : audit [INF] from='client.? 192.168.123.100:0/3598865356' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59629-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:16.933140+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60537-2"}]: dispatch
2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:16.933270+0000 mon.a (mon.0) 1035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:16.944404+0000 mon.b (mon.1) 75 : audit [INF] from='client.? 192.168.123.100:0/767241137' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-59879-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:16.944892+0000 mon.b (mon.1) 76 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch
2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:16.957630+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-59879-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:16.969242+0000 mon.a (mon.0) 1037 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch
2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:17.032626+0000 mon.b (mon.1) 77 : audit [INF] from='client.? 192.168.123.100:0/3878142846' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:17.036145+0000 mon.a (mon.0) 1038 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:17.038904+0000 mon.b (mon.1) 78 : audit [INF] from='client.? 192.168.123.100:0/3878142846' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:17.040993+0000 mon.a (mon.0) 1039 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:17.042147+0000 mon.b (mon.1) 79 : audit [INF] from='client.? 192.168.123.100:0/3878142846' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:17 vm00 bash[28005]: audit 2026-03-10T07:28:17.044387+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: cluster 2026-03-10T07:28:15.763068+0000 osd.3 (osd.3) 3 : cluster [DBG] 11.8 deep-scrub starts
2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: cluster 2026-03-10T07:28:15.788686+0000 osd.3 (osd.3) 4 : cluster [DBG] 11.8 deep-scrub ok
2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: cluster 2026-03-10T07:28:16.235153+0000 osd.1 (osd.1) 3 : cluster [DBG] 11.0 deep-scrub starts
2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: cluster 2026-03-10T07:28:16.238743+0000 osd.1 (osd.1) 4 : cluster [DBG] 11.0 deep-scrub ok
2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: cluster 2026-03-10T07:28:16.368306+0000 osd.7 (osd.7) 3 : cluster [DBG] 11.5 deep-scrub starts
2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: cluster 2026-03-10T07:28:16.369698+0000 osd.7 (osd.7) 4 : cluster [DBG] 11.5 deep-scrub ok
bash[20701]: cluster 2026-03-10T07:28:16.369698+0000 osd.7 (osd.7) 4 : cluster [DBG] 11.5 deep-scrub ok 2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.506082+0000 mon.a (mon.0) 1024 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.506082+0000 mon.a (mon.0) 1024 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: cluster 2026-03-10T07:28:16.577350+0000 mgr.y (mgr.24407) 123 : cluster [DBG] pgmap v79: 832 pgs: 1 active, 3 creating+activating, 268 unknown, 560 active+clean; 72 MiB data, 513 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 18 MiB/s wr, 433 op/s 2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: cluster 2026-03-10T07:28:16.577350+0000 mgr.y (mgr.24407) 123 : cluster [DBG] pgmap v79: 832 pgs: 1 active, 3 creating+activating, 268 unknown, 560 active+clean; 72 MiB data, 513 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 18 MiB/s wr, 433 op/s 2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: cluster 2026-03-10T07:28:16.752934+0000 osd.3 (osd.3) 5 : cluster [DBG] 11.9 deep-scrub starts 2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: cluster 2026-03-10T07:28:16.752934+0000 osd.3 (osd.3) 5 : cluster [DBG] 11.9 deep-scrub starts 2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.889841+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "overlaypool": "test-rados-api-vm00-60006-6"}]': finished 2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.889841+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "overlaypool": "test-rados-api-vm00-60006-6"}]': finished 2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.889897+0000 mon.a (mon.0) 1026 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60490-2"}]': finished 2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.889897+0000 mon.a (mon.0) 1026 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm00-60490-2"}]': finished 2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.889925+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? 
2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.889925+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]': finished
2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.889956+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app1","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:17.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: cluster 2026-03-10T07:28:16.911787+0000 mon.a (mon.0) 1029 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in
2026-03-10T07:28:17.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.920173+0000 mon.a (mon.0) 1030 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60490-2"}]: dispatch
2026-03-10T07:28:17.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.921640+0000 mon.a (mon.0) 1031 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60006-6", "mode": "writeback"}]: dispatch
2026-03-10T07:28:17.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.923768+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app2"}]: dispatch
2026-03-10T07:28:17.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.928005+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:17.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.928972+0000 mon.c (mon.2) 125 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch
2026-03-10T07:28:17.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.929216+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.100:0/751840027' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60537-2"}]: dispatch
2026-03-10T07:28:17.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.932895+0000 mon.a (mon.0) 1032 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-59738-10"}]: dispatch
2026-03-10T07:28:17.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.933016+0000 mon.a (mon.0) 1033 : audit [INF] from='client.? 192.168.123.100:0/3598865356' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59629-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:17.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.933140+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60537-2"}]: dispatch
2026-03-10T07:28:17.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.933270+0000 mon.a (mon.0) 1035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:17.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.944404+0000 mon.b (mon.1) 75 : audit [INF] from='client.? 192.168.123.100:0/767241137' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-59879-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:17.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.944892+0000 mon.b (mon.1) 76 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch
2026-03-10T07:28:17.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.957630+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-59879-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:17.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:16.969242+0000 mon.a (mon.0) 1037 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-59712-10"}]: dispatch
2026-03-10T07:28:17.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:17.032626+0000 mon.b (mon.1) 77 : audit [INF] from='client.? 192.168.123.100:0/3878142846' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:17.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:17.036145+0000 mon.a (mon.0) 1038 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:17.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:17.038904+0000 mon.b (mon.1) 78 : audit [INF] from='client.? 192.168.123.100:0/3878142846' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:17.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:17.040993+0000 mon.a (mon.0) 1039 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:17.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:17.042147+0000 mon.b (mon.1) 79 : audit [INF] from='client.? 192.168.123.100:0/3878142846' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:17.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:17 vm00 bash[20701]: audit 2026-03-10T07:28:17.044387+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:17.896 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: Running main() from gmock_main.cc
2026-03-10T07:28:17.896 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [==========] Running 2 tests from 1 test suite.
2026-03-10T07:28:17.896 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [----------] Global test environment set-up.
2026-03-10T07:28:17.896 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [----------] 2 tests from NeoRadosECIo
2026-03-10T07:28:17.897 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [ RUN      ] NeoRadosECIo.SimpleWrite
2026-03-10T07:28:17.897 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [       OK ] NeoRadosECIo.SimpleWrite (6226 ms)
2026-03-10T07:28:17.897 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [ RUN      ] NeoRadosECIo.ReadOp
2026-03-10T07:28:17.897 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [       OK ] NeoRadosECIo.ReadOp (6895 ms)
2026-03-10T07:28:17.897 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [----------] 2 tests from NeoRadosECIo (13121 ms total)
2026-03-10T07:28:17.897 INFO:tasks.workunit.client.0.vm00.stdout: ec_io:
2026-03-10T07:28:17.897 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [----------] Global test environment tear-down
2026-03-10T07:28:17.897 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [==========] 2 tests from 1 test suite ran. (13121 ms total)
2026-03-10T07:28:17.897 INFO:tasks.workunit.client.0.vm00.stdout: ec_io: [  PASSED  ] 2 tests.
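(The ec_io block above is the gtest summary for the NeoRadosECIo cases that the rados/test.sh workunit runs alongside the other API tests, each binary's output prefixed with a short label. The surrounding audit entries are the JSON mon commands behind that test's erasure-code setup and teardown; roughly the same cycle can be reproduced with the ceph CLI, sketched below with illustrative names standing in for generated ones like LibRadosIoECPP_vm00-59650-23:

    # sketch only; "testprofile"/"ecpool" are assumed names, not from this run
    ceph osd erasure-code-profile set testprofile k=2 m=1 crush-failure-domain=osd
    ceph osd pool create ecpool 8 8 erasure testprofile
    # teardown mirrors the rm sequence in the audit trail:
    ceph osd pool rm ecpool ecpool --yes-i-really-really-mean-it
    ceph osd crush rule rm ecpool
    ceph osd erasure-code-profile rm testprofile

Creating an erasure-coded pool implicitly creates a CRUSH rule named after the pool, which is why each teardown in the trail also removes a rule of the same name.)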
2026-03-10T07:28:18.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: cluster 2026-03-10T07:28:16.452737+0000 osd.0 (osd.0) 3 : cluster [DBG] 11.6 deep-scrub starts
2026-03-10T07:28:18.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: cluster 2026-03-10T07:28:16.454909+0000 osd.0 (osd.0) 4 : cluster [DBG] 11.6 deep-scrub ok
2026-03-10T07:28:18.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: cluster 2026-03-10T07:28:16.755136+0000 osd.3 (osd.3) 6 : cluster [DBG] 11.9 deep-scrub ok
2026-03-10T07:28:18.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: cluster 2026-03-10T07:28:17.323184+0000 osd.7 (osd.7) 5 : cluster [DBG] 11.4 deep-scrub starts
2026-03-10T07:28:18.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: cluster 2026-03-10T07:28:17.324100+0000 osd.7 (osd.7) 6 : cluster [DBG] 11.4 deep-scrub ok
2026-03-10T07:28:18.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.506931+0000 mon.a (mon.0) 1041 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:18.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: cluster 2026-03-10T07:28:17.891360+0000 mon.a (mon.0) 1042 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:28:18.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.897002+0000 mon.a (mon.0) 1043 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60490-2"}]': finished
2026-03-10T07:28:18.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.897057+0000 mon.a (mon.0) 1044 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60006-6", "mode": "writeback"}]': finished
2026-03-10T07:28:18.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.897083+0000 mon.a (mon.0) 1045 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-59738-10"}]': finished
2026-03-10T07:28:18.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.897112+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? 192.168.123.100:0/3598865356' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59629-5","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:18.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.897137+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60537-2"}]': finished
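(The CACHE_POOL_NO_HIT_SET warning above is raised by the cache-tiering cases: entry 1025 earlier set an overlay and entry 1044 switched the cache pool to writeback without configuring hit sets, which is exactly the state this health check flags, so it is presumably expected noise for this suite. For reference, the documented tiering sequence looks roughly like the sketch below, with "base"/"cache" as illustrative pool names; the last three settings are what clear the warning:

    ceph osd tier add base cache
    ceph osd tier cache-mode cache writeback
    ceph osd tier set-overlay base cache
    # hit-set tracking, whose absence triggers CACHE_POOL_NO_HIT_SET:
    ceph osd pool set cache hit_set_type bloom
    ceph osd pool set cache hit_set_count 8
    ceph osd pool set cache hit_set_period 60
)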
2026-03-10T07:28:18.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.897160+0000 mon.a (mon.0) 1048 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app2","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:18.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.897186+0000 mon.a (mon.0) 1049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-59879-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:18.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.897213+0000 mon.a (mon.0) 1050 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-59712-10"}]': finished
2026-03-10T07:28:18.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.897238+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:28:18.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: cluster 2026-03-10T07:28:17.904427+0000 mon.a (mon.0) 1052 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in
2026-03-10T07:28:18.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.951353+0000 mon.b (mon.1) 80 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-59712-10"}]: dispatch
2026-03-10T07:28:18.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.956556+0000 mon.b (mon.1) 81 : audit [INF] from='client.? 192.168.123.100:0/3878142846' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:18.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.966008+0000 mon.b (mon.1) 82 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"dne","key":"key","value":"value"}]: dispatch
2026-03-10T07:28:18.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.971091+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-59712-10"}]: dispatch
2026-03-10T07:28:18.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.971587+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1","value":"value1"}]: dispatch
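(The pool-application entries map one-to-one onto the ceph CLI; with an illustrative pool name in place of the generated ones:

    ceph osd pool application enable mypool rados --yes-i-really-mean-it
    ceph osd pool application set mypool app1 key1 value1
    ceph osd pool application get mypool

The set against app "dne" above is presumably a negative case: setting metadata for an application that was never enabled on the pool is expected to be rejected.)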
2026-03-10T07:28:18.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.977032+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch
2026-03-10T07:28:18.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.979559+0000 mon.c (mon.2) 127 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]: dispatch
2026-03-10T07:28:18.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.980114+0000 mon.c (mon.2) 128 : audit [INF] from='client.? 192.168.123.100:0/751840027' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60537-2"}]: dispatch
2026-03-10T07:28:18.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.981048+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 192.168.123.100:0/2285480512' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:18.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.986145+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 192.168.123.100:0/848364817' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:18.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.989684+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:18.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:17.999742+0000 mon.a (mon.0) 1055 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]: dispatch
2026-03-10T07:28:18.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:18.000851+0000 mon.a (mon.0) 1056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60537-2"}]: dispatch
2026-03-10T07:28:18.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:18.022802+0000 mon.a (mon.0) 1057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:18.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:18.022908+0000 mon.a (mon.0) 1058 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1","value":"value1"}]: dispatch
2026-03-10T07:28:18.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:18.022946+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch
2026-03-10T07:28:18.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:18.023012+0000 mon.a (mon.0) 1059 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch
2026-03-10T07:28:18.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:18.023487+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:18.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:18.041021+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch
2026-03-10T07:28:18.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:18.050650+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:18.767 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:18 vm03 bash[23382]: audit 2026-03-10T07:28:18.052594+0000 mon.a (mon.0) 1062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
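(Each command in this stream shows up more than once: as a dispatch on whichever monitor received it, apparently again as a dispatch when the leader mon.a handles the forwarded copy (the entries with an empty from='client.? ' address), and finally as ': finished' once it commits; on top of that, every cluster-log line is relayed through each tailed monitor journal. A quick way to follow one command end to end, assuming a local copy of this log saved as teuthology.log:

    # collapse the per-monitor journal duplicates and trace the EC pool create:
    grep '"prefix": "osd pool create"' teuthology.log | sort -u
    grep 'LibRadosIoECPP_vm00-59650-23' teuthology.log | grep ': finished' | sort -u
)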
2026-03-10T07:28:18.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: cluster 2026-03-10T07:28:16.452737+0000 osd.0 (osd.0) 3 : cluster [DBG] 11.6 deep-scrub starts
2026-03-10T07:28:18.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: cluster 2026-03-10T07:28:16.454909+0000 osd.0 (osd.0) 4 : cluster [DBG] 11.6 deep-scrub ok
2026-03-10T07:28:18.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: cluster 2026-03-10T07:28:16.755136+0000 osd.3 (osd.3) 6 : cluster [DBG] 11.9 deep-scrub ok
2026-03-10T07:28:18.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: cluster 2026-03-10T07:28:17.323184+0000 osd.7 (osd.7) 5 : cluster [DBG] 11.4 deep-scrub starts
2026-03-10T07:28:18.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: cluster 2026-03-10T07:28:17.324100+0000 osd.7 (osd.7) 6 : cluster [DBG] 11.4 deep-scrub ok
2026-03-10T07:28:18.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.506931+0000 mon.a (mon.0) 1041 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:18.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: cluster 2026-03-10T07:28:17.891360+0000 mon.a (mon.0) 1042 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:28:18.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.897002+0000 mon.a (mon.0) 1043 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60490-2"}]': finished
2026-03-10T07:28:18.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.897057+0000 mon.a (mon.0) 1044 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60006-6", "mode": "writeback"}]': finished
2026-03-10T07:28:18.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.897083+0000 mon.a (mon.0) 1045 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-59738-10"}]': finished
2026-03-10T07:28:18.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.897112+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? 192.168.123.100:0/3598865356' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59629-5","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.897137+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60537-2"}]': finished
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.897160+0000 mon.a (mon.0) 1048 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app2","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.897186+0000 mon.a (mon.0) 1049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-59879-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.897213+0000 mon.a (mon.0) 1050 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-59712-10"}]': finished
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.897238+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: cluster 2026-03-10T07:28:17.904427+0000 mon.a (mon.0) 1052 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.951353+0000 mon.b (mon.1) 80 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-59712-10"}]: dispatch
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.956556+0000 mon.b (mon.1) 81 : audit [INF] from='client.? 192.168.123.100:0/3878142846' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.966008+0000 mon.b (mon.1) 82 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"dne","key":"key","value":"value"}]: dispatch
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.971091+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-59712-10"}]: dispatch
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.971587+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1","value":"value1"}]: dispatch
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.977032+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.979559+0000 mon.c (mon.2) 127 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]: dispatch
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.980114+0000 mon.c (mon.2) 128 : audit [INF] from='client.? 192.168.123.100:0/751840027' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60537-2"}]: dispatch
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.981048+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 192.168.123.100:0/2285480512' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.986145+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 192.168.123.100:0/848364817' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.989684+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:17.999742+0000 mon.a (mon.0) 1055 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]: dispatch
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:18.000851+0000 mon.a (mon.0) 1056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60537-2"}]: dispatch
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:18.022802+0000 mon.a (mon.0) 1057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:18.022908+0000 mon.a (mon.0) 1058 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1","value":"value1"}]: dispatch
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:18.022946+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:18.023012+0000 mon.a (mon.0) 1059 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:18.023487+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:18.041021+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:18.050650+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:18.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:18 vm00 bash[20701]: audit 2026-03-10T07:28:18.052594+0000 mon.a (mon.0) 1062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: cluster 2026-03-10T07:28:16.452737+0000 osd.0 (osd.0) 3 : cluster [DBG] 11.6 deep-scrub starts
2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: cluster 2026-03-10T07:28:16.454909+0000 osd.0 (osd.0) 4 : cluster [DBG] 11.6 deep-scrub ok
2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: cluster 2026-03-10T07:28:16.755136+0000 osd.3 (osd.3) 6 : cluster [DBG] 11.9 deep-scrub ok
2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: cluster 2026-03-10T07:28:17.323184+0000 osd.7 (osd.7) 5 : cluster [DBG] 11.4 deep-scrub starts
2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: cluster 2026-03-10T07:28:17.324100+0000 osd.7 (osd.7) 6 : cluster [DBG] 11.4 deep-scrub ok
2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.506931+0000 mon.a (mon.0) 1041 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: cluster 2026-03-10T07:28:17.891360+0000 mon.a (mon.0) 1042 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: cluster 2026-03-10T07:28:17.891360+0000 mon.a (mon.0) 1042 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.897002+0000 mon.a (mon.0) 1043 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60490-2"}]': finished 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.897002+0000 mon.a (mon.0) 1043 : audit [INF] from='client.? 192.168.123.100:0/4067061545' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm00-60490-2"}]': finished 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.897057+0000 mon.a (mon.0) 1044 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60006-6", "mode": "writeback"}]': finished 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.897057+0000 mon.a (mon.0) 1044 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-60006-6", "mode": "writeback"}]': finished 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.897083+0000 mon.a (mon.0) 1045 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-59738-10"}]': finished 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.897083+0000 mon.a (mon.0) 1045 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm00-59738-10"}]': finished 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.897112+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? 192.168.123.100:0/3598865356' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59629-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.897112+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? 192.168.123.100:0/3598865356' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm00-59629-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.897137+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60537-2"}]': finished 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.897137+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm00-60537-2"}]': finished 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.897160+0000 mon.a (mon.0) 1048 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.897160+0000 mon.a (mon.0) 1048 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm00-59761-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.897186+0000 mon.a (mon.0) 1049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-59879-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.897186+0000 mon.a (mon.0) 1049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm00-59879-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.897213+0000 mon.a (mon.0) 1050 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-59712-10"}]': finished 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.897213+0000 mon.a (mon.0) 1050 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm00-59712-10"}]': finished 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.897238+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.897238+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm00-59650-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: cluster 2026-03-10T07:28:17.904427+0000 mon.a (mon.0) 1052 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: cluster 2026-03-10T07:28:17.904427+0000 mon.a (mon.0) 1052 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.951353+0000 mon.b (mon.1) 80 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-59712-10"}]: dispatch 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.951353+0000 mon.b (mon.1) 80 : audit [INF] from='client.? 192.168.123.100:0/3797887715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-59712-10"}]: dispatch 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.956556+0000 mon.b (mon.1) 81 : audit [INF] from='client.? 192.168.123.100:0/3878142846' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.956556+0000 mon.b (mon.1) 81 : audit [INF] from='client.? 192.168.123.100:0/3878142846' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.966008+0000 mon.b (mon.1) 82 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.966008+0000 mon.b (mon.1) 82 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.971091+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-59712-10"}]: dispatch 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.971091+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-59712-10"}]: dispatch 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.971587+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.971587+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.977032+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.977032+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.979559+0000 mon.c (mon.2) 127 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.979559+0000 mon.c (mon.2) 127 : audit [INF] from='client.? 192.168.123.100:0/3923560261' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.980114+0000 mon.c (mon.2) 128 : audit [INF] from='client.? 192.168.123.100:0/751840027' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60537-2"}]: dispatch 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.980114+0000 mon.c (mon.2) 128 : audit [INF] from='client.? 192.168.123.100:0/751840027' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60537-2"}]: dispatch 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.981048+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 192.168.123.100:0/2285480512' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.981048+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 
192.168.123.100:0/2285480512' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.986145+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 192.168.123.100:0/848364817' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.986145+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 192.168.123.100:0/848364817' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:18.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.989684+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:18.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.989684+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:18.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.999742+0000 mon.a (mon.0) 1055 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:18.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:17.999742+0000 mon.a (mon.0) 1055 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]: dispatch 2026-03-10T07:28:18.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:18.000851+0000 mon.a (mon.0) 1056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60537-2"}]: dispatch 2026-03-10T07:28:18.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:18.000851+0000 mon.a (mon.0) 1056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60537-2"}]: dispatch 2026-03-10T07:28:18.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:18.022802+0000 mon.a (mon.0) 1057 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:18.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:18.022802+0000 mon.a (mon.0) 1057 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:18.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:18.022908+0000 mon.a (mon.0) 1058 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T07:28:18.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:18.022908+0000 mon.a (mon.0) 1058 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T07:28:18.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:18.022946+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:18.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:18.022946+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:18.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:18.023012+0000 mon.a (mon.0) 1059 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:18.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:18.023012+0000 mon.a (mon.0) 1059 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:18.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:18.023487+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:18.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:18.023487+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:18.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:18.041021+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:18.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:18.041021+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:18.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:18.050650+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:18.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:18.050650+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:18.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:18.052594+0000 mon.a (mon.0) 1062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:18.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:18 vm00 bash[28005]: audit 2026-03-10T07:28:18.052594+0000 mon.a (mon.0) 1062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:18.931 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: Running main() from gmock_main.cc 2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [==========] Running 16 tests from 2 test suites. 2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [----------] Global test environment set-up. 
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockPP
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: seed 59738
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockExclusivePP
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockExclusivePP (763 ms)
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockSharedPP
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockSharedPP (39 ms)
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockExclusiveDurPP
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockExclusiveDurPP (1078 ms)
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockSharedDurPP
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockSharedDurPP (1017 ms)
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockMayRenewPP
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockMayRenewPP (25 ms)
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.UnlockPP
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockPP.UnlockPP (15 ms)
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.ListLockersPP
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockPP.ListLockersPP (16 ms)
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.BreakLockPP
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockPP.BreakLockPP (10 ms)
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockPP (2964 ms total)
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp:
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockECPP
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockExclusivePP
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockExclusivePP (1466 ms)
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockSharedPP
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockSharedPP (15 ms)
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockExclusiveDurPP
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockExclusiveDurPP (1118 ms)
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockSharedDurPP
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockSharedDurPP (1006 ms)
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockMayRenewPP
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockMayRenewPP (6 ms)
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.UnlockPP
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.UnlockPP (4 ms)
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.ListLockersPP
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.ListLockersPP (6 ms)
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.BreakLockPP
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.BreakLockPP (5 ms)
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockECPP (3626 ms total)
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp:
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [----------] Global test environment tear-down
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [==========] 16 tests from 2 test suites ran. (14887 ms total)
2026-03-10T07:28:18.932 INFO:tasks.workunit.client.0.vm00.stdout: api_lock_pp: [ PASSED ] 16 tests.
2026-03-10T07:28:18.952 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: Running main() from gmock_main.cc
2026-03-10T07:28:18.952 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [==========] Running 16 tests from 2 test suites.
2026-03-10T07:28:18.952 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [----------] Global test environment set-up.
2026-03-10T07:28:18.952 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [----------] 8 tests from LibRadosLock
2026-03-10T07:28:18.952 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLock.LockExclusive
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLock.LockExclusive (672 ms)
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLock.LockShared
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLock.LockShared (53 ms)
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLock.LockExclusiveDur
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLock.LockExclusiveDur (1058 ms)
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLock.LockSharedDur
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLock.LockSharedDur (1010 ms)
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLock.LockMayRenew
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLock.LockMayRenew (29 ms)
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLock.Unlock
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLock.Unlock (6 ms)
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLock.ListLockers
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLock.ListLockers (22 ms)
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLock.BreakLock
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLock.BreakLock (12 ms)
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [----------] 8 tests from LibRadosLock (2862 ms total)
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock:
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [----------] 8 tests from LibRadosLockEC
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLockEC.LockExclusive
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLockEC.LockExclusive (1540 ms)
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLockEC.LockShared
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLockEC.LockShared (147 ms)
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLockEC.LockExclusiveDur
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLockEC.LockExclusiveDur (1055 ms)
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLockEC.LockSharedDur
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLockEC.LockSharedDur (1007 ms)
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLockEC.LockMayRenew
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLockEC.LockMayRenew (6 ms)
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLockEC.Unlock
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLockEC.Unlock (5 ms)
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLockEC.ListLockers
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLockEC.ListLockers (7 ms)
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ RUN ] LibRadosLockEC.BreakLock
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ OK ] LibRadosLockEC.BreakLock (3 ms)
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [----------] 8 tests from LibRadosLockEC (3770 ms total)
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock:
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [----------] Global test environment tear-down
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [==========] 16 tests from 2 test suites ran. (14915 ms total)
2026-03-10T07:28:18.953 INFO:tasks.workunit.client.0.vm00.stdout: api_lock: [ PASSED ] 16 tests.
2026-03-10T07:28:18.981 INFO:tasks.workunit.client.0.vm00.stdout: pool: Running main() from gmock_main.cc
2026-03-10T07:28:18.981 INFO:tasks.workunit.client.0.vm00.stdout: pool: [==========] Running 6 tests from 1 test suite.
2026-03-10T07:28:18.981 INFO:tasks.workunit.client.0.vm00.stdout: pool: [----------] Global test environment set-up.
2026-03-10T07:28:18.981 INFO:tasks.workunit.client.0.vm00.stdout: pool: [----------] 6 tests from NeoRadosPools
2026-03-10T07:28:18.981 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ RUN ] NeoRadosPools.PoolList
2026-03-10T07:28:18.981 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ OK ] NeoRadosPools.PoolList (1883 ms)
2026-03-10T07:28:18.981 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ RUN ] NeoRadosPools.PoolLookup
2026-03-10T07:28:18.981 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ OK ] NeoRadosPools.PoolLookup (2233 ms)
2026-03-10T07:28:18.981 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ RUN ] NeoRadosPools.PoolLookupOtherInstance
2026-03-10T07:28:18.981 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ OK ] NeoRadosPools.PoolLookupOtherInstance (2064 ms)
2026-03-10T07:28:18.981 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ RUN ] NeoRadosPools.PoolDelete
2026-03-10T07:28:18.981 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ OK ] NeoRadosPools.PoolDelete (4334 ms)
2026-03-10T07:28:18.981 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ RUN ] NeoRadosPools.PoolCreateDelete
2026-03-10T07:28:18.981 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ OK ] NeoRadosPools.PoolCreateDelete (1557 ms)
2026-03-10T07:28:18.981 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ RUN ] NeoRadosPools.PoolCreateWithCrushRule
2026-03-10T07:28:18.981 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ OK ] NeoRadosPools.PoolCreateWithCrushRule (2052 ms)
2026-03-10T07:28:18.981 INFO:tasks.workunit.client.0.vm00.stdout: pool: [----------] 6 tests from NeoRadosPools (14123 ms total)
2026-03-10T07:28:18.981 INFO:tasks.workunit.client.0.vm00.stdout: pool:
2026-03-10T07:28:18.981 INFO:tasks.workunit.client.0.vm00.stdout: pool: [----------] Global test environment tear-down
2026-03-10T07:28:18.981 INFO:tasks.workunit.client.0.vm00.stdout: pool: [==========] 6 tests from 1 test suite ran. (14123 ms total)
2026-03-10T07:28:18.981 INFO:tasks.workunit.client.0.vm00.stdout: pool: [ PASSED ] 6 tests.
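Editor's note for orientation: the api_lock/api_lock_pp binaries above exercise the librados advisory-lock API (lock_exclusive, lock_shared, unlock, break_lock), and the surrounding mon audit records show the JSON commands the test fixtures send through the monitors ("osd erasure-code-profile set" with k=2/m=1, then "osd pool create" with pool_type erasure). A minimal self-contained C++ sketch of that same pattern follows; it is not part of this run, it assumes a reachable test cluster with a client.admin keyring, and the names testprofile/ecpool/obj are illustrative only.

// Sketch (not from this run): create an EC profile and pool via mon_command(),
// then take and release an exclusive advisory lock, as LibRadosLock* tests do.
#include <rados/librados.hpp>
#include <iostream>
#include <string>

int main() {
  librados::Rados cluster;
  if (cluster.init("admin") < 0 ||            // connect as client.admin
      cluster.conf_read_file(nullptr) < 0 ||  // default ceph.conf search path
      cluster.connect() < 0) {
    std::cerr << "cannot connect to cluster" << std::endl;
    return 1;
  }

  // These JSON payloads mirror the cmd=[...] bodies the mon audit log records
  // as 'dispatch'/'finished'. Profile and pool names are hypothetical.
  librados::bufferlist inbl, outbl;
  std::string outs;
  cluster.mon_command(
      R"({"prefix": "osd erasure-code-profile set", "name": "testprofile",)"
      R"( "profile": ["k=2", "m=1", "crush-failure-domain=osd"]})",
      inbl, &outbl, &outs);
  cluster.mon_command(
      R"({"prefix": "osd pool create", "pool": "ecpool",)"
      R"( "pool_type": "erasure", "pg_num": 8, "pgp_num": 8,)"
      R"( "erasure_code_profile": "testprofile"})",
      inbl, &outbl, &outs);

  // Advisory locking, the operation LibRadosLock.LockExclusive/Unlock covers.
  librados::IoCtx io;
  cluster.ioctx_create("ecpool", io);
  int r = io.lock_exclusive("obj", "lock1", "cookie1", "demo lock",
                            nullptr /* no expiry */, 0 /* flags */);
  std::cout << "lock_exclusive: " << r << std::endl;  // 0 on success
  io.unlock("obj", "lock1", "cookie1");

  cluster.shutdown();
  return 0;
}

Built with g++ sketch.cc -lrados. Run against a test cluster, the two mon_command() calls would surface on the monitors as audit entries of the same shape as those logged above.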
2026-03-10T07:28:19.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:19 vm00 bash[20701]: audit 2026-03-10T07:28:18.507767+0000 mon.a (mon.0) 1063 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:19.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:19 vm00 bash[20701]: cluster 2026-03-10T07:28:18.577804+0000 mgr.y (mgr.24407) 124 : cluster [DBG] pgmap v82: 712 pgs: 1 active, 3 creating+activating, 292 unknown, 416 active+clean; 72 MiB data, 513 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 18 MiB/s wr, 428 op/s
2026-03-10T07:28:19.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:19 vm00 bash[20701]: audit 2026-03-10T07:28:18.902649+0000 mon.a (mon.0) 1064 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-59712-10"}]': finished
2026-03-10T07:28:19.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:19 vm00 bash[20701]: audit 2026-03-10T07:28:18.902693+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]': finished
2026-03-10T07:28:19.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:19 vm00 bash[20701]: audit 2026-03-10T07:28:18.902724+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60537-2"}]': finished
2026-03-10T07:28:19.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:19 vm00 bash[20701]: audit 2026-03-10T07:28:18.902750+0000 mon.a (mon.0) 1067 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-5","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:19.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:19 vm00 bash[20701]: audit 2026-03-10T07:28:18.902777+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1","value":"value1"}]': finished
2026-03-10T07:28:19.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:19 vm00 bash[20701]: audit 2026-03-10T07:28:18.902801+0000 mon.a (mon.0) 1069 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-5","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:19.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:19 vm00 bash[20701]: audit 2026-03-10T07:28:18.902826+0000 mon.a (mon.0) 1070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:28:19.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:19 vm00 bash[20701]: audit 2026-03-10T07:28:18.946816+0000 mon.b (mon.1) 87 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60001-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch
2026-03-10T07:28:19.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:19 vm00 bash[20701]: audit 2026-03-10T07:28:18.956781+0000 mon.b (mon.1) 88 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key2","value":"value2"}]: dispatch
2026-03-10T07:28:19.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:19 vm00 bash[20701]: cluster 2026-03-10T07:28:18.974096+0000 mon.a (mon.0) 1071 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in
2026-03-10T07:28:19.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:19 vm00 bash[20701]: audit 2026-03-10T07:28:18.974678+0000 mon.b (mon.1) 89 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:19.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:19 vm00 bash[20701]: audit 2026-03-10T07:28:18.977016+0000 mon.a (mon.0) 1072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60001-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch
2026-03-10T07:28:19.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:19 vm00 bash[20701]: audit 2026-03-10T07:28:18.982991+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key2","value":"value2"}]: dispatch
2026-03-10T07:28:19.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:19 vm00 bash[20701]: audit 2026-03-10T07:28:18.983256+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:19.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:19 vm00 bash[20701]: audit 2026-03-10T07:28:18.998832+0000 mon.a (mon.0) 1075 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60537-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:19.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:19 vm00 bash[28005]: audit 2026-03-10T07:28:18.507767+0000 mon.a (mon.0) 1063 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:19.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:19 vm00 bash[28005]: cluster 2026-03-10T07:28:18.577804+0000 mgr.y (mgr.24407) 124 : cluster [DBG] pgmap v82: 712 pgs: 1 active, 3 creating+activating, 292 unknown, 416 active+clean; 72 MiB data, 513 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 18 MiB/s wr, 428 op/s
2026-03-10T07:28:19.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:19 vm00 bash[28005]: audit 2026-03-10T07:28:18.902649+0000 mon.a (mon.0) 1064 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-59712-10"}]': finished
2026-03-10T07:28:19.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:19 vm00 bash[28005]: audit 2026-03-10T07:28:18.902693+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]': finished
2026-03-10T07:28:19.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:19 vm00 bash[28005]: audit 2026-03-10T07:28:18.902724+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60537-2"}]': finished
2026-03-10T07:28:19.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:19 vm00 bash[28005]: audit 2026-03-10T07:28:18.902750+0000 mon.a (mon.0) 1067 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-5","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:19.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:19 vm00 bash[28005]: audit 2026-03-10T07:28:18.902777+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1","value":"value1"}]': finished
2026-03-10T07:28:19.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:19 vm00 bash[28005]: audit 2026-03-10T07:28:18.902801+0000 mon.a (mon.0) 1069 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-5","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:19.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:19 vm00 bash[28005]: audit 2026-03-10T07:28:18.902826+0000 mon.a (mon.0) 1070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:28:19.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:19 vm00 bash[28005]: audit 2026-03-10T07:28:18.946816+0000 mon.b (mon.1) 87 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60001-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch
2026-03-10T07:28:19.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:19 vm00 bash[28005]: audit 2026-03-10T07:28:18.956781+0000 mon.b (mon.1) 88 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key2","value":"value2"}]: dispatch
2026-03-10T07:28:19.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:19 vm00 bash[28005]: cluster 2026-03-10T07:28:18.974096+0000 mon.a (mon.0) 1071 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in
2026-03-10T07:28:19.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:19 vm00 bash[28005]: audit 2026-03-10T07:28:18.974678+0000 mon.b (mon.1) 89 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:19.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:19 vm00 bash[28005]: audit 2026-03-10T07:28:18.977016+0000 mon.a (mon.0) 1072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60001-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch
2026-03-10T07:28:19.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:19 vm00 bash[28005]: audit 2026-03-10T07:28:18.982991+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key2","value":"value2"}]: dispatch
2026-03-10T07:28:19.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:19 vm00 bash[28005]: audit 2026-03-10T07:28:18.983256+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:19.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:19 vm00 bash[28005]: audit 2026-03-10T07:28:18.998832+0000 mon.a (mon.0) 1075 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60537-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:20.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.507767+0000 mon.a (mon.0) 1063 : audit [DBG] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:20.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.507767+0000 mon.a (mon.0) 1063 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:20.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: cluster 2026-03-10T07:28:18.577804+0000 mgr.y (mgr.24407) 124 : cluster [DBG] pgmap v82: 712 pgs: 1 active, 3 creating+activating, 292 unknown, 416 active+clean; 72 MiB data, 513 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 18 MiB/s wr, 428 op/s 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: cluster 2026-03-10T07:28:18.577804+0000 mgr.y (mgr.24407) 124 : cluster [DBG] pgmap v82: 712 pgs: 1 active, 3 creating+activating, 292 unknown, 416 active+clean; 72 MiB data, 513 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 18 MiB/s wr, 428 op/s 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.902649+0000 mon.a (mon.0) 1064 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-59712-10"}]': finished 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.902649+0000 mon.a (mon.0) 1064 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm00-59712-10"}]': finished 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.902693+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]': finished 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.902693+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm00-59738-10"}]': finished 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.902724+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60537-2"}]': finished 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.902724+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm00-60537-2"}]': finished 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.902750+0000 mon.a (mon.0) 1067 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.902750+0000 mon.a (mon.0) 1067 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.902777+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.902777+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.902801+0000 mon.a (mon.0) 1069 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.902801+0000 mon.a (mon.0) 1069 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.902826+0000 mon.a (mon.0) 1070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.902826+0000 mon.a (mon.0) 1070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.946816+0000 mon.b (mon.1) 87 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60001-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.946816+0000 mon.b (mon.1) 87 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60001-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.956781+0000 mon.b (mon.1) 88 : audit [INF] from='client.? 
192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.956781+0000 mon.b (mon.1) 88 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: cluster 2026-03-10T07:28:18.974096+0000 mon.a (mon.0) 1071 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: cluster 2026-03-10T07:28:18.974096+0000 mon.a (mon.0) 1071 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.974678+0000 mon.b (mon.1) 89 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.974678+0000 mon.b (mon.1) 89 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.977016+0000 mon.a (mon.0) 1072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60001-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.977016+0000 mon.a (mon.0) 1072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60001-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.982991+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.982991+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.983256+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.983256+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.998832+0000 mon.a (mon.0) 1075 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60537-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:20.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:19 vm03 bash[23382]: audit 2026-03-10T07:28:18.998832+0000 mon.a (mon.0) 1075 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60537-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:20.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: audit 2026-03-10T07:28:19.623849+0000 mon.a (mon.0) 1076 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:20.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: audit 2026-03-10T07:28:19.623849+0000 mon.a (mon.0) 1076 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:20.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: audit 2026-03-10T07:28:19.911623+0000 mon.a (mon.0) 1077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:20.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: audit 2026-03-10T07:28:19.911623+0000 mon.a (mon.0) 1077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:20.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: audit 2026-03-10T07:28:19.911678+0000 mon.a (mon.0) 1078 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-10T07:28:20.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: audit 2026-03-10T07:28:19.911678+0000 mon.a (mon.0) 1078 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: audit 2026-03-10T07:28:19.911704+0000 mon.a (mon.0) 1079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: audit 2026-03-10T07:28:19.911704+0000 mon.a (mon.0) 1079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: audit 2026-03-10T07:28:19.911729+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60537-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: audit 2026-03-10T07:28:19.911729+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60537-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: cluster 2026-03-10T07:28:19.922614+0000 mon.a (mon.0) 1081 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: cluster 2026-03-10T07:28:19.922614+0000 mon.a (mon.0) 1081 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: audit 2026-03-10T07:28:19.923761+0000 mon.a (mon.0) 1082 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60537-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60537-3"}]: dispatch 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: audit 2026-03-10T07:28:19.923761+0000 mon.a (mon.0) 1082 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60537-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60537-3"}]: dispatch 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: audit 2026-03-10T07:28:19.937048+0000 mon.a (mon.0) 1083 : audit [INF] from='client.? 192.168.123.100:0/4105017521' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm00-59629-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: audit 2026-03-10T07:28:19.937048+0000 mon.a (mon.0) 1083 : audit [INF] from='client.? 
192.168.123.100:0/4105017521' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm00-59629-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: audit 2026-03-10T07:28:19.958061+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: audit 2026-03-10T07:28:19.958061+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: audit 2026-03-10T07:28:19.973391+0000 mon.a (mon.0) 1084 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: audit 2026-03-10T07:28:19.973391+0000 mon.a (mon.0) 1084 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: audit 2026-03-10T07:28:20.624991+0000 mon.a (mon.0) 1085 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:20 vm00 bash[28005]: audit 2026-03-10T07:28:20.624991+0000 mon.a (mon.0) 1085 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: audit 2026-03-10T07:28:19.623849+0000 mon.a (mon.0) 1076 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: audit 2026-03-10T07:28:19.623849+0000 mon.a (mon.0) 1076 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: audit 2026-03-10T07:28:19.911623+0000 mon.a (mon.0) 1077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: audit 2026-03-10T07:28:19.911623+0000 mon.a (mon.0) 1077 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: audit 2026-03-10T07:28:19.911678+0000 mon.a (mon.0) 1078 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: audit 2026-03-10T07:28:19.911678+0000 mon.a (mon.0) 1078 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: audit 2026-03-10T07:28:19.911704+0000 mon.a (mon.0) 1079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: audit 2026-03-10T07:28:19.911704+0000 mon.a (mon.0) 1079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: audit 2026-03-10T07:28:19.911729+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60537-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: audit 2026-03-10T07:28:19.911729+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60537-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: cluster 2026-03-10T07:28:19.922614+0000 mon.a (mon.0) 1081 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: cluster 2026-03-10T07:28:19.922614+0000 mon.a (mon.0) 1081 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: audit 2026-03-10T07:28:19.923761+0000 mon.a (mon.0) 1082 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60537-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60537-3"}]: dispatch 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: audit 2026-03-10T07:28:19.923761+0000 mon.a (mon.0) 1082 : audit [INF] from='client.? 
192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60537-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60537-3"}]: dispatch 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: audit 2026-03-10T07:28:19.937048+0000 mon.a (mon.0) 1083 : audit [INF] from='client.? 192.168.123.100:0/4105017521' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm00-59629-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: audit 2026-03-10T07:28:19.937048+0000 mon.a (mon.0) 1083 : audit [INF] from='client.? 192.168.123.100:0/4105017521' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm00-59629-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: audit 2026-03-10T07:28:19.958061+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: audit 2026-03-10T07:28:19.958061+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: audit 2026-03-10T07:28:19.973391+0000 mon.a (mon.0) 1084 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: audit 2026-03-10T07:28:19.973391+0000 mon.a (mon.0) 1084 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: audit 2026-03-10T07:28:20.624991+0000 mon.a (mon.0) 1085 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:20 vm00 bash[20701]: audit 2026-03-10T07:28:20.624991+0000 mon.a (mon.0) 1085 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:21.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: audit 2026-03-10T07:28:19.623849+0000 mon.a (mon.0) 1076 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:21.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: audit 2026-03-10T07:28:19.623849+0000 mon.a (mon.0) 1076 : audit [DBG] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:21.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: audit 2026-03-10T07:28:19.911623+0000 mon.a (mon.0) 1077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:21.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: audit 2026-03-10T07:28:19.911623+0000 mon.a (mon.0) 1077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm00-59650-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:21.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: audit 2026-03-10T07:28:19.911678+0000 mon.a (mon.0) 1078 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-10T07:28:21.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: audit 2026-03-10T07:28:19.911678+0000 mon.a (mon.0) 1078 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-10T07:28:21.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: audit 2026-03-10T07:28:19.911704+0000 mon.a (mon.0) 1079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:21.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: audit 2026-03-10T07:28:19.911704+0000 mon.a (mon.0) 1079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:21.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: audit 2026-03-10T07:28:19.911729+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60537-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:21.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: audit 2026-03-10T07:28:19.911729+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? 
192.168.123.100:0/326972495' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm00-60537-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:21.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: cluster 2026-03-10T07:28:19.922614+0000 mon.a (mon.0) 1081 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-10T07:28:21.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: cluster 2026-03-10T07:28:19.922614+0000 mon.a (mon.0) 1081 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-10T07:28:21.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: audit 2026-03-10T07:28:19.923761+0000 mon.a (mon.0) 1082 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60537-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60537-3"}]: dispatch 2026-03-10T07:28:21.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: audit 2026-03-10T07:28:19.923761+0000 mon.a (mon.0) 1082 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60537-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60537-3"}]: dispatch 2026-03-10T07:28:21.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: audit 2026-03-10T07:28:19.937048+0000 mon.a (mon.0) 1083 : audit [INF] from='client.? 192.168.123.100:0/4105017521' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm00-59629-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:21.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: audit 2026-03-10T07:28:19.937048+0000 mon.a (mon.0) 1083 : audit [INF] from='client.? 192.168.123.100:0/4105017521' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm00-59629-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:21.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: audit 2026-03-10T07:28:19.958061+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T07:28:21.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: audit 2026-03-10T07:28:19.958061+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 192.168.123.100:0/428033476' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T07:28:21.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: audit 2026-03-10T07:28:19.973391+0000 mon.a (mon.0) 1084 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T07:28:21.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: audit 2026-03-10T07:28:19.973391+0000 mon.a (mon.0) 1084 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T07:28:21.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: audit 2026-03-10T07:28:20.624991+0000 mon.a (mon.0) 1085 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:21.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:20 vm03 bash[23382]: audit 2026-03-10T07:28:20.624991+0000 mon.a (mon.0) 1085 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:21.152 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: Running main() from gmock_main.cc 2026-03-10T07:28:21.152 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [==========] Running 16 tests from 2 test suites. 2026-03-10T07:28:21.152 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [----------] Global test environment set-up. 2026-03-10T07:28:21.152 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [----------] 2 tests from LibRadosWatchNotifyECPP 2026-03-10T07:28:21.152 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyECPP.WatchNotify 2026-03-10T07:28:21.152 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: notify 2026-03-10T07:28:21.152 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyECPP.WatchNotify (1345 ms) 2026-03-10T07:28:21.152 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyECPP.WatchNotifyTimeout 2026-03-10T07:28:21.152 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyECPP.WatchNotifyTimeout (15 ms) 2026-03-10T07:28:21.152 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [----------] 2 tests from LibRadosWatchNotifyECPP (1360 ms total) 2026-03-10T07:28:21.152 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: 2026-03-10T07:28:21.152 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [----------] 14 tests from LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP 2026-03-10T07:28:21.152 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/0 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: notify 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/0 (147 ms) 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/1 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: notify 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/1 (3457 ms) 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/0 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/0 (7 ms) 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ 
RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/1 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/1 (6 ms) 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/0 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: handle_notify cookie 94775971818944 notify_id 352187318274 notifier_gid 15087 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/0 (7 ms) 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/1 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: handle_notify cookie 94775971818944 notify_id 352187318273 notifier_gid 15087 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/1 (6 ms) 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/0 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: handle_notify cookie 94775971818944 notify_id 352187318274 notifier_gid 15087 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/0 (4 ms) 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/1 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: handle_notify cookie 94775971818944 notify_id 352187318275 notifier_gid 15087 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/1 (6 ms) 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/0 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: handle_notify cookie 94775971835504 notify_id 352187318274 notifier_gid 15087 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/0 (6 ms) 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/1 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: handle_notify cookie 94775971818944 notify_id 352187318276 notifier_gid 15087 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/1 (5 ms) 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/0 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: trying... 
2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: handle_notify cookie 94775971818944 notify_id 352187318277 notifier_gid 15087 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: timed out 2026-03-10T07:28:21.153 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: flushing 2026-03-10T07:28:21.382 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:28:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:28:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:28:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:21 vm00 bash[20701]: cluster 2026-03-10T07:28:20.578369+0000 mgr.y (mgr.24407) 125 : cluster [DBG] pgmap v85: 592 pgs: 64 creating+peering, 72 unknown, 456 active+clean; 144 MiB data, 826 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 19 MiB/s wr, 4 op/s 2026-03-10T07:28:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:21 vm00 bash[20701]: cluster 2026-03-10T07:28:20.578369+0000 mgr.y (mgr.24407) 125 : cluster [DBG] pgmap v85: 592 pgs: 64 creating+peering, 72 unknown, 456 active+clean; 144 MiB data, 826 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 19 MiB/s wr, 4 op/s 2026-03-10T07:28:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:21 vm00 bash[20701]: cluster 2026-03-10T07:28:20.881102+0000 mon.a (mon.0) 1086 : cluster [WRN] Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:21 vm00 bash[20701]: cluster 2026-03-10T07:28:20.881102+0000 mon.a (mon.0) 1086 : cluster [WRN] Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:21 vm00 bash[20701]: cluster 2026-03-10T07:28:20.881326+0000 mon.a (mon.0) 1087 : cluster [WRN] pool 'PoolQuotaPP_vm00-59637-3' is full (reached quota's max_bytes: 4 KiB) 2026-03-10T07:28:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:21 vm00 bash[20701]: cluster 2026-03-10T07:28:20.881326+0000 mon.a (mon.0) 1087 : cluster [WRN] pool 'PoolQuotaPP_vm00-59637-3' is full (reached quota's max_bytes: 4 KiB) 2026-03-10T07:28:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:21 vm00 bash[20701]: cluster 2026-03-10T07:28:20.882578+0000 mon.a (mon.0) 1088 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T07:28:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:21 vm00 bash[20701]: cluster 2026-03-10T07:28:20.882578+0000 mon.a (mon.0) 1088 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T07:28:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:21 vm00 bash[20701]: audit 2026-03-10T07:28:20.888127+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60001-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]': finished 2026-03-10T07:28:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:21 vm00 bash[20701]: audit 2026-03-10T07:28:20.888127+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60001-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]': finished
2026-03-10T07:28:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:21 vm00 bash[20701]: audit 2026-03-10T07:28:20.888173+0000 mon.a (mon.0) 1090 : audit [INF] from='client.? 192.168.123.100:0/4105017521' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm00-59629-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:21 vm00 bash[20701]: audit 2026-03-10T07:28:20.888227+0000 mon.a (mon.0) 1091 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1"}]': finished
2026-03-10T07:28:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:21 vm00 bash[20701]: cluster 2026-03-10T07:28:20.904439+0000 mon.a (mon.0) 1092 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in
2026-03-10T07:28:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:21 vm00 bash[20701]: audit 2026-03-10T07:28:20.909252+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? 192.168.123.100:0/3306878155' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-59956-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:21 vm00 bash[20701]: audit 2026-03-10T07:28:20.959729+0000 mon.c (mon.2) 131 : audit [INF] from='client.? 192.168.123.100:0/859832237' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:21 vm00 bash[20701]: audit 2026-03-10T07:28:20.960180+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.100:0/437666200' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:21 vm00 bash[20701]: audit 2026-03-10T07:28:20.960965+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:21 vm00 bash[20701]: audit 2026-03-10T07:28:20.961366+0000 mon.a (mon.0) 1095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:22.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:21 vm00 bash[20701]: audit 2026-03-10T07:28:20.966777+0000 mon.b (mon.1) 91 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:22.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:21 vm00 bash[20701]: audit 2026-03-10T07:28:20.969397+0000 mon.a (mon.0) 1096 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:22.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:21 vm00 bash[20701]: audit 2026-03-10T07:28:21.626150+0000 mon.a (mon.0) 1097 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:21 vm00 bash[28005]: cluster 2026-03-10T07:28:20.578369+0000 mgr.y (mgr.24407) 125 : cluster [DBG] pgmap v85: 592 pgs: 64 creating+peering, 72 unknown, 456 active+clean; 144 MiB data, 826 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 19 MiB/s wr, 4 op/s
2026-03-10T07:28:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:21 vm00 bash[28005]: cluster 2026-03-10T07:28:20.881102+0000 mon.a (mon.0) 1086 : cluster [WRN] Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:28:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:21 vm00 bash[28005]: cluster 2026-03-10T07:28:20.881326+0000 mon.a (mon.0) 1087 : cluster [WRN] pool 'PoolQuotaPP_vm00-59637-3' is full (reached quota's max_bytes: 4 KiB)
2026-03-10T07:28:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:21 vm00 bash[28005]: cluster 2026-03-10T07:28:20.882578+0000 mon.a (mon.0) 1088 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)
2026-03-10T07:28:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:21 vm00 bash[28005]: audit 2026-03-10T07:28:20.888127+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60001-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]': finished
2026-03-10T07:28:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:21 vm00 bash[28005]: audit 2026-03-10T07:28:20.888173+0000 mon.a (mon.0) 1090 : audit [INF] from='client.? 192.168.123.100:0/4105017521' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm00-59629-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:21 vm00 bash[28005]: audit 2026-03-10T07:28:20.888227+0000 mon.a (mon.0) 1091 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1"}]': finished
2026-03-10T07:28:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:21 vm00 bash[28005]: cluster 2026-03-10T07:28:20.904439+0000 mon.a (mon.0) 1092 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in
2026-03-10T07:28:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:21 vm00 bash[28005]: audit 2026-03-10T07:28:20.909252+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? 192.168.123.100:0/3306878155' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-59956-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:21 vm00 bash[28005]: audit 2026-03-10T07:28:20.959729+0000 mon.c (mon.2) 131 : audit [INF] from='client.? 192.168.123.100:0/859832237' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:21 vm00 bash[28005]: audit 2026-03-10T07:28:20.960180+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.100:0/437666200' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:21 vm00 bash[28005]: audit 2026-03-10T07:28:20.960965+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:21 vm00 bash[28005]: audit 2026-03-10T07:28:20.961366+0000 mon.a (mon.0) 1095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:21 vm00 bash[28005]: audit 2026-03-10T07:28:20.966777+0000 mon.b (mon.1) 91 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:21 vm00 bash[28005]: audit 2026-03-10T07:28:20.969397+0000 mon.a (mon.0) 1096 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:21 vm00 bash[28005]: audit 2026-03-10T07:28:21.626150+0000 mon.a (mon.0) 1097 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:22.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:21 vm03 bash[23382]: cluster 2026-03-10T07:28:20.578369+0000 mgr.y (mgr.24407) 125 : cluster [DBG] pgmap v85: 592 pgs: 64 creating+peering, 72 unknown, 456 active+clean; 144 MiB data, 826 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 19 MiB/s wr, 4 op/s
2026-03-10T07:28:22.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:21 vm03 bash[23382]: cluster 2026-03-10T07:28:20.881102+0000 mon.a (mon.0) 1086 : cluster [WRN] Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:28:22.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:21 vm03 bash[23382]: cluster 2026-03-10T07:28:20.881326+0000 mon.a (mon.0) 1087 : cluster [WRN] pool 'PoolQuotaPP_vm00-59637-3' is full (reached quota's max_bytes: 4 KiB)
2026-03-10T07:28:22.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:21 vm03 bash[23382]: cluster 2026-03-10T07:28:20.882578+0000 mon.a (mon.0) 1088 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)
2026-03-10T07:28:22.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:21 vm03 bash[23382]: audit 2026-03-10T07:28:20.888127+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm00-60001-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]': finished
2026-03-10T07:28:22.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:21 vm03 bash[23382]: audit 2026-03-10T07:28:20.888173+0000 mon.a (mon.0) 1090 : audit [INF] from='client.? 192.168.123.100:0/4105017521' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm00-59629-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:22.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:21 vm03 bash[23382]: audit 2026-03-10T07:28:20.888227+0000 mon.a (mon.0) 1091 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm00-59761-1","app":"app1","key":"key1"}]': finished
2026-03-10T07:28:22.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:21 vm03 bash[23382]: cluster 2026-03-10T07:28:20.904439+0000 mon.a (mon.0) 1092 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in
2026-03-10T07:28:22.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:21 vm03 bash[23382]: audit 2026-03-10T07:28:20.909252+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? 192.168.123.100:0/3306878155' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-59956-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:22.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:21 vm03 bash[23382]: audit 2026-03-10T07:28:20.959729+0000 mon.c (mon.2) 131 : audit [INF] from='client.? 192.168.123.100:0/859832237' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:22.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:21 vm03 bash[23382]: audit 2026-03-10T07:28:20.960180+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.100:0/437666200' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:22.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:21 vm03 bash[23382]: audit 2026-03-10T07:28:20.960965+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:22.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:21 vm03 bash[23382]: audit 2026-03-10T07:28:20.961366+0000 mon.a (mon.0) 1095 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:22.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:21 vm03 bash[23382]: audit 2026-03-10T07:28:20.966777+0000 mon.b (mon.1) 91 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:22.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:21 vm03 bash[23382]: audit 2026-03-10T07:28:20.969397+0000 mon.a (mon.0) 1096 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:22.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:21 vm03 bash[23382]: audit 2026-03-10T07:28:21.626150+0000 mon.a (mon.0) 1097 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:23.265 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:28:22 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:28:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:23 vm03 bash[23382]: audit 2026-03-10T07:28:21.892945+0000 mon.a (mon.0) 1098 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60537-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60537-3"}]': finished
2026-03-10T07:28:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:23 vm03 bash[23382]: audit 2026-03-10T07:28:21.892979+0000 mon.a (mon.0) 1099 : audit [INF] from='client.? 192.168.123.100:0/3306878155' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-59956-7","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:23 vm03 bash[23382]: audit 2026-03-10T07:28:21.893000+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:23 vm03 bash[23382]: audit 2026-03-10T07:28:21.893026+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:23.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:23 vm03 bash[23382]: audit 2026-03-10T07:28:21.893084+0000 mon.a (mon.0) 1102 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-7","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:23.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:23 vm03 bash[23382]: cluster 2026-03-10T07:28:21.898804+0000 mon.a (mon.0) 1103 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in
2026-03-10T07:28:23.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:23 vm03 bash[23382]: audit 2026-03-10T07:28:21.932133+0000 mon.b (mon.1) 92 : audit [INF] from='client.? 192.168.123.100:0/3878142846' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:23.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:23 vm03 bash[23382]: audit 2026-03-10T07:28:21.934378+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:23.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:23 vm03 bash[23382]: audit 2026-03-10T07:28:22.395265+0000 mon.b (mon.1) 93 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:28:23.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:23 vm03 bash[23382]: audit 2026-03-10T07:28:22.397633+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:28:23.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:23 vm03 bash[23382]: audit 2026-03-10T07:28:22.628175+0000 mon.a (mon.0) 1106 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:23.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:23 vm00 bash[28005]: audit 2026-03-10T07:28:21.892945+0000 mon.a (mon.0) 1098 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60537-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60537-3"}]': finished
2026-03-10T07:28:23.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:23 vm00 bash[28005]: audit 2026-03-10T07:28:21.892979+0000 mon.a (mon.0) 1099 : audit [INF] from='client.? 192.168.123.100:0/3306878155' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-59956-7","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:23.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:23 vm00 bash[28005]: audit 2026-03-10T07:28:21.893000+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:23.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:23 vm00 bash[28005]: audit 2026-03-10T07:28:21.893026+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:23.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:23 vm00 bash[28005]: audit 2026-03-10T07:28:21.893084+0000 mon.a (mon.0) 1102 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-7","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:23.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:23 vm00 bash[28005]: cluster 2026-03-10T07:28:21.898804+0000 mon.a (mon.0) 1103 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in
2026-03-10T07:28:23.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:23 vm00 bash[28005]: audit 2026-03-10T07:28:21.932133+0000 mon.b (mon.1) 92 : audit [INF] from='client.? 192.168.123.100:0/3878142846' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:23.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:23 vm00 bash[20701]: audit 2026-03-10T07:28:21.892945+0000 mon.a (mon.0) 1098 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm00-60537-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm00-60537-3"}]': finished
2026-03-10T07:28:23.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:23 vm00 bash[20701]: audit 2026-03-10T07:28:21.892979+0000 mon.a (mon.0) 1099 : audit [INF] from='client.? 192.168.123.100:0/3306878155' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm00-59956-7","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:23.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:23 vm00 bash[20701]: audit 2026-03-10T07:28:21.893000+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:23.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:23 vm00 bash[20701]: audit 2026-03-10T07:28:21.893026+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:23.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:23 vm00 bash[20701]: audit 2026-03-10T07:28:21.893084+0000 mon.a (mon.0) 1102 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-7","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:23.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:23 vm00 bash[20701]: cluster 2026-03-10T07:28:21.898804+0000 mon.a (mon.0) 1103 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in
2026-03-10T07:28:23.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:23 vm00 bash[20701]: audit 2026-03-10T07:28:21.932133+0000 mon.b (mon.1) 92 : audit [INF] from='client.? 192.168.123.100:0/3878142846' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:23.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:23 vm00 bash[20701]: audit 2026-03-10T07:28:21.934378+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:23.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:23 vm00 bash[20701]: audit 2026-03-10T07:28:22.395265+0000 mon.b (mon.1) 93 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:28:23.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:23 vm00 bash[20701]: audit 2026-03-10T07:28:22.397633+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:28:23.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:23 vm00 bash[20701]: audit 2026-03-10T07:28:22.628175+0000 mon.a (mon.0) 1106 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:23.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:23 vm00 bash[28005]: audit 2026-03-10T07:28:21.934378+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:23.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:23 vm00 bash[28005]: audit 2026-03-10T07:28:22.395265+0000 mon.b (mon.1) 93 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:28:23.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:23 vm00 bash[28005]: audit 2026-03-10T07:28:22.397633+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:28:23.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:23 vm00 bash[28005]: audit 2026-03-10T07:28:22.628175+0000 mon.a (mon.0) 1106 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:24.072 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.RoundTripPP2
2026-03-10T07:28:24.072 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RoundTripPP2 (8 ms)
2026-03-10T07:28:24.072 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.OverlappingWriteRoundTripPP
2026-03-10T07:28:24.072 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.OverlappingWriteRoundTripPP (20 ms)
2026-03-10T07:28:24.072 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.WriteFullRoundTripPP
2026-03-10T07:28:24.072 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.WriteFullRoundTripPP (15 ms)
2026-03-10T07:28:24.072 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.WriteFullRoundTripPP2
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.WriteFullRoundTripPP2 (18 ms)
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.AppendRoundTripPP
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.AppendRoundTripPP (19 ms)
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.TruncTestPP
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.TruncTestPP (18 ms)
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.RemoveTestPP
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RemoveTestPP (15 ms)
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.XattrsRoundTripPP
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.XattrsRoundTripPP (12 ms)
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.RmXattrPP
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RmXattrPP (35 ms)
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CrcZeroWrite
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CrcZeroWrite (6419 ms)
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.XattrListPP
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.XattrListPP (1218 ms)
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CmpExtPP
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CmpExtPP (16 ms)
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CmpExtDNEPP
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CmpExtDNEPP (18 ms)
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CmpExtMismatchPP
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CmpExtMismatchPP (26 ms)
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [----------] 18 tests from LibRadosIoECPP (9173 ms total)
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp:
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [----------] Global test environment tear-down
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [==========] 39 tests from 2 test suites ran. (20082 ms total)
2026-03-10T07:28:24.073 INFO:tasks.workunit.client.0.vm00.stdout: api_io_pp: [ PASSED ] 39 tests.
2026-03-10T07:28:24.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: cluster 2026-03-10T07:28:22.578847+0000 mgr.y (mgr.24407) 126 : cluster [DBG] pgmap v88: 760 pgs: 32 creating+peering, 272 unknown, 456 active+clean; 144 MiB data, 826 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 19 MiB/s wr, 9 op/s
2026-03-10T07:28:24.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:22.963866+0000 mgr.y (mgr.24407) 127 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:28:24.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:22.988934+0000 mon.a (mon.0) 1107 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]': finished
2026-03-10T07:28:24.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:22.988975+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:28:24.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:23.032490+0000 mon.b (mon.1) 94 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-7"}]: dispatch
2026-03-10T07:28:24.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:23.032598+0000 mon.b (mon.1) 95 : audit [INF] from='client.? 192.168.123.100:0/3878142846' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:24.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: cluster 2026-03-10T07:28:23.042423+0000 mon.a (mon.0) 1109 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in
2026-03-10T07:28:24.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:23.054360+0000 mon.a (mon.0) 1110 : audit [INF] from='client.? 192.168.123.100:0/1626434449' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59629-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:24.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:23.069842+0000 mon.a (mon.0) 1111 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-7"}]: dispatch
2026-03-10T07:28:24.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:23.070683+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch
2026-03-10T07:28:24.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:23.119273+0000 mon.b (mon.1) 96 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch
2026-03-10T07:28:24.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:23.136504+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch
2026-03-10T07:28:24.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:23.629437+0000 mon.a (mon.0) 1114 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:24.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:23.818922+0000 mon.a (mon.0) 1115 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:23.828748+0000 mon.c (mon.2) 133 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:23.997971+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? 192.168.123.100:0/1626434449' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59629-7","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:23.998012+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-7"}]': finished
2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:23.998033+0000 mon.a (mon.0) 1118 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]': finished
2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:23.998054+0000 mon.a (mon.0) 1119 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]': finished
2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:24.008315+0000 mon.b (mon.1) 97 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch
2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: cluster 2026-03-10T07:28:24.024068+0000 mon.a (mon.0) 1120 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in
2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:24.050832+0000 mon.b (mon.1) 98 : audit [INF] from='client.? 192.168.123.100:0/4190482673' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:24.057456+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch
2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:24.058128+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:24 vm00 bash[20701]: audit 2026-03-10T07:28:24.058128+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: cluster 2026-03-10T07:28:22.578847+0000 mgr.y (mgr.24407) 126 : cluster [DBG] pgmap v88: 760 pgs: 32 creating+peering, 272 unknown, 456 active+clean; 144 MiB data, 826 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 19 MiB/s wr, 9 op/s 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: cluster 2026-03-10T07:28:22.578847+0000 mgr.y (mgr.24407) 126 : cluster [DBG] pgmap v88: 760 pgs: 32 creating+peering, 272 unknown, 456 active+clean; 144 MiB data, 826 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 19 MiB/s wr, 9 op/s 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:22.963866+0000 mgr.y (mgr.24407) 127 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:22.963866+0000 mgr.y (mgr.24407) 127 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:22.988934+0000 mon.a (mon.0) 1107 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:22.988934+0000 mon.a (mon.0) 1107 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:22.988975+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:22.988975+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.032490+0000 mon.b (mon.1) 94 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-7"}]: dispatch 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.032490+0000 mon.b (mon.1) 94 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-7"}]: dispatch 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.032598+0000 mon.b (mon.1) 95 : audit [INF] from='client.? 192.168.123.100:0/3878142846' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.032598+0000 mon.b (mon.1) 95 : audit [INF] from='client.? 192.168.123.100:0/3878142846' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: cluster 2026-03-10T07:28:23.042423+0000 mon.a (mon.0) 1109 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: cluster 2026-03-10T07:28:23.042423+0000 mon.a (mon.0) 1109 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.054360+0000 mon.a (mon.0) 1110 : audit [INF] from='client.? 192.168.123.100:0/1626434449' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59629-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.054360+0000 mon.a (mon.0) 1110 : audit [INF] from='client.? 192.168.123.100:0/1626434449' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59629-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.069842+0000 mon.a (mon.0) 1111 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-7"}]: dispatch 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.069842+0000 mon.a (mon.0) 1111 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-7"}]: dispatch 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.070683+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.070683+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.119273+0000 mon.b (mon.1) 96 : audit [INF] from='client.? 
192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.119273+0000 mon.b (mon.1) 96 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.136504+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.136504+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.629437+0000 mon.a (mon.0) 1114 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.629437+0000 mon.a (mon.0) 1114 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.818922+0000 mon.a (mon.0) 1115 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.818922+0000 mon.a (mon.0) 1115 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.828748+0000 mon.c (mon.2) 133 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.828748+0000 mon.c (mon.2) 133 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.997971+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? 192.168.123.100:0/1626434449' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59629-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.997971+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? 
192.168.123.100:0/1626434449' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59629-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.998012+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-7"}]': finished 2026-03-10T07:28:24.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.998012+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-7"}]': finished 2026-03-10T07:28:24.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.998033+0000 mon.a (mon.0) 1118 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:24.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.998033+0000 mon.a (mon.0) 1118 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:24.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.998054+0000 mon.a (mon.0) 1119 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]': finished 2026-03-10T07:28:24.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:23.998054+0000 mon.a (mon.0) 1119 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]': finished 2026-03-10T07:28:24.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:24.008315+0000 mon.b (mon.1) 97 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:24.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:24.008315+0000 mon.b (mon.1) 97 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:24.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: cluster 2026-03-10T07:28:24.024068+0000 mon.a (mon.0) 1120 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-10T07:28:24.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: cluster 2026-03-10T07:28:24.024068+0000 mon.a (mon.0) 1120 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-10T07:28:24.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:24.050832+0000 mon.b (mon.1) 98 : audit [INF] from='client.? 
192.168.123.100:0/4190482673' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:24.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:24.050832+0000 mon.b (mon.1) 98 : audit [INF] from='client.? 192.168.123.100:0/4190482673' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:24.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:24.057456+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:24.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:24.057456+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:24.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:24.058128+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:24.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:24 vm00 bash[28005]: audit 2026-03-10T07:28:24.058128+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:24.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: cluster 2026-03-10T07:28:22.578847+0000 mgr.y (mgr.24407) 126 : cluster [DBG] pgmap v88: 760 pgs: 32 creating+peering, 272 unknown, 456 active+clean; 144 MiB data, 826 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 19 MiB/s wr, 9 op/s 2026-03-10T07:28:24.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: cluster 2026-03-10T07:28:22.578847+0000 mgr.y (mgr.24407) 126 : cluster [DBG] pgmap v88: 760 pgs: 32 creating+peering, 272 unknown, 456 active+clean; 144 MiB data, 826 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 19 MiB/s wr, 9 op/s 2026-03-10T07:28:24.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:22.963866+0000 mgr.y (mgr.24407) 127 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:28:24.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:22.963866+0000 mgr.y (mgr.24407) 127 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:28:24.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:22.988934+0000 mon.a (mon.0) 1107 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:24.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:22.988934+0000 mon.a (mon.0) 1107 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:24.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:22.988975+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:28:24.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:22.988975+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:28:24.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.032490+0000 mon.b (mon.1) 94 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-7"}]: dispatch 2026-03-10T07:28:24.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.032490+0000 mon.b (mon.1) 94 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-7"}]: dispatch 2026-03-10T07:28:24.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.032598+0000 mon.b (mon.1) 95 : audit [INF] from='client.? 192.168.123.100:0/3878142846' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:24.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.032598+0000 mon.b (mon.1) 95 : audit [INF] from='client.? 192.168.123.100:0/3878142846' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:24.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: cluster 2026-03-10T07:28:23.042423+0000 mon.a (mon.0) 1109 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-10T07:28:24.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: cluster 2026-03-10T07:28:23.042423+0000 mon.a (mon.0) 1109 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-10T07:28:24.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.054360+0000 mon.a (mon.0) 1110 : audit [INF] from='client.? 192.168.123.100:0/1626434449' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59629-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.054360+0000 mon.a (mon.0) 1110 : audit [INF] from='client.? 
192.168.123.100:0/1626434449' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59629-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.069842+0000 mon.a (mon.0) 1111 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-7"}]: dispatch 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.069842+0000 mon.a (mon.0) 1111 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-7"}]: dispatch 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.070683+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.070683+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]: dispatch 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.119273+0000 mon.b (mon.1) 96 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.119273+0000 mon.b (mon.1) 96 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.136504+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.136504+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.629437+0000 mon.a (mon.0) 1114 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.629437+0000 mon.a (mon.0) 1114 : audit [DBG] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.818922+0000 mon.a (mon.0) 1115 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.818922+0000 mon.a (mon.0) 1115 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.828748+0000 mon.c (mon.2) 133 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.828748+0000 mon.c (mon.2) 133 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.997971+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? 192.168.123.100:0/1626434449' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59629-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.997971+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? 192.168.123.100:0/1626434449' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm00-59629-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.998012+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-7"}]': finished 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.998012+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-7"}]': finished 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.998033+0000 mon.a (mon.0) 1118 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.998033+0000 mon.a (mon.0) 1118 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm00-59650-23"}]': finished 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.998054+0000 mon.a (mon.0) 1119 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]': finished 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:23.998054+0000 mon.a (mon.0) 1119 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm00-60001-12"}]': finished 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:24.008315+0000 mon.b (mon.1) 97 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:24.008315+0000 mon.b (mon.1) 97 : audit [INF] from='client.? 192.168.123.100:0/4073564080' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: cluster 2026-03-10T07:28:24.024068+0000 mon.a (mon.0) 1120 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: cluster 2026-03-10T07:28:24.024068+0000 mon.a (mon.0) 1120 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:24.050832+0000 mon.b (mon.1) 98 : audit [INF] from='client.? 192.168.123.100:0/4190482673' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:24.050832+0000 mon.b (mon.1) 98 : audit [INF] from='client.? 192.168.123.100:0/4190482673' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:24.057456+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:24.057456+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]: dispatch 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:24.058128+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:24.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:24 vm03 bash[23382]: audit 2026-03-10T07:28:24.058128+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:25.388 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: Running main() from gmock_main.cc 2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [==========] Running 11 tests from 2 test suites. 2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [----------] Global test environment set-up. 2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [----------] 10 tests from LibRadosWatchNotify 2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify 2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify_test_cb 2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify (622 ms) 2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.Watch2Delete 2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94439371292640 err -107 2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: waiting up to 300 for disconnect notification ... 2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.Watch2Delete (81 ms) 2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioWatchDelete 2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: waiting up to 300 for disconnect notification ... 
2026-03-10T07:28:25.388 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: Running main() from gmock_main.cc
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [==========] Running 11 tests from 2 test suites.
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [----------] Global test environment set-up.
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [----------] 10 tests from LibRadosWatchNotify
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify_test_cb
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify (622 ms)
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.Watch2Delete
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94439371292640 err -107
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: waiting up to 300 for disconnect notification ...
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.Watch2Delete (81 ms)
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioWatchDelete
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: waiting up to 300 for disconnect notification ...
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94439371292640 err -107
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioWatchDelete (1049 ms)
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify2
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_cb from 14736 notify_id 309237645313 cookie 94439371304480
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify2 (27 ms)
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioWatchNotify2
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_cb from 14736 notify_id 309237645312 cookie 94439371304480
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioWatchNotify2 (20 ms)
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioNotify
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_cb from 14736 notify_id 309237645313 cookie 94439371336736
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioNotify (12 ms)
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify2Multi
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_cb from 14736 notify_id 309237645313 cookie 94439371336736
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_cb from 14736 notify_id 309237645313 cookie 94439371398592
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify2Multi (11 ms)
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify2Timeout
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_cb from 14736 notify_id 309237645312 cookie 94439371383936
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_cb from 14736 notify_id 313532612610 cookie 94439371383936
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify2Timeout (3007 ms)
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.Watch3Timeout
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: waiting up to 1024 for osd to time us out ...
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94439371383936 err -107
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_cb from 14736 notify_id 343597383683 cookie 94439371383936
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.Watch3Timeout (5008 ms)
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioWatchDelete2
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: waiting up to 30 for disconnect notification ...
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94439371383936 err -107
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioWatchDelete2 (1017 ms)
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [----------] 10 tests from LibRadosWatchNotify (10855 ms total)
2026-03-10T07:28:25.389 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify:
2026-03-10T07:28:25.390 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [----------] 1 test from LibRadosWatchNotifyEC
2026-03-10T07:28:25.390 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotifyEC.WatchNotify
2026-03-10T07:28:25.390 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: watch_notify_test_cb
2026-03-10T07:28:25.390 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ OK ] LibRadosWatchNotifyEC.WatchNotify (1427 ms)
2026-03-10T07:28:25.390 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [----------] 1 test from LibRadosWatchNotifyEC (1427 ms total)
2026-03-10T07:28:25.390 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify:
2026-03-10T07:28:25.390 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [----------] Global test environment tear-down
2026-03-10T07:28:25.390 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [==========] 11 tests from 2 test suites ran. (21053 ms total)
2026-03-10T07:28:25.390 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify: [ PASSED ] 11 tests.
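The api_watch_notify run above exercises the librados watch/notify API: a client registers a watch on an object, another (or the same) client sends a notify, every watcher's callback fires with a notify_id/cookie pair, and the error callback fires with err -107 (-ENOTCONN) when the watch is cut, which is the "disconnect notification" the tests wait for. A minimal C++ sketch of the same round trip, assuming the cluster connection from the earlier sketch and a hypothetical existing pool named "rbd"; the callback prints mirror the test's watch_notify2_test_cb/errcb lines:

    #include <rados/librados.hpp>
    #include <iostream>
    #include <string>

    struct Watcher : public librados::WatchCtx2 {
      librados::IoCtx& io;
      std::string oid;
      Watcher(librados::IoCtx& io_, std::string oid_) : io(io_), oid(std::move(oid_)) {}
      void handle_notify(uint64_t notify_id, uint64_t cookie,
                         uint64_t notifier_id, librados::bufferlist& bl) override {
        std::cout << "notify from " << notifier_id << " notify_id " << notify_id
                  << " cookie " << cookie << std::endl;
        librados::bufferlist ack;
        io.notify_ack(oid, notify_id, cookie, ack);  // ack so the notifier's notify2() completes
      }
      void handle_error(uint64_t cookie, int err) override {
        // err -107 (-ENOTCONN) is the disconnect the tests wait for after object deletion
        std::cout << "errcb cookie " << cookie << " err " << err << std::endl;
      }
    };

    int main() {
      librados::Rados cluster;
      cluster.init("admin");
      cluster.conf_read_file(nullptr);
      if (cluster.connect() < 0) return 1;

      librados::IoCtx io;
      if (cluster.ioctx_create("rbd", io) < 0) return 1;  // hypothetical pool name

      const std::string oid = "watched-object";
      librados::bufferlist data;
      data.append("x");
      io.write_full(oid, data);       // the object must exist before it can be watched

      Watcher w(io, oid);
      uint64_t handle = 0;
      io.watch2(oid, &handle, &w);    // register the watch; handle identifies it

      librados::bufferlist payload, replies;
      io.notify2(oid, payload, 10000 /* ms */, &replies);  // blocks until acks or timeout

      io.unwatch2(handle);
      cluster.shutdown();
      return 0;
    }

The per-test timeouts visible in the log (300 s for disconnect, 1024 s for the OSD to time the watch out) are the polling bounds the suite uses around exactly these calls.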
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:25.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:25 vm00 bash[20701]: audit 2026-03-10T07:28:24.630382+0000 mon.a (mon.0) 1124 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:25.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:25 vm00 bash[20701]: audit 2026-03-10T07:28:24.630382+0000 mon.a (mon.0) 1124 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:25.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:25 vm00 bash[28005]: audit 2026-03-10T07:28:24.442682+0000 mon.b (mon.1) 99 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:25.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:25 vm00 bash[28005]: audit 2026-03-10T07:28:24.442682+0000 mon.b (mon.1) 99 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:25.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:25 vm00 bash[28005]: audit 2026-03-10T07:28:24.444862+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:25.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:25 vm00 bash[28005]: audit 2026-03-10T07:28:24.444862+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:25.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:25 vm00 bash[28005]: audit 2026-03-10T07:28:24.630382+0000 mon.a (mon.0) 1124 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:25.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:25 vm00 bash[28005]: audit 2026-03-10T07:28:24.630382+0000 mon.a (mon.0) 1124 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:25.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:25 vm03 bash[23382]: audit 2026-03-10T07:28:24.442682+0000 mon.b (mon.1) 99 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:25.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:25 vm03 bash[23382]: audit 2026-03-10T07:28:24.442682+0000 mon.b (mon.1) 99 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:25.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:25 vm03 bash[23382]: audit 2026-03-10T07:28:24.444862+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:25.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:25 vm03 bash[23382]: audit 2026-03-10T07:28:24.444862+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:25.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:25 vm03 bash[23382]: audit 2026-03-10T07:28:24.630382+0000 mon.a (mon.0) 1124 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:25.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:25 vm03 bash[23382]: audit 2026-03-10T07:28:24.630382+0000 mon.a (mon.0) 1124 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:26.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:26 vm00 bash[28005]: cluster 2026-03-10T07:28:24.579357+0000 mgr.y (mgr.24407) 128 : cluster [DBG] pgmap v91: 720 pgs: 32 creating+peering, 232 unknown, 456 active+clean; 144 MiB data, 826 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:26 vm00 bash[28005]: cluster 2026-03-10T07:28:24.579357+0000 mgr.y (mgr.24407) 128 : cluster [DBG] pgmap v91: 720 pgs: 32 creating+peering, 232 unknown, 456 active+clean; 144 MiB data, 826 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:26 vm00 bash[28005]: audit 2026-03-10T07:28:25.276007+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]': finished 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:26 vm00 bash[28005]: audit 2026-03-10T07:28:25.276007+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]': finished 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:26 vm00 bash[28005]: audit 2026-03-10T07:28:25.276040+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:26 vm00 bash[28005]: audit 2026-03-10T07:28:25.276040+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:26 vm00 bash[28005]: audit 2026-03-10T07:28:25.276063+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:26 vm00 bash[28005]: audit 2026-03-10T07:28:25.276063+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:26 vm00 bash[28005]: cluster 2026-03-10T07:28:25.284652+0000 mon.a (mon.0) 1128 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:26 vm00 bash[28005]: cluster 2026-03-10T07:28:25.284652+0000 mon.a (mon.0) 1128 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:26 vm00 bash[28005]: audit 2026-03-10T07:28:25.318093+0000 mon.b (mon.1) 100 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7"}]: dispatch 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:26 vm00 bash[28005]: audit 2026-03-10T07:28:25.318093+0000 mon.b (mon.1) 100 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7"}]: dispatch 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:26 vm00 bash[28005]: audit 2026-03-10T07:28:25.392697+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7"}]: dispatch 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:26 vm00 bash[28005]: audit 2026-03-10T07:28:25.392697+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7"}]: dispatch 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:26 vm00 bash[28005]: audit 2026-03-10T07:28:25.633730+0000 mon.a (mon.0) 1130 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:26 vm00 bash[28005]: audit 2026-03-10T07:28:25.633730+0000 mon.a (mon.0) 1130 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:26 vm00 bash[28005]: cluster 2026-03-10T07:28:25.884615+0000 mon.a (mon.0) 1131 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:26 vm00 bash[28005]: cluster 2026-03-10T07:28:25.884615+0000 mon.a (mon.0) 1131 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:26 vm00 bash[28005]: audit 2026-03-10T07:28:26.300185+0000 mon.a (mon.0) 1132 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7"}]': finished 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:26 vm00 bash[28005]: audit 2026-03-10T07:28:26.300185+0000 mon.a (mon.0) 1132 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7"}]': finished 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:26 vm00 bash[20701]: cluster 2026-03-10T07:28:24.579357+0000 mgr.y (mgr.24407) 128 : cluster [DBG] pgmap v91: 720 pgs: 32 creating+peering, 232 unknown, 456 active+clean; 144 MiB data, 826 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:26 vm00 bash[20701]: cluster 2026-03-10T07:28:24.579357+0000 mgr.y (mgr.24407) 128 : cluster [DBG] pgmap v91: 720 pgs: 32 creating+peering, 232 unknown, 456 active+clean; 144 MiB data, 826 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:26 vm00 bash[20701]: audit 2026-03-10T07:28:25.276007+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]': finished 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:26 vm00 bash[20701]: audit 2026-03-10T07:28:25.276007+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]': finished 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:26 vm00 bash[20701]: audit 2026-03-10T07:28:25.276040+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:26 vm00 bash[20701]: audit 2026-03-10T07:28:25.276040+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:26 vm00 bash[20701]: audit 2026-03-10T07:28:25.276063+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:26 vm00 bash[20701]: audit 2026-03-10T07:28:25.276063+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:26 vm00 bash[20701]: cluster 2026-03-10T07:28:25.284652+0000 mon.a (mon.0) 1128 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:26 vm00 bash[20701]: cluster 2026-03-10T07:28:25.284652+0000 mon.a (mon.0) 1128 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:26 vm00 bash[20701]: audit 2026-03-10T07:28:25.318093+0000 mon.b (mon.1) 100 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7"}]: dispatch 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:26 vm00 bash[20701]: audit 2026-03-10T07:28:25.318093+0000 mon.b (mon.1) 100 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7"}]: dispatch 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:26 vm00 bash[20701]: audit 2026-03-10T07:28:25.392697+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7"}]: dispatch 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:26 vm00 bash[20701]: audit 2026-03-10T07:28:25.392697+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7"}]: dispatch 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:26 vm00 bash[20701]: audit 2026-03-10T07:28:25.633730+0000 mon.a (mon.0) 1130 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:26 vm00 bash[20701]: audit 2026-03-10T07:28:25.633730+0000 mon.a (mon.0) 1130 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:26 vm00 bash[20701]: cluster 2026-03-10T07:28:25.884615+0000 mon.a (mon.0) 1131 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:26 vm00 bash[20701]: cluster 2026-03-10T07:28:25.884615+0000 mon.a (mon.0) 1131 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:26 vm00 bash[20701]: audit 2026-03-10T07:28:26.300185+0000 mon.a (mon.0) 1132 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7"}]': finished 2026-03-10T07:28:26.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:26 vm00 bash[20701]: audit 2026-03-10T07:28:26.300185+0000 mon.a (mon.0) 1132 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7"}]': finished 2026-03-10T07:28:26.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:26 vm03 bash[23382]: cluster 2026-03-10T07:28:24.579357+0000 mgr.y (mgr.24407) 128 : cluster [DBG] pgmap v91: 720 pgs: 32 creating+peering, 232 unknown, 456 active+clean; 144 MiB data, 826 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:28:26.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:26 vm03 bash[23382]: cluster 2026-03-10T07:28:24.579357+0000 mgr.y (mgr.24407) 128 : cluster [DBG] pgmap v91: 720 pgs: 32 creating+peering, 232 unknown, 456 active+clean; 144 MiB data, 826 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:28:26.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:26 vm03 bash[23382]: audit 2026-03-10T07:28:25.276007+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]': finished 2026-03-10T07:28:26.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:26 vm03 bash[23382]: audit 2026-03-10T07:28:25.276007+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm00-60001-12"}]': finished 2026-03-10T07:28:26.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:26 vm03 bash[23382]: audit 2026-03-10T07:28:25.276040+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:26.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:26 vm03 bash[23382]: audit 2026-03-10T07:28:25.276040+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-60139-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:26.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:26 vm03 bash[23382]: audit 2026-03-10T07:28:25.276063+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:28:26.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:26 vm03 bash[23382]: audit 2026-03-10T07:28:25.276063+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:28:26.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:26 vm03 bash[23382]: cluster 2026-03-10T07:28:25.284652+0000 mon.a (mon.0) 1128 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-10T07:28:26.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:26 vm03 bash[23382]: cluster 2026-03-10T07:28:25.284652+0000 mon.a (mon.0) 1128 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-10T07:28:26.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:26 vm03 bash[23382]: audit 2026-03-10T07:28:25.318093+0000 mon.b (mon.1) 100 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7"}]: dispatch 2026-03-10T07:28:26.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:26 vm03 bash[23382]: audit 2026-03-10T07:28:25.318093+0000 mon.b (mon.1) 100 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7"}]: dispatch 2026-03-10T07:28:26.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:26 vm03 bash[23382]: audit 2026-03-10T07:28:25.392697+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7"}]: dispatch 2026-03-10T07:28:26.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:26 vm03 bash[23382]: audit 2026-03-10T07:28:25.392697+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7"}]: dispatch 2026-03-10T07:28:26.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:26 vm03 bash[23382]: audit 2026-03-10T07:28:25.633730+0000 mon.a (mon.0) 1130 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:26.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:26 vm03 bash[23382]: audit 2026-03-10T07:28:25.633730+0000 mon.a (mon.0) 1130 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:26.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:26 vm03 bash[23382]: cluster 2026-03-10T07:28:25.884615+0000 mon.a (mon.0) 1131 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:26.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:26 vm03 bash[23382]: cluster 2026-03-10T07:28:25.884615+0000 mon.a (mon.0) 1131 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:26.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:26 vm03 bash[23382]: audit 2026-03-10T07:28:26.300185+0000 mon.a (mon.0) 1132 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7"}]': finished 2026-03-10T07:28:26.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:26 vm03 bash[23382]: audit 2026-03-10T07:28:26.300185+0000 mon.a (mon.0) 1132 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-7"}]': finished 2026-03-10T07:28:27.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: cluster 2026-03-10T07:28:26.150440+0000 osd.6 (osd.6) 3 : cluster [DBG] 11.3 deep-scrub starts 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: cluster 2026-03-10T07:28:26.150440+0000 osd.6 (osd.6) 3 : cluster [DBG] 11.3 deep-scrub starts 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: cluster 2026-03-10T07:28:26.190028+0000 osd.6 (osd.6) 4 : cluster [DBG] 11.3 deep-scrub ok 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: cluster 2026-03-10T07:28:26.190028+0000 osd.6 (osd.6) 4 : cluster [DBG] 11.3 deep-scrub ok 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: cluster 2026-03-10T07:28:26.350657+0000 mon.a (mon.0) 1133 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: cluster 2026-03-10T07:28:26.350657+0000 mon.a (mon.0) 1133 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: audit 2026-03-10T07:28:26.367770+0000 mon.b (mon.1) 101 : audit [INF] from='client.? 192.168.123.100:0/1551656875' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59629-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: audit 2026-03-10T07:28:26.367770+0000 mon.b (mon.1) 101 : audit [INF] from='client.? 192.168.123.100:0/1551656875' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59629-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: audit 2026-03-10T07:28:26.367892+0000 mon.b (mon.1) 102 : audit [INF] from='client.? 192.168.123.100:0/2720793804' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: audit 2026-03-10T07:28:26.367892+0000 mon.b (mon.1) 102 : audit [INF] from='client.? 192.168.123.100:0/2720793804' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: audit 2026-03-10T07:28:26.499333+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59629-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: audit 2026-03-10T07:28:26.499333+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59629-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: audit 2026-03-10T07:28:26.520599+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: audit 2026-03-10T07:28:26.520599+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: cluster 2026-03-10T07:28:26.579882+0000 mgr.y (mgr.24407) 129 : cluster [DBG] pgmap v94: 720 pgs: 32 creating+peering, 128 unknown, 560 active+clean; 144 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 3.2 KiB/s wr, 5 op/s 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: cluster 2026-03-10T07:28:26.579882+0000 mgr.y (mgr.24407) 129 : cluster [DBG] pgmap v94: 720 pgs: 32 creating+peering, 128 unknown, 560 active+clean; 144 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 3.2 KiB/s wr, 5 op/s 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: audit 2026-03-10T07:28:26.638609+0000 mon.a (mon.0) 1136 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: audit 2026-03-10T07:28:26.638609+0000 mon.a (mon.0) 1136 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: cluster 2026-03-10T07:28:27.300817+0000 mon.a (mon.0) 1137 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: cluster 2026-03-10T07:28:27.300817+0000 mon.a (mon.0) 1137 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: audit 2026-03-10T07:28:27.306119+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59629-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: audit 2026-03-10T07:28:27.306119+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59629-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: audit 2026-03-10T07:28:27.306390+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: audit 2026-03-10T07:28:27.306390+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: cluster 2026-03-10T07:28:27.354703+0000 mon.a (mon.0) 1140 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:27 vm00 bash[20701]: cluster 2026-03-10T07:28:27.354703+0000 mon.a (mon.0) 1140 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: cluster 2026-03-10T07:28:26.150440+0000 osd.6 (osd.6) 3 : cluster [DBG] 11.3 deep-scrub starts 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: cluster 2026-03-10T07:28:26.150440+0000 osd.6 (osd.6) 3 : cluster [DBG] 11.3 deep-scrub starts 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: cluster 2026-03-10T07:28:26.190028+0000 osd.6 (osd.6) 4 : cluster [DBG] 11.3 deep-scrub ok 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: cluster 2026-03-10T07:28:26.190028+0000 osd.6 (osd.6) 4 : cluster [DBG] 11.3 deep-scrub ok 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: cluster 2026-03-10T07:28:26.350657+0000 mon.a (mon.0) 1133 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: cluster 2026-03-10T07:28:26.350657+0000 mon.a (mon.0) 1133 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: audit 2026-03-10T07:28:26.367770+0000 mon.b (mon.1) 101 : audit [INF] from='client.? 192.168.123.100:0/1551656875' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59629-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: audit 2026-03-10T07:28:26.367770+0000 mon.b (mon.1) 101 : audit [INF] from='client.? 192.168.123.100:0/1551656875' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59629-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: audit 2026-03-10T07:28:26.367892+0000 mon.b (mon.1) 102 : audit [INF] from='client.? 192.168.123.100:0/2720793804' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: audit 2026-03-10T07:28:26.367892+0000 mon.b (mon.1) 102 : audit [INF] from='client.? 
192.168.123.100:0/2720793804' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: audit 2026-03-10T07:28:26.499333+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59629-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: audit 2026-03-10T07:28:26.499333+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59629-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: audit 2026-03-10T07:28:26.520599+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: audit 2026-03-10T07:28:26.520599+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: cluster 2026-03-10T07:28:26.579882+0000 mgr.y (mgr.24407) 129 : cluster [DBG] pgmap v94: 720 pgs: 32 creating+peering, 128 unknown, 560 active+clean; 144 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 3.2 KiB/s wr, 5 op/s 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: cluster 2026-03-10T07:28:26.579882+0000 mgr.y (mgr.24407) 129 : cluster [DBG] pgmap v94: 720 pgs: 32 creating+peering, 128 unknown, 560 active+clean; 144 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 3.2 KiB/s wr, 5 op/s 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: audit 2026-03-10T07:28:26.638609+0000 mon.a (mon.0) 1136 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: audit 2026-03-10T07:28:26.638609+0000 mon.a (mon.0) 1136 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: cluster 2026-03-10T07:28:27.300817+0000 mon.a (mon.0) 1137 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: cluster 2026-03-10T07:28:27.300817+0000 mon.a (mon.0) 1137 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: audit 2026-03-10T07:28:27.306119+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59629-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: audit 2026-03-10T07:28:27.306119+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59629-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: audit 2026-03-10T07:28:27.306390+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: audit 2026-03-10T07:28:27.306390+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:27.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: cluster 2026-03-10T07:28:27.354703+0000 mon.a (mon.0) 1140 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-10T07:28:27.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:27 vm00 bash[28005]: cluster 2026-03-10T07:28:27.354703+0000 mon.a (mon.0) 1140 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-10T07:28:28.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: cluster 2026-03-10T07:28:26.150440+0000 osd.6 (osd.6) 3 : cluster [DBG] 11.3 deep-scrub starts 2026-03-10T07:28:28.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: cluster 2026-03-10T07:28:26.150440+0000 osd.6 (osd.6) 3 : cluster [DBG] 11.3 deep-scrub starts 2026-03-10T07:28:28.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: cluster 2026-03-10T07:28:26.190028+0000 osd.6 (osd.6) 4 : cluster [DBG] 11.3 deep-scrub ok 2026-03-10T07:28:28.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: cluster 2026-03-10T07:28:26.190028+0000 osd.6 (osd.6) 4 : cluster [DBG] 11.3 deep-scrub ok 2026-03-10T07:28:28.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: cluster 2026-03-10T07:28:26.350657+0000 mon.a (mon.0) 1133 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-10T07:28:28.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: cluster 2026-03-10T07:28:26.350657+0000 mon.a (mon.0) 1133 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-10T07:28:28.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: audit 2026-03-10T07:28:26.367770+0000 mon.b (mon.1) 101 : audit [INF] from='client.? 192.168.123.100:0/1551656875' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59629-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:28.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: audit 2026-03-10T07:28:26.367770+0000 mon.b (mon.1) 101 : audit [INF] from='client.? 
192.168.123.100:0/1551656875' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59629-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:28.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: audit 2026-03-10T07:28:26.367892+0000 mon.b (mon.1) 102 : audit [INF] from='client.? 192.168.123.100:0/2720793804' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:28.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: audit 2026-03-10T07:28:26.367892+0000 mon.b (mon.1) 102 : audit [INF] from='client.? 192.168.123.100:0/2720793804' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:28.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: audit 2026-03-10T07:28:26.499333+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59629-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:28.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: audit 2026-03-10T07:28:26.499333+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59629-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:28.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: audit 2026-03-10T07:28:26.520599+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:28.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: audit 2026-03-10T07:28:26.520599+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:28.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: cluster 2026-03-10T07:28:26.579882+0000 mgr.y (mgr.24407) 129 : cluster [DBG] pgmap v94: 720 pgs: 32 creating+peering, 128 unknown, 560 active+clean; 144 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 3.2 KiB/s wr, 5 op/s 2026-03-10T07:28:28.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: cluster 2026-03-10T07:28:26.579882+0000 mgr.y (mgr.24407) 129 : cluster [DBG] pgmap v94: 720 pgs: 32 creating+peering, 128 unknown, 560 active+clean; 144 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 3.2 KiB/s wr, 5 op/s 2026-03-10T07:28:28.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: audit 2026-03-10T07:28:26.638609+0000 mon.a (mon.0) 1136 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:28.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: audit 2026-03-10T07:28:26.638609+0000 mon.a (mon.0) 1136 : audit [DBG] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:28.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: cluster 2026-03-10T07:28:27.300817+0000 mon.a (mon.0) 1137 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T07:28:28.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: cluster 2026-03-10T07:28:27.300817+0000 mon.a (mon.0) 1137 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T07:28:28.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: audit 2026-03-10T07:28:27.306119+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59629-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:28.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: audit 2026-03-10T07:28:27.306119+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm00-59629-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:28.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: audit 2026-03-10T07:28:27.306390+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:28.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: audit 2026-03-10T07:28:27.306390+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:28.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: cluster 2026-03-10T07:28:27.354703+0000 mon.a (mon.0) 1140 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-10T07:28:28.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:27 vm03 bash[23382]: cluster 2026-03-10T07:28:27.354703+0000 mon.a (mon.0) 1140 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-10T07:28:29.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: cluster 2026-03-10T07:28:27.024271+0000 osd.2 (osd.2) 3 : cluster [DBG] 11.2 deep-scrub starts 2026-03-10T07:28:29.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: cluster 2026-03-10T07:28:27.024271+0000 osd.2 (osd.2) 3 : cluster [DBG] 11.2 deep-scrub starts 2026-03-10T07:28:29.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: cluster 2026-03-10T07:28:27.025076+0000 osd.2 (osd.2) 4 : cluster [DBG] 11.2 deep-scrub ok 2026-03-10T07:28:29.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: cluster 2026-03-10T07:28:27.025076+0000 osd.2 (osd.2) 4 : cluster [DBG] 11.2 deep-scrub ok 2026-03-10T07:28:29.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: cluster 2026-03-10T07:28:27.166084+0000 osd.6 (osd.6) 5 : cluster [DBG] 11.1 deep-scrub starts 2026-03-10T07:28:29.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: cluster 2026-03-10T07:28:27.166084+0000 osd.6 (osd.6) 5 : cluster [DBG] 11.1 deep-scrub starts 2026-03-10T07:28:29.015 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: cluster 2026-03-10T07:28:27.167746+0000 osd.6 (osd.6) 6 : cluster [DBG] 11.1 deep-scrub ok 2026-03-10T07:28:29.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: cluster 2026-03-10T07:28:27.167746+0000 osd.6 (osd.6) 6 : cluster [DBG] 11.1 deep-scrub ok 2026-03-10T07:28:29.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: audit 2026-03-10T07:28:27.468142+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60537-3"}]: dispatch 2026-03-10T07:28:29.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: audit 2026-03-10T07:28:27.468142+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60537-3"}]: dispatch 2026-03-10T07:28:29.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: audit 2026-03-10T07:28:27.684913+0000 mon.a (mon.0) 1142 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:29.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: audit 2026-03-10T07:28:27.684913+0000 mon.a (mon.0) 1142 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:29.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: audit 2026-03-10T07:28:28.407661+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60537-3"}]': finished 2026-03-10T07:28:29.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: audit 2026-03-10T07:28:28.407661+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60537-3"}]': finished 2026-03-10T07:28:29.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: cluster 2026-03-10T07:28:28.456741+0000 mon.a (mon.0) 1144 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-10T07:28:29.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: cluster 2026-03-10T07:28:28.456741+0000 mon.a (mon.0) 1144 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-10T07:28:29.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: audit 2026-03-10T07:28:28.463137+0000 mon.a (mon.0) 1145 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60537-3"}]: dispatch 2026-03-10T07:28:29.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: audit 2026-03-10T07:28:28.463137+0000 mon.a (mon.0) 1145 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60537-3"}]: dispatch 2026-03-10T07:28:29.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: audit 2026-03-10T07:28:28.468313+0000 mon.b (mon.1) 103 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:29.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: audit 2026-03-10T07:28:28.468313+0000 mon.b (mon.1) 103 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:29.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: audit 2026-03-10T07:28:28.471186+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:29.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:28 vm03 bash[23382]: audit 2026-03-10T07:28:28.471186+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: cluster 2026-03-10T07:28:27.024271+0000 osd.2 (osd.2) 3 : cluster [DBG] 11.2 deep-scrub starts 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: cluster 2026-03-10T07:28:27.024271+0000 osd.2 (osd.2) 3 : cluster [DBG] 11.2 deep-scrub starts 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: cluster 2026-03-10T07:28:27.025076+0000 osd.2 (osd.2) 4 : cluster [DBG] 11.2 deep-scrub ok 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: cluster 2026-03-10T07:28:27.025076+0000 osd.2 (osd.2) 4 : cluster [DBG] 11.2 deep-scrub ok 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: cluster 2026-03-10T07:28:27.166084+0000 osd.6 (osd.6) 5 : cluster [DBG] 11.1 deep-scrub starts 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: cluster 2026-03-10T07:28:27.166084+0000 osd.6 (osd.6) 5 : cluster [DBG] 11.1 deep-scrub starts 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: cluster 2026-03-10T07:28:27.167746+0000 osd.6 (osd.6) 6 : cluster [DBG] 11.1 deep-scrub ok 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: cluster 2026-03-10T07:28:27.167746+0000 osd.6 (osd.6) 6 : cluster [DBG] 11.1 deep-scrub ok 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: audit 2026-03-10T07:28:27.468142+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60537-3"}]: dispatch 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: audit 2026-03-10T07:28:27.468142+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? 
192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60537-3"}]: dispatch 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: audit 2026-03-10T07:28:27.684913+0000 mon.a (mon.0) 1142 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: audit 2026-03-10T07:28:27.684913+0000 mon.a (mon.0) 1142 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: audit 2026-03-10T07:28:28.407661+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60537-3"}]': finished 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: audit 2026-03-10T07:28:28.407661+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60537-3"}]': finished 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: cluster 2026-03-10T07:28:28.456741+0000 mon.a (mon.0) 1144 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: cluster 2026-03-10T07:28:28.456741+0000 mon.a (mon.0) 1144 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: audit 2026-03-10T07:28:28.463137+0000 mon.a (mon.0) 1145 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60537-3"}]: dispatch 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: audit 2026-03-10T07:28:28.463137+0000 mon.a (mon.0) 1145 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60537-3"}]: dispatch 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: audit 2026-03-10T07:28:28.468313+0000 mon.b (mon.1) 103 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: audit 2026-03-10T07:28:28.468313+0000 mon.b (mon.1) 103 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: audit 2026-03-10T07:28:28.471186+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:28 vm00 bash[28005]: audit 2026-03-10T07:28:28.471186+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:29.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: cluster 2026-03-10T07:28:27.024271+0000 osd.2 (osd.2) 3 : cluster [DBG] 11.2 deep-scrub starts 2026-03-10T07:28:29.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: cluster 2026-03-10T07:28:27.024271+0000 osd.2 (osd.2) 3 : cluster [DBG] 11.2 deep-scrub starts 2026-03-10T07:28:29.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: cluster 2026-03-10T07:28:27.025076+0000 osd.2 (osd.2) 4 : cluster [DBG] 11.2 deep-scrub ok 2026-03-10T07:28:29.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: cluster 2026-03-10T07:28:27.025076+0000 osd.2 (osd.2) 4 : cluster [DBG] 11.2 deep-scrub ok 2026-03-10T07:28:29.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: cluster 2026-03-10T07:28:27.166084+0000 osd.6 (osd.6) 5 : cluster [DBG] 11.1 deep-scrub starts 2026-03-10T07:28:29.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: cluster 2026-03-10T07:28:27.166084+0000 osd.6 (osd.6) 5 : cluster [DBG] 11.1 deep-scrub starts 2026-03-10T07:28:29.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: cluster 2026-03-10T07:28:27.167746+0000 osd.6 (osd.6) 6 : cluster [DBG] 11.1 deep-scrub ok 2026-03-10T07:28:29.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: cluster 2026-03-10T07:28:27.167746+0000 osd.6 (osd.6) 6 : cluster [DBG] 11.1 deep-scrub ok 2026-03-10T07:28:29.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: audit 2026-03-10T07:28:27.468142+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60537-3"}]: dispatch 2026-03-10T07:28:29.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: audit 2026-03-10T07:28:27.468142+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60537-3"}]: dispatch 2026-03-10T07:28:29.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: audit 2026-03-10T07:28:27.684913+0000 mon.a (mon.0) 1142 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:29.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: audit 2026-03-10T07:28:27.684913+0000 mon.a (mon.0) 1142 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:29.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: audit 2026-03-10T07:28:28.407661+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? 
192.168.123.100:0/326972495' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60537-3"}]': finished 2026-03-10T07:28:29.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: audit 2026-03-10T07:28:28.407661+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm00-60537-3"}]': finished 2026-03-10T07:28:29.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: cluster 2026-03-10T07:28:28.456741+0000 mon.a (mon.0) 1144 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-10T07:28:29.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: cluster 2026-03-10T07:28:28.456741+0000 mon.a (mon.0) 1144 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-10T07:28:29.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: audit 2026-03-10T07:28:28.463137+0000 mon.a (mon.0) 1145 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60537-3"}]: dispatch 2026-03-10T07:28:29.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: audit 2026-03-10T07:28:28.463137+0000 mon.a (mon.0) 1145 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60537-3"}]: dispatch 2026-03-10T07:28:29.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: audit 2026-03-10T07:28:28.468313+0000 mon.b (mon.1) 103 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:29.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: audit 2026-03-10T07:28:28.468313+0000 mon.b (mon.1) 103 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:29.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: audit 2026-03-10T07:28:28.471186+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:29.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:28 vm00 bash[20701]: audit 2026-03-10T07:28:28.471186+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:29.423 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: Running main() from gmock_main.cc 2026-03-10T07:28:29.423 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [==========] Running 3 tests from 1 test suite. 2026-03-10T07:28:29.423 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [----------] Global test environment set-up. 
2026-03-10T07:28:29.424 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [----------] 3 tests from NeoradosECList 2026-03-10T07:28:29.424 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [ RUN ] NeoradosECList.ListObjects 2026-03-10T07:28:29.424 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [ OK ] NeoradosECList.ListObjects (7254 ms) 2026-03-10T07:28:29.424 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [ RUN ] NeoradosECList.ListObjectsNS 2026-03-10T07:28:29.424 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [ OK ] NeoradosECList.ListObjectsNS (6859 ms) 2026-03-10T07:28:29.424 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [ RUN ] NeoradosECList.ListObjectsMany 2026-03-10T07:28:29.424 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [ OK ] NeoradosECList.ListObjectsMany (10500 ms) 2026-03-10T07:28:29.424 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [----------] 3 tests from NeoradosECList (24613 ms total) 2026-03-10T07:28:29.424 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: 2026-03-10T07:28:29.424 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [----------] Global test environment tear-down 2026-03-10T07:28:29.424 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [==========] 3 tests from 1 test suite ran. (24613 ms total) 2026-03-10T07:28:29.424 INFO:tasks.workunit.client.0.vm00.stdout: ec_list: [ PASSED ] 3 tests. 2026-03-10T07:28:30.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: cluster 2026-03-10T07:28:28.580341+0000 mgr.y (mgr.24407) 130 : cluster [DBG] pgmap v97: 712 pgs: 32 creating+peering, 224 unknown, 456 active+clean; 144 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T07:28:30.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: cluster 2026-03-10T07:28:28.580341+0000 mgr.y (mgr.24407) 130 : cluster [DBG] pgmap v97: 712 pgs: 32 creating+peering, 224 unknown, 456 active+clean; 144 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T07:28:30.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: audit 2026-03-10T07:28:28.691482+0000 mon.a (mon.0) 1147 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:30.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: audit 2026-03-10T07:28:28.691482+0000 mon.a (mon.0) 1147 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:30.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: audit 2026-03-10T07:28:29.411990+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60537-3"}]': finished 2026-03-10T07:28:30.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: audit 2026-03-10T07:28:29.411990+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60537-3"}]': finished 2026-03-10T07:28:30.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: audit 2026-03-10T07:28:29.412233+0000 mon.a (mon.0) 1149 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:30.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: audit 2026-03-10T07:28:29.412233+0000 mon.a (mon.0) 1149 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:30.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: cluster 2026-03-10T07:28:29.474904+0000 mon.a (mon.0) 1150 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-10T07:28:30.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: cluster 2026-03-10T07:28:29.474904+0000 mon.a (mon.0) 1150 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-10T07:28:30.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: audit 2026-03-10T07:28:29.481170+0000 mon.b (mon.1) 104 : audit [INF] from='client.? 192.168.123.100:0/396423711' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: audit 2026-03-10T07:28:29.481170+0000 mon.b (mon.1) 104 : audit [INF] from='client.? 192.168.123.100:0/396423711' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: audit 2026-03-10T07:28:29.481342+0000 mon.b (mon.1) 105 : audit [INF] from='client.? 192.168.123.100:0/329659876' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59629-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: audit 2026-03-10T07:28:29.481342+0000 mon.b (mon.1) 105 : audit [INF] from='client.? 192.168.123.100:0/329659876' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59629-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: audit 2026-03-10T07:28:29.580337+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: audit 2026-03-10T07:28:29.580337+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: audit 2026-03-10T07:28:29.580439+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59629-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: audit 2026-03-10T07:28:29.580439+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59629-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: audit 2026-03-10T07:28:29.604847+0000 mon.b (mon.1) 106 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:30.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: audit 2026-03-10T07:28:29.604847+0000 mon.b (mon.1) 106 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:30.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: audit 2026-03-10T07:28:29.612721+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:30.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:29 vm03 bash[23382]: audit 2026-03-10T07:28:29.612721+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:30.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: cluster 2026-03-10T07:28:28.580341+0000 mgr.y (mgr.24407) 130 : cluster [DBG] pgmap v97: 712 pgs: 32 creating+peering, 224 unknown, 456 active+clean; 144 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: cluster 2026-03-10T07:28:28.580341+0000 mgr.y (mgr.24407) 130 : cluster [DBG] pgmap v97: 712 pgs: 32 creating+peering, 224 unknown, 456 active+clean; 144 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: audit 2026-03-10T07:28:28.691482+0000 mon.a (mon.0) 1147 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: audit 2026-03-10T07:28:28.691482+0000 mon.a (mon.0) 1147 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: audit 2026-03-10T07:28:29.411990+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? 
192.168.123.100:0/326972495' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60537-3"}]': finished 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: audit 2026-03-10T07:28:29.411990+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60537-3"}]': finished 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: audit 2026-03-10T07:28:29.412233+0000 mon.a (mon.0) 1149 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: audit 2026-03-10T07:28:29.412233+0000 mon.a (mon.0) 1149 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: cluster 2026-03-10T07:28:29.474904+0000 mon.a (mon.0) 1150 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: cluster 2026-03-10T07:28:29.474904+0000 mon.a (mon.0) 1150 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: audit 2026-03-10T07:28:29.481170+0000 mon.b (mon.1) 104 : audit [INF] from='client.? 192.168.123.100:0/396423711' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: audit 2026-03-10T07:28:29.481170+0000 mon.b (mon.1) 104 : audit [INF] from='client.? 192.168.123.100:0/396423711' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: audit 2026-03-10T07:28:29.481342+0000 mon.b (mon.1) 105 : audit [INF] from='client.? 192.168.123.100:0/329659876' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59629-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: audit 2026-03-10T07:28:29.481342+0000 mon.b (mon.1) 105 : audit [INF] from='client.? 192.168.123.100:0/329659876' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59629-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: audit 2026-03-10T07:28:29.580337+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: audit 2026-03-10T07:28:29.580337+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: audit 2026-03-10T07:28:29.580439+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59629-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: audit 2026-03-10T07:28:29.580439+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59629-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: audit 2026-03-10T07:28:29.604847+0000 mon.b (mon.1) 106 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: audit 2026-03-10T07:28:29.604847+0000 mon.b (mon.1) 106 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: audit 2026-03-10T07:28:29.612721+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:29 vm00 bash[20701]: audit 2026-03-10T07:28:29.612721+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: cluster 2026-03-10T07:28:28.580341+0000 mgr.y (mgr.24407) 130 : cluster [DBG] pgmap v97: 712 pgs: 32 creating+peering, 224 unknown, 456 active+clean; 144 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: cluster 2026-03-10T07:28:28.580341+0000 mgr.y (mgr.24407) 130 : cluster [DBG] pgmap v97: 712 pgs: 32 creating+peering, 224 unknown, 456 active+clean; 144 MiB data, 888 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: audit 2026-03-10T07:28:28.691482+0000 mon.a (mon.0) 1147 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: audit 2026-03-10T07:28:28.691482+0000 mon.a (mon.0) 1147 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: audit 2026-03-10T07:28:29.411990+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60537-3"}]': finished 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: audit 2026-03-10T07:28:29.411990+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? 192.168.123.100:0/326972495' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm00-60537-3"}]': finished 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: audit 2026-03-10T07:28:29.412233+0000 mon.a (mon.0) 1149 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: audit 2026-03-10T07:28:29.412233+0000 mon.a (mon.0) 1149 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: cluster 2026-03-10T07:28:29.474904+0000 mon.a (mon.0) 1150 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: cluster 2026-03-10T07:28:29.474904+0000 mon.a (mon.0) 1150 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: audit 2026-03-10T07:28:29.481170+0000 mon.b (mon.1) 104 : audit [INF] from='client.? 
192.168.123.100:0/396423711' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: audit 2026-03-10T07:28:29.481170+0000 mon.b (mon.1) 104 : audit [INF] from='client.? 192.168.123.100:0/396423711' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: audit 2026-03-10T07:28:29.481342+0000 mon.b (mon.1) 105 : audit [INF] from='client.? 192.168.123.100:0/329659876' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59629-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: audit 2026-03-10T07:28:29.481342+0000 mon.b (mon.1) 105 : audit [INF] from='client.? 192.168.123.100:0/329659876' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59629-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: audit 2026-03-10T07:28:29.580337+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: audit 2026-03-10T07:28:29.580337+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: audit 2026-03-10T07:28:29.580439+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59629-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: audit 2026-03-10T07:28:29.580439+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59629-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: audit 2026-03-10T07:28:29.604847+0000 mon.b (mon.1) 106 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: audit 2026-03-10T07:28:29.604847+0000 mon.b (mon.1) 106 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: audit 2026-03-10T07:28:29.612721+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:30.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:29 vm00 bash[28005]: audit 2026-03-10T07:28:29.612721+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:31.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:30 vm03 bash[23382]: audit 2026-03-10T07:28:29.694694+0000 mon.a (mon.0) 1154 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:31.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:30 vm03 bash[23382]: audit 2026-03-10T07:28:29.694694+0000 mon.a (mon.0) 1154 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:31.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:30 vm03 bash[23382]: audit 2026-03-10T07:28:30.307436+0000 mon.a (mon.0) 1155 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4"}]: dispatch 2026-03-10T07:28:31.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:30 vm03 bash[23382]: audit 2026-03-10T07:28:30.307436+0000 mon.a (mon.0) 1155 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4"}]: dispatch 2026-03-10T07:28:31.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:30 vm03 bash[23382]: audit 2026-03-10T07:28:30.431788+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:31.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:30 vm03 bash[23382]: audit 2026-03-10T07:28:30.431788+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:31.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:30 vm03 bash[23382]: audit 2026-03-10T07:28:30.432008+0000 mon.a (mon.0) 1157 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59629-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:31.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:30 vm03 bash[23382]: audit 2026-03-10T07:28:30.432008+0000 mon.a (mon.0) 1157 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59629-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:31.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:30 vm03 bash[23382]: audit 2026-03-10T07:28:30.432039+0000 mon.a (mon.0) 1158 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:28:31.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:30 vm03 bash[23382]: audit 2026-03-10T07:28:30.432039+0000 mon.a (mon.0) 1158 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:28:31.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:30 vm03 bash[23382]: audit 2026-03-10T07:28:30.432070+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4"}]': finished 2026-03-10T07:28:31.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:30 vm03 bash[23382]: audit 2026-03-10T07:28:30.432070+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4"}]': finished 2026-03-10T07:28:31.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:30 vm03 bash[23382]: cluster 2026-03-10T07:28:30.480887+0000 mon.a (mon.0) 1160 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-10T07:28:31.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:30 vm03 bash[23382]: cluster 2026-03-10T07:28:30.480887+0000 mon.a (mon.0) 1160 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-10T07:28:31.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:30 vm03 bash[23382]: audit 2026-03-10T07:28:30.498489+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-9"}]: dispatch 2026-03-10T07:28:31.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:30 vm03 bash[23382]: audit 2026-03-10T07:28:30.498489+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-9"}]: dispatch 2026-03-10T07:28:31.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:30 vm03 bash[23382]: audit 2026-03-10T07:28:30.498508+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "tierpool": "test-rados-api-vm00-60006-6"}]: dispatch 2026-03-10T07:28:31.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:30 vm03 bash[23382]: audit 2026-03-10T07:28:30.498508+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? 
192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "tierpool": "test-rados-api-vm00-60006-6"}]: dispatch 2026-03-10T07:28:31.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:30 vm03 bash[23382]: audit 2026-03-10T07:28:30.506939+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-9"}]: dispatch 2026-03-10T07:28:31.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:30 vm03 bash[23382]: audit 2026-03-10T07:28:30.506939+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-9"}]: dispatch 2026-03-10T07:28:31.089 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:30 vm00 bash[20701]: audit 2026-03-10T07:28:29.694694+0000 mon.a (mon.0) 1154 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:31.089 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:30 vm00 bash[20701]: audit 2026-03-10T07:28:29.694694+0000 mon.a (mon.0) 1154 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:31.089 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:30 vm00 bash[20701]: audit 2026-03-10T07:28:30.307436+0000 mon.a (mon.0) 1155 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4"}]: dispatch 2026-03-10T07:28:31.089 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:30 vm00 bash[20701]: audit 2026-03-10T07:28:30.307436+0000 mon.a (mon.0) 1155 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4"}]: dispatch 2026-03-10T07:28:31.089 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:30 vm00 bash[20701]: audit 2026-03-10T07:28:30.431788+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:31.089 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:30 vm00 bash[20701]: audit 2026-03-10T07:28:30.431788+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:31.089 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:30 vm00 bash[20701]: audit 2026-03-10T07:28:30.432008+0000 mon.a (mon.0) 1157 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59629-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:31.089 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:30 vm00 bash[20701]: audit 2026-03-10T07:28:30.432008+0000 mon.a (mon.0) 1157 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59629-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:31.089 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:30 vm00 bash[20701]: audit 2026-03-10T07:28:30.432039+0000 mon.a (mon.0) 1158 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:28:31.089 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:30 vm00 bash[20701]: audit 2026-03-10T07:28:30.432039+0000 mon.a (mon.0) 1158 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:28:31.089 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:30 vm00 bash[20701]: audit 2026-03-10T07:28:30.432070+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4"}]': finished 2026-03-10T07:28:31.089 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:30 vm00 bash[20701]: audit 2026-03-10T07:28:30.432070+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4"}]': finished 2026-03-10T07:28:31.089 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:30 vm00 bash[20701]: cluster 2026-03-10T07:28:30.480887+0000 mon.a (mon.0) 1160 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-10T07:28:31.089 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:30 vm00 bash[20701]: cluster 2026-03-10T07:28:30.480887+0000 mon.a (mon.0) 1160 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-10T07:28:31.089 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:30 vm00 bash[20701]: audit 2026-03-10T07:28:30.498489+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-9"}]: dispatch 2026-03-10T07:28:31.089 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:30 vm00 bash[20701]: audit 2026-03-10T07:28:30.498489+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-9"}]: dispatch 2026-03-10T07:28:31.089 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:30 vm00 bash[20701]: audit 2026-03-10T07:28:30.498508+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "tierpool": "test-rados-api-vm00-60006-6"}]: dispatch 2026-03-10T07:28:31.089 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:30 vm00 bash[20701]: audit 2026-03-10T07:28:30.498508+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? 
192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "tierpool": "test-rados-api-vm00-60006-6"}]: dispatch 2026-03-10T07:28:31.089 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:30 vm00 bash[20701]: audit 2026-03-10T07:28:30.506939+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-9"}]: dispatch 2026-03-10T07:28:31.089 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:30 vm00 bash[20701]: audit 2026-03-10T07:28:30.506939+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-9"}]: dispatch 2026-03-10T07:28:31.089 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:28:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:28:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:28:31.090 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:30 vm00 bash[28005]: audit 2026-03-10T07:28:29.694694+0000 mon.a (mon.0) 1154 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:31.090 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:30 vm00 bash[28005]: audit 2026-03-10T07:28:29.694694+0000 mon.a (mon.0) 1154 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:31.090 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:30 vm00 bash[28005]: audit 2026-03-10T07:28:30.307436+0000 mon.a (mon.0) 1155 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4"}]: dispatch 2026-03-10T07:28:31.090 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:30 vm00 bash[28005]: audit 2026-03-10T07:28:30.307436+0000 mon.a (mon.0) 1155 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4"}]: dispatch 2026-03-10T07:28:31.090 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:30 vm00 bash[28005]: audit 2026-03-10T07:28:30.431788+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:31.090 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:30 vm00 bash[28005]: audit 2026-03-10T07:28:30.431788+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:31.090 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:30 vm00 bash[28005]: audit 2026-03-10T07:28:30.432008+0000 mon.a (mon.0) 1157 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59629-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:31.090 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:30 vm00 bash[28005]: audit 2026-03-10T07:28:30.432008+0000 mon.a (mon.0) 1157 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm00-59629-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:31.090 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:30 vm00 bash[28005]: audit 2026-03-10T07:28:30.432039+0000 mon.a (mon.0) 1158 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:28:31.090 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:30 vm00 bash[28005]: audit 2026-03-10T07:28:30.432039+0000 mon.a (mon.0) 1158 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:28:31.090 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:30 vm00 bash[28005]: audit 2026-03-10T07:28:30.432070+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4"}]': finished 2026-03-10T07:28:31.090 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:30 vm00 bash[28005]: audit 2026-03-10T07:28:30.432070+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4"}]': finished 2026-03-10T07:28:31.090 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:30 vm00 bash[28005]: cluster 2026-03-10T07:28:30.480887+0000 mon.a (mon.0) 1160 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-10T07:28:31.090 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:30 vm00 bash[28005]: cluster 2026-03-10T07:28:30.480887+0000 mon.a (mon.0) 1160 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-10T07:28:31.090 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:30 vm00 bash[28005]: audit 2026-03-10T07:28:30.498489+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-9"}]: dispatch 2026-03-10T07:28:31.090 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:30 vm00 bash[28005]: audit 2026-03-10T07:28:30.498489+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-9"}]: dispatch 2026-03-10T07:28:31.090 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:30 vm00 bash[28005]: audit 2026-03-10T07:28:30.498508+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? 
192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "tierpool": "test-rados-api-vm00-60006-6"}]: dispatch 2026-03-10T07:28:31.090 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:30 vm00 bash[28005]: audit 2026-03-10T07:28:30.498508+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "tierpool": "test-rados-api-vm00-60006-6"}]: dispatch 2026-03-10T07:28:31.090 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:30 vm00 bash[28005]: audit 2026-03-10T07:28:30.506939+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-9"}]: dispatch 2026-03-10T07:28:31.090 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:30 vm00 bash[28005]: audit 2026-03-10T07:28:30.506939+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-9"}]: dispatch 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: Running main() from gmock_main.cc 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [==========] Running 8 tests from 2 test suites. 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [----------] Global test environment set-up. 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [----------] 1 test from LibradosCWriteOps 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ RUN ] LibradosCWriteOps.NewDelete 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ OK ] LibradosCWriteOps.NewDelete (0 ms) 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [----------] 1 test from LibradosCWriteOps (0 ms total) 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [----------] 7 tests from LibRadosCWriteOps 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.assertExists 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.assertExists (3419 ms) 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.WriteOpAssertVersion 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.WriteOpAssertVersion (3182 ms) 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.Xattrs 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.Xattrs (3350 ms) 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.Write 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.Write (2541 ms) 2026-03-10T07:28:31.985 
INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.Exec 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.Exec (3029 ms) 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.WriteSame 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.WriteSame (3139 ms) 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.CmpExt 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.CmpExt (8860 ms) 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [----------] 7 tests from LibRadosCWriteOps (27520 ms total) 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [----------] Global test environment tear-down 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [==========] 8 tests from 2 test suites ran. (27520 ms total) 2026-03-10T07:28:31.985 INFO:tasks.workunit.client.0.vm00.stdout: api_c_write_operations: [ PASSED ] 8 tests. 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [==========] Running 4 tests from 1 test suite. 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [----------] Global test environment set-up. 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [----------] 4 tests from LibRadosService 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [ RUN ] LibRadosService.RegisterEarly 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [ OK ] LibRadosService.RegisterEarly (5017 ms) 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [ RUN ] LibRadosService.RegisterLate 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [ OK ] LibRadosService.RegisterLate (20 ms) 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [ RUN ] LibRadosService.StatusFormat 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: cluster: 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: id: 534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: health: HEALTH_WARN 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 14 pool(s) do not have an application enabled 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: services: 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: mon: 3 daemons, quorum a,b,c (age 7m) 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: mgr: y(active, since 2m), standbys: x 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: osd: 8 osds: 8 up (since 2m), 8 in (since 3m) 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: laundry: 2 daemons active (1 hosts) 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: rgw: 1 daemon active (1 hosts, 1 zones) 
2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: data: 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: pools: 32 pools, 892 pgs 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: objects: 199 objects, 455 KiB 2026-03-10T07:28:31.990 INFO:tasks.workunit.client.0.vm00.stdout: api_service: usage: 217 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: pgs: 74.439% pgs unknown 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 10.762% pgs not active 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 664 unknown 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 132 active+clean 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 96 creating+peering 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: io: 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: client: 1.2 KiB/s rd, 1 op/s rd, 0 op/s wr 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: cluster: 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: id: 534d9c8a-1c51-11f1-ac87-d1fb9a119953 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: health: HEALTH_WARN 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 13 pool(s) do not have an application enabled 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: services: 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: mon: 3 daemons, quorum a,b,c (age 7m) 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: mgr: y(active, since 2m), standbys: x 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: osd: 8 osds: 8 up (since 2m), 8 in (since 3m) 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: foo: 16 portals active (1 hosts, 3 zones) 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: laundry: 1 daemon active (1 hosts) 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: rgw: 1 daemon active (1 hosts, 1 zones) 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: data: 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: pools: 30 pools, 900 pgs 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: objects: 326 objects, 470 KiB 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: usage: 312 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: pgs: 21.333% pgs unknown 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 24.889% pgs not active 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 484 active+clean 
2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 224 creating+peering 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 192 unknown 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: io: 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: client: 3.0 KiB/s rd, 68 KiB/s wr, 41 op/s rd, 132 op/s wr 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [ OK ] LibRadosService.StatusFormat (2421 ms) 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [ RUN ] LibRadosService.Status 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [ OK ] LibRadosService.Status (20057 ms) 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [----------] 4 tests from LibRadosService (27515 ms total) 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [----------] Global test environment tear-down 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [==========] 4 tests from 1 test suite ran. (27515 ms total) 2026-03-10T07:28:31.991 INFO:tasks.workunit.client.0.vm00.stdout: api_service: [ PASSED ] 4 tests. 2026-03-10T07:28:32.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: cluster 2026-03-10T07:28:30.581142+0000 mgr.y (mgr.24407) 131 : cluster [DBG] pgmap v100: 680 pgs: 64 creating+peering, 5 creating+activating, 3 active+clean+snaptrim, 64 unknown, 544 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-10T07:28:32.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: cluster 2026-03-10T07:28:30.581142+0000 mgr.y (mgr.24407) 131 : cluster [DBG] pgmap v100: 680 pgs: 64 creating+peering, 5 creating+activating, 3 active+clean+snaptrim, 64 unknown, 544 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-10T07:28:32.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: audit 2026-03-10T07:28:30.731611+0000 mon.a (mon.0) 1163 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: audit 2026-03-10T07:28:30.731611+0000 mon.a (mon.0) 1163 : audit [DBG] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: cluster 2026-03-10T07:28:30.885402+0000 mon.a (mon.0) 1164 : cluster [WRN] Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: cluster 2026-03-10T07:28:30.885402+0000 mon.a (mon.0) 1164 : cluster [WRN] Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: cluster 2026-03-10T07:28:30.886394+0000 mon.a (mon.0) 1165 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:28:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: cluster 2026-03-10T07:28:30.886394+0000 mon.a (mon.0) 1165 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:28:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: audit 2026-03-10T07:28:30.892246+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "tierpool": "test-rados-api-vm00-60006-6"}]': finished 2026-03-10T07:28:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: audit 2026-03-10T07:28:30.892246+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "tierpool": "test-rados-api-vm00-60006-6"}]': finished 2026-03-10T07:28:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: audit 2026-03-10T07:28:30.892287+0000 mon.a (mon.0) 1167 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-9"}]': finished 2026-03-10T07:28:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: audit 2026-03-10T07:28:30.892287+0000 mon.a (mon.0) 1167 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-9"}]': finished 2026-03-10T07:28:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: audit 2026-03-10T07:28:30.921228+0000 mon.b (mon.1) 108 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-9", "mode": "writeback"}]: dispatch 2026-03-10T07:28:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: audit 2026-03-10T07:28:30.921228+0000 mon.b (mon.1) 108 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-9", "mode": "writeback"}]: dispatch 2026-03-10T07:28:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: cluster 2026-03-10T07:28:30.935945+0000 mon.a (mon.0) 1168 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-10T07:28:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: cluster 2026-03-10T07:28:30.935945+0000 mon.a (mon.0) 1168 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-10T07:28:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: audit 2026-03-10T07:28:30.950712+0000 mon.a (mon.0) 1169 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60006-6", "pool2": "test-rados-api-vm00-60006-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T07:28:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: audit 2026-03-10T07:28:30.950712+0000 mon.a (mon.0) 1169 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60006-6", "pool2": "test-rados-api-vm00-60006-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T07:28:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: audit 2026-03-10T07:28:30.956950+0000 mon.b (mon.1) 109 : audit [INF] from='client.? 192.168.123.100:0/2589783545' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: audit 2026-03-10T07:28:30.956950+0000 mon.b (mon.1) 109 : audit [INF] from='client.? 192.168.123.100:0/2589783545' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: audit 2026-03-10T07:28:30.963008+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-9", "mode": "writeback"}]: dispatch 2026-03-10T07:28:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: audit 2026-03-10T07:28:30.963008+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-9", "mode": "writeback"}]: dispatch 2026-03-10T07:28:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: audit 2026-03-10T07:28:30.963283+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:31 vm03 bash[23382]: audit 2026-03-10T07:28:30.963283+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:32.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: cluster 2026-03-10T07:28:30.581142+0000 mgr.y (mgr.24407) 131 : cluster [DBG] pgmap v100: 680 pgs: 64 creating+peering, 5 creating+activating, 3 active+clean+snaptrim, 64 unknown, 544 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-10T07:28:32.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: cluster 2026-03-10T07:28:30.581142+0000 mgr.y (mgr.24407) 131 : cluster [DBG] pgmap v100: 680 pgs: 64 creating+peering, 5 creating+activating, 3 active+clean+snaptrim, 64 unknown, 544 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-10T07:28:32.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: audit 2026-03-10T07:28:30.731611+0000 mon.a (mon.0) 1163 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:32.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: audit 2026-03-10T07:28:30.731611+0000 mon.a (mon.0) 1163 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:32.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: cluster 2026-03-10T07:28:30.885402+0000 mon.a (mon.0) 1164 : cluster [WRN] Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:32.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: cluster 2026-03-10T07:28:30.885402+0000 mon.a (mon.0) 1164 : cluster [WRN] Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:32.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: cluster 2026-03-10T07:28:30.886394+0000 mon.a (mon.0) 1165 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:28:32.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: cluster 2026-03-10T07:28:30.886394+0000 mon.a (mon.0) 1165 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:28:32.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: audit 2026-03-10T07:28:30.892246+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "tierpool": "test-rados-api-vm00-60006-6"}]': finished 2026-03-10T07:28:32.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: audit 2026-03-10T07:28:30.892246+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "tierpool": "test-rados-api-vm00-60006-6"}]': finished 2026-03-10T07:28:32.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: audit 2026-03-10T07:28:30.892287+0000 mon.a (mon.0) 1167 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-9"}]': finished 2026-03-10T07:28:32.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: audit 2026-03-10T07:28:30.892287+0000 mon.a (mon.0) 1167 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-9"}]': finished 2026-03-10T07:28:32.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: audit 2026-03-10T07:28:30.921228+0000 mon.b (mon.1) 108 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-9", "mode": "writeback"}]: dispatch 2026-03-10T07:28:32.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: audit 2026-03-10T07:28:30.921228+0000 mon.b (mon.1) 108 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-9", "mode": "writeback"}]: dispatch 2026-03-10T07:28:32.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: cluster 2026-03-10T07:28:30.935945+0000 mon.a (mon.0) 1168 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-10T07:28:32.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: cluster 2026-03-10T07:28:30.935945+0000 mon.a (mon.0) 1168 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-10T07:28:32.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: audit 2026-03-10T07:28:30.950712+0000 mon.a (mon.0) 1169 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60006-6", "pool2": "test-rados-api-vm00-60006-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T07:28:32.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: audit 2026-03-10T07:28:30.950712+0000 mon.a (mon.0) 1169 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60006-6", "pool2": "test-rados-api-vm00-60006-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T07:28:32.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: audit 2026-03-10T07:28:30.956950+0000 mon.b (mon.1) 109 : audit [INF] from='client.? 192.168.123.100:0/2589783545' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:32.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: audit 2026-03-10T07:28:30.956950+0000 mon.b (mon.1) 109 : audit [INF] from='client.? 192.168.123.100:0/2589783545' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:32.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: audit 2026-03-10T07:28:30.963008+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-9", "mode": "writeback"}]: dispatch 2026-03-10T07:28:32.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: audit 2026-03-10T07:28:30.963008+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-9", "mode": "writeback"}]: dispatch 2026-03-10T07:28:32.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: audit 2026-03-10T07:28:30.963283+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:32.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:31 vm00 bash[20701]: audit 2026-03-10T07:28:30.963283+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:32.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:31 vm00 bash[28005]: cluster 2026-03-10T07:28:30.581142+0000 mgr.y (mgr.24407) 131 : cluster [DBG] pgmap v100: 680 pgs: 64 creating+peering, 5 creating+activating, 3 active+clean+snaptrim, 64 unknown, 544 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-10T07:28:32.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:31 vm00 bash[28005]: cluster 2026-03-10T07:28:30.581142+0000 mgr.y (mgr.24407) 131 : cluster [DBG] pgmap v100: 680 pgs: 64 creating+peering, 5 creating+activating, 3 active+clean+snaptrim, 64 unknown, 544 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-10T07:28:32.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:31 vm00 bash[28005]: audit 2026-03-10T07:28:30.731611+0000 mon.a (mon.0) 1163 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:32.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:31 vm00 bash[28005]: audit 2026-03-10T07:28:30.731611+0000 mon.a (mon.0) 1163 : audit [DBG] from='client.? 
2026-03-10T07:28:32.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:31 vm00 bash[28005]: cluster 2026-03-10T07:28:30.885402+0000 mon.a (mon.0) 1164 : cluster [WRN] Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:28:32.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:31 vm00 bash[28005]: cluster 2026-03-10T07:28:30.886394+0000 mon.a (mon.0) 1165 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:28:32.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:31 vm00 bash[28005]: audit 2026-03-10T07:28:30.892246+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm00-60006-4", "tierpool": "test-rados-api-vm00-60006-6"}]': finished
2026-03-10T07:28:32.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:31 vm00 bash[28005]: audit 2026-03-10T07:28:30.892287+0000 mon.a (mon.0) 1167 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-9"}]': finished
2026-03-10T07:28:32.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:31 vm00 bash[28005]: audit 2026-03-10T07:28:30.921228+0000 mon.b (mon.1) 108 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-9", "mode": "writeback"}]: dispatch
2026-03-10T07:28:32.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:31 vm00 bash[28005]: cluster 2026-03-10T07:28:30.935945+0000 mon.a (mon.0) 1168 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in
2026-03-10T07:28:32.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:31 vm00 bash[28005]: audit 2026-03-10T07:28:30.950712+0000 mon.a (mon.0) 1169 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60006-6", "pool2": "test-rados-api-vm00-60006-6", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T07:28:32.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:31 vm00 bash[28005]: audit 2026-03-10T07:28:30.956950+0000 mon.b (mon.1) 109 : audit [INF] from='client.? 192.168.123.100:0/2589783545' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-8","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:32.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:31 vm00 bash[28005]: audit 2026-03-10T07:28:30.963008+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-9", "mode": "writeback"}]: dispatch
2026-03-10T07:28:32.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:31 vm00 bash[28005]: audit 2026-03-10T07:28:30.963283+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-8","app": "rados","yes_i_really_mean_it": true}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:32.970 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:31.735088+0000 mon.a (mon.0) 1172 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:32.970 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:31.735088+0000 mon.a (mon.0) 1172 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:32.970 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: cluster 2026-03-10T07:28:31.893050+0000 mon.a (mon.0) 1173 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: cluster 2026-03-10T07:28:31.893050+0000 mon.a (mon.0) 1173 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:31.898528+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60006-6", "pool2": "test-rados-api-vm00-60006-6", "yes_i_really_really_mean_it": true}]': finished 2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:31.898528+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60006-6", "pool2": "test-rados-api-vm00-60006-6", "yes_i_really_really_mean_it": true}]': finished 2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:31.898777+0000 mon.a (mon.0) 1175 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-9", "mode": "writeback"}]': finished 2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:31.898777+0000 mon.a (mon.0) 1175 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-9", "mode": "writeback"}]': finished 2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:31.898816+0000 mon.a (mon.0) 1176 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:31.898816+0000 mon.a (mon.0) 1176 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: cluster 2026-03-10T07:28:31.929560+0000 mon.a (mon.0) 1177 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: cluster 2026-03-10T07:28:31.929560+0000 mon.a (mon.0) 1177 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:31.974759+0000 mon.c (mon.2) 134 : audit [INF] from='client.? 192.168.123.100:0/319009892' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59629-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:31.974759+0000 mon.c (mon.2) 134 : audit [INF] from='client.? 192.168.123.100:0/319009892' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59629-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:31.995070+0000 mon.a (mon.0) 1178 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59629-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:31.995070+0000 mon.a (mon.0) 1178 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59629-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:32.018108+0000 mon.b (mon.1) 110 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:32.018108+0000 mon.b (mon.1) 110 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:32.026040+0000 mon.a (mon.0) 1179 : audit [INF] from='client.? 192.168.123.100:0/2643744289' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:32.026040+0000 mon.a (mon.0) 1179 : audit [INF] from='client.? 
2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:32.026235+0000 mon.a (mon.0) 1180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch
2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:32.027896+0000 mon.b (mon.1) 111 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch
2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:32.063525+0000 mon.a (mon.0) 1181 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch
2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:32.063561+0000 mon.b (mon.1) 112 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:32.086333+0000 mon.a (mon.0) 1182 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:32.343657+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:28:32.971 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:32 vm03 bash[23382]: audit 2026-03-10T07:28:32.345837+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:28:33.012 INFO:tasks.workunit.client.0.vm00.stdout:ch_notify_pp: flushed
2026-03-10T07:28:33.012 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/0 (3008 ms)
2026-03-10T07:28:33.012 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/1
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: trying...
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: handle_notify cookie 94775971835456 notify_id 365072220163 notifier_gid 15087
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: timed out
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: flushing
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: flushed
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/1 (3011 ms)
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/0
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: List watches
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: notify2
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: handle_notify cookie 94775971835456 notify_id 377957122051 notifier_gid 15087
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: notify2 done
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: watch_check
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: unwatch2
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: flushing
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: done
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/0 (3129 ms)
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/1
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: List watches
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: notify2
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: handle_notify cookie 94775971835456 notify_id 386547056644 notifier_gid 15087
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: notify2 done
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: watch_check
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: unwatch2
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: flushing
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: done
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/1 (3009 ms)
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [----------] 14 tests from LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP (15808 ms total)
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp:
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [----------] Global test environment tear-down
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [==========] 16 tests from 2 test suites ran. (28659 ms total)
2026-03-10T07:28:33.013 INFO:tasks.workunit.client.0.vm00.stdout: api_watch_notify_pp: [ PASSED ] 16 tests.
2026-03-10T07:28:33.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:31.735088+0000 mon.a (mon.0) 1172 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: cluster 2026-03-10T07:28:31.893050+0000 mon.a (mon.0) 1173 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:31.898528+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60006-6", "pool2": "test-rados-api-vm00-60006-6", "yes_i_really_really_mean_it": true}]': finished
2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:31.898777+0000 mon.a (mon.0) 1175 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-9", "mode": "writeback"}]': finished
2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:31.898816+0000 mon.a (mon.0) 1176 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-8","app": "rados","yes_i_really_mean_it": true}]': finished
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: cluster 2026-03-10T07:28:31.929560+0000 mon.a (mon.0) 1177 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: cluster 2026-03-10T07:28:31.929560+0000 mon.a (mon.0) 1177 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:31.974759+0000 mon.c (mon.2) 134 : audit [INF] from='client.? 192.168.123.100:0/319009892' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59629-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:31.974759+0000 mon.c (mon.2) 134 : audit [INF] from='client.? 192.168.123.100:0/319009892' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59629-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:31.995070+0000 mon.a (mon.0) 1178 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59629-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:31.995070+0000 mon.a (mon.0) 1178 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59629-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:32.018108+0000 mon.b (mon.1) 110 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:32.018108+0000 mon.b (mon.1) 110 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:32.026040+0000 mon.a (mon.0) 1179 : audit [INF] from='client.? 192.168.123.100:0/2643744289' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:32.026040+0000 mon.a (mon.0) 1179 : audit [INF] from='client.? 
192.168.123.100:0/2643744289' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:32.026235+0000 mon.a (mon.0) 1180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:32.026235+0000 mon.a (mon.0) 1180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:32.027896+0000 mon.b (mon.1) 111 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:32.027896+0000 mon.b (mon.1) 111 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:32.063525+0000 mon.a (mon.0) 1181 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:32.063525+0000 mon.a (mon.0) 1181 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:32.063561+0000 mon.b (mon.1) 112 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:32.063561+0000 mon.b (mon.1) 112 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:32.086333+0000 mon.a (mon.0) 1182 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:32.086333+0000 mon.a (mon.0) 1182 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:32.343657+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:32.343657+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:32.345837+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:32 vm00 bash[20701]: audit 2026-03-10T07:28:32.345837+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:31.735088+0000 mon.a (mon.0) 1172 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:31.735088+0000 mon.a (mon.0) 1172 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: cluster 2026-03-10T07:28:31.893050+0000 mon.a (mon.0) 1173 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: cluster 2026-03-10T07:28:31.893050+0000 mon.a (mon.0) 1173 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:31.898528+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60006-6", "pool2": "test-rados-api-vm00-60006-6", "yes_i_really_really_mean_it": true}]': finished 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:31.898528+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? 192.168.123.100:0/2018234458' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm00-60006-6", "pool2": "test-rados-api-vm00-60006-6", "yes_i_really_really_mean_it": true}]': finished 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:31.898777+0000 mon.a (mon.0) 1175 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-9", "mode": "writeback"}]': finished 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:31.898777+0000 mon.a (mon.0) 1175 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-9", "mode": "writeback"}]': finished 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:31.898816+0000 mon.a (mon.0) 1176 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:31.898816+0000 mon.a (mon.0) 1176 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: cluster 2026-03-10T07:28:31.929560+0000 mon.a (mon.0) 1177 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: cluster 2026-03-10T07:28:31.929560+0000 mon.a (mon.0) 1177 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:31.974759+0000 mon.c (mon.2) 134 : audit [INF] from='client.? 192.168.123.100:0/319009892' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59629-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:33.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:31.974759+0000 mon.c (mon.2) 134 : audit [INF] from='client.? 192.168.123.100:0/319009892' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59629-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:33.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:31.995070+0000 mon.a (mon.0) 1178 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59629-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:33.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:31.995070+0000 mon.a (mon.0) 1178 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59629-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:33.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:32.018108+0000 mon.b (mon.1) 110 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:33.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:32.018108+0000 mon.b (mon.1) 110 : audit [INF] from='client.? 
192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:33.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:32.026040+0000 mon.a (mon.0) 1179 : audit [INF] from='client.? 192.168.123.100:0/2643744289' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:33.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:32.026040+0000 mon.a (mon.0) 1179 : audit [INF] from='client.? 192.168.123.100:0/2643744289' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:33.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:32.026235+0000 mon.a (mon.0) 1180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:33.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:32.026235+0000 mon.a (mon.0) 1180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:33.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:32.027896+0000 mon.b (mon.1) 111 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:33.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:32.027896+0000 mon.b (mon.1) 111 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:33.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:32.063525+0000 mon.a (mon.0) 1181 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:33.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:32.063525+0000 mon.a (mon.0) 1181 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:33.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:32.063561+0000 mon.b (mon.1) 112 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:33.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:32.063561+0000 mon.b (mon.1) 112 : audit [INF] from='client.? 
192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:33.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:32.086333+0000 mon.a (mon.0) 1182 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:33.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:32.086333+0000 mon.a (mon.0) 1182 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:33.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:32.343657+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:33.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:32.343657+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:33.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:32.345837+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:33.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:32 vm00 bash[28005]: audit 2026-03-10T07:28:32.345837+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:33.265 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:28:32 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:28:34.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:33 vm03 bash[23382]: cluster 2026-03-10T07:28:32.581869+0000 mgr.y (mgr.24407) 132 : cluster [DBG] pgmap v103: 612 pgs: 5 creating+activating, 2 active+clean+snaptrim, 160 unknown, 445 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:28:34.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:33 vm03 bash[23382]: cluster 2026-03-10T07:28:32.581869+0000 mgr.y (mgr.24407) 132 : cluster [DBG] pgmap v103: 612 pgs: 5 creating+activating, 2 active+clean+snaptrim, 160 unknown, 445 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:28:34.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:33 vm03 bash[23382]: audit 2026-03-10T07:28:32.789717+0000 mon.a (mon.0) 1184 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:34.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:33 vm03 bash[23382]: audit 2026-03-10T07:28:32.789717+0000 mon.a (mon.0) 1184 : audit [DBG] from='client.? 
2026-03-10T07:28:34.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:33 vm03 bash[23382]: audit 2026-03-10T07:28:32.903864+0000 mon.a (mon.0) 1185 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59629-10","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:34.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:33 vm03 bash[23382]: audit 2026-03-10T07:28:32.903903+0000 mon.a (mon.0) 1186 : audit [INF] from='client.? 192.168.123.100:0/2643744289' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-5","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:34.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:33 vm03 bash[23382]: audit 2026-03-10T07:28:32.903932+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:28:34.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:33 vm03 bash[23382]: audit 2026-03-10T07:28:32.903956+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:28:34.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:33 vm03 bash[23382]: cluster 2026-03-10T07:28:32.931654+0000 mon.a (mon.0) 1189 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in
2026-03-10T07:28:34.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:33 vm03 bash[23382]: audit 2026-03-10T07:28:32.935949+0000 mon.b (mon.1) 114 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9"}]: dispatch
2026-03-10T07:28:34.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:33 vm03 bash[23382]: audit 2026-03-10T07:28:32.938135+0000 mon.b (mon.1) 115 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-59879-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch
2026-03-10T07:28:34.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:33 vm03 bash[23382]: audit 2026-03-10T07:28:32.966649+0000 mon.a (mon.0) 1190 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9"}]: dispatch
2026-03-10T07:28:34.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:33 vm03 bash[23382]: audit 2026-03-10T07:28:32.966802+0000 mon.a (mon.0) 1191 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-59879-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-59879-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:34.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:33 vm03 bash[23382]: audit 2026-03-10T07:28:32.974301+0000 mgr.y (mgr.24407) 133 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:28:34.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:33 vm03 bash[23382]: audit 2026-03-10T07:28:32.974301+0000 mgr.y (mgr.24407) 133 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:28:34.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:33 vm00 bash[20701]: cluster 2026-03-10T07:28:32.581869+0000 mgr.y (mgr.24407) 132 : cluster [DBG] pgmap v103: 612 pgs: 5 creating+activating, 2 active+clean+snaptrim, 160 unknown, 445 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:28:34.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:33 vm00 bash[20701]: cluster 2026-03-10T07:28:32.581869+0000 mgr.y (mgr.24407) 132 : cluster [DBG] pgmap v103: 612 pgs: 5 creating+activating, 2 active+clean+snaptrim, 160 unknown, 445 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:28:34.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:33 vm00 bash[20701]: audit 2026-03-10T07:28:32.789717+0000 mon.a (mon.0) 1184 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:34.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:33 vm00 bash[20701]: audit 2026-03-10T07:28:32.789717+0000 mon.a (mon.0) 1184 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:34.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:33 vm00 bash[20701]: audit 2026-03-10T07:28:32.903864+0000 mon.a (mon.0) 1185 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59629-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:34.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:33 vm00 bash[20701]: audit 2026-03-10T07:28:32.903864+0000 mon.a (mon.0) 1185 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59629-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:34.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:33 vm00 bash[20701]: audit 2026-03-10T07:28:32.903903+0000 mon.a (mon.0) 1186 : audit [INF] from='client.? 192.168.123.100:0/2643744289' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:34.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:33 vm00 bash[20701]: audit 2026-03-10T07:28:32.903903+0000 mon.a (mon.0) 1186 : audit [INF] from='client.? 
192.168.123.100:0/2643744289' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:34.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:33 vm00 bash[20701]: audit 2026-03-10T07:28:32.903932+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:34.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:33 vm00 bash[20701]: audit 2026-03-10T07:28:32.903932+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:34.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:33 vm00 bash[20701]: audit 2026-03-10T07:28:32.903956+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:28:34.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:33 vm00 bash[20701]: audit 2026-03-10T07:28:32.903956+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:28:34.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:33 vm00 bash[20701]: cluster 2026-03-10T07:28:32.931654+0000 mon.a (mon.0) 1189 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-10T07:28:34.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:33 vm00 bash[20701]: cluster 2026-03-10T07:28:32.931654+0000 mon.a (mon.0) 1189 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-10T07:28:34.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:33 vm00 bash[20701]: audit 2026-03-10T07:28:32.935949+0000 mon.b (mon.1) 114 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9"}]: dispatch 2026-03-10T07:28:34.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:33 vm00 bash[20701]: audit 2026-03-10T07:28:32.935949+0000 mon.b (mon.1) 114 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9"}]: dispatch 2026-03-10T07:28:34.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:33 vm00 bash[20701]: audit 2026-03-10T07:28:32.938135+0000 mon.b (mon.1) 115 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-59879-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:34.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:33 vm00 bash[20701]: audit 2026-03-10T07:28:32.938135+0000 mon.b (mon.1) 115 : audit [INF] from='client.? 
2026-03-10T07:28:34.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:33 vm00 bash[20701]: audit 2026-03-10T07:28:32.966649+0000 mon.a (mon.0) 1190 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9"}]: dispatch
2026-03-10T07:28:34.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:33 vm00 bash[20701]: audit 2026-03-10T07:28:32.966802+0000 mon.a (mon.0) 1191 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-59879-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch
2026-03-10T07:28:34.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:33 vm00 bash[20701]: audit 2026-03-10T07:28:32.974301+0000 mgr.y (mgr.24407) 133 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:28:34.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:33 vm00 bash[28005]: cluster 2026-03-10T07:28:32.581869+0000 mgr.y (mgr.24407) 132 : cluster [DBG] pgmap v103: 612 pgs: 5 creating+activating, 2 active+clean+snaptrim, 160 unknown, 445 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:28:34.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:33 vm00 bash[28005]: audit 2026-03-10T07:28:32.789717+0000 mon.a (mon.0) 1184 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:34.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:33 vm00 bash[28005]: audit 2026-03-10T07:28:32.903864+0000 mon.a (mon.0) 1185 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm00-59629-10","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:34.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:33 vm00 bash[28005]: audit 2026-03-10T07:28:32.903903+0000 mon.a (mon.0) 1186 : audit [INF] from='client.? 192.168.123.100:0/2643744289' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm00-59637-5","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:34.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:33 vm00 bash[28005]: audit 2026-03-10T07:28:32.903932+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:28:34.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:33 vm00 bash[28005]: audit 2026-03-10T07:28:32.903956+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:28:34.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:33 vm00 bash[28005]: cluster 2026-03-10T07:28:32.931654+0000 mon.a (mon.0) 1189 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in
2026-03-10T07:28:34.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:33 vm00 bash[28005]: audit 2026-03-10T07:28:32.935949+0000 mon.b (mon.1) 114 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9"}]: dispatch
2026-03-10T07:28:34.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:33 vm00 bash[28005]: audit 2026-03-10T07:28:32.938135+0000 mon.b (mon.1) 115 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-59879-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch
2026-03-10T07:28:34.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:33 vm00 bash[28005]: audit 2026-03-10T07:28:32.966649+0000 mon.a (mon.0) 1190 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9"}]: dispatch
2026-03-10T07:28:34.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:33 vm00 bash[28005]: audit 2026-03-10T07:28:32.966802+0000 mon.a (mon.0) 1191 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-59879-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch
2026-03-10T07:28:34.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:33 vm00 bash[28005]: audit 2026-03-10T07:28:32.974301+0000 mgr.y (mgr.24407) 133 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:28:35.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:34 vm00 bash[20701]: audit 2026-03-10T07:28:33.795006+0000 mon.a (mon.0) 1192 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:35.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:34 vm00 bash[20701]: cluster 2026-03-10T07:28:33.904900+0000 mon.a (mon.0) 1193 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:28:35.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:34 vm00 bash[20701]: audit 2026-03-10T07:28:33.908260+0000 mon.a (mon.0) 1194 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9"}]': finished
2026-03-10T07:28:35.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:34 vm00 bash[20701]: cluster 2026-03-10T07:28:33.969311+0000 mon.a (mon.0) 1195 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in
2026-03-10T07:28:35.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:34 vm00 bash[28005]: audit 2026-03-10T07:28:33.795006+0000 mon.a (mon.0) 1192 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:35.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:34 vm00 bash[28005]: cluster 2026-03-10T07:28:33.904900+0000 mon.a (mon.0) 1193 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:28:35.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:34 vm00 bash[28005]: audit 2026-03-10T07:28:33.908260+0000 mon.a (mon.0) 1194 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9"}]': finished
2026-03-10T07:28:35.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:34 vm00 bash[28005]: cluster 2026-03-10T07:28:33.969311+0000 mon.a (mon.0) 1195 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in
2026-03-10T07:28:35.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:34 vm03 bash[23382]: audit 2026-03-10T07:28:33.795006+0000 mon.a (mon.0) 1192 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:35.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:34 vm03 bash[23382]: cluster 2026-03-10T07:28:33.904900+0000 mon.a (mon.0) 1193 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:28:35.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:34 vm03 bash[23382]: audit 2026-03-10T07:28:33.908260+0000 mon.a (mon.0) 1194 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-9"}]': finished
2026-03-10T07:28:35.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:34 vm03 bash[23382]: cluster 2026-03-10T07:28:33.969311+0000 mon.a (mon.0) 1195 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in
2026-03-10T07:28:35.599 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [==========] Running 4 tests from 1 test suite.
2026-03-10T07:28:35.600 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [----------] Global test environment set-up.
2026-03-10T07:28:35.600 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [----------] 4 tests from LibRadosServicePP
2026-03-10T07:28:35.600 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [ RUN ] LibRadosServicePP.RegisterEarly
2026-03-10T07:28:35.600 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [ OK ] LibRadosServicePP.RegisterEarly (5153 ms)
2026-03-10T07:28:35.600 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [ RUN ] LibRadosServicePP.RegisterLate
2026-03-10T07:28:35.600 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [ OK ] LibRadosServicePP.RegisterLate (90 ms)
2026-03-10T07:28:35.600 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [ RUN ] LibRadosServicePP.Status
2026-03-10T07:28:35.600 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [ OK ] LibRadosServicePP.Status (20066 ms)
2026-03-10T07:28:35.600 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [ RUN ] LibRadosServicePP.Close
2026-03-10T07:28:35.600 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: attempt 0 of 20
2026-03-10T07:28:35.600 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [ OK ] LibRadosServicePP.Close (5827 ms)
2026-03-10T07:28:35.600 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [----------] 4 tests from LibRadosServicePP (31136 ms total)
2026-03-10T07:28:35.600 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp:
2026-03-10T07:28:35.600 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [----------] Global test environment tear-down
2026-03-10T07:28:35.600 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [==========] 4 tests from 1 test suite ran. (31138 ms total)
2026-03-10T07:28:35.600 INFO:tasks.workunit.client.0.vm00.stdout: api_service_pp: [ PASSED ] 4 tests.
2026-03-10T07:28:36.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:35 vm03 bash[23382]: cluster 2026-03-10T07:28:34.582378+0000 mgr.y (mgr.24407) 134 : cluster [DBG] pgmap v106: 580 pgs: 5 creating+activating, 2 active+clean+snaptrim, 160 unknown, 413 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:28:36.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:35 vm03 bash[23382]: audit 2026-03-10T07:28:34.796140+0000 mon.a (mon.0) 1196 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:36.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:35 vm03 bash[23382]: audit 2026-03-10T07:28:35.018514+0000 mon.a (mon.0) 1197 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-59879-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]': finished
2026-03-10T07:28:36.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:35 vm03 bash[23382]: cluster 2026-03-10T07:28:35.022665+0000 mon.a (mon.0) 1198 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in
2026-03-10T07:28:36.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:35 vm03 bash[23382]: audit 2026-03-10T07:28:35.036383+0000 mon.a (mon.0) 1199 : audit [INF] from='client.? 192.168.123.100:0/1152044576' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59629-11","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:36.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:35 vm03 bash[23382]: audit 2026-03-10T07:28:35.122262+0000 mon.c (mon.2) 135 : audit [INF] from='client.? 192.168.123.100:0/203126077' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59637-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:36.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:35 vm03 bash[23382]: audit 2026-03-10T07:28:35.153407+0000 mon.a (mon.0) 1200 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59637-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:36.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:35 vm03 bash[23382]: audit 2026-03-10T07:28:35.595169+0000 mon.a (mon.0) 1201 : audit [DBG] from='client.? 192.168.123.100:0/1244556623' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch
2026-03-10T07:28:36.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:35 vm00 bash[20701]: cluster 2026-03-10T07:28:34.582378+0000 mgr.y (mgr.24407) 134 : cluster [DBG] pgmap v106: 580 pgs: 5 creating+activating, 2 active+clean+snaptrim, 160 unknown, 413 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:28:36.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:35 vm00 bash[20701]: audit 2026-03-10T07:28:34.796140+0000 mon.a (mon.0) 1196 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:36.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:35 vm00 bash[20701]: audit 2026-03-10T07:28:35.018514+0000 mon.a (mon.0) 1197 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-59879-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]': finished
2026-03-10T07:28:36.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:35 vm00 bash[20701]: cluster 2026-03-10T07:28:35.022665+0000 mon.a (mon.0) 1198 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in
2026-03-10T07:28:36.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:35 vm00 bash[20701]: audit 2026-03-10T07:28:35.036383+0000 mon.a (mon.0) 1199 : audit [INF] from='client.? 192.168.123.100:0/1152044576' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59629-11","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:36.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:35 vm00 bash[20701]: audit 2026-03-10T07:28:35.122262+0000 mon.c (mon.2) 135 : audit [INF] from='client.? 192.168.123.100:0/203126077' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59637-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:36.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:35 vm00 bash[20701]: audit 2026-03-10T07:28:35.153407+0000 mon.a (mon.0) 1200 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59637-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:36.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:35 vm00 bash[20701]: audit 2026-03-10T07:28:35.595169+0000 mon.a (mon.0) 1201 : audit [DBG] from='client.? 192.168.123.100:0/1244556623' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch
2026-03-10T07:28:36.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:35 vm00 bash[28005]: cluster 2026-03-10T07:28:34.582378+0000 mgr.y (mgr.24407) 134 : cluster [DBG] pgmap v106: 580 pgs: 5 creating+activating, 2 active+clean+snaptrim, 160 unknown, 413 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:28:36.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:35 vm00 bash[28005]: audit 2026-03-10T07:28:34.796140+0000 mon.a (mon.0) 1196 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:36.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:35 vm00 bash[28005]: audit 2026-03-10T07:28:35.018514+0000 mon.a (mon.0) 1197 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm00-59879-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]': finished
2026-03-10T07:28:36.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:35 vm00 bash[28005]: cluster 2026-03-10T07:28:35.022665+0000 mon.a (mon.0) 1198 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in
2026-03-10T07:28:36.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:35 vm00 bash[28005]: audit 2026-03-10T07:28:35.036383+0000 mon.a (mon.0) 1199 : audit [INF] from='client.? 192.168.123.100:0/1152044576' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59629-11","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:36.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:35 vm00 bash[28005]: audit 2026-03-10T07:28:35.122262+0000 mon.c (mon.2) 135 : audit [INF] from='client.? 192.168.123.100:0/203126077' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59637-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:36.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:35 vm00 bash[28005]: audit 2026-03-10T07:28:35.153407+0000 mon.a (mon.0) 1200 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59637-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:36.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:35 vm00 bash[28005]: audit 2026-03-10T07:28:35.595169+0000 mon.a (mon.0) 1201 : audit [DBG] from='client.? 192.168.123.100:0/1244556623' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch
2026-03-10T07:28:37.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:36 vm00 bash[20701]: audit 2026-03-10T07:28:35.595384+0000 mgr.y (mgr.24407) 135 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch
2026-03-10T07:28:37.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:36 vm00 bash[20701]: audit 2026-03-10T07:28:35.810393+0000 mon.a (mon.0) 1202 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:37.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:36 vm00 bash[20701]: cluster 2026-03-10T07:28:35.889485+0000 mon.a (mon.0) 1203 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:28:37.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:36 vm00 bash[20701]: audit 2026-03-10T07:28:36.024518+0000 mon.a (mon.0) 1204 : audit [INF] from='client.? 192.168.123.100:0/1152044576' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59629-11","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:37.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:36 vm00 bash[20701]: audit 2026-03-10T07:28:36.024554+0000 mon.a (mon.0) 1205 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59637-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:37.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:36 vm00 bash[20701]: cluster 2026-03-10T07:28:36.027825+0000 mon.a (mon.0) 1206 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in
2026-03-10T07:28:37.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:36 vm00 bash[20701]: audit 2026-03-10T07:28:36.065102+0000 mon.b (mon.1) 116 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-11","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:37.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:36 vm00 bash[20701]: audit 2026-03-10T07:28:36.072732+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-11","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:37.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:36 vm00 bash[20701]: audit 2026-03-10T07:28:36.073948+0000 mon.a (mon.0) 1208 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-10","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:37.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:36 vm00 bash[28005]: audit 2026-03-10T07:28:35.595384+0000 mgr.y (mgr.24407) 135 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch
2026-03-10T07:28:37.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:36 vm00 bash[28005]: audit 2026-03-10T07:28:35.810393+0000 mon.a (mon.0) 1202 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:37.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:36 vm00 bash[28005]: cluster 2026-03-10T07:28:35.889485+0000 mon.a (mon.0) 1203 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:28:37.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:36 vm00 bash[28005]: audit 2026-03-10T07:28:36.024518+0000 mon.a (mon.0) 1204 : audit [INF] from='client.? 192.168.123.100:0/1152044576' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59629-11","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:37.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:36 vm00 bash[28005]: audit 2026-03-10T07:28:36.024554+0000 mon.a (mon.0) 1205 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59637-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:37.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:36 vm00 bash[28005]: cluster 2026-03-10T07:28:36.027825+0000 mon.a (mon.0) 1206 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in
2026-03-10T07:28:37.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:36 vm00 bash[28005]: audit 2026-03-10T07:28:36.065102+0000 mon.b (mon.1) 116 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-11","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:37.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:36 vm00 bash[28005]: audit 2026-03-10T07:28:36.072732+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-11","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:37.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:36 vm00 bash[28005]: audit 2026-03-10T07:28:36.073948+0000 mon.a (mon.0) 1208 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-10","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:37.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:36 vm03 bash[23382]: audit 2026-03-10T07:28:35.595384+0000 mgr.y (mgr.24407) 135 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch
2026-03-10T07:28:37.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:36 vm03 bash[23382]: audit 2026-03-10T07:28:35.810393+0000 mon.a (mon.0) 1202 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:37.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:36 vm03 bash[23382]: cluster 2026-03-10T07:28:35.889485+0000 mon.a (mon.0) 1203 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:28:37.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:36 vm03 bash[23382]: audit 2026-03-10T07:28:36.024518+0000 mon.a (mon.0) 1204 : audit [INF] from='client.? 192.168.123.100:0/1152044576' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm00-59629-11","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:37.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:36 vm03 bash[23382]: audit 2026-03-10T07:28:36.024554+0000 mon.a (mon.0) 1205 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm00-59637-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:37.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:36 vm03 bash[23382]: cluster 2026-03-10T07:28:36.027825+0000 mon.a (mon.0) 1206 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in
2026-03-10T07:28:37.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:36 vm03 bash[23382]: audit 2026-03-10T07:28:36.065102+0000 mon.b (mon.1) 116 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-11","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:37.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:36 vm03 bash[23382]: audit 2026-03-10T07:28:36.072732+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-11","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:37.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:36 vm03 bash[23382]: audit 2026-03-10T07:28:36.073948+0000 mon.a (mon.0) 1208 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-10","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:38.119 INFO:tasks.workunit.client.0.vm00.stdout: misc: Running main() from gmock_main.cc
2026-03-10T07:28:38.119 INFO:tasks.workunit.client.0.vm00.stdout: misc: [==========] Running 12 tests from 1 test suite.
2026-03-10T07:28:38.119 INFO:tasks.workunit.client.0.vm00.stdout: misc: [----------] Global test environment set-up.
2026-03-10T07:28:38.119 INFO:tasks.workunit.client.0.vm00.stdout: misc: [----------] 12 tests from NeoRadosMisc
2026-03-10T07:28:38.119 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.Version
2026-03-10T07:28:38.119 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.Version (1885 ms)
2026-03-10T07:28:38.119 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.WaitOSDMap
2026-03-10T07:28:38.119 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.WaitOSDMap (2166 ms)
2026-03-10T07:28:38.119 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.LongName
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.LongName (3169 ms)
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.LongLocator
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.LongLocator (3325 ms)
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.LongNamespace
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.LongNamespace (2564 ms)
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.LongAttrName
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.LongAttrName (2975 ms)
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.Exec
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.Exec (3091 ms)
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.Operate1
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.Operate1 (3341 ms)
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.Operate2
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.Operate2 (3144 ms)
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.BigObject
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.BigObject (2504 ms)
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.BigAttr
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.BigAttr (2054 ms)
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ RUN ] NeoRadosMisc.WriteSame
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ OK ] NeoRadosMisc.WriteSame (3034 ms)
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [----------] 12 tests from NeoRadosMisc (33252 ms total)
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc:
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [----------] Global test environment tear-down
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [==========] 12 tests from 1 test suite ran. (33254 ms total)
2026-03-10T07:28:38.120 INFO:tasks.workunit.client.0.vm00.stdout: misc: [ PASSED ] 12 tests.
2026-03-10T07:28:38.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:37 vm00 bash[20701]: cluster 2026-03-10T07:28:36.582870+0000 mgr.y (mgr.24407) 136 : cluster [DBG] pgmap v109: 620 pgs: 232 unknown, 388 active+clean; 144 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.5 KiB/s wr, 5 op/s
2026-03-10T07:28:38.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:37 vm00 bash[20701]: audit 2026-03-10T07:28:36.848523+0000 mon.a (mon.0) 1209 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:38.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:37 vm00 bash[20701]: audit 2026-03-10T07:28:37.040295+0000 mon.a (mon.0) 1210 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-11","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:38.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:37 vm00 bash[20701]: audit 2026-03-10T07:28:37.040346+0000 mon.a (mon.0) 1211 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-10","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:38.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:37 vm00 bash[20701]: cluster 2026-03-10T07:28:37.044381+0000 mon.a (mon.0) 1212 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in
2026-03-10T07:28:38.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:37 vm00 bash[28005]: cluster 2026-03-10T07:28:36.582870+0000 mgr.y (mgr.24407) 136 : cluster [DBG] pgmap v109: 620 pgs: 232 unknown, 388 active+clean; 144 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.5 KiB/s wr, 5 op/s
2026-03-10T07:28:38.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:37 vm00 bash[28005]: audit 2026-03-10T07:28:36.848523+0000 mon.a (mon.0) 1209 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:38.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:37 vm00 bash[28005]: audit 2026-03-10T07:28:37.040295+0000 mon.a (mon.0) 1210 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-11","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:38.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:37 vm00 bash[28005]: audit 2026-03-10T07:28:37.040346+0000 mon.a (mon.0) 1211 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-10","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:38.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:37 vm00 bash[28005]: cluster 2026-03-10T07:28:37.044381+0000 mon.a (mon.0) 1212 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in
2026-03-10T07:28:38.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:37 vm03 bash[23382]: cluster 2026-03-10T07:28:36.582870+0000 mgr.y (mgr.24407) 136 : cluster [DBG] pgmap v109: 620 pgs: 232 unknown, 388 active+clean; 144 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.5 KiB/s wr, 5 op/s
2026-03-10T07:28:38.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:37 vm03 bash[23382]: audit 2026-03-10T07:28:36.848523+0000 mon.a (mon.0) 1209 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:38.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:37 vm03 bash[23382]: audit 2026-03-10T07:28:37.040295+0000 mon.a (mon.0) 1210 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-11","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:38.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:37 vm03 bash[23382]: audit 2026-03-10T07:28:37.040346+0000 mon.a (mon.0) 1211 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59837-10","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:38.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:37 vm03 bash[23382]: cluster 2026-03-10T07:28:37.044381+0000 mon.a (mon.0) 1212 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in
2026-03-10T07:28:39.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:38 vm03 bash[23382]: audit 2026-03-10T07:28:37.850323+0000 mon.a (mon.0) 1213 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:39.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:38 vm03 bash[23382]: cluster 2026-03-10T07:28:38.050392+0000 mon.a (mon.0) 1214 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in
2026-03-10T07:28:39.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:38 vm03 bash[23382]: audit 2026-03-10T07:28:38.080558+0000 mon.b (mon.1) 117 : audit [INF] from='client.? 192.168.123.100:0/1250107820' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59629-12","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:39.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:38 vm03 bash[23382]: audit 2026-03-10T07:28:38.097628+0000 mon.a (mon.0) 1215 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59837-10", "tierpool":"test-rados-api-vm00-59837-10-cache", "force_nonempty":""}]: dispatch
2026-03-10T07:28:39.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:38 vm03 bash[23382]: audit 2026-03-10T07:28:38.099428+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59629-12","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:39.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:38 vm03 bash[23382]: audit 2026-03-10T07:28:38.150043+0000 mon.c (mon.2) 136 : audit [INF] from='client.? 192.168.123.100:0/3170777700' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59637-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:39.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:38 vm03 bash[23382]: audit 2026-03-10T07:28:38.150599+0000 mon.a (mon.0) 1217 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59637-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:39.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:38 vm03 bash[23382]: audit 2026-03-10T07:28:38.310375+0000 mon.b (mon.1) 118 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:28:39.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:38 vm03 bash[23382]: audit 2026-03-10T07:28:38.312543+0000 mon.a (mon.0) 1218 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:28:39.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:38 vm00 bash[20701]: audit 2026-03-10T07:28:37.850323+0000 mon.a (mon.0) 1213 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:39.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:38 vm00 bash[20701]: cluster 2026-03-10T07:28:38.050392+0000 mon.a (mon.0) 1214 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in
2026-03-10T07:28:39.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:38 vm00 bash[20701]: audit 2026-03-10T07:28:38.080558+0000 mon.b (mon.1) 117 : audit [INF] from='client.? 192.168.123.100:0/1250107820' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59629-12","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:39.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:38 vm00 bash[20701]: audit 2026-03-10T07:28:38.097628+0000 mon.a (mon.0) 1215 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59837-10", "tierpool":"test-rados-api-vm00-59837-10-cache", "force_nonempty":""}]: dispatch
2026-03-10T07:28:39.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:38 vm00 bash[20701]: audit 2026-03-10T07:28:38.099428+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59629-12","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:39.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:38 vm00 bash[20701]: audit 2026-03-10T07:28:38.150043+0000 mon.c (mon.2) 136 : audit [INF] from='client.? 192.168.123.100:0/3170777700' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59637-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:39.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:38 vm00 bash[20701]: audit 2026-03-10T07:28:38.150599+0000 mon.a (mon.0) 1217 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59637-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:39.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:38 vm00 bash[20701]: audit 2026-03-10T07:28:38.310375+0000 mon.b (mon.1) 118 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:28:39.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:38 vm00 bash[20701]: audit 2026-03-10T07:28:38.312543+0000 mon.a (mon.0) 1218 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:28:39.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:38 vm00 bash[28005]: audit 2026-03-10T07:28:37.850323+0000 mon.a (mon.0) 1213 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:39.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:38 vm00 bash[28005]: cluster 2026-03-10T07:28:38.050392+0000 mon.a (mon.0) 1214 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in
2026-03-10T07:28:39.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:38 vm00 bash[28005]: audit 2026-03-10T07:28:38.080558+0000 mon.b (mon.1) 117 : audit [INF] from='client.? 192.168.123.100:0/1250107820' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59629-12","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:39.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:38 vm00 bash[28005]: audit 2026-03-10T07:28:38.097628+0000 mon.a (mon.0) 1215 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59837-10", "tierpool":"test-rados-api-vm00-59837-10-cache", "force_nonempty":""}]: dispatch
2026-03-10T07:28:39.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:38 vm00 bash[28005]: audit 2026-03-10T07:28:38.099428+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59629-12","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:39.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:38 vm00 bash[28005]: audit 2026-03-10T07:28:38.150043+0000 mon.c (mon.2) 136 : audit [INF] from='client.? 192.168.123.100:0/3170777700' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59637-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:39.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:38 vm00 bash[28005]: audit 2026-03-10T07:28:38.150599+0000 mon.a (mon.0) 1217 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59637-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:39.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:38 vm00 bash[28005]: audit 2026-03-10T07:28:38.310375+0000 mon.b (mon.1) 118 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:28:39.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:38 vm00 bash[28005]: audit 2026-03-10T07:28:38.312543+0000 mon.a (mon.0) 1218 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:28:40.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:40 vm00 bash[20701]: cluster 2026-03-10T07:28:38.583348+0000 mgr.y (mgr.24407) 137 : cluster [DBG] pgmap v112: 588 pgs: 200 unknown, 388 active+clean; 144 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.7 KiB/s wr, 5 op/s
2026-03-10T07:28:40.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:40 vm00 bash[20701]: audit 2026-03-10T07:28:38.851160+0000 mon.a (mon.0) 1219 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:40.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:40 vm00 bash[20701]: audit 2026-03-10T07:28:39.010097+0000 mon.a (mon.0) 1220 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:40 vm00 bash[20701]: audit 2026-03-10T07:28:39.013134+0000 mon.c (mon.2) 137 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:40 vm00 bash[20701]: audit 2026-03-10T07:28:39.067615+0000 mon.a (mon.0) 1221 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59837-10", "tierpool":"test-rados-api-vm00-59837-10-cache", "force_nonempty":""}]': finished
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:40 vm00 bash[20701]: audit 2026-03-10T07:28:39.067666+0000 mon.a (mon.0) 1222 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59629-12","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:40 vm00 bash[20701]: audit 2026-03-10T07:28:39.067693+0000 mon.a (mon.0) 1223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59637-7","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:40 vm00 bash[20701]: audit 2026-03-10T07:28:39.067717+0000 mon.a (mon.0) 1224 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:40 vm00 bash[20701]: audit 2026-03-10T07:28:39.125778+0000 mon.b (mon.1) 119 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-11"}]: dispatch
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:40 vm00 bash[20701]: cluster 2026-03-10T07:28:39.150117+0000 mon.a (mon.0) 1225 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:40 vm00 bash[20701]: audit 2026-03-10T07:28:39.168892+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59837-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:40 vm00 bash[20701]: audit 2026-03-10T07:28:39.180597+0000 mon.a (mon.0) 1227 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-11"}]: dispatch
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:40 vm00 bash[20701]: audit 2026-03-10T07:28:39.851974+0000 mon.a (mon.0) 1228 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:40 vm00 bash[28005]: cluster 2026-03-10T07:28:38.583348+0000 mgr.y (mgr.24407) 137 : cluster [DBG] pgmap v112: 588 pgs: 200 unknown, 388 active+clean; 144 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.7 KiB/s wr, 5 op/s
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:40 vm00 bash[28005]: audit 2026-03-10T07:28:38.851160+0000 mon.a (mon.0) 1219 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:40 vm00 bash[28005]: audit 2026-03-10T07:28:39.010097+0000 mon.a (mon.0) 1220 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:40 vm00 bash[28005]: audit 2026-03-10T07:28:39.013134+0000 mon.c (mon.2) 137 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:40 vm00 bash[28005]: audit 2026-03-10T07:28:39.067615+0000 mon.a (mon.0) 1221 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59837-10", "tierpool":"test-rados-api-vm00-59837-10-cache", "force_nonempty":""}]': finished
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:40 vm00 bash[28005]: audit 2026-03-10T07:28:39.067666+0000 mon.a (mon.0) 1222 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59629-12","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:40 vm00 bash[28005]: audit 2026-03-10T07:28:39.067693+0000 mon.a (mon.0) 1223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59637-7","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:40 vm00 bash[28005]: audit 2026-03-10T07:28:39.067717+0000 mon.a (mon.0) 1224 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:40 vm00 bash[28005]: audit 2026-03-10T07:28:39.125778+0000 mon.b (mon.1) 119 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-11"}]: dispatch
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:40 vm00 bash[28005]: cluster 2026-03-10T07:28:39.150117+0000 mon.a (mon.0) 1225 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:40 vm00 bash[28005]: audit 2026-03-10T07:28:39.168892+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59837-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:40 vm00 bash[28005]: audit 2026-03-10T07:28:39.180597+0000 mon.a (mon.0) 1227 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-11"}]: dispatch
2026-03-10T07:28:40.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:40 vm00 bash[28005]: audit 2026-03-10T07:28:39.851974+0000 mon.a (mon.0) 1228 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:40.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:40 vm03 bash[23382]: cluster 2026-03-10T07:28:38.583348+0000 mgr.y (mgr.24407) 137 : cluster [DBG] pgmap v112: 588 pgs: 200 unknown, 388 active+clean; 144 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.7 KiB/s wr, 5 op/s
2026-03-10T07:28:40.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:40 vm03 bash[23382]: audit 2026-03-10T07:28:38.851160+0000 mon.a (mon.0) 1219 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:40.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:40 vm03 bash[23382]: audit 2026-03-10T07:28:39.010097+0000 mon.a (mon.0) 1220 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:28:40.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:40 vm03 bash[23382]: audit 2026-03-10T07:28:39.013134+0000 mon.c (mon.2) 137 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:28:40.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:40 vm03 bash[23382]: audit 2026-03-10T07:28:39.067615+0000 mon.a (mon.0) 1221 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59837-10", "tierpool":"test-rados-api-vm00-59837-10-cache", "force_nonempty":""}]': finished
2026-03-10T07:28:40.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:40 vm03 bash[23382]: audit 2026-03-10T07:28:39.067666+0000 mon.a (mon.0) 1222 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm00-59629-12","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:40.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:40 vm03 bash[23382]: audit 2026-03-10T07:28:39.067693+0000 mon.a (mon.0) 1223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm00-59637-7","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:40.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:40 vm03 bash[23382]: audit 2026-03-10T07:28:39.067717+0000 mon.a (mon.0) 1224 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:28:40.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:40 vm03 bash[23382]: audit 2026-03-10T07:28:39.125778+0000 mon.b (mon.1) 119 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-11"}]: dispatch
2026-03-10T07:28:40.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:40 vm03 bash[23382]: cluster 2026-03-10T07:28:39.150117+0000 mon.a (mon.0) 1225 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in
2026-03-10T07:28:40.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:40 vm03 bash[23382]: audit 2026-03-10T07:28:39.168892+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59837-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:40.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:40 vm03 bash[23382]: audit 2026-03-10T07:28:39.180597+0000 mon.a (mon.0) 1227 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-11"}]: dispatch
2026-03-10T07:28:40.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:40 vm03 bash[23382]: audit 2026-03-10T07:28:39.851974+0000 mon.a (mon.0) 1228 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:41.384 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:28:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:28:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:28:41.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:41 vm03 bash[23382]: audit 2026-03-10T07:28:40.152799+0000 mon.a (mon.0) 1229 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59837-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:41.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:41 vm03 bash[23382]: audit 2026-03-10T07:28:40.152854+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-11"}]': finished
2026-03-10T07:28:41.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:41 vm03 bash[23382]: cluster 2026-03-10T07:28:40.155725+0000 mon.a (mon.0) 1231 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in
2026-03-10T07:28:41.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:41 vm03 bash[23382]: audit 2026-03-10T07:28:40.187789+0000 mon.b (mon.1) 120 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-11", "mode": "writeback"}]: dispatch
2026-03-10T07:28:41.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:41 vm03 bash[23382]: audit 2026-03-10T07:28:40.192902+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59837-10", "tierpool":"test-rados-api-vm00-59837-10-cache"}]: dispatch
2026-03-10T07:28:41.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:41 vm03 bash[23382]: audit 2026-03-10T07:28:40.202127+0000 mon.a (mon.0) 1233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-11", "mode": "writeback"}]: dispatch
2026-03-10T07:28:41.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:41 vm03 bash[23382]: audit 2026-03-10T07:28:40.852976+0000 mon.a (mon.0) 1234 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:41.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:41 vm03 bash[23382]: cluster 2026-03-10T07:28:40.890400+0000 mon.a (mon.0) 1235 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:28:41.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:41 vm03 bash[23382]: cluster 2026-03-10T07:28:40.891234+0000 mon.a (mon.0) 1236 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:28:41.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:41 vm03 bash[23382]: audit 2026-03-10T07:28:40.896119+0000 mon.a (mon.0) 1237 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59837-10", "tierpool":"test-rados-api-vm00-59837-10-cache"}]': finished
2026-03-10T07:28:41.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:41 vm03 bash[23382]: audit 2026-03-10T07:28:40.896160+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-11", "mode": "writeback"}]': finished
2026-03-10T07:28:41.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:41 vm03 bash[23382]: cluster 2026-03-10T07:28:40.899445+0000 mon.a (mon.0) 1239 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in
2026-03-10T07:28:41.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:41 vm03 bash[23382]: audit 2026-03-10T07:28:40.943720+0000 mon.a (mon.0) 1240 : audit [INF] from='client.? 192.168.123.100:0/1537398115' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm00-59637-8","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:41.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:41 vm03 bash[23382]: audit 2026-03-10T07:28:41.033116+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 192.168.123.100:0/1869650045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59629-13","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:41.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:41 vm03 bash[23382]: audit 2026-03-10T07:28:41.047427+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59629-13","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:41.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: audit 2026-03-10T07:28:40.152799+0000 mon.a (mon.0) 1229 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59837-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:41.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: audit 2026-03-10T07:28:40.152854+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-11"}]': finished
2026-03-10T07:28:41.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: audit 2026-03-10T07:28:40.152854+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-11"}]': finished 2026-03-10T07:28:41.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: cluster 2026-03-10T07:28:40.155725+0000 mon.a (mon.0) 1231 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-10T07:28:41.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: cluster 2026-03-10T07:28:40.155725+0000 mon.a (mon.0) 1231 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-10T07:28:41.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: audit 2026-03-10T07:28:40.187789+0000 mon.b (mon.1) 120 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-11", "mode": "writeback"}]: dispatch 2026-03-10T07:28:41.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: audit 2026-03-10T07:28:40.187789+0000 mon.b (mon.1) 120 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-11", "mode": "writeback"}]: dispatch 2026-03-10T07:28:41.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: audit 2026-03-10T07:28:40.192902+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59837-10", "tierpool":"test-rados-api-vm00-59837-10-cache"}]: dispatch 2026-03-10T07:28:41.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: audit 2026-03-10T07:28:40.192902+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59837-10", "tierpool":"test-rados-api-vm00-59837-10-cache"}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: audit 2026-03-10T07:28:40.202127+0000 mon.a (mon.0) 1233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-11", "mode": "writeback"}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: audit 2026-03-10T07:28:40.202127+0000 mon.a (mon.0) 1233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-11", "mode": "writeback"}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: audit 2026-03-10T07:28:40.852976+0000 mon.a (mon.0) 1234 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: audit 2026-03-10T07:28:40.852976+0000 mon.a (mon.0) 1234 : audit [DBG] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: cluster 2026-03-10T07:28:40.890400+0000 mon.a (mon.0) 1235 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: cluster 2026-03-10T07:28:40.890400+0000 mon.a (mon.0) 1235 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: cluster 2026-03-10T07:28:40.891234+0000 mon.a (mon.0) 1236 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: cluster 2026-03-10T07:28:40.891234+0000 mon.a (mon.0) 1236 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: audit 2026-03-10T07:28:40.896119+0000 mon.a (mon.0) 1237 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59837-10", "tierpool":"test-rados-api-vm00-59837-10-cache"}]': finished 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: audit 2026-03-10T07:28:40.896119+0000 mon.a (mon.0) 1237 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59837-10", "tierpool":"test-rados-api-vm00-59837-10-cache"}]': finished 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: audit 2026-03-10T07:28:40.896160+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-11", "mode": "writeback"}]': finished 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: audit 2026-03-10T07:28:40.896160+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-11", "mode": "writeback"}]': finished 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: cluster 2026-03-10T07:28:40.899445+0000 mon.a (mon.0) 1239 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: cluster 2026-03-10T07:28:40.899445+0000 mon.a (mon.0) 1239 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: audit 2026-03-10T07:28:40.943720+0000 mon.a (mon.0) 1240 : audit [INF] from='client.? 192.168.123.100:0/1537398115' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm00-59637-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: audit 2026-03-10T07:28:40.943720+0000 mon.a (mon.0) 1240 : audit [INF] from='client.? 
192.168.123.100:0/1537398115' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm00-59637-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: audit 2026-03-10T07:28:41.033116+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 192.168.123.100:0/1869650045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59629-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: audit 2026-03-10T07:28:41.033116+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 192.168.123.100:0/1869650045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59629-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: audit 2026-03-10T07:28:41.047427+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59629-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:41 vm00 bash[28005]: audit 2026-03-10T07:28:41.047427+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59629-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:40.152799+0000 mon.a (mon.0) 1229 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59837-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:40.152799+0000 mon.a (mon.0) 1229 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59837-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:40.152854+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-11"}]': finished 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:40.152854+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-11"}]': finished 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: cluster 2026-03-10T07:28:40.155725+0000 mon.a (mon.0) 1231 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: cluster 2026-03-10T07:28:40.155725+0000 mon.a (mon.0) 1231 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:40.187789+0000 mon.b (mon.1) 120 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-11", "mode": "writeback"}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:40.187789+0000 mon.b (mon.1) 120 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-11", "mode": "writeback"}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:40.192902+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59837-10", "tierpool":"test-rados-api-vm00-59837-10-cache"}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:40.192902+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59837-10", "tierpool":"test-rados-api-vm00-59837-10-cache"}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:40.202127+0000 mon.a (mon.0) 1233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-11", "mode": "writeback"}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:40.202127+0000 mon.a (mon.0) 1233 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-11", "mode": "writeback"}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:40.852976+0000 mon.a (mon.0) 1234 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:40.852976+0000 mon.a (mon.0) 1234 : audit [DBG] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: cluster 2026-03-10T07:28:40.890400+0000 mon.a (mon.0) 1235 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: cluster 2026-03-10T07:28:40.890400+0000 mon.a (mon.0) 1235 : cluster [WRN] Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: cluster 2026-03-10T07:28:40.891234+0000 mon.a (mon.0) 1236 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: cluster 2026-03-10T07:28:40.891234+0000 mon.a (mon.0) 1236 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:40.896119+0000 mon.a (mon.0) 1237 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59837-10", "tierpool":"test-rados-api-vm00-59837-10-cache"}]': finished 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:40.896119+0000 mon.a (mon.0) 1237 : audit [INF] from='client.? 192.168.123.100:0/2673877288' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59837-10", "tierpool":"test-rados-api-vm00-59837-10-cache"}]': finished 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:40.896160+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-11", "mode": "writeback"}]': finished 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:40.896160+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-11", "mode": "writeback"}]': finished 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: cluster 2026-03-10T07:28:40.899445+0000 mon.a (mon.0) 1239 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: cluster 2026-03-10T07:28:40.899445+0000 mon.a (mon.0) 1239 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:40.943720+0000 mon.a (mon.0) 1240 : audit [INF] from='client.? 192.168.123.100:0/1537398115' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm00-59637-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:40.943720+0000 mon.a (mon.0) 1240 : audit [INF] from='client.? 
192.168.123.100:0/1537398115' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm00-59637-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:41.033116+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 192.168.123.100:0/1869650045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59629-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:41.033116+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 192.168.123.100:0/1869650045' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59629-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:41.047427+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59629-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:41.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:41 vm00 bash[20701]: audit 2026-03-10T07:28:41.047427+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59629-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:42.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:42 vm00 bash[20701]: cluster 2026-03-10T07:28:40.583905+0000 mgr.y (mgr.24407) 138 : cluster [DBG] pgmap v115: 588 pgs: 64 creating+peering, 524 active+clean; 144 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 259 KiB/s wr, 6 op/s 2026-03-10T07:28:42.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:42 vm00 bash[20701]: cluster 2026-03-10T07:28:40.583905+0000 mgr.y (mgr.24407) 138 : cluster [DBG] pgmap v115: 588 pgs: 64 creating+peering, 524 active+clean; 144 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 259 KiB/s wr, 6 op/s 2026-03-10T07:28:42.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:42 vm00 bash[20701]: audit 2026-03-10T07:28:41.854196+0000 mon.a (mon.0) 1242 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:42.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:42 vm00 bash[20701]: audit 2026-03-10T07:28:41.854196+0000 mon.a (mon.0) 1242 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:42.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:42 vm00 bash[20701]: audit 2026-03-10T07:28:41.905813+0000 mon.a (mon.0) 1243 : audit [INF] from='client.? 192.168.123.100:0/1537398115' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm00-59637-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:42.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:42 vm00 bash[20701]: audit 2026-03-10T07:28:41.905813+0000 mon.a (mon.0) 1243 : audit [INF] from='client.? 
2026-03-10T07:28:42.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:42 vm00 bash[20701]: audit 2026-03-10T07:28:41.905883+0000 mon.a (mon.0) 1244 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59629-13","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:42.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:42 vm00 bash[20701]: cluster 2026-03-10T07:28:41.928569+0000 mon.a (mon.0) 1245 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in
2026-03-10T07:28:42.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:42 vm00 bash[28005]: cluster 2026-03-10T07:28:40.583905+0000 mgr.y (mgr.24407) 138 : cluster [DBG] pgmap v115: 588 pgs: 64 creating+peering, 524 active+clean; 144 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 259 KiB/s wr, 6 op/s
2026-03-10T07:28:42.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:42 vm00 bash[28005]: audit 2026-03-10T07:28:41.854196+0000 mon.a (mon.0) 1242 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:42.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:42 vm00 bash[28005]: audit 2026-03-10T07:28:41.905813+0000 mon.a (mon.0) 1243 : audit [INF] from='client.? 192.168.123.100:0/1537398115' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm00-59637-8","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:42.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:42 vm00 bash[28005]: audit 2026-03-10T07:28:41.905883+0000 mon.a (mon.0) 1244 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59629-13","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:42.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:42 vm00 bash[28005]: cluster 2026-03-10T07:28:41.928569+0000 mon.a (mon.0) 1245 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in
2026-03-10T07:28:42.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:42 vm03 bash[23382]: cluster 2026-03-10T07:28:40.583905+0000 mgr.y (mgr.24407) 138 : cluster [DBG] pgmap v115: 588 pgs: 64 creating+peering, 524 active+clean; 144 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 259 KiB/s wr, 6 op/s
2026-03-10T07:28:42.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:42 vm03 bash[23382]: audit 2026-03-10T07:28:41.854196+0000 mon.a (mon.0) 1242 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:42.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:42 vm03 bash[23382]: audit 2026-03-10T07:28:41.905813+0000 mon.a (mon.0) 1243 : audit [INF] from='client.? 192.168.123.100:0/1537398115' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm00-59637-8","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:42.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:42 vm03 bash[23382]: audit 2026-03-10T07:28:41.905883+0000 mon.a (mon.0) 1244 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm00-59629-13","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:42.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:42 vm03 bash[23382]: cluster 2026-03-10T07:28:41.928569+0000 mon.a (mon.0) 1245 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: Running main() from gmock_main.cc
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [==========] Running 9 tests from 1 test suite.
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [----------] Global test environment set-up.
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [----------] 9 tests from LibRadosPools
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ RUN ] LibRadosPools.PoolList
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ OK ] LibRadosPools.PoolList (3821 ms)
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ RUN ] LibRadosPools.PoolLookup
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ OK ] LibRadosPools.PoolLookup (3080 ms)
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ RUN ] LibRadosPools.PoolLookup2
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ OK ] LibRadosPools.PoolLookup2 (3354 ms)
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ RUN ] LibRadosPools.PoolLookupOtherInstance
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ OK ] LibRadosPools.PoolLookupOtherInstance (2542 ms)
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ RUN ] LibRadosPools.PoolReverseLookupOtherInstance
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ OK ] LibRadosPools.PoolReverseLookupOtherInstance (3033 ms)
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ RUN ] LibRadosPools.PoolDelete
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ OK ] LibRadosPools.PoolDelete (5344 ms)
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ RUN ] LibRadosPools.PoolCreateDelete
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ OK ] LibRadosPools.PoolCreateDelete (5177 ms)
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ RUN ] LibRadosPools.PoolCreateWithCrushRule
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ OK ] LibRadosPools.PoolCreateWithCrushRule (4805 ms)
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ RUN ] LibRadosPools.PoolGetBaseTier
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ OK ] LibRadosPools.PoolGetBaseTier (7633 ms)
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [----------] 9 tests from LibRadosPools (38789 ms total)
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool:
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [----------] Global test environment tear-down
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [==========] 9 tests from 1 test suite ran. (38789 ms total)
2026-03-10T07:28:42.958 INFO:tasks.workunit.client.0.vm00.stdout: api_pool: [ PASSED ] 9 tests.
2026-03-10T07:28:43.265 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:28:42 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:28:43.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:43 vm00 bash[28005]: audit 2026-03-10T07:28:42.531563+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:28:43.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:43 vm00 bash[28005]: audit 2026-03-10T07:28:42.533128+0000 mon.a (mon.0) 1246 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:28:43.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:43 vm00 bash[28005]: cluster 2026-03-10T07:28:42.584414+0000 mgr.y (mgr.24407) 139 : cluster [DBG] pgmap v118: 588 pgs: 128 unknown, 460 active+clean; 144 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 258 KiB/s wr, 4 op/s
2026-03-10T07:28:43.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:43 vm00 bash[28005]: audit 2026-03-10T07:28:42.855165+0000 mon.a (mon.0) 1247 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:43.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:43 vm00 bash[28005]: audit 2026-03-10T07:28:42.910654+0000 mon.a (mon.0) 1248 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:28:43.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:43 vm00 bash[28005]: cluster 2026-03-10T07:28:42.915527+0000 mon.a (mon.0) 1249 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-10T07:28:43.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:43 vm00 bash[28005]: cluster 2026-03-10T07:28:42.915527+0000 mon.a (mon.0) 1249 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-10T07:28:43.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:43 vm00 bash[28005]: audit 2026-03-10T07:28:42.931362+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11"}]: dispatch 2026-03-10T07:28:43.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:43 vm00 bash[28005]: audit 2026-03-10T07:28:42.931362+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11"}]: dispatch 2026-03-10T07:28:43.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:43 vm00 bash[28005]: audit 2026-03-10T07:28:42.949319+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11"}]: dispatch 2026-03-10T07:28:43.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:43 vm00 bash[28005]: audit 2026-03-10T07:28:42.949319+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11"}]: dispatch 2026-03-10T07:28:43.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:43 vm00 bash[28005]: audit 2026-03-10T07:28:42.984906+0000 mgr.y (mgr.24407) 140 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:28:43.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:43 vm00 bash[28005]: audit 2026-03-10T07:28:42.984906+0000 mgr.y (mgr.24407) 140 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:28:43.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:43 vm00 bash[20701]: audit 2026-03-10T07:28:42.531563+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:43.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:43 vm00 bash[20701]: audit 2026-03-10T07:28:42.531563+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:43.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:43 vm00 bash[20701]: audit 2026-03-10T07:28:42.533128+0000 mon.a (mon.0) 1246 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:43.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:43 vm00 bash[20701]: audit 2026-03-10T07:28:42.533128+0000 mon.a (mon.0) 1246 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:43.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:43 vm00 bash[20701]: cluster 2026-03-10T07:28:42.584414+0000 mgr.y (mgr.24407) 139 : cluster [DBG] pgmap v118: 588 pgs: 128 unknown, 460 active+clean; 144 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 258 KiB/s wr, 4 op/s 2026-03-10T07:28:43.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:43 vm00 bash[20701]: cluster 2026-03-10T07:28:42.584414+0000 mgr.y (mgr.24407) 139 : cluster [DBG] pgmap v118: 588 pgs: 128 unknown, 460 active+clean; 144 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 258 KiB/s wr, 4 op/s 2026-03-10T07:28:43.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:43 vm00 bash[20701]: audit 2026-03-10T07:28:42.855165+0000 mon.a (mon.0) 1247 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:43.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:43 vm00 bash[20701]: audit 2026-03-10T07:28:42.855165+0000 mon.a (mon.0) 1247 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:43.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:43 vm00 bash[20701]: audit 2026-03-10T07:28:42.910654+0000 mon.a (mon.0) 1248 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:28:43.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:43 vm00 bash[20701]: audit 2026-03-10T07:28:42.910654+0000 mon.a (mon.0) 1248 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:28:43.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:43 vm00 bash[20701]: cluster 2026-03-10T07:28:42.915527+0000 mon.a (mon.0) 1249 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-10T07:28:43.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:43 vm00 bash[20701]: cluster 2026-03-10T07:28:42.915527+0000 mon.a (mon.0) 1249 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-10T07:28:43.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:43 vm00 bash[20701]: audit 2026-03-10T07:28:42.931362+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11"}]: dispatch 2026-03-10T07:28:43.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:43 vm00 bash[20701]: audit 2026-03-10T07:28:42.931362+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11"}]: dispatch 2026-03-10T07:28:43.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:43 vm00 bash[20701]: audit 2026-03-10T07:28:42.949319+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11"}]: dispatch 2026-03-10T07:28:43.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:43 vm00 bash[20701]: audit 2026-03-10T07:28:42.949319+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11"}]: dispatch 2026-03-10T07:28:43.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:43 vm00 bash[20701]: audit 2026-03-10T07:28:42.984906+0000 mgr.y (mgr.24407) 140 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:28:43.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:43 vm00 bash[20701]: audit 2026-03-10T07:28:42.984906+0000 mgr.y (mgr.24407) 140 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:28:44.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:43 vm03 bash[23382]: audit 2026-03-10T07:28:42.531563+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:44.036 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:43 vm03 bash[23382]: audit 2026-03-10T07:28:42.531563+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:44.036 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:43 vm03 bash[23382]: audit 2026-03-10T07:28:42.533128+0000 mon.a (mon.0) 1246 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:44.036 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:43 vm03 bash[23382]: audit 2026-03-10T07:28:42.533128+0000 mon.a (mon.0) 1246 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:28:44.036 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:43 vm03 bash[23382]: cluster 2026-03-10T07:28:42.584414+0000 mgr.y (mgr.24407) 139 : cluster [DBG] pgmap v118: 588 pgs: 128 unknown, 460 active+clean; 144 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 258 KiB/s wr, 4 op/s 2026-03-10T07:28:44.037 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:43 vm03 bash[23382]: cluster 2026-03-10T07:28:42.584414+0000 mgr.y (mgr.24407) 139 : cluster [DBG] pgmap v118: 588 pgs: 128 unknown, 460 active+clean; 144 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 258 KiB/s wr, 4 op/s 2026-03-10T07:28:44.037 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:43 vm03 bash[23382]: audit 2026-03-10T07:28:42.855165+0000 mon.a (mon.0) 1247 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:44.037 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:43 vm03 bash[23382]: audit 2026-03-10T07:28:42.855165+0000 mon.a (mon.0) 1247 : audit [DBG] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:44.037 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:43 vm03 bash[23382]: audit 2026-03-10T07:28:42.910654+0000 mon.a (mon.0) 1248 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:28:44.037 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:43 vm03 bash[23382]: audit 2026-03-10T07:28:42.910654+0000 mon.a (mon.0) 1248 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:28:44.037 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:43 vm03 bash[23382]: cluster 2026-03-10T07:28:42.915527+0000 mon.a (mon.0) 1249 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-10T07:28:44.037 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:43 vm03 bash[23382]: cluster 2026-03-10T07:28:42.915527+0000 mon.a (mon.0) 1249 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-10T07:28:44.037 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:43 vm03 bash[23382]: audit 2026-03-10T07:28:42.931362+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11"}]: dispatch 2026-03-10T07:28:44.037 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:43 vm03 bash[23382]: audit 2026-03-10T07:28:42.931362+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11"}]: dispatch 2026-03-10T07:28:44.037 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:43 vm03 bash[23382]: audit 2026-03-10T07:28:42.949319+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11"}]: dispatch 2026-03-10T07:28:44.037 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:43 vm03 bash[23382]: audit 2026-03-10T07:28:42.949319+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11"}]: dispatch 2026-03-10T07:28:44.037 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:43 vm03 bash[23382]: audit 2026-03-10T07:28:42.984906+0000 mgr.y (mgr.24407) 140 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:28:44.037 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:43 vm03 bash[23382]: audit 2026-03-10T07:28:42.984906+0000 mgr.y (mgr.24407) 140 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:28:44.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:44 vm00 bash[20701]: audit 2026-03-10T07:28:43.856362+0000 mon.a (mon.0) 1251 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:44.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:44 vm00 bash[20701]: audit 2026-03-10T07:28:43.856362+0000 mon.a (mon.0) 1251 : audit [DBG] from='client.? 
2026-03-10T07:28:44.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:44 vm00 bash[20701]: cluster 2026-03-10T07:28:43.911287+0000 mon.a (mon.0) 1252 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:28:44.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:44 vm00 bash[20701]: audit 2026-03-10T07:28:44.065981+0000 mon.a (mon.0) 1253 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11"}]': finished
2026-03-10T07:28:44.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:44 vm00 bash[20701]: cluster 2026-03-10T07:28:44.071401+0000 mon.a (mon.0) 1254 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in
2026-03-10T07:28:44.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:44 vm00 bash[20701]: audit 2026-03-10T07:28:44.127023+0000 mon.a (mon.0) 1255 : audit [INF] from='client.? 192.168.123.100:0/3327002409' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59637-9","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:44.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:44 vm00 bash[20701]: audit 2026-03-10T07:28:44.130785+0000 mon.a (mon.0) 1256 : audit [INF] from='client.? 192.168.123.100:0/2230726992' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59629-14","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:44.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:44 vm00 bash[28005]: audit 2026-03-10T07:28:43.856362+0000 mon.a (mon.0) 1251 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:44.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:44 vm00 bash[28005]: cluster 2026-03-10T07:28:43.911287+0000 mon.a (mon.0) 1252 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:28:44.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:44 vm00 bash[28005]: audit 2026-03-10T07:28:44.065981+0000 mon.a (mon.0) 1253 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11"}]': finished
2026-03-10T07:28:44.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:44 vm00 bash[28005]: cluster 2026-03-10T07:28:44.071401+0000 mon.a (mon.0) 1254 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in
2026-03-10T07:28:44.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:44 vm00 bash[28005]: audit 2026-03-10T07:28:44.127023+0000 mon.a (mon.0) 1255 : audit [INF] from='client.? 192.168.123.100:0/3327002409' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59637-9","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:44.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:44 vm00 bash[28005]: audit 2026-03-10T07:28:44.130785+0000 mon.a (mon.0) 1256 : audit [INF] from='client.? 192.168.123.100:0/2230726992' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59629-14","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:45.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:44 vm03 bash[23382]: audit 2026-03-10T07:28:43.856362+0000 mon.a (mon.0) 1251 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:45.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:44 vm03 bash[23382]: cluster 2026-03-10T07:28:43.911287+0000 mon.a (mon.0) 1252 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:28:45.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:44 vm03 bash[23382]: audit 2026-03-10T07:28:44.065981+0000 mon.a (mon.0) 1253 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-11"}]': finished
2026-03-10T07:28:45.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:44 vm03 bash[23382]: cluster 2026-03-10T07:28:44.071401+0000 mon.a (mon.0) 1254 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in
2026-03-10T07:28:45.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:44 vm03 bash[23382]: audit 2026-03-10T07:28:44.127023+0000 mon.a (mon.0) 1255 : audit [INF] from='client.? 192.168.123.100:0/3327002409' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59637-9","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:45.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:44 vm03 bash[23382]: audit 2026-03-10T07:28:44.130785+0000 mon.a (mon.0) 1256 : audit [INF] from='client.? 192.168.123.100:0/2230726992' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59629-14","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:45.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:45 vm00 bash[20701]: cluster 2026-03-10T07:28:44.584863+0000 mgr.y (mgr.24407) 141 : cluster [DBG] pgmap v121: 524 pgs: 96 unknown, 428 active+clean; 144 MiB data, 924 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:28:45.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:45 vm00 bash[20701]: audit 2026-03-10T07:28:44.858009+0000 mon.a (mon.0) 1257 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:45.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:45 vm00 bash[20701]: audit 2026-03-10T07:28:45.072531+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 192.168.123.100:0/3327002409' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59637-9","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:45.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:45 vm00 bash[20701]: audit 2026-03-10T07:28:45.072560+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? 192.168.123.100:0/2230726992' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59629-14","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:45.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:45 vm00 bash[20701]: audit 2026-03-10T07:28:45.072560+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? 
192.168.123.100:0/2230726992' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59629-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:45.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:45 vm00 bash[20701]: cluster 2026-03-10T07:28:45.084184+0000 mon.a (mon.0) 1260 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T07:28:45.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:45 vm00 bash[20701]: cluster 2026-03-10T07:28:45.084184+0000 mon.a (mon.0) 1260 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T07:28:45.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:45 vm00 bash[20701]: audit 2026-03-10T07:28:45.145857+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:45.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:45 vm00 bash[20701]: audit 2026-03-10T07:28:45.145857+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:45.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:45 vm00 bash[20701]: audit 2026-03-10T07:28:45.161869+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:45.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:45 vm00 bash[20701]: audit 2026-03-10T07:28:45.161869+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:45.893 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:45 vm00 bash[28005]: cluster 2026-03-10T07:28:44.584863+0000 mgr.y (mgr.24407) 141 : cluster [DBG] pgmap v121: 524 pgs: 96 unknown, 428 active+clean; 144 MiB data, 924 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:28:45.893 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:45 vm00 bash[28005]: cluster 2026-03-10T07:28:44.584863+0000 mgr.y (mgr.24407) 141 : cluster [DBG] pgmap v121: 524 pgs: 96 unknown, 428 active+clean; 144 MiB data, 924 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:28:45.893 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:45 vm00 bash[28005]: audit 2026-03-10T07:28:44.858009+0000 mon.a (mon.0) 1257 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:45.893 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:45 vm00 bash[28005]: audit 2026-03-10T07:28:44.858009+0000 mon.a (mon.0) 1257 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:45.893 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:45 vm00 bash[28005]: audit 2026-03-10T07:28:45.072531+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 
192.168.123.100:0/3327002409' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59637-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:45.893 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:45 vm00 bash[28005]: audit 2026-03-10T07:28:45.072531+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 192.168.123.100:0/3327002409' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59637-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:45.893 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:45 vm00 bash[28005]: audit 2026-03-10T07:28:45.072560+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? 192.168.123.100:0/2230726992' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59629-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:45.893 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:45 vm00 bash[28005]: audit 2026-03-10T07:28:45.072560+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? 192.168.123.100:0/2230726992' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59629-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:45.893 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:45 vm00 bash[28005]: cluster 2026-03-10T07:28:45.084184+0000 mon.a (mon.0) 1260 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T07:28:45.893 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:45 vm00 bash[28005]: cluster 2026-03-10T07:28:45.084184+0000 mon.a (mon.0) 1260 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T07:28:45.893 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:45 vm00 bash[28005]: audit 2026-03-10T07:28:45.145857+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:45.893 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:45 vm00 bash[28005]: audit 2026-03-10T07:28:45.145857+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:45.893 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:45 vm00 bash[28005]: audit 2026-03-10T07:28:45.161869+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:45.893 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:45 vm00 bash[28005]: audit 2026-03-10T07:28:45.161869+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:46.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:45 vm03 bash[23382]: cluster 2026-03-10T07:28:44.584863+0000 mgr.y (mgr.24407) 141 : cluster [DBG] pgmap v121: 524 pgs: 96 unknown, 428 active+clean; 144 MiB data, 924 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:28:46.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:45 vm03 bash[23382]: cluster 2026-03-10T07:28:44.584863+0000 mgr.y (mgr.24407) 141 : cluster [DBG] pgmap v121: 524 pgs: 96 unknown, 428 active+clean; 144 MiB data, 924 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:28:46.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:45 vm03 bash[23382]: audit 2026-03-10T07:28:44.858009+0000 mon.a (mon.0) 1257 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:46.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:45 vm03 bash[23382]: audit 2026-03-10T07:28:44.858009+0000 mon.a (mon.0) 1257 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:46.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:45 vm03 bash[23382]: audit 2026-03-10T07:28:45.072531+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 192.168.123.100:0/3327002409' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59637-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:46.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:45 vm03 bash[23382]: audit 2026-03-10T07:28:45.072531+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 192.168.123.100:0/3327002409' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59637-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:46.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:45 vm03 bash[23382]: audit 2026-03-10T07:28:45.072560+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? 192.168.123.100:0/2230726992' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59629-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:46.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:45 vm03 bash[23382]: audit 2026-03-10T07:28:45.072560+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? 192.168.123.100:0/2230726992' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm00-59629-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:46.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:45 vm03 bash[23382]: cluster 2026-03-10T07:28:45.084184+0000 mon.a (mon.0) 1260 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T07:28:46.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:45 vm03 bash[23382]: cluster 2026-03-10T07:28:45.084184+0000 mon.a (mon.0) 1260 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T07:28:46.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:45 vm03 bash[23382]: audit 2026-03-10T07:28:45.145857+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 
192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:46.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:45 vm03 bash[23382]: audit 2026-03-10T07:28:45.145857+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:46.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:45 vm03 bash[23382]: audit 2026-03-10T07:28:45.161869+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:46.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:45 vm03 bash[23382]: audit 2026-03-10T07:28:45.161869+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:46.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:46 vm00 bash[20701]: audit 2026-03-10T07:28:45.859476+0000 mon.a (mon.0) 1262 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:46.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:46 vm00 bash[20701]: audit 2026-03-10T07:28:45.859476+0000 mon.a (mon.0) 1262 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:46.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:46 vm00 bash[20701]: cluster 2026-03-10T07:28:45.893170+0000 mon.a (mon.0) 1263 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:46.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:46 vm00 bash[20701]: cluster 2026-03-10T07:28:45.893170+0000 mon.a (mon.0) 1263 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:46.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:46 vm00 bash[20701]: audit 2026-03-10T07:28:46.091444+0000 mon.a (mon.0) 1264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]': finished 2026-03-10T07:28:46.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:46 vm00 bash[20701]: audit 2026-03-10T07:28:46.091444+0000 mon.a (mon.0) 1264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]': finished 2026-03-10T07:28:46.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:46 vm00 bash[20701]: audit 2026-03-10T07:28:46.106122+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:46.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:46 vm00 bash[20701]: audit 2026-03-10T07:28:46.106122+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 
192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:46.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:46 vm00 bash[20701]: cluster 2026-03-10T07:28:46.108014+0000 mon.a (mon.0) 1265 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T07:28:46.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:46 vm00 bash[20701]: cluster 2026-03-10T07:28:46.108014+0000 mon.a (mon.0) 1265 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T07:28:46.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:46 vm00 bash[20701]: audit 2026-03-10T07:28:46.111087+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:46.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:46 vm00 bash[20701]: audit 2026-03-10T07:28:46.111087+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:46.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:46 vm00 bash[20701]: audit 2026-03-10T07:28:46.137841+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:46.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:46 vm00 bash[20701]: audit 2026-03-10T07:28:46.137841+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:46.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:46 vm00 bash[20701]: audit 2026-03-10T07:28:46.147646+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:46.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:46 vm00 bash[20701]: audit 2026-03-10T07:28:46.147646+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:46.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:46 vm00 bash[28005]: audit 2026-03-10T07:28:45.859476+0000 mon.a (mon.0) 1262 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:46.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:46 vm00 bash[28005]: audit 2026-03-10T07:28:45.859476+0000 mon.a (mon.0) 1262 : audit [DBG] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:46.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:46 vm00 bash[28005]: cluster 2026-03-10T07:28:45.893170+0000 mon.a (mon.0) 1263 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:46.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:46 vm00 bash[28005]: cluster 2026-03-10T07:28:45.893170+0000 mon.a (mon.0) 1263 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:46.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:46 vm00 bash[28005]: audit 2026-03-10T07:28:46.091444+0000 mon.a (mon.0) 1264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]': finished 2026-03-10T07:28:46.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:46 vm00 bash[28005]: audit 2026-03-10T07:28:46.091444+0000 mon.a (mon.0) 1264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]': finished 2026-03-10T07:28:46.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:46 vm00 bash[28005]: audit 2026-03-10T07:28:46.106122+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:46.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:46 vm00 bash[28005]: audit 2026-03-10T07:28:46.106122+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:46.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:46 vm00 bash[28005]: cluster 2026-03-10T07:28:46.108014+0000 mon.a (mon.0) 1265 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T07:28:46.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:46 vm00 bash[28005]: cluster 2026-03-10T07:28:46.108014+0000 mon.a (mon.0) 1265 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T07:28:46.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:46 vm00 bash[28005]: audit 2026-03-10T07:28:46.111087+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:46.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:46 vm00 bash[28005]: audit 2026-03-10T07:28:46.111087+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:46.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:46 vm00 bash[28005]: audit 2026-03-10T07:28:46.137841+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:46.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:46 vm00 bash[28005]: audit 2026-03-10T07:28:46.137841+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:46.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:46 vm00 bash[28005]: audit 2026-03-10T07:28:46.147646+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:46.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:46 vm00 bash[28005]: audit 2026-03-10T07:28:46.147646+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:47.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:46 vm03 bash[23382]: audit 2026-03-10T07:28:45.859476+0000 mon.a (mon.0) 1262 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:47.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:46 vm03 bash[23382]: audit 2026-03-10T07:28:45.859476+0000 mon.a (mon.0) 1262 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:47.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:46 vm03 bash[23382]: cluster 2026-03-10T07:28:45.893170+0000 mon.a (mon.0) 1263 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:47.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:46 vm03 bash[23382]: cluster 2026-03-10T07:28:45.893170+0000 mon.a (mon.0) 1263 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:47.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:46 vm03 bash[23382]: audit 2026-03-10T07:28:46.091444+0000 mon.a (mon.0) 1264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]': finished 2026-03-10T07:28:47.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:46 vm03 bash[23382]: audit 2026-03-10T07:28:46.091444+0000 mon.a (mon.0) 1264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm00-59879-10"}]': finished 2026-03-10T07:28:47.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:46 vm03 bash[23382]: audit 2026-03-10T07:28:46.106122+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:47.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:46 vm03 bash[23382]: audit 2026-03-10T07:28:46.106122+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 
192.168.123.100:0/3726154883' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:47.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:46 vm03 bash[23382]: cluster 2026-03-10T07:28:46.108014+0000 mon.a (mon.0) 1265 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T07:28:47.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:46 vm03 bash[23382]: cluster 2026-03-10T07:28:46.108014+0000 mon.a (mon.0) 1265 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T07:28:47.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:46 vm03 bash[23382]: audit 2026-03-10T07:28:46.111087+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:47.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:46 vm03 bash[23382]: audit 2026-03-10T07:28:46.111087+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]: dispatch 2026-03-10T07:28:47.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:46 vm03 bash[23382]: audit 2026-03-10T07:28:46.137841+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:47.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:46 vm03 bash[23382]: audit 2026-03-10T07:28:46.137841+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:47.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:46 vm03 bash[23382]: audit 2026-03-10T07:28:46.147646+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:47.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:46 vm03 bash[23382]: audit 2026-03-10T07:28:46.147646+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:47.185 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: Running main() from gmock_main.cc 2026-03-10T07:28:47.185 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [==========] Running 14 tests from 1 test suite. 2026-03-10T07:28:47.185 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [----------] Global test environment set-up. 
2026-03-10T07:28:47.185 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [----------] 14 tests from NeoRadosReadOps
2026-03-10T07:28:47.185 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.SetOpFlags
2026-03-10T07:28:47.185 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.SetOpFlags (3055 ms)
2026-03-10T07:28:47.185 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.AssertExists
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.AssertExists (3114 ms)
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.AssertVersion
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.AssertVersion (3349 ms)
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.CmpXattr
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.CmpXattr (2517 ms)
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.Read
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.Read (3018 ms)
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.Checksum
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.Checksum (3150 ms)
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.RWOrderedRead
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.RWOrderedRead (3290 ms)
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.ShortRead
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.ShortRead (3161 ms)
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.Exec
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.Exec (2416 ms)
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.Stat
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.Stat (3146 ms)
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.Omap
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.Omap (3062 ms)
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.OmapNuls
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.OmapNuls (2872 ms)
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.GetXattrs
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.GetXattrs (3106 ms)
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ RUN ] NeoRadosReadOps.CmpExt
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ OK ] NeoRadosReadOps.CmpExt (3045 ms)
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [----------] 14 tests from NeoRadosReadOps (42301 ms total)
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations:
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [----------] Global test environment tear-down
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [==========] 14 tests from 1 test suite ran. (42301 ms total)
2026-03-10T07:28:47.186 INFO:tasks.workunit.client.0.vm00.stdout: read_operations: [ PASSED ] 14 tests.
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: Running main() from gmock_main.cc
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [==========] Running 14 tests from 1 test suite.
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [----------] Global test environment set-up.
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [----------] 14 tests from NeoRadosIo
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.Limits
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.Limits (3103 ms)
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.SimpleWrite
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.SimpleWrite (3188 ms)
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.ReadOp
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.ReadOp (3325 ms)
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.SparseRead
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.SparseRead (2538 ms)
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.RoundTrip
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.RoundTrip (3033 ms)
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.ReadIntoBuufferlist
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.ReadIntoBuufferlist (3108 ms)
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.OverlappingWriteRoundTrip
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.OverlappingWriteRoundTrip (3301 ms)
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.WriteFullRoundTrip
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.WriteFullRoundTrip (3113 ms)
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.AppendRoundTrip
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.AppendRoundTrip (2461 ms)
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.Trunc
2026-03-10T07:28:47.198 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.Trunc (3144 ms)
2026-03-10T07:28:47.199 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.Remove
2026-03-10T07:28:47.199 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.Remove (3007 ms)
2026-03-10T07:28:47.199 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.XattrsRoundTrip
2026-03-10T07:28:47.199 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.XattrsRoundTrip (2854 ms)
2026-03-10T07:28:47.199 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.RmXattr
2026-03-10T07:28:47.199 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.RmXattr (3174 ms)
2026-03-10T07:28:47.199 INFO:tasks.workunit.client.0.vm00.stdout: io: [ RUN ] NeoRadosIo.GetXattrs
2026-03-10T07:28:47.199 INFO:tasks.workunit.client.0.vm00.stdout: io: [ OK ] NeoRadosIo.GetXattrs (3050 ms)
2026-03-10T07:28:47.199 INFO:tasks.workunit.client.0.vm00.stdout: io: [----------] 14 tests from NeoRadosIo (42399 ms total)
2026-03-10T07:28:47.199 INFO:tasks.workunit.client.0.vm00.stdout: io:
2026-03-10T07:28:47.199 INFO:tasks.workunit.client.0.vm00.stdout: io: [----------] Global test environment tear-down
2026-03-10T07:28:47.199 INFO:tasks.workunit.client.0.vm00.stdout: io: [==========] 14 tests from 1 test suite ran. (42399 ms total)
2026-03-10T07:28:47.199 INFO:tasks.workunit.client.0.vm00.stdout: io: [ PASSED ] 14 tests.
2026-03-10T07:28:47.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: cluster 2026-03-10T07:28:46.585322+0000 mgr.y (mgr.24407) 142 : cluster [DBG] pgmap v124: 516 pgs: 96 unknown, 4 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 415 active+clean; 144 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s
2026-03-10T07:28:47.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:46.616491+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.0"}]: dispatch
2026-03-10T07:28:47.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:46.616642+0000 mgr.y (mgr.24407) 143 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.0"}]: dispatch
2026-03-10T07:28:47.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:46.621384+0000 mon.a (mon.0) 1269 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.1"}]: dispatch
2026-03-10T07:28:47.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:46.625553+0000 mgr.y (mgr.24407) 144 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.1"}]: dispatch
2026-03-10T07:28:47.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:46.626976+0000 mon.a (mon.0) 1270 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.2"}]: dispatch
2026-03-10T07:28:47.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:46.627120+0000 mgr.y (mgr.24407) 145 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.2"}]: dispatch
2026-03-10T07:28:47.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:46.627750+0000 mon.a (mon.0) 1271 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.3"}]: dispatch
2026-03-10T07:28:47.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:46.627855+0000 mgr.y (mgr.24407) 146 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.3"}]: dispatch
2026-03-10T07:28:47.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:46.629213+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.4"}]: dispatch
2026-03-10T07:28:47.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:46.629310+0000 mgr.y (mgr.24407) 147 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.4"}]: dispatch
2026-03-10T07:28:47.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:46.630183+0000 mon.a (mon.0) 1273 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.5"}]: dispatch
2026-03-10T07:28:47.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:46.630281+0000 mgr.y (mgr.24407) 148 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.5"}]: dispatch
2026-03-10T07:28:47.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:46.631244+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.6"}]: dispatch
2026-03-10T07:28:47.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:46.631334+0000 mgr.y (mgr.24407) 149 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.6"}]: dispatch
2026-03-10T07:28:47.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:46.632099+0000 mon.a (mon.0) 1275 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.7"}]: dispatch
2026-03-10T07:28:47.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:46.632191+0000 mgr.y (mgr.24407) 150 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.7"}]: dispatch
2026-03-10T07:28:47.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:46.637712+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.8"}]: dispatch
2026-03-10T07:28:47.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:46.637850+0000 mgr.y (mgr.24407) 151 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.8"}]: dispatch
2026-03-10T07:28:47.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:46.638925+0000 mon.a (mon.0) 1277 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.9"}]: dispatch
2026-03-10T07:28:47.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:46.639031+0000 mgr.y (mgr.24407) 152 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.9"}]: dispatch
2026-03-10T07:28:47.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:46.860412+0000 mon.a (mon.0) 1278 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:47.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:47.110880+0000 mon.a (mon.0) 1279 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]': finished
2026-03-10T07:28:47.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:47.110920+0000 mon.a (mon.0) 1280 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-13","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:47.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: cluster 2026-03-10T07:28:47.119323+0000 mon.a (mon.0) 1281 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in
2026-03-10T07:28:47.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:47.187712+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 192.168.123.100:0/155652484' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59637-10","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:47.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: audit 2026-03-10T07:28:47.189266+0000 mon.a (mon.0) 1282 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59637-10","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:47.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: cluster 2026-03-10T07:28:47.354543+0000 osd.5 (osd.5) 5 : cluster [DBG] 11.7 deep-scrub starts
2026-03-10T07:28:47.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: cluster 2026-03-10T07:28:47.361724+0000 osd.5 (osd.5) 6 : cluster [DBG] 11.7 deep-scrub ok
2026-03-10T07:28:47.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: cluster 2026-03-10T07:28:47.376876+0000 osd.0 (osd.0) 5 : cluster [DBG] 11.6 deep-scrub starts
2026-03-10T07:28:47.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:47 vm00 bash[20701]: cluster 2026-03-10T07:28:47.388586+0000 osd.0 (osd.0) 6 : cluster [DBG] 11.6 deep-scrub ok
2026-03-10T07:28:47.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: cluster 2026-03-10T07:28:46.585322+0000 mgr.y (mgr.24407) 142 : cluster [DBG] pgmap v124: 516 pgs: 96 unknown, 4 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 415 active+clean; 144 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s
2026-03-10T07:28:47.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.616491+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.0"}]: dispatch
2026-03-10T07:28:47.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.616642+0000 mgr.y (mgr.24407) 143 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.0"}]: dispatch
2026-03-10T07:28:47.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.621384+0000 mon.a (mon.0) 1269 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.1"}]: dispatch
2026-03-10T07:28:47.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.625553+0000 mgr.y (mgr.24407) 144 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.1"}]: dispatch
2026-03-10T07:28:47.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.626976+0000 mon.a (mon.0) 1270 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.2"}]: dispatch
2026-03-10T07:28:47.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.627120+0000 mgr.y (mgr.24407) 145 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.2"}]: dispatch
2026-03-10T07:28:47.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.627750+0000 mon.a (mon.0) 1271 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.3"}]: dispatch
2026-03-10T07:28:47.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.627855+0000 mgr.y (mgr.24407) 146 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.3"}]: dispatch
2026-03-10T07:28:47.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.629213+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.4"}]: dispatch
2026-03-10T07:28:47.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.629310+0000 mgr.y (mgr.24407) 147 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.4"}]: dispatch
2026-03-10T07:28:47.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.630183+0000 mon.a (mon.0) 1273 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.5"}]: dispatch
2026-03-10T07:28:47.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.630281+0000 mgr.y (mgr.24407) 148 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.5"}]: dispatch
2026-03-10T07:28:47.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.631244+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.6"}]: dispatch
2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.631334+0000 mgr.y (mgr.24407) 149 : audit [DBG] from='mon.0 -' entity='mon.'
cmd=[{"prefix": "pg deep-scrub", "pgid": "11.6"}]: dispatch 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.632099+0000 mon.a (mon.0) 1275 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.7"}]: dispatch 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.632099+0000 mon.a (mon.0) 1275 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.7"}]: dispatch 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.632191+0000 mgr.y (mgr.24407) 150 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.7"}]: dispatch 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.632191+0000 mgr.y (mgr.24407) 150 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.7"}]: dispatch 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.637712+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.8"}]: dispatch 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.637712+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.8"}]: dispatch 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.637850+0000 mgr.y (mgr.24407) 151 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.8"}]: dispatch 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.637850+0000 mgr.y (mgr.24407) 151 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.8"}]: dispatch 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.638925+0000 mon.a (mon.0) 1277 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.9"}]: dispatch 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.638925+0000 mon.a (mon.0) 1277 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.9"}]: dispatch 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.639031+0000 mgr.y (mgr.24407) 152 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.9"}]: dispatch 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.639031+0000 mgr.y (mgr.24407) 152 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "11.9"}]: dispatch 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.860412+0000 mon.a (mon.0) 1278 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:46.860412+0000 mon.a (mon.0) 1278 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:47.110880+0000 mon.a (mon.0) 1279 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]': finished 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:47.110880+0000 mon.a (mon.0) 1279 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]': finished 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:47.110920+0000 mon.a (mon.0) 1280 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:47.110920+0000 mon.a (mon.0) 1280 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: cluster 2026-03-10T07:28:47.119323+0000 mon.a (mon.0) 1281 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: cluster 2026-03-10T07:28:47.119323+0000 mon.a (mon.0) 1281 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:47.187712+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 192.168.123.100:0/155652484' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59637-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:47.187712+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 192.168.123.100:0/155652484' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59637-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:47.189266+0000 mon.a (mon.0) 1282 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59637-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: audit 2026-03-10T07:28:47.189266+0000 mon.a (mon.0) 1282 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59637-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: cluster 2026-03-10T07:28:47.354543+0000 osd.5 (osd.5) 5 : cluster [DBG] 11.7 deep-scrub starts 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: cluster 2026-03-10T07:28:47.354543+0000 osd.5 (osd.5) 5 : cluster [DBG] 11.7 deep-scrub starts 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: cluster 2026-03-10T07:28:47.361724+0000 osd.5 (osd.5) 6 : cluster [DBG] 11.7 deep-scrub ok 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: cluster 2026-03-10T07:28:47.361724+0000 osd.5 (osd.5) 6 : cluster [DBG] 11.7 deep-scrub ok 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: cluster 2026-03-10T07:28:47.376876+0000 osd.0 (osd.0) 5 : cluster [DBG] 11.6 deep-scrub starts 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: cluster 2026-03-10T07:28:47.376876+0000 osd.0 (osd.0) 5 : cluster [DBG] 11.6 deep-scrub starts 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: cluster 2026-03-10T07:28:47.388586+0000 osd.0 (osd.0) 6 : cluster [DBG] 11.6 deep-scrub ok 2026-03-10T07:28:47.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:47 vm00 bash[28005]: cluster 2026-03-10T07:28:47.388586+0000 osd.0 (osd.0) 6 : cluster [DBG] 11.6 deep-scrub ok 2026-03-10T07:28:48.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: cluster 2026-03-10T07:28:46.585322+0000 mgr.y (mgr.24407) 142 : cluster [DBG] pgmap v124: 516 pgs: 96 unknown, 4 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 415 active+clean; 144 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: cluster 2026-03-10T07:28:46.585322+0000 mgr.y (mgr.24407) 142 : cluster [DBG] pgmap v124: 516 pgs: 96 unknown, 4 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 415 active+clean; 144 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.616491+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.0"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.616491+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? 
192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.0"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.616642+0000 mgr.y (mgr.24407) 143 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.0"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.616642+0000 mgr.y (mgr.24407) 143 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.0"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.621384+0000 mon.a (mon.0) 1269 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.1"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.621384+0000 mon.a (mon.0) 1269 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.1"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.625553+0000 mgr.y (mgr.24407) 144 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.1"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.625553+0000 mgr.y (mgr.24407) 144 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.1"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.626976+0000 mon.a (mon.0) 1270 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.2"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.626976+0000 mon.a (mon.0) 1270 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.2"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.627120+0000 mgr.y (mgr.24407) 145 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.2"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.627120+0000 mgr.y (mgr.24407) 145 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.2"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.627750+0000 mon.a (mon.0) 1271 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.3"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.627750+0000 mon.a (mon.0) 1271 : audit [INF] from='client.? 
192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.3"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.627855+0000 mgr.y (mgr.24407) 146 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.3"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.627855+0000 mgr.y (mgr.24407) 146 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.3"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.629213+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.4"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.629213+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.4"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.629310+0000 mgr.y (mgr.24407) 147 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.4"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.629310+0000 mgr.y (mgr.24407) 147 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.4"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.630183+0000 mon.a (mon.0) 1273 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.5"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.630183+0000 mon.a (mon.0) 1273 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.5"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.630281+0000 mgr.y (mgr.24407) 148 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.5"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.630281+0000 mgr.y (mgr.24407) 148 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.5"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.631244+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.6"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.631244+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? 
192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.6"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.631334+0000 mgr.y (mgr.24407) 149 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.6"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.631334+0000 mgr.y (mgr.24407) 149 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.6"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.632099+0000 mon.a (mon.0) 1275 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.7"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.632099+0000 mon.a (mon.0) 1275 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.7"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.632191+0000 mgr.y (mgr.24407) 150 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.7"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.632191+0000 mgr.y (mgr.24407) 150 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.7"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.637712+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.8"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.637712+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.8"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.637850+0000 mgr.y (mgr.24407) 151 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.8"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.637850+0000 mgr.y (mgr.24407) 151 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.8"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.638925+0000 mon.a (mon.0) 1277 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.9"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.638925+0000 mon.a (mon.0) 1277 : audit [INF] from='client.? 
192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.9"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.639031+0000 mgr.y (mgr.24407) 152 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.9"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.639031+0000 mgr.y (mgr.24407) 152 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "11.9"}]: dispatch 2026-03-10T07:28:48.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.860412+0000 mon.a (mon.0) 1278 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:48.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:46.860412+0000 mon.a (mon.0) 1278 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:48.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:47.110880+0000 mon.a (mon.0) 1279 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]': finished 2026-03-10T07:28:48.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:47.110880+0000 mon.a (mon.0) 1279 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm00-59879-10"}]': finished 2026-03-10T07:28:48.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:47.110920+0000 mon.a (mon.0) 1280 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:48.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:47.110920+0000 mon.a (mon.0) 1280 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:48.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: cluster 2026-03-10T07:28:47.119323+0000 mon.a (mon.0) 1281 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T07:28:48.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: cluster 2026-03-10T07:28:47.119323+0000 mon.a (mon.0) 1281 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T07:28:48.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:47.187712+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 192.168.123.100:0/155652484' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59637-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:48.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:47.187712+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 
192.168.123.100:0/155652484' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59637-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:48.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:47.189266+0000 mon.a (mon.0) 1282 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59637-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:48.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: audit 2026-03-10T07:28:47.189266+0000 mon.a (mon.0) 1282 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59637-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:48.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: cluster 2026-03-10T07:28:47.354543+0000 osd.5 (osd.5) 5 : cluster [DBG] 11.7 deep-scrub starts 2026-03-10T07:28:48.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: cluster 2026-03-10T07:28:47.354543+0000 osd.5 (osd.5) 5 : cluster [DBG] 11.7 deep-scrub starts 2026-03-10T07:28:48.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: cluster 2026-03-10T07:28:47.361724+0000 osd.5 (osd.5) 6 : cluster [DBG] 11.7 deep-scrub ok 2026-03-10T07:28:48.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: cluster 2026-03-10T07:28:47.361724+0000 osd.5 (osd.5) 6 : cluster [DBG] 11.7 deep-scrub ok 2026-03-10T07:28:48.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: cluster 2026-03-10T07:28:47.376876+0000 osd.0 (osd.0) 5 : cluster [DBG] 11.6 deep-scrub starts 2026-03-10T07:28:48.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: cluster 2026-03-10T07:28:47.376876+0000 osd.0 (osd.0) 5 : cluster [DBG] 11.6 deep-scrub starts 2026-03-10T07:28:48.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: cluster 2026-03-10T07:28:47.388586+0000 osd.0 (osd.0) 6 : cluster [DBG] 11.6 deep-scrub ok 2026-03-10T07:28:48.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:47 vm03 bash[23382]: cluster 2026-03-10T07:28:47.388586+0000 osd.0 (osd.0) 6 : cluster [DBG] 11.6 deep-scrub ok 2026-03-10T07:28:48.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: cluster 2026-03-10T07:28:47.205747+0000 osd.6 (osd.6) 7 : cluster [DBG] 11.3 deep-scrub starts 2026-03-10T07:28:48.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: cluster 2026-03-10T07:28:47.205747+0000 osd.6 (osd.6) 7 : cluster [DBG] 11.3 deep-scrub starts 2026-03-10T07:28:48.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: cluster 2026-03-10T07:28:47.356501+0000 osd.2 (osd.2) 5 : cluster [DBG] 11.2 deep-scrub starts 2026-03-10T07:28:48.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: cluster 2026-03-10T07:28:47.356501+0000 osd.2 (osd.2) 5 : cluster [DBG] 11.2 deep-scrub starts 2026-03-10T07:28:48.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: cluster 2026-03-10T07:28:47.368149+0000 osd.2 (osd.2) 6 : cluster [DBG] 11.2 deep-scrub ok 2026-03-10T07:28:48.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: cluster 2026-03-10T07:28:47.368149+0000 osd.2 (osd.2) 6 : cluster [DBG] 
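Every "deep-scrub starts" cluster line above is paired with a "deep-scrub ok" a few milliseconds later, which is the owning OSD reporting completion back into the cluster log. An out-of-band way to confirm the same thing is to read each PG's last_deep_scrub_stamp from the pg stats; this is an illustrative check, not part of the suite, and the top-level JSON shape of "ceph pg dump" differs between Ceph releases, hence the fallback in the jq filter:

#!/usr/bin/env bash
# Illustrative check: print last_deep_scrub_stamp for the pool-11 PGs
# whose scrubs are logged above. Tries both known JSON layouts.
set -euo pipefail

ceph pg dump pgs -f json 2>/dev/null |
    jq -r '(.pg_stats // .pg_map.pg_stats)[]
           | select(.pgid | startswith("11."))
           | "\(.pgid) \(.last_deep_scrub_stamp)"'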
2026-03-10T07:28:48.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: cluster 2026-03-10T07:28:47.393685+0000 osd.7 (osd.7) 7 : cluster [DBG] 11.5 deep-scrub starts
2026-03-10T07:28:48.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: cluster 2026-03-10T07:28:47.394861+0000 osd.7 (osd.7) 8 : cluster [DBG] 11.5 deep-scrub ok
2026-03-10T07:28:48.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: cluster 2026-03-10T07:28:47.425034+0000 osd.6 (osd.6) 8 : cluster [DBG] 11.3 deep-scrub ok
2026-03-10T07:28:48.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: cluster 2026-03-10T07:28:47.623787+0000 osd.1 (osd.1) 5 : cluster [DBG] 11.0 deep-scrub starts
2026-03-10T07:28:48.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: cluster 2026-03-10T07:28:47.634831+0000 osd.1 (osd.1) 6 : cluster [DBG] 11.0 deep-scrub ok
2026-03-10T07:28:48.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: audit 2026-03-10T07:28:47.757112+0000 mon.b (mon.1) 126 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch
2026-03-10T07:28:48.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: audit 2026-03-10T07:28:47.759351+0000 mon.a (mon.0) 1283 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch
2026-03-10T07:28:48.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: audit 2026-03-10T07:28:47.760025+0000 mon.b (mon.1) 127 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch
2026-03-10T07:28:48.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: audit 2026-03-10T07:28:47.761569+0000 mon.a (mon.0) 1284 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch
2026-03-10T07:28:48.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: audit 2026-03-10T07:28:47.763429+0000 mon.b (mon.1) 128 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: audit 2026-03-10T07:28:47.765314+0000 mon.a (mon.0) 1285 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: audit 2026-03-10T07:28:47.862351+0000 mon.a (mon.0) 1286 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: audit 2026-03-10T07:28:48.115200+0000 mon.a (mon.0) 1287 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59637-10","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: audit 2026-03-10T07:28:48.115247+0000 mon.a (mon.0) 1288 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: cluster 2026-03-10T07:28:48.120134+0000 mon.a (mon.0) 1289 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in
2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: audit 2026-03-10T07:28:48.144845+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch
2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: audit 2026-03-10T07:28:48.196973+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: audit 2026-03-10T07:28:48.231564+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.100:0/1635112288' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59629-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: audit 2026-03-10T07:28:48.231564+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.100:0/1635112288' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59629-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: audit 2026-03-10T07:28:48.233146+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59629-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:48 vm00 bash[20701]: audit 2026-03-10T07:28:48.233146+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59629-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: cluster 2026-03-10T07:28:47.205747+0000 osd.6 (osd.6) 7 : cluster [DBG] 11.3 deep-scrub starts 2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: cluster 2026-03-10T07:28:47.205747+0000 osd.6 (osd.6) 7 : cluster [DBG] 11.3 deep-scrub starts 2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: cluster 2026-03-10T07:28:47.356501+0000 osd.2 (osd.2) 5 : cluster [DBG] 11.2 deep-scrub starts 2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: cluster 2026-03-10T07:28:47.356501+0000 osd.2 (osd.2) 5 : cluster [DBG] 11.2 deep-scrub starts 2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: cluster 2026-03-10T07:28:47.368149+0000 osd.2 (osd.2) 6 : cluster [DBG] 11.2 deep-scrub ok 2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: cluster 2026-03-10T07:28:47.368149+0000 osd.2 (osd.2) 6 : cluster [DBG] 11.2 deep-scrub ok 2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: cluster 2026-03-10T07:28:47.393685+0000 osd.7 (osd.7) 7 : cluster [DBG] 11.5 deep-scrub starts 2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: cluster 2026-03-10T07:28:47.393685+0000 osd.7 (osd.7) 7 : cluster [DBG] 11.5 deep-scrub starts 2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: cluster 2026-03-10T07:28:47.394861+0000 osd.7 (osd.7) 8 : cluster [DBG] 11.5 deep-scrub ok 2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: cluster 2026-03-10T07:28:47.394861+0000 osd.7 (osd.7) 8 : cluster [DBG] 
11.5 deep-scrub ok 2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: cluster 2026-03-10T07:28:47.425034+0000 osd.6 (osd.6) 8 : cluster [DBG] 11.3 deep-scrub ok 2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: cluster 2026-03-10T07:28:47.425034+0000 osd.6 (osd.6) 8 : cluster [DBG] 11.3 deep-scrub ok 2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: cluster 2026-03-10T07:28:47.623787+0000 osd.1 (osd.1) 5 : cluster [DBG] 11.0 deep-scrub starts 2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: cluster 2026-03-10T07:28:47.623787+0000 osd.1 (osd.1) 5 : cluster [DBG] 11.0 deep-scrub starts 2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: cluster 2026-03-10T07:28:47.634831+0000 osd.1 (osd.1) 6 : cluster [DBG] 11.0 deep-scrub ok 2026-03-10T07:28:48.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: cluster 2026-03-10T07:28:47.634831+0000 osd.1 (osd.1) 6 : cluster [DBG] 11.0 deep-scrub ok 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:47.757112+0000 mon.b (mon.1) 126 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:47.757112+0000 mon.b (mon.1) 126 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:47.759351+0000 mon.a (mon.0) 1283 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:47.759351+0000 mon.a (mon.0) 1283 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:47.760025+0000 mon.b (mon.1) 127 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:47.760025+0000 mon.b (mon.1) 127 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:47.761569+0000 mon.a (mon.0) 1284 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:47.761569+0000 mon.a (mon.0) 1284 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:47.763429+0000 mon.b (mon.1) 128 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:47.763429+0000 mon.b (mon.1) 128 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:47.765314+0000 mon.a (mon.0) 1285 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:47.765314+0000 mon.a (mon.0) 1285 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:47.862351+0000 mon.a (mon.0) 1286 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:47.862351+0000 mon.a (mon.0) 1286 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:48.115200+0000 mon.a (mon.0) 1287 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59637-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:48.115200+0000 mon.a (mon.0) 1287 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59637-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:48.115247+0000 mon.a (mon.0) 1288 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:48.115247+0000 mon.a (mon.0) 1288 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: cluster 2026-03-10T07:28:48.120134+0000 mon.a (mon.0) 1289 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: cluster 2026-03-10T07:28:48.120134+0000 mon.a (mon.0) 1289 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:48.144845+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:48.144845+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:48.196973+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:48.196973+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:48.231564+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.100:0/1635112288' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59629-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:48.231564+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 
192.168.123.100:0/1635112288' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59629-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:48.233146+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59629-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:48.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:48 vm00 bash[28005]: audit 2026-03-10T07:28:48.233146+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59629-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: cluster 2026-03-10T07:28:47.205747+0000 osd.6 (osd.6) 7 : cluster [DBG] 11.3 deep-scrub starts 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: cluster 2026-03-10T07:28:47.205747+0000 osd.6 (osd.6) 7 : cluster [DBG] 11.3 deep-scrub starts 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: cluster 2026-03-10T07:28:47.356501+0000 osd.2 (osd.2) 5 : cluster [DBG] 11.2 deep-scrub starts 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: cluster 2026-03-10T07:28:47.356501+0000 osd.2 (osd.2) 5 : cluster [DBG] 11.2 deep-scrub starts 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: cluster 2026-03-10T07:28:47.368149+0000 osd.2 (osd.2) 6 : cluster [DBG] 11.2 deep-scrub ok 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: cluster 2026-03-10T07:28:47.368149+0000 osd.2 (osd.2) 6 : cluster [DBG] 11.2 deep-scrub ok 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: cluster 2026-03-10T07:28:47.393685+0000 osd.7 (osd.7) 7 : cluster [DBG] 11.5 deep-scrub starts 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: cluster 2026-03-10T07:28:47.393685+0000 osd.7 (osd.7) 7 : cluster [DBG] 11.5 deep-scrub starts 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: cluster 2026-03-10T07:28:47.394861+0000 osd.7 (osd.7) 8 : cluster [DBG] 11.5 deep-scrub ok 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: cluster 2026-03-10T07:28:47.394861+0000 osd.7 (osd.7) 8 : cluster [DBG] 11.5 deep-scrub ok 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: cluster 2026-03-10T07:28:47.425034+0000 osd.6 (osd.6) 8 : cluster [DBG] 11.3 deep-scrub ok 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: cluster 2026-03-10T07:28:47.425034+0000 osd.6 (osd.6) 8 : cluster [DBG] 11.3 deep-scrub ok 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: cluster 2026-03-10T07:28:47.623787+0000 osd.1 (osd.1) 5 : cluster [DBG] 11.0 deep-scrub starts 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: cluster 2026-03-10T07:28:47.623787+0000 osd.1 (osd.1) 5 : cluster [DBG] 11.0 deep-scrub starts 2026-03-10T07:28:49.016 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: cluster 2026-03-10T07:28:47.634831+0000 osd.1 (osd.1) 6 : cluster [DBG] 11.0 deep-scrub ok 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: cluster 2026-03-10T07:28:47.634831+0000 osd.1 (osd.1) 6 : cluster [DBG] 11.0 deep-scrub ok 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:47.757112+0000 mon.b (mon.1) 126 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:47.757112+0000 mon.b (mon.1) 126 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:47.759351+0000 mon.a (mon.0) 1283 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:47.759351+0000 mon.a (mon.0) 1283 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:47.760025+0000 mon.b (mon.1) 127 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:47.760025+0000 mon.b (mon.1) 127 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:47.761569+0000 mon.a (mon.0) 1284 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:47.761569+0000 mon.a (mon.0) 1284 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:49.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:47.763429+0000 mon.b (mon.1) 128 : audit [INF] from='client.? 
192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:49.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:47.763429+0000 mon.b (mon.1) 128 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:49.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:47.765314+0000 mon.a (mon.0) 1285 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:49.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:47.765314+0000 mon.a (mon.0) 1285 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:28:49.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:47.862351+0000 mon.a (mon.0) 1286 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:49.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:47.862351+0000 mon.a (mon.0) 1286 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:49.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:48.115200+0000 mon.a (mon.0) 1287 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59637-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:49.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:48.115200+0000 mon.a (mon.0) 1287 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm00-59637-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:49.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:48.115247+0000 mon.a (mon.0) 1288 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:49.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:48.115247+0000 mon.a (mon.0) 1288 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:28:49.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: cluster 2026-03-10T07:28:48.120134+0000 mon.a (mon.0) 1289 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-10T07:28:49.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: cluster 2026-03-10T07:28:48.120134+0000 mon.a (mon.0) 1289 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-10T07:28:49.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:48.144845+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:49.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:48.144845+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:49.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:48.196973+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:49.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:48.196973+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:28:49.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:48.231564+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.100:0/1635112288' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59629-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:49.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:48.231564+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.100:0/1635112288' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59629-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:49.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:48.233146+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59629-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:49.017 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:48 vm03 bash[23382]: audit 2026-03-10T07:28:48.233146+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm00-59629-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:49.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:49 vm00 bash[20701]: cluster 2026-03-10T07:28:48.200339+0000 osd.6 (osd.6) 9 : cluster [DBG] 11.1 deep-scrub starts 2026-03-10T07:28:49.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:49 vm00 bash[20701]: cluster 2026-03-10T07:28:48.200339+0000 osd.6 (osd.6) 9 : cluster [DBG] 11.1 deep-scrub starts 2026-03-10T07:28:49.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:49 vm00 bash[20701]: cluster 2026-03-10T07:28:48.241107+0000 osd.6 (osd.6) 10 : cluster [DBG] 11.1 deep-scrub ok 2026-03-10T07:28:49.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:49 vm00 bash[20701]: cluster 2026-03-10T07:28:48.241107+0000 osd.6 (osd.6) 10 : cluster [DBG] 11.1 deep-scrub ok 2026-03-10T07:28:49.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:49 vm00 bash[20701]: cluster 2026-03-10T07:28:48.405525+0000 osd.7 (osd.7) 9 : cluster [DBG] 11.4 deep-scrub starts 2026-03-10T07:28:49.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:49 vm00 bash[20701]: cluster 2026-03-10T07:28:48.405525+0000 osd.7 (osd.7) 9 : cluster [DBG] 11.4 deep-scrub starts 2026-03-10T07:28:49.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:49 vm00 bash[20701]: cluster 2026-03-10T07:28:48.406538+0000 osd.7 (osd.7) 10 : cluster [DBG] 11.4 deep-scrub ok 2026-03-10T07:28:49.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:49 vm00 bash[20701]: cluster 2026-03-10T07:28:48.406538+0000 osd.7 (osd.7) 10 : cluster [DBG] 11.4 deep-scrub ok 2026-03-10T07:28:49.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:49 vm00 bash[20701]: cluster 2026-03-10T07:28:48.585815+0000 mgr.y (mgr.24407) 153 : cluster [DBG] pgmap v127: 516 pgs: 96 unknown, 4 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 415 active+clean; 144 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-10T07:28:49.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:49 vm00 bash[20701]: cluster 2026-03-10T07:28:48.585815+0000 mgr.y (mgr.24407) 153 : cluster [DBG] pgmap v127: 516 pgs: 96 unknown, 4 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 415 active+clean; 144 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:49 vm00 bash[20701]: audit 2026-03-10T07:28:48.863573+0000 mon.a (mon.0) 1292 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:49 vm00 bash[20701]: audit 2026-03-10T07:28:48.863573+0000 mon.a (mon.0) 1292 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:49 vm00 bash[20701]: audit 2026-03-10T07:28:49.232732+0000 mon.a (mon.0) 1293 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm00-59629-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:49 vm00 bash[20701]: audit 2026-03-10T07:28:49.232732+0000 mon.a (mon.0) 1293 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm00-59629-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:49 vm00 bash[20701]: cluster 2026-03-10T07:28:49.295075+0000 mon.a (mon.0) 1294 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:49 vm00 bash[20701]: cluster 2026-03-10T07:28:49.295075+0000 mon.a (mon.0) 1294 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:49 vm00 bash[28005]: cluster 2026-03-10T07:28:48.200339+0000 osd.6 (osd.6) 9 : cluster [DBG] 11.1 deep-scrub starts 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:49 vm00 bash[28005]: cluster 2026-03-10T07:28:48.200339+0000 osd.6 (osd.6) 9 : cluster [DBG] 11.1 deep-scrub starts 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:49 vm00 bash[28005]: cluster 2026-03-10T07:28:48.241107+0000 osd.6 (osd.6) 10 : cluster [DBG] 11.1 deep-scrub ok 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:49 vm00 bash[28005]: cluster 2026-03-10T07:28:48.241107+0000 osd.6 (osd.6) 10 : cluster [DBG] 11.1 deep-scrub ok 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:49 vm00 bash[28005]: cluster 2026-03-10T07:28:48.405525+0000 osd.7 (osd.7) 9 : cluster [DBG] 11.4 deep-scrub starts 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:49 vm00 bash[28005]: cluster 2026-03-10T07:28:48.405525+0000 osd.7 (osd.7) 9 : cluster [DBG] 11.4 deep-scrub starts 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:49 vm00 bash[28005]: cluster 2026-03-10T07:28:48.406538+0000 osd.7 (osd.7) 10 : cluster [DBG] 11.4 deep-scrub ok 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:49 vm00 bash[28005]: cluster 2026-03-10T07:28:48.406538+0000 osd.7 (osd.7) 10 : cluster [DBG] 11.4 deep-scrub ok 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:49 vm00 bash[28005]: cluster 2026-03-10T07:28:48.585815+0000 mgr.y (mgr.24407) 153 : cluster [DBG] pgmap v127: 516 pgs: 96 unknown, 4 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 415 active+clean; 144 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:49 vm00 bash[28005]: cluster 2026-03-10T07:28:48.585815+0000 mgr.y (mgr.24407) 153 : cluster [DBG] pgmap v127: 516 pgs: 96 unknown, 4 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 415 active+clean; 144 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:49 vm00 bash[28005]: audit 2026-03-10T07:28:48.863573+0000 mon.a (mon.0) 1292 : audit [DBG] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:49 vm00 bash[28005]: audit 2026-03-10T07:28:48.863573+0000 mon.a (mon.0) 1292 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:49 vm00 bash[28005]: audit 2026-03-10T07:28:49.232732+0000 mon.a (mon.0) 1293 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm00-59629-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:49 vm00 bash[28005]: audit 2026-03-10T07:28:49.232732+0000 mon.a (mon.0) 1293 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm00-59629-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:49 vm00 bash[28005]: cluster 2026-03-10T07:28:49.295075+0000 mon.a (mon.0) 1294 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-10T07:28:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:49 vm00 bash[28005]: cluster 2026-03-10T07:28:49.295075+0000 mon.a (mon.0) 1294 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-10T07:28:50.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:49 vm03 bash[23382]: cluster 2026-03-10T07:28:48.200339+0000 osd.6 (osd.6) 9 : cluster [DBG] 11.1 deep-scrub starts 2026-03-10T07:28:50.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:49 vm03 bash[23382]: cluster 2026-03-10T07:28:48.200339+0000 osd.6 (osd.6) 9 : cluster [DBG] 11.1 deep-scrub starts 2026-03-10T07:28:50.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:49 vm03 bash[23382]: cluster 2026-03-10T07:28:48.241107+0000 osd.6 (osd.6) 10 : cluster [DBG] 11.1 deep-scrub ok 2026-03-10T07:28:50.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:49 vm03 bash[23382]: cluster 2026-03-10T07:28:48.241107+0000 osd.6 (osd.6) 10 : cluster [DBG] 11.1 deep-scrub ok 2026-03-10T07:28:50.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:49 vm03 bash[23382]: cluster 2026-03-10T07:28:48.405525+0000 osd.7 (osd.7) 9 : cluster [DBG] 11.4 deep-scrub starts 2026-03-10T07:28:50.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:49 vm03 bash[23382]: cluster 2026-03-10T07:28:48.405525+0000 osd.7 (osd.7) 9 : cluster [DBG] 11.4 deep-scrub starts 2026-03-10T07:28:50.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:49 vm03 bash[23382]: cluster 2026-03-10T07:28:48.406538+0000 osd.7 (osd.7) 10 : cluster [DBG] 11.4 deep-scrub ok 2026-03-10T07:28:50.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:49 vm03 bash[23382]: cluster 2026-03-10T07:28:48.406538+0000 osd.7 (osd.7) 10 : cluster [DBG] 11.4 deep-scrub ok 2026-03-10T07:28:50.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:49 vm03 bash[23382]: cluster 2026-03-10T07:28:48.585815+0000 mgr.y (mgr.24407) 153 : cluster [DBG] pgmap v127: 516 pgs: 96 unknown, 4 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 415 active+clean; 144 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-10T07:28:50.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:49 vm03 bash[23382]: cluster 2026-03-10T07:28:48.585815+0000 mgr.y (mgr.24407) 153 : cluster [DBG] 
pgmap v127: 516 pgs: 96 unknown, 4 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 415 active+clean; 144 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-10T07:28:50.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:49 vm03 bash[23382]: audit 2026-03-10T07:28:48.863573+0000 mon.a (mon.0) 1292 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:50.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:49 vm03 bash[23382]: audit 2026-03-10T07:28:48.863573+0000 mon.a (mon.0) 1292 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:50.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:49 vm03 bash[23382]: audit 2026-03-10T07:28:49.232732+0000 mon.a (mon.0) 1293 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm00-59629-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:50.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:49 vm03 bash[23382]: audit 2026-03-10T07:28:49.232732+0000 mon.a (mon.0) 1293 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm00-59629-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:50.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:49 vm03 bash[23382]: cluster 2026-03-10T07:28:49.295075+0000 mon.a (mon.0) 1294 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-10T07:28:50.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:49 vm03 bash[23382]: cluster 2026-03-10T07:28:49.295075+0000 mon.a (mon.0) 1294 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-10T07:28:50.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:50 vm00 bash[20701]: audit 2026-03-10T07:28:49.865659+0000 mon.a (mon.0) 1295 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:50.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:50 vm00 bash[20701]: audit 2026-03-10T07:28:49.865659+0000 mon.a (mon.0) 1295 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:50.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:50 vm00 bash[20701]: audit 2026-03-10T07:28:50.237935+0000 mon.a (mon.0) 1296 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]': finished 2026-03-10T07:28:50.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:50 vm00 bash[20701]: audit 2026-03-10T07:28:50.237935+0000 mon.a (mon.0) 1296 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]': finished 2026-03-10T07:28:50.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:50 vm00 bash[20701]: cluster 2026-03-10T07:28:50.245650+0000 mon.a (mon.0) 1297 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-10T07:28:50.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:50 vm00 bash[20701]: cluster 2026-03-10T07:28:50.245650+0000 mon.a (mon.0) 1297 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-10T07:28:50.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:50 vm00 bash[20701]: audit 2026-03-10T07:28:50.323046+0000 mon.c (mon.2) 141 : audit [INF] from='client.? 192.168.123.100:0/2206812003' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59637-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:50.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:50 vm00 bash[20701]: audit 2026-03-10T07:28:50.323046+0000 mon.c (mon.2) 141 : audit [INF] from='client.? 192.168.123.100:0/2206812003' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59637-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:50.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:50 vm00 bash[20701]: audit 2026-03-10T07:28:50.323914+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59637-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:50.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:50 vm00 bash[20701]: audit 2026-03-10T07:28:50.323914+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59637-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:50.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:50 vm00 bash[28005]: audit 2026-03-10T07:28:49.865659+0000 mon.a (mon.0) 1295 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:50.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:50 vm00 bash[28005]: audit 2026-03-10T07:28:49.865659+0000 mon.a (mon.0) 1295 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:50.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:50 vm00 bash[28005]: audit 2026-03-10T07:28:50.237935+0000 mon.a (mon.0) 1296 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]': finished 2026-03-10T07:28:50.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:50 vm00 bash[28005]: audit 2026-03-10T07:28:50.237935+0000 mon.a (mon.0) 1296 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]': finished 2026-03-10T07:28:50.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:50 vm00 bash[28005]: cluster 2026-03-10T07:28:50.245650+0000 mon.a (mon.0) 1297 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-10T07:28:50.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:50 vm00 bash[28005]: cluster 2026-03-10T07:28:50.245650+0000 mon.a (mon.0) 1297 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-10T07:28:50.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:50 vm00 bash[28005]: audit 2026-03-10T07:28:50.323046+0000 mon.c (mon.2) 141 : audit [INF] from='client.? 192.168.123.100:0/2206812003' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59637-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:50.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:50 vm00 bash[28005]: audit 2026-03-10T07:28:50.323046+0000 mon.c (mon.2) 141 : audit [INF] from='client.? 192.168.123.100:0/2206812003' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59637-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:50.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:50 vm00 bash[28005]: audit 2026-03-10T07:28:50.323914+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59637-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:50.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:50 vm00 bash[28005]: audit 2026-03-10T07:28:50.323914+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59637-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:51.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:50 vm03 bash[23382]: audit 2026-03-10T07:28:49.865659+0000 mon.a (mon.0) 1295 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:51.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:50 vm03 bash[23382]: audit 2026-03-10T07:28:49.865659+0000 mon.a (mon.0) 1295 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:51.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:50 vm03 bash[23382]: audit 2026-03-10T07:28:50.237935+0000 mon.a (mon.0) 1296 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]': finished 2026-03-10T07:28:51.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:50 vm03 bash[23382]: audit 2026-03-10T07:28:50.237935+0000 mon.a (mon.0) 1296 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm00-59879-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]': finished 2026-03-10T07:28:51.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:50 vm03 bash[23382]: cluster 2026-03-10T07:28:50.245650+0000 mon.a (mon.0) 1297 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-10T07:28:51.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:50 vm03 bash[23382]: cluster 2026-03-10T07:28:50.245650+0000 mon.a (mon.0) 1297 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-10T07:28:51.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:50 vm03 bash[23382]: audit 2026-03-10T07:28:50.323046+0000 mon.c (mon.2) 141 : audit [INF] from='client.? 192.168.123.100:0/2206812003' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59637-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:51.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:50 vm03 bash[23382]: audit 2026-03-10T07:28:50.323046+0000 mon.c (mon.2) 141 : audit [INF] from='client.? 192.168.123.100:0/2206812003' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59637-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:51.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:50 vm03 bash[23382]: audit 2026-03-10T07:28:50.323914+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59637-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:51.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:50 vm03 bash[23382]: audit 2026-03-10T07:28:50.323914+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59637-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:51.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:28:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:28:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:28:51.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:51 vm00 bash[20701]: cluster 2026-03-10T07:28:50.586364+0000 mgr.y (mgr.24407) 154 : cluster [DBG] pgmap v130: 460 pgs: 40 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 69 op/s 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:51 vm00 bash[20701]: cluster 2026-03-10T07:28:50.586364+0000 mgr.y (mgr.24407) 154 : cluster [DBG] pgmap v130: 460 pgs: 40 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 69 op/s 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:51 vm00 bash[20701]: audit 2026-03-10T07:28:50.866698+0000 mon.a (mon.0) 1299 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:51 vm00 bash[20701]: audit 2026-03-10T07:28:50.866698+0000 mon.a (mon.0) 1299 : audit [DBG] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:51 vm00 bash[20701]: cluster 2026-03-10T07:28:50.894785+0000 mon.a (mon.0) 1300 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:51 vm00 bash[20701]: cluster 2026-03-10T07:28:50.894785+0000 mon.a (mon.0) 1300 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:51 vm00 bash[20701]: audit 2026-03-10T07:28:50.900317+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59637-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:51 vm00 bash[20701]: audit 2026-03-10T07:28:50.900317+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59637-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:51 vm00 bash[20701]: cluster 2026-03-10T07:28:50.928355+0000 mon.a (mon.0) 1302 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:51 vm00 bash[20701]: cluster 2026-03-10T07:28:50.928355+0000 mon.a (mon.0) 1302 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:51 vm00 bash[20701]: audit 2026-03-10T07:28:51.009959+0000 mon.c (mon.2) 142 : audit [INF] from='client.? 192.168.123.100:0/1968346881' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59629-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:51 vm00 bash[20701]: audit 2026-03-10T07:28:51.009959+0000 mon.c (mon.2) 142 : audit [INF] from='client.? 192.168.123.100:0/1968346881' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59629-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:51 vm00 bash[20701]: audit 2026-03-10T07:28:51.012151+0000 mon.a (mon.0) 1303 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59629-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:51 vm00 bash[20701]: audit 2026-03-10T07:28:51.012151+0000 mon.a (mon.0) 1303 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59629-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:51 vm00 bash[28005]: cluster 2026-03-10T07:28:50.586364+0000 mgr.y (mgr.24407) 154 : cluster [DBG] pgmap v130: 460 pgs: 40 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 69 op/s 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:51 vm00 bash[28005]: cluster 2026-03-10T07:28:50.586364+0000 mgr.y (mgr.24407) 154 : cluster [DBG] pgmap v130: 460 pgs: 40 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 69 op/s 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:51 vm00 bash[28005]: audit 2026-03-10T07:28:50.866698+0000 mon.a (mon.0) 1299 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:51 vm00 bash[28005]: audit 2026-03-10T07:28:50.866698+0000 mon.a (mon.0) 1299 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:51 vm00 bash[28005]: cluster 2026-03-10T07:28:50.894785+0000 mon.a (mon.0) 1300 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:51 vm00 bash[28005]: cluster 2026-03-10T07:28:50.894785+0000 mon.a (mon.0) 1300 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:51 vm00 bash[28005]: audit 2026-03-10T07:28:50.900317+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59637-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:51 vm00 bash[28005]: audit 2026-03-10T07:28:50.900317+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59637-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:51 vm00 bash[28005]: cluster 2026-03-10T07:28:50.928355+0000 mon.a (mon.0) 1302 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:51 vm00 bash[28005]: cluster 2026-03-10T07:28:50.928355+0000 mon.a (mon.0) 1302 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:51 vm00 bash[28005]: audit 2026-03-10T07:28:51.009959+0000 mon.c (mon.2) 142 : audit [INF] from='client.? 192.168.123.100:0/1968346881' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59629-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:51 vm00 bash[28005]: audit 2026-03-10T07:28:51.009959+0000 mon.c (mon.2) 142 : audit [INF] from='client.? 
192.168.123.100:0/1968346881' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59629-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:51 vm00 bash[28005]: audit 2026-03-10T07:28:51.012151+0000 mon.a (mon.0) 1303 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59629-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:51.889 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:51 vm00 bash[28005]: audit 2026-03-10T07:28:51.012151+0000 mon.a (mon.0) 1303 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59629-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:52.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:51 vm03 bash[23382]: cluster 2026-03-10T07:28:50.586364+0000 mgr.y (mgr.24407) 154 : cluster [DBG] pgmap v130: 460 pgs: 40 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 69 op/s 2026-03-10T07:28:52.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:51 vm03 bash[23382]: cluster 2026-03-10T07:28:50.586364+0000 mgr.y (mgr.24407) 154 : cluster [DBG] pgmap v130: 460 pgs: 40 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 69 op/s 2026-03-10T07:28:52.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:51 vm03 bash[23382]: audit 2026-03-10T07:28:50.866698+0000 mon.a (mon.0) 1299 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:52.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:51 vm03 bash[23382]: audit 2026-03-10T07:28:50.866698+0000 mon.a (mon.0) 1299 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:52.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:51 vm03 bash[23382]: cluster 2026-03-10T07:28:50.894785+0000 mon.a (mon.0) 1300 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:52.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:51 vm03 bash[23382]: cluster 2026-03-10T07:28:50.894785+0000 mon.a (mon.0) 1300 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:52.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:51 vm03 bash[23382]: audit 2026-03-10T07:28:50.900317+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59637-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:52.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:51 vm03 bash[23382]: audit 2026-03-10T07:28:50.900317+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm00-59637-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:52.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:51 vm03 bash[23382]: cluster 2026-03-10T07:28:50.928355+0000 mon.a (mon.0) 1302 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-10T07:28:52.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:51 vm03 bash[23382]: cluster 2026-03-10T07:28:50.928355+0000 mon.a (mon.0) 1302 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-10T07:28:52.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:51 vm03 bash[23382]: audit 2026-03-10T07:28:51.009959+0000 mon.c (mon.2) 142 : audit [INF] from='client.? 192.168.123.100:0/1968346881' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59629-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:52.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:51 vm03 bash[23382]: audit 2026-03-10T07:28:51.009959+0000 mon.c (mon.2) 142 : audit [INF] from='client.? 192.168.123.100:0/1968346881' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59629-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:52.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:51 vm03 bash[23382]: audit 2026-03-10T07:28:51.012151+0000 mon.a (mon.0) 1303 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59629-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:52.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:51 vm03 bash[23382]: audit 2026-03-10T07:28:51.012151+0000 mon.a (mon.0) 1303 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59629-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:52.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:52 vm00 bash[20701]: audit 2026-03-10T07:28:51.869020+0000 mon.a (mon.0) 1304 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:52.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:52 vm00 bash[20701]: audit 2026-03-10T07:28:51.869020+0000 mon.a (mon.0) 1304 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:52.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:52 vm00 bash[20701]: audit 2026-03-10T07:28:51.935194+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59629-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:52.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:52 vm00 bash[20701]: audit 2026-03-10T07:28:51.935194+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59629-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:52.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:52 vm00 bash[20701]: cluster 2026-03-10T07:28:51.946987+0000 mon.a (mon.0) 1306 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-10T07:28:52.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:52 vm00 bash[20701]: cluster 2026-03-10T07:28:51.946987+0000 mon.a (mon.0) 1306 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-10T07:28:52.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:52 vm00 bash[20701]: audit 2026-03-10T07:28:52.100358+0000 mon.c (mon.2) 143 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:28:52.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:52 vm00 bash[20701]: audit 2026-03-10T07:28:52.100358+0000 mon.c (mon.2) 143 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:28:52.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:52 vm00 bash[20701]: audit 2026-03-10T07:28:52.530571+0000 mon.b (mon.1) 130 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:52.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:52 vm00 bash[20701]: audit 2026-03-10T07:28:52.530571+0000 mon.b (mon.1) 130 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:52.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:52 vm00 bash[20701]: audit 2026-03-10T07:28:52.532493+0000 mon.a (mon.0) 1307 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:52.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:52 vm00 bash[20701]: audit 2026-03-10T07:28:52.532493+0000 mon.a (mon.0) 1307 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:52.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:52 vm00 bash[28005]: audit 2026-03-10T07:28:51.869020+0000 mon.a (mon.0) 1304 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:52.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:52 vm00 bash[28005]: audit 2026-03-10T07:28:51.869020+0000 mon.a (mon.0) 1304 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:52.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:52 vm00 bash[28005]: audit 2026-03-10T07:28:51.935194+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59629-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:52.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:52 vm00 bash[28005]: audit 2026-03-10T07:28:51.935194+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59629-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:28:52.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:52 vm00 bash[28005]: cluster 2026-03-10T07:28:51.946987+0000 mon.a (mon.0) 1306 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-10T07:28:52.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:52 vm00 bash[28005]: cluster 2026-03-10T07:28:51.946987+0000 mon.a (mon.0) 1306 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-10T07:28:52.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:52 vm00 bash[28005]: audit 2026-03-10T07:28:52.100358+0000 mon.c (mon.2) 143 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:28:52.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:52 vm00 bash[28005]: audit 2026-03-10T07:28:52.100358+0000 mon.c (mon.2) 143 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:28:52.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:52 vm00 bash[28005]: audit 2026-03-10T07:28:52.530571+0000 mon.b (mon.1) 130 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:52.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:52 vm00 bash[28005]: audit 2026-03-10T07:28:52.530571+0000 mon.b (mon.1) 130 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:52.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:52 vm00 bash[28005]: audit 2026-03-10T07:28:52.532493+0000 mon.a (mon.0) 1307 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:52.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:52 vm00 bash[28005]: audit 2026-03-10T07:28:52.532493+0000 mon.a (mon.0) 1307 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:28:53.000 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:52 vm03 bash[23382]: audit 2026-03-10T07:28:51.869020+0000 mon.a (mon.0) 1304 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:53.000 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:52 vm03 bash[23382]: audit 2026-03-10T07:28:51.869020+0000 mon.a (mon.0) 1304 : audit [DBG] from='client.? 
2026-03-10T07:28:53.000 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:52 vm03 bash[23382]: audit 2026-03-10T07:28:51.935194+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm00-59629-16","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:53.000 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:52 vm03 bash[23382]: cluster 2026-03-10T07:28:51.946987+0000 mon.a (mon.0) 1306 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in
2026-03-10T07:28:53.000 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:52 vm03 bash[23382]: audit 2026-03-10T07:28:52.100358+0000 mon.c (mon.2) 143 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:28:53.000 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:52 vm03 bash[23382]: audit 2026-03-10T07:28:52.530571+0000 mon.b (mon.1) 130 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:28:53.000 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:52 vm03 bash[23382]: audit 2026-03-10T07:28:52.532493+0000 mon.a (mon.0) 1307 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:28:53.265 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:28:53 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:28:54.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:53 vm03 bash[23382]: cluster 2026-03-10T07:28:52.374081+0000 osd.3 (osd.3) 7 : cluster [DBG] 11.8 deep-scrub starts
2026-03-10T07:28:54.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:53 vm03 bash[23382]: cluster 2026-03-10T07:28:52.427363+0000 osd.3 (osd.3) 8 : cluster [DBG] 11.8 deep-scrub ok
2026-03-10T07:28:54.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:53 vm03 bash[23382]: cluster 2026-03-10T07:28:52.586849+0000 mgr.y (mgr.24407) 155 : cluster [DBG] pgmap v133: 492 pgs: 72 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 69 op/s
2026-03-10T07:28:54.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:53 vm03 bash[23382]: audit 2026-03-10T07:28:52.870213+0000 mon.a (mon.0) 1308 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:54.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:53 vm03 bash[23382]: audit 2026-03-10T07:28:52.939968+0000 mon.a (mon.0) 1309 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:28:54.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:53 vm03 bash[23382]: cluster 2026-03-10T07:28:52.950198+0000 mon.a (mon.0) 1310 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in
2026-03-10T07:28:54.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:53 vm03 bash[23382]: audit 2026-03-10T07:28:52.971444+0000 mon.b (mon.1) 131 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-13"}]: dispatch
2026-03-10T07:28:54.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:53 vm03 bash[23382]: audit 2026-03-10T07:28:52.993329+0000 mgr.y (mgr.24407) 156 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:28:54.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:53 vm03 bash[23382]: audit 2026-03-10T07:28:52.995127+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-13"}]: dispatch
2026-03-10T07:28:54.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:53 vm03 bash[23382]: audit 2026-03-10T07:28:53.095954+0000 mon.c (mon.2) 144 : audit [INF] from='client.? 192.168.123.100:0/3964576551' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59637-12","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:54.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:53 vm03 bash[23382]: audit 2026-03-10T07:28:53.097663+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59637-12","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:54.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:53 vm00 bash[20701]: cluster 2026-03-10T07:28:52.374081+0000 osd.3 (osd.3) 7 : cluster [DBG] 11.8 deep-scrub starts
2026-03-10T07:28:54.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:53 vm00 bash[20701]: cluster 2026-03-10T07:28:52.427363+0000 osd.3 (osd.3) 8 : cluster [DBG] 11.8 deep-scrub ok
2026-03-10T07:28:54.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:53 vm00 bash[20701]: cluster 2026-03-10T07:28:52.586849+0000 mgr.y (mgr.24407) 155 : cluster [DBG] pgmap v133: 492 pgs: 72 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 69 op/s
2026-03-10T07:28:54.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:53 vm00 bash[20701]: audit 2026-03-10T07:28:52.870213+0000 mon.a (mon.0) 1308 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:54.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:53 vm00 bash[20701]: audit 2026-03-10T07:28:52.939968+0000 mon.a (mon.0) 1309 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:28:54.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:53 vm00 bash[20701]: cluster 2026-03-10T07:28:52.950198+0000 mon.a (mon.0) 1310 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in
2026-03-10T07:28:54.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:53 vm00 bash[20701]: audit 2026-03-10T07:28:52.971444+0000 mon.b (mon.1) 131 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-13"}]: dispatch
2026-03-10T07:28:54.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:53 vm00 bash[20701]: audit 2026-03-10T07:28:52.993329+0000 mgr.y (mgr.24407) 156 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:28:54.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:53 vm00 bash[20701]: audit 2026-03-10T07:28:52.995127+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-13"}]: dispatch
2026-03-10T07:28:54.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:53 vm00 bash[20701]: audit 2026-03-10T07:28:53.095954+0000 mon.c (mon.2) 144 : audit [INF] from='client.? 192.168.123.100:0/3964576551' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59637-12","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:54.137 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:53 vm00 bash[20701]: audit 2026-03-10T07:28:53.097663+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59637-12","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:54.137 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:53 vm00 bash[28005]: cluster 2026-03-10T07:28:52.374081+0000 osd.3 (osd.3) 7 : cluster [DBG] 11.8 deep-scrub starts
2026-03-10T07:28:54.137 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:53 vm00 bash[28005]: cluster 2026-03-10T07:28:52.427363+0000 osd.3 (osd.3) 8 : cluster [DBG] 11.8 deep-scrub ok
2026-03-10T07:28:54.137 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:53 vm00 bash[28005]: cluster 2026-03-10T07:28:52.586849+0000 mgr.y (mgr.24407) 155 : cluster [DBG] pgmap v133: 492 pgs: 72 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 69 op/s
2026-03-10T07:28:54.137 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:53 vm00 bash[28005]: audit 2026-03-10T07:28:52.870213+0000 mon.a (mon.0) 1308 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:54.137 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:53 vm00 bash[28005]: audit 2026-03-10T07:28:52.939968+0000 mon.a (mon.0) 1309 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:28:54.137 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:53 vm00 bash[28005]: cluster 2026-03-10T07:28:52.950198+0000 mon.a (mon.0) 1310 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in
2026-03-10T07:28:54.137 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:53 vm00 bash[28005]: audit 2026-03-10T07:28:52.971444+0000 mon.b (mon.1) 131 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-13"}]: dispatch
2026-03-10T07:28:54.137 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:53 vm00 bash[28005]: audit 2026-03-10T07:28:52.993329+0000 mgr.y (mgr.24407) 156 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:28:54.137 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:53 vm00 bash[28005]: audit 2026-03-10T07:28:52.995127+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-13"}]: dispatch
2026-03-10T07:28:54.137 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:53 vm00 bash[28005]: audit 2026-03-10T07:28:53.095954+0000 mon.c (mon.2) 144 : audit [INF] from='client.? 192.168.123.100:0/3964576551' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59637-12","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:54.137 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:53 vm00 bash[28005]: audit 2026-03-10T07:28:53.097663+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59637-12","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:55.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:54 vm03 bash[23382]: cluster 2026-03-10T07:28:53.343325+0000 osd.3 (osd.3) 9 : cluster [DBG] 11.9 deep-scrub starts
2026-03-10T07:28:55.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:54 vm03 bash[23382]: cluster 2026-03-10T07:28:53.344731+0000 osd.3 (osd.3) 10 : cluster [DBG] 11.9 deep-scrub ok
2026-03-10T07:28:55.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:54 vm03 bash[23382]: audit 2026-03-10T07:28:53.871148+0000 mon.a (mon.0) 1313 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:55.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:54 vm03 bash[23382]: audit 2026-03-10T07:28:54.028142+0000 mon.c (mon.2) 145 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:28:55.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:54 vm03 bash[23382]: audit 2026-03-10T07:28:54.050873+0000 mon.a (mon.0) 1314 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-13"}]': finished
2026-03-10T07:28:55.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:54 vm03 bash[23382]: audit 2026-03-10T07:28:54.050910+0000 mon.a (mon.0) 1315 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59637-12","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:55.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:54 vm03 bash[23382]: audit 2026-03-10T07:28:54.065706+0000 mon.b (mon.1) 132 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-13", "mode": "writeback"}]: dispatch
2026-03-10T07:28:55.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:54 vm03 bash[23382]: cluster 2026-03-10T07:28:54.067164+0000 mon.a (mon.0) 1316 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in
2026-03-10T07:28:55.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:54 vm03 bash[23382]: audit 2026-03-10T07:28:54.069412+0000 mon.b (mon.1) 133 : audit [INF] from='client.? 192.168.123.100:0/3674289263' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm00-59629-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:55.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:54 vm03 bash[23382]: audit 2026-03-10T07:28:54.083488+0000 mon.a (mon.0) 1317 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-13", "mode": "writeback"}]: dispatch
2026-03-10T07:28:55.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:54 vm03 bash[23382]: audit 2026-03-10T07:28:54.083986+0000 mon.a (mon.0) 1318 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm00-59629-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
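The osd tier entries above trace the standard three-step cache-tier setup performed by the test: attach the cache pool as a tier of the base pool, route client I/O to it with set-overlay, then switch it to writeback mode. A rough equivalent through the ceph CLI (pool names are hypothetical stand-ins for the test-rados-api-* pools; assumes a ceph binary and admin keyring are available):

import subprocess

base, cache = "base-pool", "cache-pool"  # hypothetical pool names

for args in (
    ["osd", "tier", "add", base, cache, "--force-nonempty"],  # attach tier
    ["osd", "tier", "set-overlay", base, cache],              # redirect client I/O
    ["osd", "tier", "cache-mode", cache, "writeback"],        # enable writeback caching
):
    subprocess.run(["ceph", *args], check=True)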
2026-03-10T07:28:55.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:54 vm00 bash[20701]: cluster 2026-03-10T07:28:53.343325+0000 osd.3 (osd.3) 9 : cluster [DBG] 11.9 deep-scrub starts
2026-03-10T07:28:55.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:54 vm00 bash[20701]: cluster 2026-03-10T07:28:53.344731+0000 osd.3 (osd.3) 10 : cluster [DBG] 11.9 deep-scrub ok
2026-03-10T07:28:55.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:54 vm00 bash[20701]: audit 2026-03-10T07:28:53.871148+0000 mon.a (mon.0) 1313 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:55.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:54 vm00 bash[20701]: audit 2026-03-10T07:28:54.028142+0000 mon.c (mon.2) 145 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:28:55.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:54 vm00 bash[20701]: audit 2026-03-10T07:28:54.050873+0000 mon.a (mon.0) 1314 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-13"}]': finished
2026-03-10T07:28:55.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:54 vm00 bash[20701]: audit 2026-03-10T07:28:54.050910+0000 mon.a (mon.0) 1315 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59637-12","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:55.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:54 vm00 bash[20701]: audit 2026-03-10T07:28:54.065706+0000 mon.b (mon.1) 132 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-13", "mode": "writeback"}]: dispatch
2026-03-10T07:28:55.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:54 vm00 bash[20701]: cluster 2026-03-10T07:28:54.067164+0000 mon.a (mon.0) 1316 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in
2026-03-10T07:28:55.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:54 vm00 bash[20701]: audit 2026-03-10T07:28:54.069412+0000 mon.b (mon.1) 133 : audit [INF] from='client.? 192.168.123.100:0/3674289263' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm00-59629-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:55.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:54 vm00 bash[20701]: audit 2026-03-10T07:28:54.083488+0000 mon.a (mon.0) 1317 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-13", "mode": "writeback"}]: dispatch
2026-03-10T07:28:55.142 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:54 vm00 bash[20701]: audit 2026-03-10T07:28:54.083986+0000 mon.a (mon.0) 1318 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm00-59629-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:55.142 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:54 vm00 bash[28005]: cluster 2026-03-10T07:28:53.343325+0000 osd.3 (osd.3) 9 : cluster [DBG] 11.9 deep-scrub starts
2026-03-10T07:28:55.143 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:54 vm00 bash[28005]: cluster 2026-03-10T07:28:53.344731+0000 osd.3 (osd.3) 10 : cluster [DBG] 11.9 deep-scrub ok
2026-03-10T07:28:55.143 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:54 vm00 bash[28005]: audit 2026-03-10T07:28:53.871148+0000 mon.a (mon.0) 1313 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:55.143 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:54 vm00 bash[28005]: audit 2026-03-10T07:28:54.028142+0000 mon.c (mon.2) 145 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:28:55.143 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:54 vm00 bash[28005]: audit 2026-03-10T07:28:54.050873+0000 mon.a (mon.0) 1314 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-13"}]': finished
2026-03-10T07:28:55.143 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:54 vm00 bash[28005]: audit 2026-03-10T07:28:54.050910+0000 mon.a (mon.0) 1315 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm00-59637-12","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:55.143 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:54 vm00 bash[28005]: audit 2026-03-10T07:28:54.065706+0000 mon.b (mon.1) 132 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-13", "mode": "writeback"}]: dispatch
2026-03-10T07:28:55.143 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:54 vm00 bash[28005]: cluster 2026-03-10T07:28:54.067164+0000 mon.a (mon.0) 1316 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in
2026-03-10T07:28:55.143 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:54 vm00 bash[28005]: audit 2026-03-10T07:28:54.069412+0000 mon.b (mon.1) 133 : audit [INF] from='client.? 192.168.123.100:0/3674289263' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm00-59629-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:55.143 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:54 vm00 bash[28005]: audit 2026-03-10T07:28:54.083488+0000 mon.a (mon.0) 1317 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-13", "mode": "writeback"}]: dispatch
2026-03-10T07:28:55.143 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:54 vm00 bash[28005]: audit 2026-03-10T07:28:54.083986+0000 mon.a (mon.0) 1318 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm00-59629-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:56.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:55 vm03 bash[23382]: cluster 2026-03-10T07:28:54.587324+0000 mgr.y (mgr.24407) 157 : cluster [DBG] pgmap v136: 524 pgs: 104 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:28:56.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:55 vm03 bash[23382]: audit 2026-03-10T07:28:54.874592+0000 mon.a (mon.0) 1319 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:56.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:55 vm03 bash[23382]: cluster 2026-03-10T07:28:55.052315+0000 mon.a (mon.0) 1320 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:28:56.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:55 vm03 bash[23382]: audit 2026-03-10T07:28:55.070343+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-13", "mode": "writeback"}]': finished
2026-03-10T07:28:56.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:55 vm03 bash[23382]: audit 2026-03-10T07:28:55.070398+0000 mon.a (mon.0) 1322 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm00-59629-17","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:56.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:55 vm03 bash[23382]: cluster 2026-03-10T07:28:55.081108+0000 mon.a (mon.0) 1323 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in
2026-03-10T07:28:56.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:55 vm00 bash[20701]: cluster 2026-03-10T07:28:54.587324+0000 mgr.y (mgr.24407) 157 : cluster [DBG] pgmap v136: 524 pgs: 104 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:28:56.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:55 vm00 bash[20701]: audit 2026-03-10T07:28:54.874592+0000 mon.a (mon.0) 1319 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:56.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:55 vm00 bash[20701]: cluster 2026-03-10T07:28:55.052315+0000 mon.a (mon.0) 1320 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:28:56.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:55 vm00 bash[20701]: audit 2026-03-10T07:28:55.070343+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-13", "mode": "writeback"}]': finished
2026-03-10T07:28:56.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:55 vm00 bash[20701]: audit 2026-03-10T07:28:55.070398+0000 mon.a (mon.0) 1322 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm00-59629-17","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:56.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:55 vm00 bash[20701]: cluster 2026-03-10T07:28:55.081108+0000 mon.a (mon.0) 1323 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in
2026-03-10T07:28:56.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:55 vm00 bash[28005]: cluster 2026-03-10T07:28:54.587324+0000 mgr.y (mgr.24407) 157 : cluster [DBG] pgmap v136: 524 pgs: 104 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:28:56.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:55 vm00 bash[28005]: audit 2026-03-10T07:28:54.874592+0000 mon.a (mon.0) 1319 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:56.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:55 vm00 bash[28005]: cluster 2026-03-10T07:28:55.052315+0000 mon.a (mon.0) 1320 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:28:56.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:55 vm00 bash[28005]: audit 2026-03-10T07:28:55.070343+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-13", "mode": "writeback"}]': finished
2026-03-10T07:28:56.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:55 vm00 bash[28005]: audit 2026-03-10T07:28:55.070398+0000 mon.a (mon.0) 1322 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm00-59629-17","app": "rados","yes_i_really_mean_it": true}]': finished
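The CACHE_POOL_NO_HIT_SET health warning above fires because the pool just switched to writeback caching has no HitSet tracking configured. The check clears once hit_set parameters are set on the cache pool; a sketch using real pool options but a hypothetical pool name and illustrative values:

import subprocess

cache = "cache-pool"  # hypothetical cache-tier pool
for opt, val in (("hit_set_type", "bloom"),   # bloom-filter HitSets
                 ("hit_set_count", "8"),      # how many HitSets to keep
                 ("hit_set_period", "60")):   # seconds covered by each HitSet
    subprocess.run(["ceph", "osd", "pool", "set", cache, opt, val], check=True)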
2026-03-10T07:28:56.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:55 vm00 bash[28005]: cluster 2026-03-10T07:28:55.081108+0000 mon.a (mon.0) 1323 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in
2026-03-10T07:28:57.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:56 vm00 bash[20701]: audit 2026-03-10T07:28:55.878424+0000 mon.a (mon.0) 1324 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:57.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:56 vm00 bash[20701]: cluster 2026-03-10T07:28:55.897581+0000 mon.a (mon.0) 1325 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:28:57.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:56 vm00 bash[20701]: cluster 2026-03-10T07:28:55.941167+0000 mon.a (mon.0) 1326 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in
2026-03-10T07:28:57.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:56 vm00 bash[20701]: audit 2026-03-10T07:28:56.382151+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.0"}]: dispatch
2026-03-10T07:28:57.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:56 vm00 bash[20701]: audit 2026-03-10T07:28:56.382897+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.1"}]: dispatch
2026-03-10T07:28:57.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:56 vm00 bash[20701]: audit 2026-03-10T07:28:56.383706+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.2"}]: dispatch
2026-03-10T07:28:57.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:56 vm00 bash[20701]: audit 2026-03-10T07:28:56.384363+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.3"}]: dispatch
2026-03-10T07:28:57.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:56 vm00 bash[20701]: audit 2026-03-10T07:28:56.385249+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.4"}]: dispatch
2026-03-10T07:28:57.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:56 vm00 bash[20701]: audit 2026-03-10T07:28:56.386050+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.5"}]: dispatch
2026-03-10T07:28:57.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:56 vm00 bash[20701]: audit 2026-03-10T07:28:56.386815+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.6"}]: dispatch
2026-03-10T07:28:57.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:56 vm00 bash[20701]: audit 2026-03-10T07:28:56.387462+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.7"}]: dispatch
2026-03-10T07:28:57.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:56 vm00 bash[20701]: audit 2026-03-10T07:28:56.388200+0000 mon.b (mon.1) 142 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.8"}]: dispatch
2026-03-10T07:28:57.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:56 vm00 bash[20701]: audit 2026-03-10T07:28:56.388790+0000 mon.b (mon.1) 143 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.9"}]: dispatch
2026-03-10T07:28:57.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:56 vm00 bash[28005]: audit 2026-03-10T07:28:55.878424+0000 mon.a (mon.0) 1324 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:57.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:56 vm00 bash[28005]: cluster 2026-03-10T07:28:55.897581+0000 mon.a (mon.0) 1325 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:28:57.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:56 vm00 bash[28005]: cluster 2026-03-10T07:28:55.941167+0000 mon.a (mon.0) 1326 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in
2026-03-10T07:28:57.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:56 vm00 bash[28005]: audit 2026-03-10T07:28:56.382151+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.0"}]: dispatch
2026-03-10T07:28:57.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:56 vm00 bash[28005]: audit 2026-03-10T07:28:56.382897+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.1"}]: dispatch
2026-03-10T07:28:57.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:56 vm00 bash[28005]: audit 2026-03-10T07:28:56.383706+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.2"}]: dispatch
2026-03-10T07:28:57.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:56 vm00 bash[28005]: audit 2026-03-10T07:28:56.384363+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.3"}]: dispatch
2026-03-10T07:28:57.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:56 vm00 bash[28005]: audit 2026-03-10T07:28:56.385249+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.4"}]: dispatch
2026-03-10T07:28:57.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:56 vm00 bash[28005]: audit 2026-03-10T07:28:56.386050+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.5"}]: dispatch
2026-03-10T07:28:57.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:56 vm00 bash[28005]: audit 2026-03-10T07:28:56.386815+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.6"}]: dispatch
2026-03-10T07:28:57.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:56 vm00 bash[28005]: audit 2026-03-10T07:28:56.387462+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.7"}]: dispatch
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.7"}]: dispatch 2026-03-10T07:28:57.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:56 vm00 bash[28005]: audit 2026-03-10T07:28:56.388200+0000 mon.b (mon.1) 142 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.8"}]: dispatch 2026-03-10T07:28:57.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:56 vm00 bash[28005]: audit 2026-03-10T07:28:56.388200+0000 mon.b (mon.1) 142 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.8"}]: dispatch 2026-03-10T07:28:57.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:56 vm00 bash[28005]: audit 2026-03-10T07:28:56.388790+0000 mon.b (mon.1) 143 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.9"}]: dispatch 2026-03-10T07:28:57.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:56 vm00 bash[28005]: audit 2026-03-10T07:28:56.388790+0000 mon.b (mon.1) 143 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.9"}]: dispatch 2026-03-10T07:28:57.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:55.878424+0000 mon.a (mon.0) 1324 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:57.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:55.878424+0000 mon.a (mon.0) 1324 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:57.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: cluster 2026-03-10T07:28:55.897581+0000 mon.a (mon.0) 1325 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:57.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: cluster 2026-03-10T07:28:55.897581+0000 mon.a (mon.0) 1325 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:28:57.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: cluster 2026-03-10T07:28:55.941167+0000 mon.a (mon.0) 1326 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-10T07:28:57.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: cluster 2026-03-10T07:28:55.941167+0000 mon.a (mon.0) 1326 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-10T07:28:57.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:56.382151+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.0"}]: dispatch 2026-03-10T07:28:57.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:56.382151+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.0"}]: dispatch 2026-03-10T07:28:57.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:56.382897+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.1"}]: dispatch 2026-03-10T07:28:57.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:56.382897+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.1"}]: dispatch 2026-03-10T07:28:57.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:56.383706+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.2"}]: dispatch 2026-03-10T07:28:57.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:56.383706+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.2"}]: dispatch 2026-03-10T07:28:57.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:56.384363+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.3"}]: dispatch 2026-03-10T07:28:57.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:56.384363+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.3"}]: dispatch 2026-03-10T07:28:57.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:56.385249+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.4"}]: dispatch 2026-03-10T07:28:57.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:56.385249+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.4"}]: dispatch 2026-03-10T07:28:57.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:56.386050+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.5"}]: dispatch 2026-03-10T07:28:57.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:56.386050+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.5"}]: dispatch 2026-03-10T07:28:57.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:56.386815+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.6"}]: dispatch 2026-03-10T07:28:57.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:56.386815+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.6"}]: dispatch 2026-03-10T07:28:57.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:56.387462+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.7"}]: dispatch 2026-03-10T07:28:57.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:56.387462+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.7"}]: dispatch 2026-03-10T07:28:57.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:56.388200+0000 mon.b (mon.1) 142 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.8"}]: dispatch 2026-03-10T07:28:57.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:56.388200+0000 mon.b (mon.1) 142 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.8"}]: dispatch 2026-03-10T07:28:57.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:56.388790+0000 mon.b (mon.1) 143 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.9"}]: dispatch 2026-03-10T07:28:57.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:56 vm03 bash[23382]: audit 2026-03-10T07:28:56.388790+0000 mon.b (mon.1) 143 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "164.9"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.383714+0000 mgr.y (mgr.24407) 158 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.0"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.383714+0000 mgr.y (mgr.24407) 158 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.0"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.384372+0000 mgr.y (mgr.24407) 159 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.1"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.384372+0000 mgr.y (mgr.24407) 159 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.1"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.385172+0000 mgr.y (mgr.24407) 160 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.2"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.385172+0000 mgr.y (mgr.24407) 160 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.2"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.385944+0000 mgr.y (mgr.24407) 161 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.3"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.385944+0000 mgr.y (mgr.24407) 161 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "164.3"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.386762+0000 mgr.y (mgr.24407) 162 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.4"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.386762+0000 mgr.y (mgr.24407) 162 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.4"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.387569+0000 mgr.y (mgr.24407) 163 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.5"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.387569+0000 mgr.y (mgr.24407) 163 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.5"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.388291+0000 mgr.y (mgr.24407) 164 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.6"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.388291+0000 mgr.y (mgr.24407) 164 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.6"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.388965+0000 mgr.y (mgr.24407) 165 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.7"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.388965+0000 mgr.y (mgr.24407) 165 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.7"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.389664+0000 mgr.y (mgr.24407) 166 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.8"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.389664+0000 mgr.y (mgr.24407) 166 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.8"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.390233+0000 mgr.y (mgr.24407) 167 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.9"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.390233+0000 mgr.y (mgr.24407) 167 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.9"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: cluster 2026-03-10T07:28:56.415007+0000 osd.2 (osd.2) 7 : cluster [DBG] 164.2 scrub starts 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.383714+0000 mgr.y (mgr.24407) 158 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "164.0"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.383714+0000 mgr.y (mgr.24407) 158 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.0"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.384372+0000 mgr.y (mgr.24407) 159 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.1"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.384372+0000 mgr.y (mgr.24407) 159 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.1"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.385172+0000 mgr.y (mgr.24407) 160 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.2"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.385172+0000 mgr.y (mgr.24407) 160 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.2"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.385944+0000 mgr.y (mgr.24407) 161 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.3"}]: dispatch 2026-03-10T07:28:58.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.385944+0000 mgr.y (mgr.24407) 161 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.3"}]: dispatch 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.386762+0000 mgr.y (mgr.24407) 162 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.4"}]: dispatch 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.386762+0000 mgr.y (mgr.24407) 162 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.4"}]: dispatch 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.387569+0000 mgr.y (mgr.24407) 163 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.5"}]: dispatch 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.387569+0000 mgr.y (mgr.24407) 163 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.5"}]: dispatch 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.388291+0000 mgr.y (mgr.24407) 164 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.6"}]: dispatch 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.388291+0000 mgr.y (mgr.24407) 164 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "164.6"}]: dispatch 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.388965+0000 mgr.y (mgr.24407) 165 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.7"}]: dispatch 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.388965+0000 mgr.y (mgr.24407) 165 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.7"}]: dispatch 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.389664+0000 mgr.y (mgr.24407) 166 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.8"}]: dispatch 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.389664+0000 mgr.y (mgr.24407) 166 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.8"}]: dispatch 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.390233+0000 mgr.y (mgr.24407) 167 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.9"}]: dispatch 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.390233+0000 mgr.y (mgr.24407) 167 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.9"}]: dispatch 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:56.415007+0000 osd.2 (osd.2) 7 : cluster [DBG] 164.2 scrub starts 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:56.415007+0000 osd.2 (osd.2) 7 : cluster [DBG] 164.2 scrub starts 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:56.416348+0000 osd.2 (osd.2) 8 : cluster [DBG] 164.2 scrub ok 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:56.416348+0000 osd.2 (osd.2) 8 : cluster [DBG] 164.2 scrub ok 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:56.555535+0000 osd.1 (osd.1) 7 : cluster [DBG] 164.6 scrub starts 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:56.555535+0000 osd.1 (osd.1) 7 : cluster [DBG] 164.6 scrub starts 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:56.558023+0000 osd.1 (osd.1) 8 : cluster [DBG] 164.6 scrub ok 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:56.558023+0000 osd.1 (osd.1) 8 : cluster [DBG] 164.6 scrub ok 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:56.587734+0000 mgr.y (mgr.24407) 168 : cluster [DBG] pgmap v139: 460 pgs: 1 active+clean+snaptrim, 459 active+clean; 217 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 316 KiB/s wr, 59 op/s 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:56.587734+0000 mgr.y 
(mgr.24407) 168 : cluster [DBG] pgmap v139: 460 pgs: 1 active+clean+snaptrim, 459 active+clean; 217 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 316 KiB/s wr, 59 op/s 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.880030+0000 mon.a (mon.0) 1327 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.880030+0000 mon.a (mon.0) 1327 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:56.963835+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:56.963835+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.970182+0000 mon.a (mon.0) 1329 : audit [INF] from='client.? 192.168.123.100:0/1723726902' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59629-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.970182+0000 mon.a (mon.0) 1329 : audit [INF] from='client.? 192.168.123.100:0/1723726902' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59629-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.970209+0000 mon.b (mon.1) 144 : audit [INF] from='client.? 192.168.123.100:0/4156574861' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59637-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.970209+0000 mon.b (mon.1) 144 : audit [INF] from='client.? 192.168.123.100:0/4156574861' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59637-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:56.972137+0000 osd.4 (osd.4) 3 : cluster [DBG] 164.3 scrub starts 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:56.972137+0000 osd.4 (osd.4) 3 : cluster [DBG] 164.3 scrub starts 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.974563+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59637-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: audit 2026-03-10T07:28:56.974563+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59637-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:57.044625+0000 osd.4 (osd.4) 4 : cluster [DBG] 164.3 scrub ok 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:57.044625+0000 osd.4 (osd.4) 4 : cluster [DBG] 164.3 scrub ok 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:57.346187+0000 osd.5 (osd.5) 7 : cluster [DBG] 164.7 scrub starts 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:57.346187+0000 osd.5 (osd.5) 7 : cluster [DBG] 164.7 scrub starts 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:57.348559+0000 osd.3 (osd.3) 11 : cluster [DBG] 164.1 scrub starts 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:57.348559+0000 osd.3 (osd.3) 11 : cluster [DBG] 164.1 scrub starts 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:57.348671+0000 osd.5 (osd.5) 8 : cluster [DBG] 164.7 scrub ok 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:57.348671+0000 osd.5 (osd.5) 8 : cluster [DBG] 164.7 scrub ok 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:57.349739+0000 osd.3 (osd.3) 12 : cluster [DBG] 164.1 scrub ok 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:57 vm00 bash[28005]: cluster 2026-03-10T07:28:57.349739+0000 osd.3 (osd.3) 12 : cluster [DBG] 164.1 scrub ok 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: cluster 2026-03-10T07:28:56.415007+0000 osd.2 (osd.2) 7 : cluster [DBG] 164.2 scrub starts 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: cluster 2026-03-10T07:28:56.416348+0000 osd.2 (osd.2) 8 : cluster [DBG] 164.2 scrub ok 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: cluster 2026-03-10T07:28:56.416348+0000 osd.2 (osd.2) 8 : cluster [DBG] 164.2 scrub ok 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: cluster 2026-03-10T07:28:56.555535+0000 osd.1 (osd.1) 7 : cluster [DBG] 164.6 scrub starts 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: cluster 2026-03-10T07:28:56.555535+0000 osd.1 (osd.1) 7 : cluster [DBG] 164.6 scrub starts 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: cluster 2026-03-10T07:28:56.558023+0000 osd.1 (osd.1) 8 : cluster [DBG] 164.6 scrub ok 2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: cluster 2026-03-10T07:28:56.587734+0000 mgr.y (mgr.24407) 168 : cluster [DBG] pgmap v139: 460 pgs: 1 active+clean+snaptrim, 459 active+clean; 217 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 316 KiB/s wr, 59 op/s
2026-03-10T07:28:58.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.880030+0000 mon.a (mon.0) 1327 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:58.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: cluster 2026-03-10T07:28:56.963835+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in
2026-03-10T07:28:58.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.970182+0000 mon.a (mon.0) 1329 : audit [INF] from='client.? 192.168.123.100:0/1723726902' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59629-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:58.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.970209+0000 mon.b (mon.1) 144 : audit [INF] from='client.? 192.168.123.100:0/4156574861' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59637-13","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:58.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: cluster 2026-03-10T07:28:56.972137+0000 osd.4 (osd.4) 3 : cluster [DBG] 164.3 scrub starts
2026-03-10T07:28:58.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: audit 2026-03-10T07:28:56.974563+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59637-13","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:58.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: cluster 2026-03-10T07:28:57.044625+0000 osd.4 (osd.4) 4 : cluster [DBG] 164.3 scrub ok
2026-03-10T07:28:58.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: cluster 2026-03-10T07:28:57.346187+0000 osd.5 (osd.5) 7 : cluster [DBG] 164.7 scrub starts
2026-03-10T07:28:58.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: cluster 2026-03-10T07:28:57.348559+0000 osd.3 (osd.3) 11 : cluster [DBG] 164.1 scrub starts
2026-03-10T07:28:58.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: cluster 2026-03-10T07:28:57.348671+0000 osd.5 (osd.5) 8 : cluster [DBG] 164.7 scrub ok
2026-03-10T07:28:58.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:57 vm00 bash[20701]: cluster 2026-03-10T07:28:57.349739+0000 osd.3 (osd.3) 12 : cluster [DBG] 164.1 scrub ok
2026-03-10T07:28:58.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: audit 2026-03-10T07:28:56.383714+0000 mgr.y (mgr.24407) 158 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.0"}]: dispatch
2026-03-10T07:28:58.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: audit 2026-03-10T07:28:56.384372+0000 mgr.y (mgr.24407) 159 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.1"}]: dispatch
2026-03-10T07:28:58.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: audit 2026-03-10T07:28:56.385172+0000 mgr.y (mgr.24407) 160 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.2"}]: dispatch
2026-03-10T07:28:58.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: audit 2026-03-10T07:28:56.385944+0000 mgr.y (mgr.24407) 161 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.3"}]: dispatch
2026-03-10T07:28:58.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: audit 2026-03-10T07:28:56.386762+0000 mgr.y (mgr.24407) 162 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.4"}]: dispatch
2026-03-10T07:28:58.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: audit 2026-03-10T07:28:56.387569+0000 mgr.y (mgr.24407) 163 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.5"}]: dispatch
2026-03-10T07:28:58.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: audit 2026-03-10T07:28:56.388291+0000 mgr.y (mgr.24407) 164 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.6"}]: dispatch
2026-03-10T07:28:58.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: audit 2026-03-10T07:28:56.388965+0000 mgr.y (mgr.24407) 165 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.7"}]: dispatch
2026-03-10T07:28:58.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: audit 2026-03-10T07:28:56.389664+0000 mgr.y (mgr.24407) 166 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.8"}]: dispatch
2026-03-10T07:28:58.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: audit 2026-03-10T07:28:56.390233+0000 mgr.y (mgr.24407) 167 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "164.9"}]: dispatch
2026-03-10T07:28:58.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: cluster 2026-03-10T07:28:56.415007+0000 osd.2 (osd.2) 7 : cluster [DBG] 164.2 scrub starts
2026-03-10T07:28:58.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: cluster 2026-03-10T07:28:56.416348+0000 osd.2 (osd.2) 8 : cluster [DBG] 164.2 scrub ok
2026-03-10T07:28:58.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: cluster 2026-03-10T07:28:56.555535+0000 osd.1 (osd.1) 7 : cluster [DBG] 164.6 scrub starts
2026-03-10T07:28:58.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: cluster 2026-03-10T07:28:56.558023+0000 osd.1 (osd.1) 8 : cluster [DBG] 164.6 scrub ok
2026-03-10T07:28:58.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: cluster 2026-03-10T07:28:56.587734+0000 mgr.y (mgr.24407) 168 : cluster [DBG] pgmap v139: 460 pgs: 1 active+clean+snaptrim, 459 active+clean; 217 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 316 KiB/s wr, 59 op/s
2026-03-10T07:28:58.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: audit 2026-03-10T07:28:56.880030+0000 mon.a (mon.0) 1327 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:58.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: cluster 2026-03-10T07:28:56.963835+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in
2026-03-10T07:28:58.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: audit 2026-03-10T07:28:56.970182+0000 mon.a (mon.0) 1329 : audit [INF] from='client.? 192.168.123.100:0/1723726902' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59629-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:58.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: audit 2026-03-10T07:28:56.970209+0000 mon.b (mon.1) 144 : audit [INF] from='client.? 192.168.123.100:0/4156574861' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59637-13","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:58.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: cluster 2026-03-10T07:28:56.972137+0000 osd.4 (osd.4) 3 : cluster [DBG] 164.3 scrub starts
2026-03-10T07:28:58.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: audit 2026-03-10T07:28:56.974563+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59637-13","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:28:58.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: cluster 2026-03-10T07:28:57.044625+0000 osd.4 (osd.4) 4 : cluster [DBG] 164.3 scrub ok
2026-03-10T07:28:58.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: cluster 2026-03-10T07:28:57.346187+0000 osd.5 (osd.5) 7 : cluster [DBG] 164.7 scrub starts
2026-03-10T07:28:58.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: cluster 2026-03-10T07:28:57.348559+0000 osd.3 (osd.3) 11 : cluster [DBG] 164.1 scrub starts
2026-03-10T07:28:58.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: cluster 2026-03-10T07:28:57.348671+0000 osd.5 (osd.5) 8 : cluster [DBG] 164.7 scrub ok
2026-03-10T07:28:58.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:57 vm03 bash[23382]: cluster 2026-03-10T07:28:57.349739+0000 osd.3 (osd.3) 12 : cluster [DBG] 164.1 scrub ok
2026-03-10T07:28:59.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:58 vm00 bash[20701]: cluster 2026-03-10T07:28:57.248654+0000 osd.6 (osd.6) 11 : cluster [DBG] 164.5 scrub starts
2026-03-10T07:28:59.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:58 vm00 bash[20701]: cluster 2026-03-10T07:28:57.251376+0000 osd.6 (osd.6) 12 : cluster [DBG] 164.5 scrub ok
2026-03-10T07:28:59.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:58 vm00 bash[20701]: cluster 2026-03-10T07:28:57.450518+0000 osd.2 (osd.2) 9 : cluster [DBG] 164.0 scrub starts
2026-03-10T07:28:59.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:58 vm00 bash[20701]: cluster 2026-03-10T07:28:57.451973+0000 osd.2 (osd.2) 10 : cluster [DBG] 164.0 scrub ok
2026-03-10T07:28:59.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:58 vm00 bash[20701]: audit 2026-03-10T07:28:57.882000+0000 mon.a (mon.0) 1331 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:59.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:58 vm00 bash[20701]: audit 2026-03-10T07:28:57.941812+0000 mon.a (mon.0) 1332 : audit [INF] from='client.? 192.168.123.100:0/1723726902' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59629-18","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:59.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:58 vm00 bash[20701]: audit 2026-03-10T07:28:57.941851+0000 mon.a (mon.0) 1333 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59637-13","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:59.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:58 vm00 bash[20701]: cluster 2026-03-10T07:28:57.957019+0000 mon.a (mon.0) 1334 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in
2026-03-10T07:28:59.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:58 vm00 bash[20701]: cluster 2026-03-10T07:28:58.311168+0000 osd.5 (osd.5) 9 : cluster [DBG] 164.9 scrub starts
2026-03-10T07:28:59.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:58 vm00 bash[20701]: cluster 2026-03-10T07:28:58.312690+0000 osd.5 (osd.5) 10 : cluster [DBG] 164.9 scrub ok
2026-03-10T07:28:59.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:58 vm00 bash[28005]: cluster 2026-03-10T07:28:57.248654+0000 osd.6 (osd.6) 11 : cluster [DBG] 164.5 scrub starts
2026-03-10T07:28:59.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:58 vm00 bash[28005]: cluster 2026-03-10T07:28:57.251376+0000 osd.6 (osd.6) 12 : cluster [DBG] 164.5 scrub ok
2026-03-10T07:28:59.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:58 vm00 bash[28005]: cluster 2026-03-10T07:28:57.450518+0000 osd.2 (osd.2) 9 : cluster [DBG] 164.0 scrub starts
2026-03-10T07:28:59.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:58 vm00 bash[28005]: cluster 2026-03-10T07:28:57.451973+0000 osd.2 (osd.2) 10 : cluster [DBG] 164.0 scrub ok
2026-03-10T07:28:59.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:58 vm00 bash[28005]: audit 2026-03-10T07:28:57.882000+0000 mon.a (mon.0) 1331 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:59.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:58 vm00 bash[28005]: audit 2026-03-10T07:28:57.941812+0000 mon.a (mon.0) 1332 : audit [INF] from='client.? 192.168.123.100:0/1723726902' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59629-18","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:59.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:58 vm00 bash[28005]: audit 2026-03-10T07:28:57.941851+0000 mon.a (mon.0) 1333 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59637-13","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:59.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:58 vm00 bash[28005]: cluster 2026-03-10T07:28:57.957019+0000 mon.a (mon.0) 1334 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in
2026-03-10T07:28:59.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:58 vm00 bash[28005]: cluster 2026-03-10T07:28:58.311168+0000 osd.5 (osd.5) 9 : cluster [DBG] 164.9 scrub starts
2026-03-10T07:28:59.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:58 vm00 bash[28005]: cluster 2026-03-10T07:28:58.312690+0000 osd.5 (osd.5) 10 : cluster [DBG] 164.9 scrub ok
2026-03-10T07:28:59.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:58 vm03 bash[23382]: cluster 2026-03-10T07:28:57.248654+0000 osd.6 (osd.6) 11 : cluster [DBG] 164.5 scrub starts
2026-03-10T07:28:59.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:58 vm03 bash[23382]: cluster 2026-03-10T07:28:57.251376+0000 osd.6 (osd.6) 12 : cluster [DBG] 164.5 scrub ok
2026-03-10T07:28:59.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:58 vm03 bash[23382]: cluster 2026-03-10T07:28:57.450518+0000 osd.2 (osd.2) 9 : cluster [DBG] 164.0 scrub starts
2026-03-10T07:28:59.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:58 vm03 bash[23382]: cluster 2026-03-10T07:28:57.451973+0000 osd.2 (osd.2) 10 : cluster [DBG] 164.0 scrub ok
2026-03-10T07:28:59.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:58 vm03 bash[23382]: audit 2026-03-10T07:28:57.882000+0000 mon.a (mon.0) 1331 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:28:59.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:58 vm03 bash[23382]: audit 2026-03-10T07:28:57.941812+0000 mon.a (mon.0) 1332 : audit [INF] from='client.? 192.168.123.100:0/1723726902' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm00-59629-18","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:59.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:58 vm03 bash[23382]: audit 2026-03-10T07:28:57.941851+0000 mon.a (mon.0) 1333 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm00-59637-13","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:28:59.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:58 vm03 bash[23382]: cluster 2026-03-10T07:28:57.957019+0000 mon.a (mon.0) 1334 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in
2026-03-10T07:28:59.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:58 vm03 bash[23382]: cluster 2026-03-10T07:28:58.311168+0000 osd.5 (osd.5) 9 : cluster [DBG] 164.9 scrub starts
2026-03-10T07:28:59.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:58 vm03 bash[23382]: cluster 2026-03-10T07:28:58.312690+0000 osd.5 (osd.5) 10 : cluster [DBG] 164.9 scrub ok
2026-03-10T07:29:00.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:59 vm00 bash[20701]: cluster 2026-03-10T07:28:58.206473+0000 osd.6 (osd.6) 13 : cluster [DBG] 164.4 scrub starts
2026-03-10T07:29:00.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:59 vm00 bash[20701]: cluster 2026-03-10T07:28:58.208077+0000 osd.6 (osd.6) 14 : cluster [DBG] 164.4 scrub ok
2026-03-10T07:29:00.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:59 vm00 bash[20701]: cluster 2026-03-10T07:28:58.588278+0000 mgr.y (mgr.24407) 169 : cluster [DBG] pgmap v142: 524 pgs: 96 unknown, 1 active+clean+snaptrim, 427 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 60 KiB/s wr, 58 op/s
2026-03-10T07:29:00.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:59 vm00 bash[20701]: audit 2026-03-10T07:28:58.891205+0000 mon.a (mon.0) 1335 : audit [DBG] from='client.?
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:00.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:59 vm00 bash[20701]: cluster 2026-03-10T07:28:58.966038+0000 mon.a (mon.0) 1336 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-10T07:29:00.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:59 vm00 bash[20701]: cluster 2026-03-10T07:28:58.966038+0000 mon.a (mon.0) 1336 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-10T07:29:00.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:59 vm00 bash[20701]: audit 2026-03-10T07:28:59.001491+0000 mon.b (mon.1) 145 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:00.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:59 vm00 bash[20701]: audit 2026-03-10T07:28:59.001491+0000 mon.b (mon.1) 145 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:00.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:59 vm00 bash[20701]: audit 2026-03-10T07:28:59.007381+0000 mon.a (mon.0) 1337 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:00.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:59 vm00 bash[20701]: audit 2026-03-10T07:28:59.007381+0000 mon.a (mon.0) 1337 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:00.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:59 vm00 bash[20701]: cluster 2026-03-10T07:28:59.323496+0000 osd.5 (osd.5) 11 : cluster [DBG] 164.8 deep-scrub starts 2026-03-10T07:29:00.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:59 vm00 bash[20701]: cluster 2026-03-10T07:28:59.323496+0000 osd.5 (osd.5) 11 : cluster [DBG] 164.8 deep-scrub starts 2026-03-10T07:29:00.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:59 vm00 bash[20701]: cluster 2026-03-10T07:28:59.324454+0000 osd.5 (osd.5) 12 : cluster [DBG] 164.8 deep-scrub ok 2026-03-10T07:29:00.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:28:59 vm00 bash[20701]: cluster 2026-03-10T07:28:59.324454+0000 osd.5 (osd.5) 12 : cluster [DBG] 164.8 deep-scrub ok 2026-03-10T07:29:00.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:59 vm00 bash[28005]: cluster 2026-03-10T07:28:58.206473+0000 osd.6 (osd.6) 13 : cluster [DBG] 164.4 scrub starts 2026-03-10T07:29:00.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:59 vm00 bash[28005]: cluster 2026-03-10T07:28:58.206473+0000 osd.6 (osd.6) 13 : cluster [DBG] 164.4 scrub starts 2026-03-10T07:29:00.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:59 vm00 bash[28005]: cluster 2026-03-10T07:28:58.208077+0000 osd.6 (osd.6) 14 : cluster [DBG] 164.4 scrub ok 2026-03-10T07:29:00.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:59 vm00 bash[28005]: cluster 2026-03-10T07:28:58.208077+0000 osd.6 (osd.6) 14 : cluster [DBG] 164.4 scrub ok 2026-03-10T07:29:00.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:59 vm00 bash[28005]: cluster 2026-03-10T07:28:58.588278+0000 
mgr.y (mgr.24407) 169 : cluster [DBG] pgmap v142: 524 pgs: 96 unknown, 1 active+clean+snaptrim, 427 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 60 KiB/s wr, 58 op/s 2026-03-10T07:29:00.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:59 vm00 bash[28005]: cluster 2026-03-10T07:28:58.588278+0000 mgr.y (mgr.24407) 169 : cluster [DBG] pgmap v142: 524 pgs: 96 unknown, 1 active+clean+snaptrim, 427 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 60 KiB/s wr, 58 op/s 2026-03-10T07:29:00.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:59 vm00 bash[28005]: audit 2026-03-10T07:28:58.891205+0000 mon.a (mon.0) 1335 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:00.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:59 vm00 bash[28005]: audit 2026-03-10T07:28:58.891205+0000 mon.a (mon.0) 1335 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:00.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:59 vm00 bash[28005]: cluster 2026-03-10T07:28:58.966038+0000 mon.a (mon.0) 1336 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-10T07:29:00.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:59 vm00 bash[28005]: cluster 2026-03-10T07:28:58.966038+0000 mon.a (mon.0) 1336 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-10T07:29:00.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:59 vm00 bash[28005]: audit 2026-03-10T07:28:59.001491+0000 mon.b (mon.1) 145 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:00.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:59 vm00 bash[28005]: audit 2026-03-10T07:28:59.001491+0000 mon.b (mon.1) 145 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:00.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:59 vm00 bash[28005]: audit 2026-03-10T07:28:59.007381+0000 mon.a (mon.0) 1337 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:00.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:59 vm00 bash[28005]: audit 2026-03-10T07:28:59.007381+0000 mon.a (mon.0) 1337 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:00.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:59 vm00 bash[28005]: cluster 2026-03-10T07:28:59.323496+0000 osd.5 (osd.5) 11 : cluster [DBG] 164.8 deep-scrub starts 2026-03-10T07:29:00.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:59 vm00 bash[28005]: cluster 2026-03-10T07:28:59.323496+0000 osd.5 (osd.5) 11 : cluster [DBG] 164.8 deep-scrub starts 2026-03-10T07:29:00.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:59 vm00 bash[28005]: cluster 2026-03-10T07:28:59.324454+0000 osd.5 (osd.5) 12 : cluster [DBG] 164.8 deep-scrub ok 2026-03-10T07:29:00.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:28:59 vm00 bash[28005]: cluster 2026-03-10T07:28:59.324454+0000 osd.5 (osd.5) 12 : cluster [DBG] 164.8 deep-scrub ok 2026-03-10T07:29:00.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:59 vm03 bash[23382]: cluster 2026-03-10T07:28:58.206473+0000 osd.6 (osd.6) 13 : cluster [DBG] 164.4 scrub starts 2026-03-10T07:29:00.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:59 vm03 bash[23382]: cluster 2026-03-10T07:28:58.206473+0000 osd.6 (osd.6) 13 : cluster [DBG] 164.4 scrub starts 2026-03-10T07:29:00.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:59 vm03 bash[23382]: cluster 2026-03-10T07:28:58.208077+0000 osd.6 (osd.6) 14 : cluster [DBG] 164.4 scrub ok 2026-03-10T07:29:00.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:59 vm03 bash[23382]: cluster 2026-03-10T07:28:58.208077+0000 osd.6 (osd.6) 14 : cluster [DBG] 164.4 scrub ok 2026-03-10T07:29:00.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:59 vm03 bash[23382]: cluster 2026-03-10T07:28:58.588278+0000 mgr.y (mgr.24407) 169 : cluster [DBG] pgmap v142: 524 pgs: 96 unknown, 1 active+clean+snaptrim, 427 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 60 KiB/s wr, 58 op/s 2026-03-10T07:29:00.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:59 vm03 bash[23382]: cluster 2026-03-10T07:28:58.588278+0000 mgr.y (mgr.24407) 169 : cluster [DBG] pgmap v142: 524 pgs: 96 unknown, 1 active+clean+snaptrim, 427 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 60 KiB/s wr, 58 op/s 2026-03-10T07:29:00.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:59 vm03 bash[23382]: audit 2026-03-10T07:28:58.891205+0000 mon.a (mon.0) 1335 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:00.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:59 vm03 bash[23382]: audit 2026-03-10T07:28:58.891205+0000 mon.a (mon.0) 1335 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:00.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:59 vm03 bash[23382]: cluster 2026-03-10T07:28:58.966038+0000 mon.a (mon.0) 1336 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-10T07:29:00.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:59 vm03 bash[23382]: cluster 2026-03-10T07:28:58.966038+0000 mon.a (mon.0) 1336 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-10T07:29:00.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:59 vm03 bash[23382]: audit 2026-03-10T07:28:59.001491+0000 mon.b (mon.1) 145 : audit [INF] from='client.? 
192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:00.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:59 vm03 bash[23382]: audit 2026-03-10T07:28:59.001491+0000 mon.b (mon.1) 145 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:00.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:59 vm03 bash[23382]: audit 2026-03-10T07:28:59.007381+0000 mon.a (mon.0) 1337 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:00.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:59 vm03 bash[23382]: audit 2026-03-10T07:28:59.007381+0000 mon.a (mon.0) 1337 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:00.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:59 vm03 bash[23382]: cluster 2026-03-10T07:28:59.323496+0000 osd.5 (osd.5) 11 : cluster [DBG] 164.8 deep-scrub starts 2026-03-10T07:29:00.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:59 vm03 bash[23382]: cluster 2026-03-10T07:28:59.323496+0000 osd.5 (osd.5) 11 : cluster [DBG] 164.8 deep-scrub starts 2026-03-10T07:29:00.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:59 vm03 bash[23382]: cluster 2026-03-10T07:28:59.324454+0000 osd.5 (osd.5) 12 : cluster [DBG] 164.8 deep-scrub ok 2026-03-10T07:29:00.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:28:59 vm03 bash[23382]: cluster 2026-03-10T07:28:59.324454+0000 osd.5 (osd.5) 12 : cluster [DBG] 164.8 deep-scrub ok 2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: Running main() from gmock_main.cc 2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [==========] Running 13 tests from 4 test suites. 2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [----------] Global test environment set-up. 
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshots
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.SnapList
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshots.SnapList (2278 ms)
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.SnapRemove
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshots.SnapRemove (2104 ms)
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.Rollback
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshots.Rollback (2091 ms)
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.SnapGetName
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshots.SnapGetName (2226 ms)
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshots (8699 ms total)
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots:
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [----------] 3 tests from LibRadosSnapshotsSelfManaged
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManaged.Snap
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManaged.Snap (4351 ms)
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManaged.Rollback
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManaged.Rollback (4277 ms)
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManaged.FutureSnapRollback
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManaged.FutureSnapRollback (4528 ms)
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [----------] 3 tests from LibRadosSnapshotsSelfManaged (13156 ms total)
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots:
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshotsEC
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.SnapList
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.SnapList (2994 ms)
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.SnapRemove
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.SnapRemove (2102 ms)
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.Rollback
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.Rollback (1740 ms)
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.SnapGetName
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.SnapGetName (2172 ms)
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshotsEC (9008 ms total)
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots:
2026-03-10T07:29:00.949 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [----------] 2 tests from LibRadosSnapshotsSelfManagedEC
2026-03-10T07:29:00.950 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManagedEC.Snap
2026-03-10T07:29:00.950 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManagedEC.Snap (3954 ms)
2026-03-10T07:29:00.950 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManagedEC.Rollback
2026-03-10T07:29:00.950 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManagedEC.Rollback (3887 ms)
2026-03-10T07:29:00.950 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [----------] 2 tests from LibRadosSnapshotsSelfManagedEC (7841 ms total)
2026-03-10T07:29:00.950 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots:
2026-03-10T07:29:00.950 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [----------] Global test environment tear-down
2026-03-10T07:29:00.950 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [==========] 13 tests from 4 test suites ran. (56791 ms total)
2026-03-10T07:29:00.950 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots: [ PASSED ] 13 tests.
2026-03-10T07:29:01.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:00 vm00 bash[20701]: audit 2026-03-10T07:28:59.892076+0000 mon.a (mon.0) 1338 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:00 vm00 bash[20701]: audit 2026-03-10T07:28:59.956960+0000 mon.a (mon.0) 1339 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]': finished
2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:00 vm00 bash[20701]: cluster 2026-03-10T07:28:59.963463+0000 mon.a (mon.0) 1340 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in
2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:00 vm00 bash[20701]: audit 2026-03-10T07:28:59.977084+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? 
192.168.123.100:0/1084927819' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59629-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:00 vm00 bash[20701]: audit 2026-03-10T07:28:59.977084+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? 192.168.123.100:0/1084927819' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59629-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:00 vm00 bash[20701]: audit 2026-03-10T07:28:59.991703+0000 mon.b (mon.1) 146 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:00 vm00 bash[20701]: audit 2026-03-10T07:28:59.991703+0000 mon.b (mon.1) 146 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:00 vm00 bash[20701]: audit 2026-03-10T07:28:59.994070+0000 mon.a (mon.0) 1342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:00 vm00 bash[20701]: audit 2026-03-10T07:28:59.994070+0000 mon.a (mon.0) 1342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:00 vm00 bash[20701]: audit 2026-03-10T07:29:00.076077+0000 mon.c (mon.2) 146 : audit [INF] from='client.? 192.168.123.100:0/890723974' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59637-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:00 vm00 bash[20701]: audit 2026-03-10T07:29:00.076077+0000 mon.c (mon.2) 146 : audit [INF] from='client.? 192.168.123.100:0/890723974' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59637-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:00 vm00 bash[20701]: audit 2026-03-10T07:29:00.079762+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59637-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:00 vm00 bash[20701]: audit 2026-03-10T07:29:00.079762+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59637-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:29:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:29:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:00 vm00 bash[28005]: audit 2026-03-10T07:28:59.892076+0000 mon.a (mon.0) 1338 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:00 vm00 bash[28005]: audit 2026-03-10T07:28:59.892076+0000 mon.a (mon.0) 1338 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:00 vm00 bash[28005]: audit 2026-03-10T07:28:59.956960+0000 mon.a (mon.0) 1339 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]': finished 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:00 vm00 bash[28005]: audit 2026-03-10T07:28:59.956960+0000 mon.a (mon.0) 1339 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]': finished 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:00 vm00 bash[28005]: cluster 2026-03-10T07:28:59.963463+0000 mon.a (mon.0) 1340 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:00 vm00 bash[28005]: cluster 2026-03-10T07:28:59.963463+0000 mon.a (mon.0) 1340 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:00 vm00 bash[28005]: audit 2026-03-10T07:28:59.977084+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? 192.168.123.100:0/1084927819' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59629-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:00 vm00 bash[28005]: audit 2026-03-10T07:28:59.977084+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? 192.168.123.100:0/1084927819' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59629-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:00 vm00 bash[28005]: audit 2026-03-10T07:28:59.991703+0000 mon.b (mon.1) 146 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:00 vm00 bash[28005]: audit 2026-03-10T07:28:59.991703+0000 mon.b (mon.1) 146 : audit [INF] from='client.? 
192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:00 vm00 bash[28005]: audit 2026-03-10T07:28:59.994070+0000 mon.a (mon.0) 1342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:00 vm00 bash[28005]: audit 2026-03-10T07:28:59.994070+0000 mon.a (mon.0) 1342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:00 vm00 bash[28005]: audit 2026-03-10T07:29:00.076077+0000 mon.c (mon.2) 146 : audit [INF] from='client.? 192.168.123.100:0/890723974' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59637-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:00 vm00 bash[28005]: audit 2026-03-10T07:29:00.076077+0000 mon.c (mon.2) 146 : audit [INF] from='client.? 192.168.123.100:0/890723974' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59637-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:00 vm00 bash[28005]: audit 2026-03-10T07:29:00.079762+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59637-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:01.136 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:00 vm00 bash[28005]: audit 2026-03-10T07:29:00.079762+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59637-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:01.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:00 vm03 bash[23382]: audit 2026-03-10T07:28:59.892076+0000 mon.a (mon.0) 1338 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:01.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:00 vm03 bash[23382]: audit 2026-03-10T07:28:59.892076+0000 mon.a (mon.0) 1338 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:01.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:00 vm03 bash[23382]: audit 2026-03-10T07:28:59.956960+0000 mon.a (mon.0) 1339 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]': finished 2026-03-10T07:29:01.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:00 vm03 bash[23382]: audit 2026-03-10T07:28:59.956960+0000 mon.a (mon.0) 1339 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]': finished 2026-03-10T07:29:01.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:00 vm03 bash[23382]: cluster 2026-03-10T07:28:59.963463+0000 mon.a (mon.0) 1340 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-10T07:29:01.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:00 vm03 bash[23382]: cluster 2026-03-10T07:28:59.963463+0000 mon.a (mon.0) 1340 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-10T07:29:01.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:00 vm03 bash[23382]: audit 2026-03-10T07:28:59.977084+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? 192.168.123.100:0/1084927819' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59629-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:01.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:00 vm03 bash[23382]: audit 2026-03-10T07:28:59.977084+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? 192.168.123.100:0/1084927819' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59629-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:01.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:00 vm03 bash[23382]: audit 2026-03-10T07:28:59.991703+0000 mon.b (mon.1) 146 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:01.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:00 vm03 bash[23382]: audit 2026-03-10T07:28:59.991703+0000 mon.b (mon.1) 146 : audit [INF] from='client.? 192.168.123.100:0/4092179648' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:01.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:00 vm03 bash[23382]: audit 2026-03-10T07:28:59.994070+0000 mon.a (mon.0) 1342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:01.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:00 vm03 bash[23382]: audit 2026-03-10T07:28:59.994070+0000 mon.a (mon.0) 1342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]: dispatch 2026-03-10T07:29:01.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:00 vm03 bash[23382]: audit 2026-03-10T07:29:00.076077+0000 mon.c (mon.2) 146 : audit [INF] from='client.? 192.168.123.100:0/890723974' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59637-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:01.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:00 vm03 bash[23382]: audit 2026-03-10T07:29:00.076077+0000 mon.c (mon.2) 146 : audit [INF] from='client.? 192.168.123.100:0/890723974' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59637-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:01.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:00 vm03 bash[23382]: audit 2026-03-10T07:29:00.079762+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59637-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:01.266 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:00 vm03 bash[23382]: audit 2026-03-10T07:29:00.079762+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59637-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:02 vm00 bash[20701]: cluster 2026-03-10T07:29:00.588744+0000 mgr.y (mgr.24407) 170 : cluster [DBG] pgmap v145: 516 pgs: 64 unknown, 452 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 55 KiB/s rd, 76 op/s 2026-03-10T07:29:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:02 vm00 bash[20701]: cluster 2026-03-10T07:29:00.588744+0000 mgr.y (mgr.24407) 170 : cluster [DBG] pgmap v145: 516 pgs: 64 unknown, 452 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 55 KiB/s rd, 76 op/s 2026-03-10T07:29:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:02 vm00 bash[20701]: audit 2026-03-10T07:29:00.892799+0000 mon.a (mon.0) 1344 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:02 vm00 bash[20701]: audit 2026-03-10T07:29:00.892799+0000 mon.a (mon.0) 1344 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:02 vm00 bash[20701]: cluster 2026-03-10T07:29:00.916506+0000 mon.a (mon.0) 1345 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:02 vm00 bash[20701]: cluster 2026-03-10T07:29:00.916506+0000 mon.a (mon.0) 1345 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:02 vm00 bash[20701]: audit 2026-03-10T07:29:00.920754+0000 mon.a (mon.0) 1346 : audit [INF] from='client.? 192.168.123.100:0/1084927819' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59629-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:02 vm00 bash[20701]: audit 2026-03-10T07:29:00.920754+0000 mon.a (mon.0) 1346 : audit [INF] from='client.? 192.168.123.100:0/1084927819' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59629-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:02 vm00 bash[20701]: audit 2026-03-10T07:29:00.920816+0000 mon.a (mon.0) 1347 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]': finished 2026-03-10T07:29:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:02 vm00 bash[20701]: audit 2026-03-10T07:29:00.920816+0000 mon.a (mon.0) 1347 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]': finished 2026-03-10T07:29:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:02 vm00 bash[20701]: audit 2026-03-10T07:29:00.920846+0000 mon.a (mon.0) 1348 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59637-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:02 vm00 bash[20701]: audit 2026-03-10T07:29:00.920846+0000 mon.a (mon.0) 1348 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59637-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:02 vm00 bash[20701]: cluster 2026-03-10T07:29:00.931287+0000 mon.a (mon.0) 1349 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-10T07:29:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:02 vm00 bash[20701]: cluster 2026-03-10T07:29:00.931287+0000 mon.a (mon.0) 1349 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-10T07:29:02.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:02 vm00 bash[28005]: cluster 2026-03-10T07:29:00.588744+0000 mgr.y (mgr.24407) 170 : cluster [DBG] pgmap v145: 516 pgs: 64 unknown, 452 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 55 KiB/s rd, 76 op/s 2026-03-10T07:29:02.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:02 vm00 bash[28005]: cluster 2026-03-10T07:29:00.588744+0000 mgr.y (mgr.24407) 170 : cluster [DBG] pgmap v145: 516 pgs: 64 unknown, 452 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 55 KiB/s rd, 76 op/s 2026-03-10T07:29:02.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:02 vm00 bash[28005]: audit 2026-03-10T07:29:00.892799+0000 mon.a (mon.0) 1344 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:02.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:02 vm00 bash[28005]: audit 2026-03-10T07:29:00.892799+0000 mon.a (mon.0) 1344 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:02.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:02 vm00 bash[28005]: cluster 2026-03-10T07:29:00.916506+0000 mon.a (mon.0) 1345 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:02.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:02 vm00 bash[28005]: cluster 2026-03-10T07:29:00.916506+0000 mon.a (mon.0) 1345 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:02.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:02 vm00 bash[28005]: audit 2026-03-10T07:29:00.920754+0000 mon.a (mon.0) 1346 : audit [INF] from='client.? 192.168.123.100:0/1084927819' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59629-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:02.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:02 vm00 bash[28005]: audit 2026-03-10T07:29:00.920754+0000 mon.a (mon.0) 1346 : audit [INF] from='client.? 
192.168.123.100:0/1084927819' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59629-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:02.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:02 vm00 bash[28005]: audit 2026-03-10T07:29:00.920816+0000 mon.a (mon.0) 1347 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]': finished 2026-03-10T07:29:02.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:02 vm00 bash[28005]: audit 2026-03-10T07:29:00.920816+0000 mon.a (mon.0) 1347 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]': finished 2026-03-10T07:29:02.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:02 vm00 bash[28005]: audit 2026-03-10T07:29:00.920846+0000 mon.a (mon.0) 1348 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59637-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:02.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:02 vm00 bash[28005]: audit 2026-03-10T07:29:00.920846+0000 mon.a (mon.0) 1348 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59637-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:02.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:02 vm00 bash[28005]: cluster 2026-03-10T07:29:00.931287+0000 mon.a (mon.0) 1349 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-10T07:29:02.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:02 vm00 bash[28005]: cluster 2026-03-10T07:29:00.931287+0000 mon.a (mon.0) 1349 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-10T07:29:02.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:02 vm03 bash[23382]: cluster 2026-03-10T07:29:00.588744+0000 mgr.y (mgr.24407) 170 : cluster [DBG] pgmap v145: 516 pgs: 64 unknown, 452 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 55 KiB/s rd, 76 op/s 2026-03-10T07:29:02.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:02 vm03 bash[23382]: cluster 2026-03-10T07:29:00.588744+0000 mgr.y (mgr.24407) 170 : cluster [DBG] pgmap v145: 516 pgs: 64 unknown, 452 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 55 KiB/s rd, 76 op/s 2026-03-10T07:29:02.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:02 vm03 bash[23382]: audit 2026-03-10T07:29:00.892799+0000 mon.a (mon.0) 1344 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:02.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:02 vm03 bash[23382]: audit 2026-03-10T07:29:00.892799+0000 mon.a (mon.0) 1344 : audit [DBG] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:02.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:02 vm03 bash[23382]: cluster 2026-03-10T07:29:00.916506+0000 mon.a (mon.0) 1345 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:02.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:02 vm03 bash[23382]: cluster 2026-03-10T07:29:00.916506+0000 mon.a (mon.0) 1345 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:02.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:02 vm03 bash[23382]: audit 2026-03-10T07:29:00.920754+0000 mon.a (mon.0) 1346 : audit [INF] from='client.? 192.168.123.100:0/1084927819' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59629-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:02.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:02 vm03 bash[23382]: audit 2026-03-10T07:29:00.920754+0000 mon.a (mon.0) 1346 : audit [INF] from='client.? 192.168.123.100:0/1084927819' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm00-59629-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:02.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:02 vm03 bash[23382]: audit 2026-03-10T07:29:00.920816+0000 mon.a (mon.0) 1347 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]': finished 2026-03-10T07:29:02.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:02 vm03 bash[23382]: audit 2026-03-10T07:29:00.920816+0000 mon.a (mon.0) 1347 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm00-59879-15"}]': finished 2026-03-10T07:29:02.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:02 vm03 bash[23382]: audit 2026-03-10T07:29:00.920846+0000 mon.a (mon.0) 1348 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59637-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:02.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:02 vm03 bash[23382]: audit 2026-03-10T07:29:00.920846+0000 mon.a (mon.0) 1348 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm00-59637-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:02.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:02 vm03 bash[23382]: cluster 2026-03-10T07:29:00.931287+0000 mon.a (mon.0) 1349 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-10T07:29:02.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:02 vm03 bash[23382]: cluster 2026-03-10T07:29:00.931287+0000 mon.a (mon.0) 1349 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-10T07:29:03.265 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:29:03 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:29:03.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:03 vm03 bash[23382]: audit 2026-03-10T07:29:02.056371+0000 mon.a (mon.0) 1350 : audit [DBG] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:03.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:03 vm03 bash[23382]: cluster 2026-03-10T07:29:02.116891+0000 mon.a (mon.0) 1351 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in
2026-03-10T07:29:03.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:03 vm03 bash[23382]: audit 2026-03-10T07:29:02.148739+0000 mon.a (mon.0) 1352 : audit [INF] from='client.? 192.168.123.100:0/1417448595' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59956-15","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:03.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:03 vm00 bash[20701]: audit 2026-03-10T07:29:02.056371+0000 mon.a (mon.0) 1350 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:03.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:03 vm00 bash[20701]: cluster 2026-03-10T07:29:02.116891+0000 mon.a (mon.0) 1351 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in
2026-03-10T07:29:03.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:03 vm00 bash[20701]: audit 2026-03-10T07:29:02.148739+0000 mon.a (mon.0) 1352 : audit [INF] from='client.? 192.168.123.100:0/1417448595' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59956-15","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:03.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:03 vm00 bash[28005]: audit 2026-03-10T07:29:02.056371+0000 mon.a (mon.0) 1350 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:03.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:03 vm00 bash[28005]: cluster 2026-03-10T07:29:02.116891+0000 mon.a (mon.0) 1351 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in
2026-03-10T07:29:03.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:03 vm00 bash[28005]: audit 2026-03-10T07:29:02.148739+0000 mon.a (mon.0) 1352 : audit [INF] from='client.? 192.168.123.100:0/1417448595' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59956-15","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:04.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:04 vm00 bash[20701]: cluster 2026-03-10T07:29:02.589147+0000 mgr.y (mgr.24407) 171 : cluster [DBG] pgmap v148: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 54 KiB/s rd, 73 op/s
2026-03-10T07:29:04.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:04 vm00 bash[20701]: audit 2026-03-10T07:29:03.004101+0000 mgr.y (mgr.24407) 172 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:29:04.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:04 vm00 bash[20701]: audit 2026-03-10T07:29:03.059473+0000 mon.c (mon.2) 147 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:29:04.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:04 vm00 bash[20701]: audit 2026-03-10T07:29:03.060407+0000 mon.c (mon.2) 148 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:29:04.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:04 vm00 bash[20701]: audit 2026-03-10T07:29:03.070485+0000 mon.a (mon.0) 1353 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:04.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:04 vm00 bash[20701]: audit 2026-03-10T07:29:03.109323+0000 mon.a (mon.0) 1354 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:29:04.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:04 vm00 bash[20701]: audit 2026-03-10T07:29:03.126107+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? 192.168.123.100:0/1417448595' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59956-15","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:04.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:04 vm00 bash[20701]: cluster 2026-03-10T07:29:03.139786+0000 mon.a (mon.0) 1356 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in
2026-03-10T07:29:04.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:04 vm00 bash[20701]: audit 2026-03-10T07:29:03.145669+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? 192.168.123.100:0/656217991' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm00-59637-15","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:04.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:04 vm00 bash[20701]: audit 2026-03-10T07:29:03.238016+0000 mon.c (mon.2) 149 : audit [INF] from='client.? 192.168.123.100:0/341867729' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59629-20","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:04.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:04 vm00 bash[20701]: audit 2026-03-10T07:29:03.240253+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59629-20","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:04.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:04 vm00 bash[28005]: cluster 2026-03-10T07:29:02.589147+0000 mgr.y (mgr.24407) 171 : cluster [DBG] pgmap v148: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 54 KiB/s rd, 73 op/s
2026-03-10T07:29:04.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:04 vm00 bash[28005]: audit 2026-03-10T07:29:03.004101+0000 mgr.y (mgr.24407) 172 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:29:04.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:04 vm00 bash[28005]: audit 2026-03-10T07:29:03.059473+0000 mon.c (mon.2) 147 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:29:04.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:04 vm00 bash[28005]: audit 2026-03-10T07:29:03.060407+0000 mon.c (mon.2) 148 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
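
The audit entries above show the rados_api_tests workunit tagging each pool it creates. The cmd=[...] JSON is the monitor's wire form of the CLI command; as a sketch (pool name copied from the audit payload, flag spelling assumed from the usual CLI-to-JSON mapping), the equivalent invocation would be roughly:

    ceph osd pool application enable test-rados-api-vm00-59956-15 rados --yes-i-really-mean-it
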
"client.admin"}]: dispatch 2026-03-10T07:29:04.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:04 vm00 bash[28005]: audit 2026-03-10T07:29:03.070485+0000 mon.a (mon.0) 1353 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:04.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:04 vm00 bash[28005]: audit 2026-03-10T07:29:03.070485+0000 mon.a (mon.0) 1353 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:04.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:04 vm00 bash[28005]: audit 2026-03-10T07:29:03.109323+0000 mon.a (mon.0) 1354 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:29:04.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:04 vm00 bash[28005]: audit 2026-03-10T07:29:03.109323+0000 mon.a (mon.0) 1354 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:29:04.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:04 vm00 bash[28005]: audit 2026-03-10T07:29:03.126107+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? 192.168.123.100:0/1417448595' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59956-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:04.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:04 vm00 bash[28005]: audit 2026-03-10T07:29:03.126107+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? 192.168.123.100:0/1417448595' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59956-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:04.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:04 vm00 bash[28005]: cluster 2026-03-10T07:29:03.139786+0000 mon.a (mon.0) 1356 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-10T07:29:04.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:04 vm00 bash[28005]: cluster 2026-03-10T07:29:03.139786+0000 mon.a (mon.0) 1356 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-10T07:29:04.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:04 vm00 bash[28005]: audit 2026-03-10T07:29:03.145669+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? 192.168.123.100:0/656217991' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm00-59637-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:04.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:04 vm00 bash[28005]: audit 2026-03-10T07:29:03.145669+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? 192.168.123.100:0/656217991' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm00-59637-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:04.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:04 vm00 bash[28005]: audit 2026-03-10T07:29:03.238016+0000 mon.c (mon.2) 149 : audit [INF] from='client.? 192.168.123.100:0/341867729' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59629-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:04.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:04 vm00 bash[28005]: audit 2026-03-10T07:29:03.238016+0000 mon.c (mon.2) 149 : audit [INF] from='client.? 
192.168.123.100:0/341867729' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59629-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:04.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:04 vm00 bash[28005]: audit 2026-03-10T07:29:03.240253+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59629-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:04.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:04 vm00 bash[28005]: audit 2026-03-10T07:29:03.240253+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59629-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: cluster 2026-03-10T07:29:02.589147+0000 mgr.y (mgr.24407) 171 : cluster [DBG] pgmap v148: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 54 KiB/s rd, 73 op/s 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: cluster 2026-03-10T07:29:02.589147+0000 mgr.y (mgr.24407) 171 : cluster [DBG] pgmap v148: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 54 KiB/s rd, 73 op/s 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: audit 2026-03-10T07:29:03.004101+0000 mgr.y (mgr.24407) 172 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: audit 2026-03-10T07:29:03.004101+0000 mgr.y (mgr.24407) 172 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: audit 2026-03-10T07:29:03.059473+0000 mon.c (mon.2) 147 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: audit 2026-03-10T07:29:03.059473+0000 mon.c (mon.2) 147 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: audit 2026-03-10T07:29:03.060407+0000 mon.c (mon.2) 148 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: audit 2026-03-10T07:29:03.060407+0000 mon.c (mon.2) 148 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: audit 2026-03-10T07:29:03.070485+0000 mon.a (mon.0) 1353 : audit [DBG] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: audit 2026-03-10T07:29:03.070485+0000 mon.a (mon.0) 1353 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: audit 2026-03-10T07:29:03.109323+0000 mon.a (mon.0) 1354 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: audit 2026-03-10T07:29:03.109323+0000 mon.a (mon.0) 1354 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: audit 2026-03-10T07:29:03.126107+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? 192.168.123.100:0/1417448595' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59956-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: audit 2026-03-10T07:29:03.126107+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? 192.168.123.100:0/1417448595' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59956-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: cluster 2026-03-10T07:29:03.139786+0000 mon.a (mon.0) 1356 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: cluster 2026-03-10T07:29:03.139786+0000 mon.a (mon.0) 1356 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: audit 2026-03-10T07:29:03.145669+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? 192.168.123.100:0/656217991' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm00-59637-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: audit 2026-03-10T07:29:03.145669+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? 192.168.123.100:0/656217991' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm00-59637-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: audit 2026-03-10T07:29:03.238016+0000 mon.c (mon.2) 149 : audit [INF] from='client.? 192.168.123.100:0/341867729' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59629-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: audit 2026-03-10T07:29:03.238016+0000 mon.c (mon.2) 149 : audit [INF] from='client.? 
192.168.123.100:0/341867729' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59629-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: audit 2026-03-10T07:29:03.240253+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59629-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:04 vm03 bash[23382]: audit 2026-03-10T07:29:03.240253+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59629-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:05 vm03 bash[23382]: audit 2026-03-10T07:29:04.088119+0000 mon.a (mon.0) 1359 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:05 vm03 bash[23382]: audit 2026-03-10T07:29:04.088119+0000 mon.a (mon.0) 1359 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:05 vm03 bash[23382]: audit 2026-03-10T07:29:04.178896+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? 192.168.123.100:0/656217991' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm00-59637-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:05 vm03 bash[23382]: audit 2026-03-10T07:29:04.178896+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? 192.168.123.100:0/656217991' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm00-59637-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:05 vm03 bash[23382]: audit 2026-03-10T07:29:04.178932+0000 mon.a (mon.0) 1361 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59629-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:05 vm03 bash[23382]: audit 2026-03-10T07:29:04.178932+0000 mon.a (mon.0) 1361 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59629-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:05 vm03 bash[23382]: cluster 2026-03-10T07:29:04.182220+0000 mon.a (mon.0) 1362 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-10T07:29:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:05 vm03 bash[23382]: cluster 2026-03-10T07:29:04.182220+0000 mon.a (mon.0) 1362 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-10T07:29:05.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:05 vm00 bash[20701]: audit 2026-03-10T07:29:04.088119+0000 mon.a (mon.0) 1359 : audit [DBG] from='client.? 
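
Each of these pool-tagging commands appears in the audit channel twice per monitor stream: once as 'dispatch' when a monitor accepts it (a peon such as mon.c forwards it to the leader mon.a) and once as 'finished' when the leader commits the resulting map change; the osdmap epoch bumps (e126 through e128 above) are those commits landing. The once-per-second status poll from client 192.168.123.100:0/2386836633 is a plain JSON status query, equivalent to roughly:

    ceph status --format json
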
2026-03-10T07:29:05.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:05 vm00 bash[20701]: audit 2026-03-10T07:29:04.088119+0000 mon.a (mon.0) 1359 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:05.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:05 vm00 bash[20701]: audit 2026-03-10T07:29:04.178896+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? 192.168.123.100:0/656217991' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm00-59637-15","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:05.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:05 vm00 bash[20701]: audit 2026-03-10T07:29:04.178932+0000 mon.a (mon.0) 1361 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59629-20","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:05.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:05 vm00 bash[20701]: cluster 2026-03-10T07:29:04.182220+0000 mon.a (mon.0) 1362 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in
2026-03-10T07:29:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:05 vm00 bash[28005]: audit 2026-03-10T07:29:04.088119+0000 mon.a (mon.0) 1359 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:05 vm00 bash[28005]: audit 2026-03-10T07:29:04.178896+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? 192.168.123.100:0/656217991' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm00-59637-15","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:05 vm00 bash[28005]: audit 2026-03-10T07:29:04.178932+0000 mon.a (mon.0) 1361 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59629-20","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:05 vm00 bash[28005]: cluster 2026-03-10T07:29:04.182220+0000 mon.a (mon.0) 1362 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in
2026-03-10T07:29:06.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:06 vm03 bash[23382]: cluster 2026-03-10T07:29:04.589593+0000 mgr.y (mgr.24407) 173 : cluster [DBG] pgmap v151: 516 pgs: 128 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:29:06.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:06 vm03 bash[23382]: audit 2026-03-10T07:29:05.186883+0000 mon.a (mon.0) 1363 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:06.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:06 vm03 bash[23382]: cluster 2026-03-10T07:29:05.218347+0000 mon.a (mon.0) 1364 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in
2026-03-10T07:29:06.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:06 vm03 bash[23382]: cluster 2026-03-10T07:29:05.918916+0000 mon.a (mon.0) 1365 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:29:06.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:06 vm03 bash[23382]: audit 2026-03-10T07:29:06.190813+0000 mon.a (mon.0) 1366 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:06.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:06 vm00 bash[28005]: cluster 2026-03-10T07:29:04.589593+0000 mgr.y (mgr.24407) 173 : cluster [DBG] pgmap v151: 516 pgs: 128 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:29:06.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:06 vm00 bash[28005]: audit 2026-03-10T07:29:05.186883+0000 mon.a (mon.0) 1363 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:06.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:06 vm00 bash[28005]: cluster 2026-03-10T07:29:05.218347+0000 mon.a (mon.0) 1364 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in
2026-03-10T07:29:06.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:06 vm00 bash[28005]: cluster 2026-03-10T07:29:05.918916+0000 mon.a (mon.0) 1365 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:29:06.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:06 vm00 bash[28005]: audit 2026-03-10T07:29:06.190813+0000 mon.a (mon.0) 1366 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:06.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:06 vm00 bash[20701]: cluster 2026-03-10T07:29:04.589593+0000 mgr.y (mgr.24407) 173 : cluster [DBG] pgmap v151: 516 pgs: 128 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:29:06.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:06 vm00 bash[20701]: audit 2026-03-10T07:29:05.186883+0000 mon.a (mon.0) 1363 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:06.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:06 vm00 bash[20701]: cluster 2026-03-10T07:29:05.218347+0000 mon.a (mon.0) 1364 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in
2026-03-10T07:29:06.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:06 vm00 bash[20701]: cluster 2026-03-10T07:29:05.918916+0000 mon.a (mon.0) 1365 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:29:06.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:06 vm00 bash[20701]: audit 2026-03-10T07:29:06.190813+0000 mon.a (mon.0) 1366 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:07.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:07 vm03 bash[23382]: cluster 2026-03-10T07:29:06.221335+0000 mon.a (mon.0) 1367 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in
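
The [WRN] health check above fires because the tests create pools faster than they tag them, so four pools momentarily lack an application; the warning clears as the enable commands land. An operator watching a live cluster in this state could inspect it with something like the following standard queries (a sketch; <pool-name> is a placeholder):

    ceph health detail
    ceph osd pool application get <pool-name>
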
2026-03-10T07:29:07.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:07 vm03 bash[23382]: audit 2026-03-10T07:29:06.249679+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? 192.168.123.100:0/343610354' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59637-16","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:07.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:07 vm03 bash[23382]: audit 2026-03-10T07:29:06.296638+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 192.168.123.100:0/1254302764' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59629-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:07.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:07 vm03 bash[23382]: audit 2026-03-10T07:29:06.300461+0000 mon.a (mon.0) 1369 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59629-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:07.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:07 vm03 bash[23382]: audit 2026-03-10T07:29:06.397765+0000 mon.b (mon.1) 147 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch
2026-03-10T07:29:07.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:07 vm03 bash[23382]: audit 2026-03-10T07:29:06.399411+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch
2026-03-10T07:29:07.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:07 vm03 bash[23382]: audit 2026-03-10T07:29:06.399800+0000 mon.b (mon.1) 148 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch
2026-03-10T07:29:07.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:07 vm03 bash[23382]: audit 2026-03-10T07:29:06.400453+0000 mon.b (mon.1) 149 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:07.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:07 vm03 bash[23382]: audit 2026-03-10T07:29:06.401382+0000 mon.a (mon.0) 1371 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch
2026-03-10T07:29:07.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:07 vm03 bash[23382]: audit 2026-03-10T07:29:06.401998+0000 mon.a (mon.0) 1372 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:07.766 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:07 vm03 bash[23382]: audit 2026-03-10T07:29:07.192758+0000 mon.a (mon.0) 1373 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:07.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:07 vm00 bash[20701]: cluster 2026-03-10T07:29:06.221335+0000 mon.a (mon.0) 1367 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in
2026-03-10T07:29:07.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:07 vm00 bash[20701]: audit 2026-03-10T07:29:06.249679+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? 192.168.123.100:0/343610354' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59637-16","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:07.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:07 vm00 bash[20701]: audit 2026-03-10T07:29:06.296638+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 192.168.123.100:0/1254302764' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59629-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:07 vm00 bash[20701]: audit 2026-03-10T07:29:06.300461+0000 mon.a (mon.0) 1369 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59629-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:07 vm00 bash[20701]: audit 2026-03-10T07:29:06.397765+0000 mon.b (mon.1) 147 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch
2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:07 vm00 bash[20701]: audit 2026-03-10T07:29:06.399411+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch
2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:07 vm00 bash[20701]: audit 2026-03-10T07:29:06.399800+0000 mon.b (mon.1) 148 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch
2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:07 vm00 bash[20701]: audit 2026-03-10T07:29:06.400453+0000 mon.b (mon.1) 149 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:07 vm00 bash[20701]: audit 2026-03-10T07:29:06.401382+0000 mon.a (mon.0) 1371 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch
2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:07 vm00 bash[20701]: audit 2026-03-10T07:29:06.401998+0000 mon.a (mon.0) 1372 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
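
Audit records mon.b 147-149 and mon.a 1370-1372 trace the LibRadosSnapshotsECPP fixture rebuilding its erasure-coded pool: drop any stale profile and crush rule, then define a k=2, m=1 profile with an osd failure domain; the matching osd pool create follows below. Replayed as CLI it would look roughly like this (names copied from the audit payloads; a sketch, since the test itself drives the monitors through librados):

    ceph osd erasure-code-profile rm testprofile-LibRadosSnapshotsECPP_vm00-59956-16
    ceph osd crush rule rm LibRadosSnapshotsECPP_vm00-59956-16
    ceph osd erasure-code-profile set testprofile-LibRadosSnapshotsECPP_vm00-59956-16 k=2 m=1 crush-failure-domain=osd
    ceph osd pool create LibRadosSnapshotsECPP_vm00-59956-16 8 8 erasure testprofile-LibRadosSnapshotsECPP_vm00-59956-16
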
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:07 vm00 bash[20701]: audit 2026-03-10T07:29:07.192758+0000 mon.a (mon.0) 1373 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:07 vm00 bash[20701]: audit 2026-03-10T07:29:07.192758+0000 mon.a (mon.0) 1373 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: cluster 2026-03-10T07:29:06.221335+0000 mon.a (mon.0) 1367 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: cluster 2026-03-10T07:29:06.221335+0000 mon.a (mon.0) 1367 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: audit 2026-03-10T07:29:06.249679+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? 192.168.123.100:0/343610354' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59637-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: audit 2026-03-10T07:29:06.249679+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? 192.168.123.100:0/343610354' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59637-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: audit 2026-03-10T07:29:06.296638+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 192.168.123.100:0/1254302764' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59629-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: audit 2026-03-10T07:29:06.296638+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 192.168.123.100:0/1254302764' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59629-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: audit 2026-03-10T07:29:06.300461+0000 mon.a (mon.0) 1369 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59629-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: audit 2026-03-10T07:29:06.300461+0000 mon.a (mon.0) 1369 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59629-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: audit 2026-03-10T07:29:06.397765+0000 mon.b (mon.1) 147 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: audit 2026-03-10T07:29:06.397765+0000 mon.b (mon.1) 147 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: audit 2026-03-10T07:29:06.399411+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: audit 2026-03-10T07:29:06.399411+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: audit 2026-03-10T07:29:06.399800+0000 mon.b (mon.1) 148 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: audit 2026-03-10T07:29:06.399800+0000 mon.b (mon.1) 148 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: audit 2026-03-10T07:29:06.400453+0000 mon.b (mon.1) 149 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: audit 2026-03-10T07:29:06.400453+0000 mon.b (mon.1) 149 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: audit 2026-03-10T07:29:06.401382+0000 mon.a (mon.0) 1371 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: audit 2026-03-10T07:29:06.401382+0000 mon.a (mon.0) 1371 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: audit 2026-03-10T07:29:06.401998+0000 mon.a (mon.0) 1372 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: audit 2026-03-10T07:29:06.401998+0000 mon.a (mon.0) 1372 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: audit 2026-03-10T07:29:07.192758+0000 mon.a (mon.0) 1373 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:07.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:07 vm00 bash[28005]: audit 2026-03-10T07:29:07.192758+0000 mon.a (mon.0) 1373 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:08.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:08 vm03 bash[23382]: cluster 2026-03-10T07:29:06.590021+0000 mgr.y (mgr.24407) 174 : cluster [DBG] pgmap v154: 484 pgs: 64 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:29:08.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:08 vm03 bash[23382]: cluster 2026-03-10T07:29:06.590021+0000 mgr.y (mgr.24407) 174 : cluster [DBG] pgmap v154: 484 pgs: 64 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:29:08.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:08 vm03 bash[23382]: audit 2026-03-10T07:29:07.445942+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? 192.168.123.100:0/343610354' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59637-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:08.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:08 vm03 bash[23382]: audit 2026-03-10T07:29:07.445942+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? 192.168.123.100:0/343610354' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59637-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:08.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:08 vm03 bash[23382]: audit 2026-03-10T07:29:07.445998+0000 mon.a (mon.0) 1375 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59629-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:08.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:08 vm03 bash[23382]: audit 2026-03-10T07:29:07.445998+0000 mon.a (mon.0) 1375 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59629-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:08.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:08 vm03 bash[23382]: audit 2026-03-10T07:29:07.446030+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:08.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:08 vm03 bash[23382]: audit 2026-03-10T07:29:07.446030+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:08.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:08 vm03 bash[23382]: cluster 2026-03-10T07:29:07.476792+0000 mon.a (mon.0) 1377 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T07:29:08.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:08 vm03 bash[23382]: cluster 2026-03-10T07:29:07.476792+0000 mon.a (mon.0) 1377 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T07:29:08.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:08 vm03 bash[23382]: audit 2026-03-10T07:29:07.518550+0000 mon.b (mon.1) 150 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-59956-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:08.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:08 vm03 bash[23382]: audit 2026-03-10T07:29:07.518550+0000 mon.b (mon.1) 150 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-59956-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:08.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:08 vm03 bash[23382]: audit 2026-03-10T07:29:07.524912+0000 mon.a (mon.0) 1378 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-59956-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:08.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:08 vm03 bash[23382]: audit 2026-03-10T07:29:07.524912+0000 mon.a (mon.0) 1378 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-59956-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:08.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:08 vm03 bash[23382]: audit 2026-03-10T07:29:08.198845+0000 mon.a (mon.0) 1379 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:08.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:08 vm03 bash[23382]: audit 2026-03-10T07:29:08.198845+0000 mon.a (mon.0) 1379 : audit [DBG] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:08.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:08 vm00 bash[20701]: cluster 2026-03-10T07:29:06.590021+0000 mgr.y (mgr.24407) 174 : cluster [DBG] pgmap v154: 484 pgs: 64 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:29:08.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:08 vm00 bash[20701]: cluster 2026-03-10T07:29:06.590021+0000 mgr.y (mgr.24407) 174 : cluster [DBG] pgmap v154: 484 pgs: 64 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:29:08.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:08 vm00 bash[20701]: audit 2026-03-10T07:29:07.445942+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? 192.168.123.100:0/343610354' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59637-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:08.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:08 vm00 bash[20701]: audit 2026-03-10T07:29:07.445942+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? 192.168.123.100:0/343610354' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59637-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:08.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:08 vm00 bash[20701]: audit 2026-03-10T07:29:07.445998+0000 mon.a (mon.0) 1375 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59629-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:08.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:08 vm00 bash[20701]: audit 2026-03-10T07:29:07.445998+0000 mon.a (mon.0) 1375 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59629-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:08.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:08 vm00 bash[20701]: audit 2026-03-10T07:29:07.446030+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:08.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:08 vm00 bash[20701]: audit 2026-03-10T07:29:07.446030+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:08.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:08 vm00 bash[20701]: cluster 2026-03-10T07:29:07.476792+0000 mon.a (mon.0) 1377 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T07:29:08.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:08 vm00 bash[20701]: cluster 2026-03-10T07:29:07.476792+0000 mon.a (mon.0) 1377 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T07:29:08.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:08 vm00 bash[20701]: audit 2026-03-10T07:29:07.518550+0000 mon.b (mon.1) 150 : audit [INF] from='client.? 
192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-59956-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:08.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:08 vm00 bash[20701]: audit 2026-03-10T07:29:07.518550+0000 mon.b (mon.1) 150 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-59956-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:08.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:08 vm00 bash[20701]: audit 2026-03-10T07:29:07.524912+0000 mon.a (mon.0) 1378 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-59956-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:08.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:08 vm00 bash[20701]: audit 2026-03-10T07:29:07.524912+0000 mon.a (mon.0) 1378 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-59956-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:08.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:08 vm00 bash[20701]: audit 2026-03-10T07:29:08.198845+0000 mon.a (mon.0) 1379 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:08.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:08 vm00 bash[20701]: audit 2026-03-10T07:29:08.198845+0000 mon.a (mon.0) 1379 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:08.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:08 vm00 bash[28005]: cluster 2026-03-10T07:29:06.590021+0000 mgr.y (mgr.24407) 174 : cluster [DBG] pgmap v154: 484 pgs: 64 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:29:08.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:08 vm00 bash[28005]: cluster 2026-03-10T07:29:06.590021+0000 mgr.y (mgr.24407) 174 : cluster [DBG] pgmap v154: 484 pgs: 64 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:29:08.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:08 vm00 bash[28005]: audit 2026-03-10T07:29:07.445942+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? 192.168.123.100:0/343610354' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59637-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:08.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:08 vm00 bash[28005]: audit 2026-03-10T07:29:07.445942+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? 
192.168.123.100:0/343610354' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm00-59637-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:08.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:08 vm00 bash[28005]: audit 2026-03-10T07:29:07.445998+0000 mon.a (mon.0) 1375 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59629-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:08.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:08 vm00 bash[28005]: audit 2026-03-10T07:29:07.445998+0000 mon.a (mon.0) 1375 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm00-59629-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:08.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:08 vm00 bash[28005]: audit 2026-03-10T07:29:07.446030+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:08.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:08 vm00 bash[28005]: audit 2026-03-10T07:29:07.446030+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:08.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:08 vm00 bash[28005]: cluster 2026-03-10T07:29:07.476792+0000 mon.a (mon.0) 1377 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T07:29:08.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:08 vm00 bash[28005]: cluster 2026-03-10T07:29:07.476792+0000 mon.a (mon.0) 1377 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T07:29:08.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:08 vm00 bash[28005]: audit 2026-03-10T07:29:07.518550+0000 mon.b (mon.1) 150 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-59956-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:08.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:08 vm00 bash[28005]: audit 2026-03-10T07:29:07.518550+0000 mon.b (mon.1) 150 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-59956-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:08.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:08 vm00 bash[28005]: audit 2026-03-10T07:29:07.524912+0000 mon.a (mon.0) 1378 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-59956-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:08.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:08 vm00 bash[28005]: audit 2026-03-10T07:29:07.524912+0000 mon.a (mon.0) 1378 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-59956-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:08.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:08 vm00 bash[28005]: audit 2026-03-10T07:29:08.198845+0000 mon.a (mon.0) 1379 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:08.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:08 vm00 bash[28005]: audit 2026-03-10T07:29:08.198845+0000 mon.a (mon.0) 1379 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:09.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: cluster 2026-03-10T07:29:08.487565+0000 mon.a (mon.0) 1380 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T07:29:09.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: cluster 2026-03-10T07:29:08.487565+0000 mon.a (mon.0) 1380 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T07:29:09.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.581833+0000 mon.c (mon.2) 151 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.7", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.581833+0000 mon.c (mon.2) 151 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.7", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.582210+0000 mon.a (mon.0) 1381 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.7", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.582210+0000 mon.a (mon.0) 1381 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.7", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.582260+0000 mon.c (mon.2) 152 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [7, 4]}]: dispatch 2026-03-10T07:29:09.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.582260+0000 mon.c (mon.2) 152 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [7, 4]}]: dispatch 
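The audit records above show the rados_api_tests workunit recreating its erasure-coded test pool: the client drops the old testprofile-LibRadosSnapshotsECPP profile and CRUSH rule, re-sets the profile with k=2 m=1 crush-failure-domain=osd, then creates the pool with pg_num=8 and pgp_num=8, after which the leader commits osdmap e131/e132. Each command is audited twice, once on the monitor that received it (mon.b, with the client address) and once on the leader mon.a (address elided), and every cluster-log record is then echoed by all three monitor journals, so the heavy repetition in these streams is expected. A sketch of the equivalent ceph CLI for the sequence, using the pool and profile names from the log:

  ceph osd erasure-code-profile rm testprofile-LibRadosSnapshotsECPP_vm00-59956-16
  ceph osd crush rule rm LibRadosSnapshotsECPP_vm00-59956-16
  ceph osd erasure-code-profile set testprofile-LibRadosSnapshotsECPP_vm00-59956-16 k=2 m=1 crush-failure-domain=osd
  ceph osd pool create LibRadosSnapshotsECPP_vm00-59956-16 8 8 erasure testprofile-LibRadosSnapshotsECPP_vm00-59956-16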
2026-03-10T07:29:09.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.582573+0000 mon.a (mon.0) 1382 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [7, 4]}]: dispatch 2026-03-10T07:29:09.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.582573+0000 mon.a (mon.0) 1382 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [7, 4]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.582913+0000 mon.c (mon.2) 153 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.15", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.582913+0000 mon.c (mon.2) 153 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.15", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.583167+0000 mon.a (mon.0) 1383 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.15", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.583167+0000 mon.a (mon.0) 1383 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.15", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.583412+0000 mon.c (mon.2) 154 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.583412+0000 mon.c (mon.2) 154 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.583785+0000 mon.c (mon.2) 155 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.12", "id": [0, 1]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.583785+0000 mon.c (mon.2) 155 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.12", "id": [0, 1]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.584534+0000 mon.c (mon.2) 156 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 
bash[20701]: audit 2026-03-10T07:29:08.584534+0000 mon.c (mon.2) 156 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.584992+0000 mon.c (mon.2) 157 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.18", "id": [3, 7]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.584992+0000 mon.c (mon.2) 157 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.18", "id": [3, 7]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.585431+0000 mon.a (mon.0) 1384 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.585431+0000 mon.a (mon.0) 1384 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.585602+0000 mon.c (mon.2) 158 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.19", "id": [6, 1]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.585602+0000 mon.c (mon.2) 158 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.19", "id": [6, 1]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.585783+0000 mon.a (mon.0) 1385 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.12", "id": [0, 1]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.585783+0000 mon.a (mon.0) 1385 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.12", "id": [0, 1]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.585989+0000 mon.a (mon.0) 1386 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.585989+0000 mon.a (mon.0) 1386 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.586214+0000 mon.a (mon.0) 1387 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd 
pg-upmap-items", "format": "json", "pgid": "39.18", "id": [3, 7]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.586214+0000 mon.a (mon.0) 1387 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.18", "id": [3, 7]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.586441+0000 mon.a (mon.0) 1388 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.19", "id": [6, 1]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.586441+0000 mon.a (mon.0) 1388 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.19", "id": [6, 1]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.586613+0000 mon.c (mon.2) 159 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 2]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.586613+0000 mon.c (mon.2) 159 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 2]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.587005+0000 mon.c (mon.2) 160 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1e", "id": [3, 7]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.587005+0000 mon.c (mon.2) 160 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1e", "id": [3, 7]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.587929+0000 mon.a (mon.0) 1389 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 2]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.587929+0000 mon.a (mon.0) 1389 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 2]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.588161+0000 mon.a (mon.0) 1390 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1e", "id": [3, 7]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:08.588161+0000 mon.a (mon.0) 1390 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1e", "id": [3, 7]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 
bash[20701]: cluster 2026-03-10T07:29:08.590370+0000 mgr.y (mgr.24407) 175 : cluster [DBG] pgmap v157: 420 pgs: 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: cluster 2026-03-10T07:29:08.590370+0000 mgr.y (mgr.24407) 175 : cluster [DBG] pgmap v157: 420 pgs: 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:09.173482+0000 mon.a (mon.0) 1391 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:09.173482+0000 mon.a (mon.0) 1391 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:09.185897+0000 mon.c (mon.2) 161 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:09.185897+0000 mon.c (mon.2) 161 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:09.208593+0000 mon.a (mon.0) 1392 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:09 vm00 bash[20701]: audit 2026-03-10T07:29:09.208593+0000 mon.a (mon.0) 1392 : audit [DBG] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: cluster 2026-03-10T07:29:08.487565+0000 mon.a (mon.0) 1380 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: cluster 2026-03-10T07:29:08.487565+0000 mon.a (mon.0) 1380 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.581833+0000 mon.c (mon.2) 151 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.7", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.581833+0000 mon.c (mon.2) 151 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.7", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.582210+0000 mon.a (mon.0) 1381 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.7", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.582210+0000 mon.a (mon.0) 1381 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.7", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.582260+0000 mon.c (mon.2) 152 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [7, 4]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.582260+0000 mon.c (mon.2) 152 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [7, 4]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.582573+0000 mon.a (mon.0) 1382 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [7, 4]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.582573+0000 mon.a (mon.0) 1382 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [7, 4]}]: dispatch 2026-03-10T07:29:09.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.582913+0000 mon.c (mon.2) 153 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.15", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.582913+0000 mon.c (mon.2) 153 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' 
cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.15", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.583167+0000 mon.a (mon.0) 1383 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.15", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.583167+0000 mon.a (mon.0) 1383 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.15", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.583412+0000 mon.c (mon.2) 154 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.583412+0000 mon.c (mon.2) 154 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.583785+0000 mon.c (mon.2) 155 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.12", "id": [0, 1]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.583785+0000 mon.c (mon.2) 155 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.12", "id": [0, 1]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.584534+0000 mon.c (mon.2) 156 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.584534+0000 mon.c (mon.2) 156 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.584992+0000 mon.c (mon.2) 157 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.18", "id": [3, 7]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.584992+0000 mon.c (mon.2) 157 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.18", "id": [3, 7]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.585431+0000 mon.a (mon.0) 1384 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", 
"pgid": "9.1f", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.585431+0000 mon.a (mon.0) 1384 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [7, 2]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.585602+0000 mon.c (mon.2) 158 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.19", "id": [6, 1]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.585602+0000 mon.c (mon.2) 158 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.19", "id": [6, 1]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.585783+0000 mon.a (mon.0) 1385 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.12", "id": [0, 1]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.585783+0000 mon.a (mon.0) 1385 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.12", "id": [0, 1]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.585989+0000 mon.a (mon.0) 1386 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.585989+0000 mon.a (mon.0) 1386 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.586214+0000 mon.a (mon.0) 1387 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.18", "id": [3, 7]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.586214+0000 mon.a (mon.0) 1387 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.18", "id": [3, 7]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.586441+0000 mon.a (mon.0) 1388 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.19", "id": [6, 1]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.586441+0000 mon.a (mon.0) 1388 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.19", "id": [6, 1]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.586613+0000 mon.c (mon.2) 159 : audit [INF] 
from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 2]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.586613+0000 mon.c (mon.2) 159 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 2]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.587005+0000 mon.c (mon.2) 160 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1e", "id": [3, 7]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.587005+0000 mon.c (mon.2) 160 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1e", "id": [3, 7]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.587929+0000 mon.a (mon.0) 1389 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 2]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.587929+0000 mon.a (mon.0) 1389 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 2]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.588161+0000 mon.a (mon.0) 1390 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1e", "id": [3, 7]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:08.588161+0000 mon.a (mon.0) 1390 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1e", "id": [3, 7]}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: cluster 2026-03-10T07:29:08.590370+0000 mgr.y (mgr.24407) 175 : cluster [DBG] pgmap v157: 420 pgs: 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: cluster 2026-03-10T07:29:08.590370+0000 mgr.y (mgr.24407) 175 : cluster [DBG] pgmap v157: 420 pgs: 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:09.173482+0000 mon.a (mon.0) 1391 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:09.173482+0000 mon.a (mon.0) 1391 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:09.185897+0000 mon.c (mon.2) 161 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' 
entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:09.185897+0000 mon.c (mon.2) 161 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:09.208593+0000 mon.a (mon.0) 1392 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:09.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:09 vm00 bash[28005]: audit 2026-03-10T07:29:09.208593+0000 mon.a (mon.0) 1392 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:10.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: cluster 2026-03-10T07:29:08.487565+0000 mon.a (mon.0) 1380 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T07:29:10.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: cluster 2026-03-10T07:29:08.487565+0000 mon.a (mon.0) 1380 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T07:29:10.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.581833+0000 mon.c (mon.2) 151 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.7", "id": [7, 2]}]: dispatch 2026-03-10T07:29:10.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.581833+0000 mon.c (mon.2) 151 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.7", "id": [7, 2]}]: dispatch 2026-03-10T07:29:10.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.582210+0000 mon.a (mon.0) 1381 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.7", "id": [7, 2]}]: dispatch 2026-03-10T07:29:10.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.582210+0000 mon.a (mon.0) 1381 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.7", "id": [7, 2]}]: dispatch 2026-03-10T07:29:10.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.582260+0000 mon.c (mon.2) 152 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [7, 4]}]: dispatch 2026-03-10T07:29:10.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.582260+0000 mon.c (mon.2) 152 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [7, 4]}]: dispatch 2026-03-10T07:29:10.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.582573+0000 mon.a (mon.0) 1382 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [7, 4]}]: dispatch 
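The flurry of osd pg-upmap-items commands from mgr.y that follows is most likely the balancer module in upmap mode reacting to the osdmap change: each "id": [7, 2] pair installs an exception-table entry moving one placement of that PG from osd.7 to osd.2, and the pgmap settles back to 420 active+clean once the remaps take effect. The interleaved osd blocklist ls and status calls are routine mgr and client polling. One such remap and its inverse, expressed as CLI (a sketch; pgid and OSD ids taken from the log):

  ceph osd pg-upmap-items 9.7 7 2
  ceph osd rm-pg-upmap-items 9.7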
2026-03-10T07:29:10.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.582573+0000 mon.a (mon.0) 1382 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [7, 4]}]: dispatch 2026-03-10T07:29:10.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.582913+0000 mon.c (mon.2) 153 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.15", "id": [7, 2]}]: dispatch 2026-03-10T07:29:10.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.582913+0000 mon.c (mon.2) 153 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.15", "id": [7, 2]}]: dispatch 2026-03-10T07:29:10.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.583167+0000 mon.a (mon.0) 1383 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.15", "id": [7, 2]}]: dispatch 2026-03-10T07:29:10.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.583167+0000 mon.a (mon.0) 1383 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.15", "id": [7, 2]}]: dispatch 2026-03-10T07:29:10.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.583412+0000 mon.c (mon.2) 154 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [7, 2]}]: dispatch 2026-03-10T07:29:10.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.583412+0000 mon.c (mon.2) 154 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [7, 2]}]: dispatch 2026-03-10T07:29:10.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.583785+0000 mon.c (mon.2) 155 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.12", "id": [0, 1]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.583785+0000 mon.c (mon.2) 155 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.12", "id": [0, 1]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.584534+0000 mon.c (mon.2) 156 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.584534+0000 mon.c (mon.2) 156 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]: dispatch 2026-03-10T07:29:10.016 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.584992+0000 mon.c (mon.2) 157 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.18", "id": [3, 7]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.584992+0000 mon.c (mon.2) 157 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.18", "id": [3, 7]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.585431+0000 mon.a (mon.0) 1384 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [7, 2]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.585431+0000 mon.a (mon.0) 1384 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [7, 2]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.585602+0000 mon.c (mon.2) 158 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.19", "id": [6, 1]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.585602+0000 mon.c (mon.2) 158 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.19", "id": [6, 1]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.585783+0000 mon.a (mon.0) 1385 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.12", "id": [0, 1]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.585783+0000 mon.a (mon.0) 1385 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.12", "id": [0, 1]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.585989+0000 mon.a (mon.0) 1386 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.585989+0000 mon.a (mon.0) 1386 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.586214+0000 mon.a (mon.0) 1387 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.18", "id": [3, 7]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.586214+0000 mon.a (mon.0) 1387 : audit [INF] from='mgr.24407 ' 
entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.18", "id": [3, 7]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.586441+0000 mon.a (mon.0) 1388 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.19", "id": [6, 1]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.586441+0000 mon.a (mon.0) 1388 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.19", "id": [6, 1]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.586613+0000 mon.c (mon.2) 159 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 2]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.586613+0000 mon.c (mon.2) 159 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 2]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.587005+0000 mon.c (mon.2) 160 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1e", "id": [3, 7]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.587005+0000 mon.c (mon.2) 160 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1e", "id": [3, 7]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.587929+0000 mon.a (mon.0) 1389 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 2]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.587929+0000 mon.a (mon.0) 1389 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 2]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.588161+0000 mon.a (mon.0) 1390 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1e", "id": [3, 7]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:08.588161+0000 mon.a (mon.0) 1390 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1e", "id": [3, 7]}]: dispatch 2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: cluster 2026-03-10T07:29:08.590370+0000 mgr.y (mgr.24407) 175 : cluster [DBG] pgmap v157: 420 pgs: 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:29:10.016 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: cluster 2026-03-10T07:29:08.590370+0000 mgr.y (mgr.24407) 175 : cluster [DBG] pgmap v157: 420 pgs: 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:09.173482+0000 mon.a (mon.0) 1391 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:09.173482+0000 mon.a (mon.0) 1391 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:09.185897+0000 mon.c (mon.2) 161 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:09.185897+0000 mon.c (mon.2) 161 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:09.208593+0000 mon.a (mon.0) 1392 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:10.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:09 vm03 bash[23382]: audit 2026-03-10T07:29:09.208593+0000 mon.a (mon.0) 1392 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [==========] Running 12 tests from 4 test suites.
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [----------] Global test environment set-up.
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [----------] 1 test from LibRadosMiscVersion
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMiscVersion.Version
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMiscVersion.Version (0 ms)
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [----------] 1 test from LibRadosMiscVersion (0 ms total)
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc:
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [----------] 2 tests from LibRadosMiscConnectFailure
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMiscConnectFailure.ConnectFailure
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: unable to get monitor info from DNS SRV with service name: ceph-mon
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: 2026-03-10T07:28:04.156+0000 7f52412f0980 -1 failed for service _ceph-mon._tcp
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: 2026-03-10T07:28:04.156+0000 7f52412f0980 -1 monclient: get_monmap_and_config cannot identify monitors to contact
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMiscConnectFailure.ConnectFailure (95 ms)
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMiscConnectFailure.ConnectTimeout
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMiscConnectFailure.ConnectTimeout (5017 ms)
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [----------] 2 tests from LibRadosMiscConnectFailure (5112 ms total)
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc:
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [----------] 1 test from LibRadosMiscPool
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMiscPool.PoolCreationRace
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: started 0x7f5220067d70
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: started 0x56552aec1670
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: started 2 aios
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: waiting 0x7f5220067d70
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: waiting 0x56552aec1670
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: done.
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMiscPool.PoolCreationRace (5180 ms)
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [----------] 1 test from LibRadosMiscPool (5180 ms total)
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc:
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [----------] 8 tests from LibRadosMisc
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMisc.ClusterFSID
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMisc.ClusterFSID (0 ms)
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMisc.Exec
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMisc.Exec (199 ms)
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMisc.WriteSame
2026-03-10T07:29:10.638 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMisc.WriteSame (7 ms)
2026-03-10T07:29:10.639 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMisc.CmpExt
2026-03-10T07:29:10.639 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMisc.CmpExt (9 ms)
2026-03-10T07:29:10.639 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMisc.Applications
2026-03-10T07:29:10.639 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMisc.Applications (4840 ms)
2026-03-10T07:29:10.639 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMisc.MinCompatOSD
2026-03-10T07:29:10.639 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMisc.MinCompatOSD (0 ms)
2026-03-10T07:29:10.639 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMisc.MinCompatClient
2026-03-10T07:29:10.639 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMisc.MinCompatClient (0 ms)
2026-03-10T07:29:10.639 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ RUN ] LibRadosMisc.ShutdownRace
2026-03-10T07:29:10.639 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ OK ] LibRadosMisc.ShutdownRace (49591 ms)
2026-03-10T07:29:10.639 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [----------] 8 tests from LibRadosMisc (54646 ms total)
2026-03-10T07:29:10.639 INFO:tasks.workunit.client.0.vm00.stdout: api_misc:
2026-03-10T07:29:10.639 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [----------] Global test environment tear-down
2026-03-10T07:29:10.639 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [==========] 12 tests from 4 test suites ran. (66546 ms total)
2026-03-10T07:29:10.639 INFO:tasks.workunit.client.0.vm00.stdout: api_misc: [ PASSED ] 12 tests.
2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585003+0000 mon.a (mon.0) 1393 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-59956-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]': finished
2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585003+0000 mon.a (mon.0) 1393 : audit [INF] from='client.?
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-59956-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]': finished 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585062+0000 mon.a (mon.0) 1394 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.7", "id": [7, 2]}]': finished 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585062+0000 mon.a (mon.0) 1394 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.7", "id": [7, 2]}]': finished 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585102+0000 mon.a (mon.0) 1395 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [7, 4]}]': finished 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585102+0000 mon.a (mon.0) 1395 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [7, 4]}]': finished 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585137+0000 mon.a (mon.0) 1396 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.15", "id": [7, 2]}]': finished 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585137+0000 mon.a (mon.0) 1396 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.15", "id": [7, 2]}]': finished 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585170+0000 mon.a (mon.0) 1397 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [7, 2]}]': finished 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585170+0000 mon.a (mon.0) 1397 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [7, 2]}]': finished 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585208+0000 mon.a (mon.0) 1398 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.12", "id": [0, 1]}]': finished 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585208+0000 mon.a (mon.0) 1398 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.12", "id": [0, 1]}]': finished 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585250+0000 mon.a (mon.0) 1399 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]': finished 
2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585250+0000 mon.a (mon.0) 1399 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]': finished 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585292+0000 mon.a (mon.0) 1400 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.18", "id": [3, 7]}]': finished 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585292+0000 mon.a (mon.0) 1400 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.18", "id": [3, 7]}]': finished 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585331+0000 mon.a (mon.0) 1401 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.19", "id": [6, 1]}]': finished 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585331+0000 mon.a (mon.0) 1401 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.19", "id": [6, 1]}]': finished 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585376+0000 mon.a (mon.0) 1402 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 2]}]': finished 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585376+0000 mon.a (mon.0) 1402 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 2]}]': finished 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585414+0000 mon.a (mon.0) 1403 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1e", "id": [3, 7]}]': finished 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.585414+0000 mon.a (mon.0) 1403 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1e", "id": [3, 7]}]': finished 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: cluster 2026-03-10T07:29:09.595173+0000 mon.a (mon.0) 1404 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: cluster 2026-03-10T07:29:09.595173+0000 mon.a (mon.0) 1404 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.844252+0000 mon.c (mon.2) 162 : audit [INF] from='client.? 
192.168.123.100:0/925460559' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.844252+0000 mon.c (mon.2) 162 : audit [INF] from='client.? 192.168.123.100:0/925460559' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.844920+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.844920+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.853582+0000 mon.c (mon.2) 163 : audit [INF] from='client.? 192.168.123.100:0/1823967460' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59629-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.853582+0000 mon.c (mon.2) 163 : audit [INF] from='client.? 192.168.123.100:0/1823967460' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59629-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.854660+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59629-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:09.854660+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59629-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:10.213879+0000 mon.a (mon.0) 1407 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:10.213879+0000 mon.a (mon.0) 1407 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:10.215001+0000 mon.a (mon.0) 1408 : audit [INF] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T07:29:11.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:10 vm03 bash[23382]: audit 2026-03-10T07:29:10.215001+0000 mon.a (mon.0) 1408 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.585003+0000 mon.a (mon.0) 1393 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-59956-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]': finished 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.585003+0000 mon.a (mon.0) 1393 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-59956-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]': finished 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.585062+0000 mon.a (mon.0) 1394 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.7", "id": [7, 2]}]': finished 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.585062+0000 mon.a (mon.0) 1394 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.7", "id": [7, 2]}]': finished 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.585102+0000 mon.a (mon.0) 1395 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [7, 4]}]': finished 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.585102+0000 mon.a (mon.0) 1395 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [7, 4]}]': finished 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.585137+0000 mon.a (mon.0) 1396 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.15", "id": [7, 2]}]': finished 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.585137+0000 mon.a (mon.0) 1396 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.15", "id": [7, 2]}]': finished 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.585170+0000 mon.a (mon.0) 1397 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [7, 2]}]': finished 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 
2026-03-10T07:29:09.585170+0000 mon.a (mon.0) 1397 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [7, 2]}]': finished 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.585208+0000 mon.a (mon.0) 1398 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.12", "id": [0, 1]}]': finished 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.585208+0000 mon.a (mon.0) 1398 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.12", "id": [0, 1]}]': finished 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.585250+0000 mon.a (mon.0) 1399 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]': finished 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.585250+0000 mon.a (mon.0) 1399 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]': finished 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.585292+0000 mon.a (mon.0) 1400 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.18", "id": [3, 7]}]': finished 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.585292+0000 mon.a (mon.0) 1400 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.18", "id": [3, 7]}]': finished 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.585331+0000 mon.a (mon.0) 1401 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.19", "id": [6, 1]}]': finished 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.585331+0000 mon.a (mon.0) 1401 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.19", "id": [6, 1]}]': finished 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.585376+0000 mon.a (mon.0) 1402 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 2]}]': finished 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.585376+0000 mon.a (mon.0) 1402 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 2]}]': finished 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.585414+0000 mon.a (mon.0) 1403 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1e", "id": [3, 7]}]': finished 2026-03-10T07:29:11.080 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.585414+0000 mon.a (mon.0) 1403 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1e", "id": [3, 7]}]': finished 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: cluster 2026-03-10T07:29:09.595173+0000 mon.a (mon.0) 1404 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-10T07:29:11.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: cluster 2026-03-10T07:29:09.595173+0000 mon.a (mon.0) 1404 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.844252+0000 mon.c (mon.2) 162 : audit [INF] from='client.? 192.168.123.100:0/925460559' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.844252+0000 mon.c (mon.2) 162 : audit [INF] from='client.? 192.168.123.100:0/925460559' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.844920+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.844920+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.853582+0000 mon.c (mon.2) 163 : audit [INF] from='client.? 192.168.123.100:0/1823967460' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59629-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.853582+0000 mon.c (mon.2) 163 : audit [INF] from='client.? 192.168.123.100:0/1823967460' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59629-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.854660+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59629-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:09.854660+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59629-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:10.213879+0000 mon.a (mon.0) 1407 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:10.213879+0000 mon.a (mon.0) 1407 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:10.215001+0000 mon.a (mon.0) 1408 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:10 vm00 bash[28005]: audit 2026-03-10T07:29:10.215001+0000 mon.a (mon.0) 1408 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585003+0000 mon.a (mon.0) 1393 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-59956-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]': finished 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585003+0000 mon.a (mon.0) 1393 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm00-59956-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]': finished 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585062+0000 mon.a (mon.0) 1394 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.7", "id": [7, 2]}]': finished 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585062+0000 mon.a (mon.0) 1394 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.7", "id": [7, 2]}]': finished 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585102+0000 mon.a (mon.0) 1395 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [7, 4]}]': finished 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585102+0000 mon.a (mon.0) 1395 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [7, 4]}]': finished 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585137+0000 mon.a (mon.0) 1396 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.15", "id": [7, 2]}]': finished 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585137+0000 mon.a (mon.0) 1396 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.15", "id": [7, 2]}]': finished 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585170+0000 mon.a (mon.0) 1397 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [7, 2]}]': finished 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585170+0000 mon.a (mon.0) 1397 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [7, 2]}]': finished 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585208+0000 mon.a (mon.0) 1398 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.12", "id": [0, 1]}]': finished 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585208+0000 mon.a (mon.0) 1398 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.12", "id": [0, 1]}]': finished 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585250+0000 mon.a (mon.0) 1399 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]': finished 
2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585250+0000 mon.a (mon.0) 1399 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]': finished 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585292+0000 mon.a (mon.0) 1400 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.18", "id": [3, 7]}]': finished 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585292+0000 mon.a (mon.0) 1400 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.18", "id": [3, 7]}]': finished 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585331+0000 mon.a (mon.0) 1401 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.19", "id": [6, 1]}]': finished 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585331+0000 mon.a (mon.0) 1401 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.19", "id": [6, 1]}]': finished 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585376+0000 mon.a (mon.0) 1402 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 2]}]': finished 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585376+0000 mon.a (mon.0) 1402 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 2]}]': finished 2026-03-10T07:29:11.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585414+0000 mon.a (mon.0) 1403 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1e", "id": [3, 7]}]': finished 2026-03-10T07:29:11.082 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.585414+0000 mon.a (mon.0) 1403 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1e", "id": [3, 7]}]': finished 2026-03-10T07:29:11.082 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: cluster 2026-03-10T07:29:09.595173+0000 mon.a (mon.0) 1404 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-10T07:29:11.082 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: cluster 2026-03-10T07:29:09.595173+0000 mon.a (mon.0) 1404 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-10T07:29:11.082 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.844252+0000 mon.c (mon.2) 162 : audit [INF] from='client.? 
192.168.123.100:0/925460559' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.082 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.844252+0000 mon.c (mon.2) 162 : audit [INF] from='client.? 192.168.123.100:0/925460559' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.082 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.844920+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.082 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.844920+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.082 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.853582+0000 mon.c (mon.2) 163 : audit [INF] from='client.? 192.168.123.100:0/1823967460' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59629-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.082 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.853582+0000 mon.c (mon.2) 163 : audit [INF] from='client.? 192.168.123.100:0/1823967460' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59629-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.082 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.854660+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59629-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.082 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:09.854660+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59629-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:11.082 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:10.213879+0000 mon.a (mon.0) 1407 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:11.082 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:10.213879+0000 mon.a (mon.0) 1407 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:11.082 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:10.215001+0000 mon.a (mon.0) 1408 : audit [INF] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T07:29:11.082 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:10 vm00 bash[20701]: audit 2026-03-10T07:29:10.215001+0000 mon.a (mon.0) 1408 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T07:29:11.082 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:29:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:29:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:29:12.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:11 vm03 bash[23382]: audit 2026-03-10T07:29:10.589709+0000 mon.a (mon.0) 1409 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:12.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:11 vm03 bash[23382]: audit 2026-03-10T07:29:10.589709+0000 mon.a (mon.0) 1409 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:12.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:11 vm03 bash[23382]: audit 2026-03-10T07:29:10.589751+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59629-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:12.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:11 vm03 bash[23382]: audit 2026-03-10T07:29:10.589751+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59629-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:12.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:11 vm03 bash[23382]: audit 2026-03-10T07:29:10.589778+0000 mon.a (mon.0) 1411 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pg_num","val":"11"}]': finished 2026-03-10T07:29:12.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:11 vm03 bash[23382]: audit 2026-03-10T07:29:10.589778+0000 mon.a (mon.0) 1411 : audit [INF] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pg_num","val":"11"}]': finished 2026-03-10T07:29:12.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:11 vm03 bash[23382]: cluster 2026-03-10T07:29:10.591229+0000 mgr.y (mgr.24407) 176 : cluster [DBG] pgmap v159: 492 pgs: 39 creating+peering, 33 unknown, 6 peering, 414 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:29:12.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:11 vm03 bash[23382]: cluster 2026-03-10T07:29:10.591229+0000 mgr.y (mgr.24407) 176 : cluster [DBG] pgmap v159: 492 pgs: 39 creating+peering, 33 unknown, 6 peering, 414 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:29:12.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:11 vm03 bash[23382]: cluster 2026-03-10T07:29:10.592646+0000 mon.a (mon.0) 1412 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-10T07:29:12.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:11 vm03 bash[23382]: cluster 2026-03-10T07:29:10.592646+0000 mon.a (mon.0) 1412 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-10T07:29:12.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:11 vm03 bash[23382]: audit 2026-03-10T07:29:10.623635+0000 mon.a (mon.0) 1413 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:12.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:11 vm03 bash[23382]: audit 2026-03-10T07:29:10.623635+0000 mon.a (mon.0) 1413 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:12.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:11 vm03 bash[23382]: audit 2026-03-10T07:29:10.691922+0000 mon.a (mon.0) 1414 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:12.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:11 vm03 bash[23382]: audit 2026-03-10T07:29:10.691922+0000 mon.a (mon.0) 1414 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:12.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:11 vm03 bash[23382]: audit 2026-03-10T07:29:10.692442+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T07:29:12.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:11 vm03 bash[23382]: audit 2026-03-10T07:29:10.692442+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:11 vm00 bash[28005]: audit 2026-03-10T07:29:10.589709+0000 mon.a (mon.0) 1409 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:11 vm00 bash[28005]: audit 2026-03-10T07:29:10.589709+0000 mon.a (mon.0) 1409 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:11 vm00 bash[28005]: audit 2026-03-10T07:29:10.589751+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59629-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:11 vm00 bash[28005]: audit 2026-03-10T07:29:10.589751+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59629-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:11 vm00 bash[28005]: audit 2026-03-10T07:29:10.589778+0000 mon.a (mon.0) 1411 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pg_num","val":"11"}]': finished 2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:11 vm00 bash[28005]: audit 2026-03-10T07:29:10.589778+0000 mon.a (mon.0) 1411 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pg_num","val":"11"}]': finished 2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:11 vm00 bash[28005]: cluster 2026-03-10T07:29:10.591229+0000 mgr.y (mgr.24407) 176 : cluster [DBG] pgmap v159: 492 pgs: 39 creating+peering, 33 unknown, 6 peering, 414 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:11 vm00 bash[28005]: cluster 2026-03-10T07:29:10.591229+0000 mgr.y (mgr.24407) 176 : cluster [DBG] pgmap v159: 492 pgs: 39 creating+peering, 33 unknown, 6 peering, 414 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:11 vm00 bash[28005]: cluster 2026-03-10T07:29:10.592646+0000 mon.a (mon.0) 1412 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:11 vm00 bash[28005]: cluster 2026-03-10T07:29:10.592646+0000 mon.a (mon.0) 1412 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:11 vm00 bash[28005]: audit 2026-03-10T07:29:10.623635+0000 mon.a (mon.0) 1413 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:11 vm00 bash[28005]: audit 2026-03-10T07:29:10.623635+0000 mon.a (mon.0) 1413 : audit [DBG] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:11 vm00 bash[28005]: audit 2026-03-10T07:29:10.691922+0000 mon.a (mon.0) 1414 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:11 vm00 bash[28005]: audit 2026-03-10T07:29:10.692442+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pgp_num","val":"11"}]: dispatch
2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:11 vm00 bash[20701]: audit 2026-03-10T07:29:10.589709+0000 mon.a (mon.0) 1409 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-17","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:11 vm00 bash[20701]: audit 2026-03-10T07:29:10.589751+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm00-59629-22","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:11 vm00 bash[20701]: audit 2026-03-10T07:29:10.589778+0000 mon.a (mon.0) 1411 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pg_num","val":"11"}]': finished
2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:11 vm00 bash[20701]: cluster 2026-03-10T07:29:10.591229+0000 mgr.y (mgr.24407) 176 : cluster [DBG] pgmap v159: 492 pgs: 39 creating+peering, 33 unknown, 6 peering, 414 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:11 vm00 bash[20701]: cluster 2026-03-10T07:29:10.592646+0000 mon.a (mon.0) 1412 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in
2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:11 vm00 bash[20701]: audit 2026-03-10T07:29:10.623635+0000 mon.a (mon.0) 1413 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:11 vm00 bash[20701]: audit 2026-03-10T07:29:10.691922+0000 mon.a (mon.0) 1414 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:12.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:11 vm00 bash[20701]: audit 2026-03-10T07:29:10.692442+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pgp_num","val":"11"}]: dispatch
2026-03-10T07:29:13.265 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:29:13 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:29:13.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:12 vm03 bash[23382]: cluster 2026-03-10T07:29:11.634173+0000 mon.a (mon.0) 1416 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive, 2 pgs peering (PG_AVAILABILITY)
2026-03-10T07:29:13.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:12 vm03 bash[23382]: cluster 2026-03-10T07:29:11.634208+0000 mon.a (mon.0) 1417 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:29:13.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:12 vm03 bash[23382]: audit 2026-03-10T07:29:11.636464+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pgp_num","val":"11"}]': finished
2026-03-10T07:29:13.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:12 vm03 bash[23382]: cluster 2026-03-10T07:29:11.642337+0000 mon.a (mon.0) 1419 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in
2026-03-10T07:29:13.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:12 vm03 bash[23382]: audit 2026-03-10T07:29:11.650347+0000 mon.a (mon.0) 1420 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:13.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:12 vm03 bash[23382]: audit 2026-03-10T07:29:11.822498+0000 mon.a (mon.0) 1421 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:13.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:12 vm00 bash[28005]: cluster 2026-03-10T07:29:11.634173+0000 mon.a (mon.0) 1416 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive, 2 pgs peering (PG_AVAILABILITY)
2026-03-10T07:29:13.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:12 vm00 bash[28005]: cluster 2026-03-10T07:29:11.634208+0000 mon.a (mon.0) 1417 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:29:13.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:12 vm00 bash[28005]: audit 2026-03-10T07:29:11.636464+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pgp_num","val":"11"}]': finished
2026-03-10T07:29:13.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:12 vm00 bash[28005]: cluster 2026-03-10T07:29:11.642337+0000 mon.a (mon.0) 1419 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in
2026-03-10T07:29:13.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:12 vm00 bash[28005]: audit 2026-03-10T07:29:11.650347+0000 mon.a (mon.0) 1420 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:13.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:12 vm00 bash[28005]: audit 2026-03-10T07:29:11.822498+0000 mon.a (mon.0) 1421 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:13.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:12 vm00 bash[20701]: cluster 2026-03-10T07:29:11.634173+0000 mon.a (mon.0) 1416 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive, 2 pgs peering (PG_AVAILABILITY)
2026-03-10T07:29:13.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:12 vm00 bash[20701]: cluster 2026-03-10T07:29:11.634208+0000 mon.a (mon.0) 1417 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:29:13.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:12 vm00 bash[20701]: audit 2026-03-10T07:29:11.636464+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pgp_num","val":"11"}]': finished
2026-03-10T07:29:13.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:12 vm00 bash[20701]: cluster 2026-03-10T07:29:11.642337+0000 mon.a (mon.0) 1419 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in
2026-03-10T07:29:13.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:12 vm00 bash[20701]: audit 2026-03-10T07:29:11.650347+0000 mon.a (mon.0) 1420 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:13.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:12 vm00 bash[20701]: audit 2026-03-10T07:29:11.822498+0000 mon.a (mon.0) 1421 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:14.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:13 vm03 bash[23382]: cluster 2026-03-10T07:29:12.591703+0000 mgr.y (mgr.24407) 177 : cluster [DBG] pgmap v162: 396 pgs: 5 creating+peering, 3 unknown, 6 peering, 382 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 750 B/s rd, 0 op/s
2026-03-10T07:29:14.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:13 vm03 bash[23382]: cluster 2026-03-10T07:29:12.784419+0000 mon.a (mon.0) 1422 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in
2026-03-10T07:29:14.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:13 vm03 bash[23382]: audit 2026-03-10T07:29:12.791914+0000 mon.c (mon.2) 164 : audit [INF] from='client.? 192.168.123.100:0/4217674211' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm00-59637-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:14.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:13 vm03 bash[23382]: audit 2026-03-10T07:29:12.792356+0000 mon.c (mon.2) 165 : audit [INF] from='client.? 192.168.123.100:0/3535028425' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59629-23","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:14.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:13 vm03 bash[23382]: audit 2026-03-10T07:29:12.859302+0000 mon.a (mon.0) 1423 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:14.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:13 vm03 bash[23382]: audit 2026-03-10T07:29:12.859700+0000 mon.a (mon.0) 1424 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm00-59637-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:14.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:13 vm03 bash[23382]: audit 2026-03-10T07:29:12.859897+0000 mon.a (mon.0) 1425 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59629-23","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:14.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:13 vm03 bash[23382]: audit 2026-03-10T07:29:13.008344+0000 mgr.y (mgr.24407) 178 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:29:14.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:13 vm03 bash[23382]: audit 2026-03-10T07:29:13.759389+0000 mon.a (mon.0) 1426 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm00-59637-18","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:14.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:13 vm03 bash[23382]: audit 2026-03-10T07:29:13.759466+0000 mon.a (mon.0) 1427 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59629-23","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:14.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:13 vm03 bash[23382]: cluster 2026-03-10T07:29:13.764231+0000 mon.a (mon.0) 1428 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in
2026-03-10T07:29:14.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:13 vm03 bash[23382]: audit 2026-03-10T07:29:13.860248+0000 mon.a (mon.0) 1429 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:14.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:13 vm00 bash[28005]: cluster 2026-03-10T07:29:12.591703+0000 mgr.y (mgr.24407) 177 : cluster [DBG] pgmap v162: 396 pgs: 5 creating+peering, 3 unknown, 6 peering, 382 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 750 B/s rd, 0 op/s
2026-03-10T07:29:14.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:13 vm00 bash[28005]: cluster 2026-03-10T07:29:12.784419+0000 mon.a (mon.0) 1422 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in
2026-03-10T07:29:14.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:13 vm00 bash[28005]: audit 2026-03-10T07:29:12.791914+0000 mon.c (mon.2) 164 : audit [INF] from='client.? 192.168.123.100:0/4217674211' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm00-59637-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:14.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:13 vm00 bash[28005]: audit 2026-03-10T07:29:12.792356+0000 mon.c (mon.2) 165 : audit [INF] from='client.? 192.168.123.100:0/3535028425' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59629-23","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:14.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:13 vm00 bash[28005]: audit 2026-03-10T07:29:12.859302+0000 mon.a (mon.0) 1423 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:14.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:13 vm00 bash[28005]: audit 2026-03-10T07:29:12.859700+0000 mon.a (mon.0) 1424 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm00-59637-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:14.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:13 vm00 bash[28005]: audit 2026-03-10T07:29:12.859897+0000 mon.a (mon.0) 1425 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59629-23","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:14.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:13 vm00 bash[28005]: audit 2026-03-10T07:29:13.008344+0000 mgr.y (mgr.24407) 178 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:29:14.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:13 vm00 bash[28005]: audit 2026-03-10T07:29:13.759389+0000 mon.a (mon.0) 1426 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm00-59637-18","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:14.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:13 vm00 bash[28005]: audit 2026-03-10T07:29:13.759466+0000 mon.a (mon.0) 1427 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59629-23","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:14.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:13 vm00 bash[28005]: cluster 2026-03-10T07:29:13.764231+0000 mon.a (mon.0) 1428 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in
2026-03-10T07:29:14.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:13 vm00 bash[28005]: audit 2026-03-10T07:29:13.860248+0000 mon.a (mon.0) 1429 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:14.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:13 vm00 bash[20701]: cluster 2026-03-10T07:29:12.591703+0000 mgr.y (mgr.24407) 177 : cluster [DBG] pgmap v162: 396 pgs: 5 creating+peering, 3 unknown, 6 peering, 382 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 750 B/s rd, 0 op/s
2026-03-10T07:29:14.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:13 vm00 bash[20701]: cluster 2026-03-10T07:29:12.784419+0000 mon.a (mon.0) 1422 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in
2026-03-10T07:29:14.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:13 vm00 bash[20701]: audit 2026-03-10T07:29:12.791914+0000 mon.c (mon.2) 164 : audit [INF] from='client.? 192.168.123.100:0/4217674211' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm00-59637-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:14.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:13 vm00 bash[20701]: audit 2026-03-10T07:29:12.792356+0000 mon.c (mon.2) 165 : audit [INF] from='client.? 192.168.123.100:0/3535028425' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59629-23","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:14.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:13 vm00 bash[20701]: audit 2026-03-10T07:29:12.859302+0000 mon.a (mon.0) 1423 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:14.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:13 vm00 bash[20701]: audit 2026-03-10T07:29:12.859700+0000 mon.a (mon.0) 1424 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm00-59637-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:14.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:13 vm00 bash[20701]: audit 2026-03-10T07:29:12.859897+0000 mon.a (mon.0) 1425 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59629-23","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:14.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:13 vm00 bash[20701]: audit 2026-03-10T07:29:13.008344+0000 mgr.y (mgr.24407) 178 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:29:14.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:13 vm00 bash[20701]: audit 2026-03-10T07:29:13.759389+0000 mon.a (mon.0) 1426 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm00-59637-18","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:14.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:13 vm00 bash[20701]: audit 2026-03-10T07:29:13.759466+0000 mon.a (mon.0) 1427 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm00-59629-23","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:14.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:13 vm00 bash[20701]: cluster 2026-03-10T07:29:13.764231+0000 mon.a (mon.0) 1428 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in
2026-03-10T07:29:14.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:13 vm00 bash[20701]: audit 2026-03-10T07:29:13.860248+0000 mon.a (mon.0) 1429 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: Running main() from gmock_main.cc
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [==========] Running 21 tests from 5 test suites.
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] Global test environment set-up.
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] 5 tests from LibRadosSnapshotsPP
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: seed 59956
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapListPP
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapListPP (2298 ms)
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapRemovePP
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapRemovePP (2106 ms)
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.RollbackPP
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.RollbackPP (2081 ms)
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapGetNamePP
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapGetNamePP (2236 ms)
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapCreateRemovePP
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapCreateRemovePP (3611 ms)
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] 5 tests from LibRadosSnapshotsPP (12332 ms total)
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp:
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] 7 tests from LibRadosSnapshotsSelfManagedPP
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.SnapPP
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.SnapPP (4707 ms)
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.RollbackPP
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.RollbackPP (3892 ms)
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.SnapOverlapPP
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.SnapOverlapPP (5530 ms)
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.Bug11677
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.Bug11677 (4141 ms)
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.OrderSnap
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.OrderSnap (2331 ms)
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.WriteRollback
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: ./src/test/librados/snapshots_cxx.cc:460: Skipped
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp:
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ SKIPPED ] LibRadosSnapshotsSelfManagedPP.WriteRollback (0 ms)
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.ReusePurgedSnap
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: deleting snap 14 in pool LibRadosSnapshotsSelfManagedPP_vm00-59956-7
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: waiting for snaps to purge
2026-03-10T07:29:15.947 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.ReusePurgedSnap (17637 ms)
2026-03-10T07:29:15.948 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] 7 tests from LibRadosSnapshotsSelfManagedPP (38238 ms total)
2026-03-10T07:29:15.948 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp:
2026-03-10T07:29:15.948 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] 2 tests from LibRadosPoolIsInSelfmanagedSnapsMode
2026-03-10T07:29:15.948 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosPoolIsInSelfmanagedSnapsMode.NotConnected
2026-03-10T07:29:15.948 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosPoolIsInSelfmanagedSnapsMode.NotConnected (5 ms)
2026-03-10T07:29:15.948 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosPoolIsInSelfmanagedSnapsMode.FreshInstance
2026-03-10T07:29:15.948 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosPoolIsInSelfmanagedSnapsMode.FreshInstance (5410 ms)
2026-03-10T07:29:15.948 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] 2 tests from LibRadosPoolIsInSelfmanagedSnapsMode (5415 ms total)
2026-03-10T07:29:15.948 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp:
2026-03-10T07:29:15.948 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] 4 tests from LibRadosSnapshotsECPP
2026-03-10T07:29:15.948 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsECPP.SnapListPP
2026-03-10T07:29:15.948 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.SnapListPP (3152 ms)
2026-03-10T07:29:15.948 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsECPP.SnapRemovePP
2026-03-10T07:29:15.948 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.SnapRemovePP (1999 ms)
2026-03-10T07:29:15.948 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsECPP.RollbackPP
2026-03-10T07:29:15.948 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.RollbackPP (1165 ms)
2026-03-10T07:29:16.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:15 vm03 bash[23382]: cluster 2026-03-10T07:29:14.592271+0000 mgr.y (mgr.24407) 179 : cluster [DBG] pgmap v165: 460 pgs: 5 creating+peering, 67 unknown, 6 peering, 382 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:29:16.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:15 vm03 bash[23382]: cluster 2026-03-10T07:29:14.786372+0000 mon.a (mon.0) 1430 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in
2026-03-10T07:29:16.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:15 vm03 bash[23382]: audit 2026-03-10T07:29:14.861045+0000 mon.a (mon.0) 1431 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:16.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:15 vm00 bash[28005]: cluster 2026-03-10T07:29:14.592271+0000 mgr.y (mgr.24407) 179 : cluster [DBG] pgmap v165: 460 pgs: 5 creating+peering, 67 unknown, 6 peering, 382 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:29:16.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:15 vm00 bash[28005]: cluster 2026-03-10T07:29:14.786372+0000 mon.a (mon.0) 1430 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in
2026-03-10T07:29:16.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:15 vm00 bash[28005]: audit 2026-03-10T07:29:14.861045+0000 mon.a (mon.0) 1431 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:16.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:15 vm00 bash[20701]: cluster 2026-03-10T07:29:14.592271+0000 mgr.y (mgr.24407) 179 : cluster [DBG] pgmap v165: 460 pgs: 5 creating+peering, 67 unknown, 6 peering, 382 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:29:16.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:15 vm00 bash[20701]: cluster 2026-03-10T07:29:14.786372+0000 mon.a (mon.0) 1430 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in
2026-03-10T07:29:16.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:15 vm00 bash[20701]: audit 2026-03-10T07:29:14.861045+0000 mon.a (mon.0) 1431 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:17.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:16 vm00 bash[28005]: cluster 2026-03-10T07:29:15.781304+0000 mon.a (mon.0) 1432 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in
2026-03-10T07:29:17.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:16 vm00 bash[28005]: audit 2026-03-10T07:29:15.791183+0000 mon.c (mon.2) 166 : audit [INF] from='client.? 192.168.123.100:0/229848316' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59629-24","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:17.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:16 vm00 bash[28005]: audit 2026-03-10T07:29:15.798496+0000 mon.b (mon.1) 151 : audit [INF] from='client.? 192.168.123.100:0/3353711703' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm00-59637-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:17.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:16 vm00 bash[28005]: audit 2026-03-10T07:29:15.808617+0000 mon.a (mon.0) 1433 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59629-24","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:17.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:16 vm00 bash[28005]: audit 2026-03-10T07:29:15.808800+0000 mon.a (mon.0) 1434 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm00-59637-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:17.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:16 vm00 bash[28005]: audit 2026-03-10T07:29:15.861868+0000 mon.a (mon.0) 1435 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:17.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:16 vm00 bash[28005]: audit 2026-03-10T07:29:15.925847+0000 mon.a (mon.0) 1436 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59629-24","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:17.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:16 vm00 bash[28005]: audit 2026-03-10T07:29:15.926027+0000 mon.a (mon.0) 1437 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm00-59637-19","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:17.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:16 vm00 bash[28005]: cluster 2026-03-10T07:29:15.939974+0000 mon.a (mon.0) 1438 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in
2026-03-10T07:29:17.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:16 vm00 bash[28005]: audit 2026-03-10T07:29:16.649095+0000 mon.a (mon.0) 1439 : audit [DBG] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
2026-03-10T07:29:17.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:16 vm00 bash[28005]: audit 2026-03-10T07:29:16.650774+0000 mon.a (mon.0) 1440 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app1","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:17.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:16 vm00 bash[20701]: cluster 2026-03-10T07:29:15.781304+0000 mon.a (mon.0) 1432 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in
2026-03-10T07:29:17.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:16 vm00 bash[20701]: audit 2026-03-10T07:29:15.791183+0000 mon.c (mon.2) 166 : audit [INF] from='client.? 192.168.123.100:0/229848316' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59629-24","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:17.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:16 vm00 bash[20701]: audit 2026-03-10T07:29:15.798496+0000 mon.b (mon.1) 151 : audit [INF] from='client.? 192.168.123.100:0/3353711703' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm00-59637-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:17.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:16 vm00 bash[20701]: audit 2026-03-10T07:29:15.808617+0000 mon.a (mon.0) 1433 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59629-24","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:17.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:16 vm00 bash[20701]: audit 2026-03-10T07:29:15.808800+0000 mon.a (mon.0) 1434 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm00-59637-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:17.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:16 vm00 bash[20701]: audit 2026-03-10T07:29:15.861868+0000 mon.a (mon.0) 1435 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:17.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:16 vm00 bash[20701]: audit 2026-03-10T07:29:15.925847+0000 mon.a (mon.0) 1436 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59629-24","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:17.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:16 vm00 bash[20701]: audit 2026-03-10T07:29:15.926027+0000 mon.a (mon.0) 1437 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm00-59637-19","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:17.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:16 vm00 bash[20701]: cluster 2026-03-10T07:29:15.939974+0000 mon.a (mon.0) 1438 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in
2026-03-10T07:29:17.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:16 vm00 bash[20701]: audit 2026-03-10T07:29:16.649095+0000 mon.a (mon.0) 1439 : audit [DBG] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
2026-03-10T07:29:17.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:16 vm00 bash[20701]: audit 2026-03-10T07:29:16.650774+0000 mon.a (mon.0) 1440 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app1","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:17.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:16 vm03 bash[23382]: cluster 2026-03-10T07:29:15.781304+0000 mon.a (mon.0) 1432 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in
2026-03-10T07:29:17.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:16 vm03 bash[23382]: audit 2026-03-10T07:29:15.791183+0000 mon.c (mon.2) 166 : audit [INF] from='client.? 192.168.123.100:0/229848316' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59629-24","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:17.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:16 vm03 bash[23382]: audit 2026-03-10T07:29:15.798496+0000 mon.b (mon.1) 151 : audit [INF] from='client.? 192.168.123.100:0/3353711703' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm00-59637-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:17.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:16 vm03 bash[23382]: audit 2026-03-10T07:29:15.808617+0000 mon.a (mon.0) 1433 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59629-24","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:17.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:16 vm03 bash[23382]: audit 2026-03-10T07:29:15.808800+0000 mon.a (mon.0) 1434 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm00-59637-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:17.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:16 vm03 bash[23382]: audit 2026-03-10T07:29:15.861868+0000 mon.a (mon.0) 1435 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:17.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:16 vm03 bash[23382]: audit 2026-03-10T07:29:15.925847+0000 mon.a (mon.0) 1436 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm00-59629-24","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:17.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:16 vm03 bash[23382]: audit 2026-03-10T07:29:15.926027+0000 mon.a (mon.0) 1437 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm00-59637-19","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:17.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:16 vm03 bash[23382]: cluster 2026-03-10T07:29:15.939974+0000 mon.a (mon.0) 1438 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in
2026-03-10T07:29:17.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:16 vm03 bash[23382]: audit 2026-03-10T07:29:16.649095+0000 mon.a (mon.0) 1439 : audit [DBG] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
2026-03-10T07:29:17.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:16 vm03 bash[23382]: audit 2026-03-10T07:29:16.650774+0000 mon.a (mon.0) 1440 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app1","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:18.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:17 vm00 bash[28005]: cluster 2026-03-10T07:29:16.592775+0000 mgr.y (mgr.24407) 180 : cluster [DBG] pgmap v169: 460 pgs: 64 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 4.7 KiB/s wr, 10 op/s
2026-03-10T07:29:18.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:17 vm00 bash[28005]: audit 2026-03-10T07:29:16.862712+0000 mon.a (mon.0) 1441 : audit [DBG] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:17 vm00 bash[28005]: audit 2026-03-10T07:29:16.862712+0000 mon.a (mon.0) 1441 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:17 vm00 bash[28005]: cluster 2026-03-10T07:29:16.925676+0000 mon.a (mon.0) 1442 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs inactive, 2 pgs peering) 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:17 vm00 bash[28005]: cluster 2026-03-10T07:29:16.925676+0000 mon.a (mon.0) 1442 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs inactive, 2 pgs peering) 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:17 vm00 bash[28005]: audit 2026-03-10T07:29:16.934242+0000 mon.a (mon.0) 1443 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:17 vm00 bash[28005]: audit 2026-03-10T07:29:16.934242+0000 mon.a (mon.0) 1443 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:17 vm00 bash[28005]: cluster 2026-03-10T07:29:16.940243+0000 mon.a (mon.0) 1444 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:17 vm00 bash[28005]: cluster 2026-03-10T07:29:16.940243+0000 mon.a (mon.0) 1444 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:17 vm00 bash[28005]: audit 2026-03-10T07:29:16.941647+0000 mon.a (mon.0) 1445 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app2"}]: dispatch 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:17 vm00 bash[28005]: audit 2026-03-10T07:29:16.941647+0000 mon.a (mon.0) 1445 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app2"}]: dispatch 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:17 vm00 bash[28005]: audit 2026-03-10T07:29:16.942595+0000 mon.a (mon.0) 1446 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:17 vm00 bash[28005]: audit 2026-03-10T07:29:16.942595+0000 mon.a (mon.0) 1446 : audit [INF] from='client.? 
192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:17 vm00 bash[20701]: cluster 2026-03-10T07:29:16.592775+0000 mgr.y (mgr.24407) 180 : cluster [DBG] pgmap v169: 460 pgs: 64 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 4.7 KiB/s wr, 10 op/s 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:17 vm00 bash[20701]: cluster 2026-03-10T07:29:16.592775+0000 mgr.y (mgr.24407) 180 : cluster [DBG] pgmap v169: 460 pgs: 64 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 4.7 KiB/s wr, 10 op/s 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:17 vm00 bash[20701]: audit 2026-03-10T07:29:16.862712+0000 mon.a (mon.0) 1441 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:17 vm00 bash[20701]: audit 2026-03-10T07:29:16.862712+0000 mon.a (mon.0) 1441 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:17 vm00 bash[20701]: cluster 2026-03-10T07:29:16.925676+0000 mon.a (mon.0) 1442 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs inactive, 2 pgs peering) 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:17 vm00 bash[20701]: cluster 2026-03-10T07:29:16.925676+0000 mon.a (mon.0) 1442 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs inactive, 2 pgs peering) 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:17 vm00 bash[20701]: audit 2026-03-10T07:29:16.934242+0000 mon.a (mon.0) 1443 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:17 vm00 bash[20701]: audit 2026-03-10T07:29:16.934242+0000 mon.a (mon.0) 1443 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:17 vm00 bash[20701]: cluster 2026-03-10T07:29:16.940243+0000 mon.a (mon.0) 1444 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:17 vm00 bash[20701]: cluster 2026-03-10T07:29:16.940243+0000 mon.a (mon.0) 1444 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:17 vm00 bash[20701]: audit 2026-03-10T07:29:16.941647+0000 mon.a (mon.0) 1445 : audit [INF] from='client.? 
192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app2"}]: dispatch 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:17 vm00 bash[20701]: audit 2026-03-10T07:29:16.941647+0000 mon.a (mon.0) 1445 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app2"}]: dispatch 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:17 vm00 bash[20701]: audit 2026-03-10T07:29:16.942595+0000 mon.a (mon.0) 1446 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:17 vm00 bash[20701]: audit 2026-03-10T07:29:16.942595+0000 mon.a (mon.0) 1446 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:18.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:17 vm03 bash[23382]: cluster 2026-03-10T07:29:16.592775+0000 mgr.y (mgr.24407) 180 : cluster [DBG] pgmap v169: 460 pgs: 64 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 4.7 KiB/s wr, 10 op/s 2026-03-10T07:29:18.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:17 vm03 bash[23382]: cluster 2026-03-10T07:29:16.592775+0000 mgr.y (mgr.24407) 180 : cluster [DBG] pgmap v169: 460 pgs: 64 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 4.7 KiB/s wr, 10 op/s 2026-03-10T07:29:18.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:17 vm03 bash[23382]: audit 2026-03-10T07:29:16.862712+0000 mon.a (mon.0) 1441 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:18.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:17 vm03 bash[23382]: audit 2026-03-10T07:29:16.862712+0000 mon.a (mon.0) 1441 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:18.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:17 vm03 bash[23382]: cluster 2026-03-10T07:29:16.925676+0000 mon.a (mon.0) 1442 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs inactive, 2 pgs peering) 2026-03-10T07:29:18.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:17 vm03 bash[23382]: cluster 2026-03-10T07:29:16.925676+0000 mon.a (mon.0) 1442 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs inactive, 2 pgs peering) 2026-03-10T07:29:18.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:17 vm03 bash[23382]: audit 2026-03-10T07:29:16.934242+0000 mon.a (mon.0) 1443 : audit [INF] from='client.? 
192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:18.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:17 vm03 bash[23382]: audit 2026-03-10T07:29:16.934242+0000 mon.a (mon.0) 1443 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:18.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:17 vm03 bash[23382]: cluster 2026-03-10T07:29:16.940243+0000 mon.a (mon.0) 1444 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-10T07:29:18.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:17 vm03 bash[23382]: cluster 2026-03-10T07:29:16.940243+0000 mon.a (mon.0) 1444 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-10T07:29:18.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:17 vm03 bash[23382]: audit 2026-03-10T07:29:16.941647+0000 mon.a (mon.0) 1445 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app2"}]: dispatch 2026-03-10T07:29:18.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:17 vm03 bash[23382]: audit 2026-03-10T07:29:16.941647+0000 mon.a (mon.0) 1445 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app2"}]: dispatch 2026-03-10T07:29:18.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:17 vm03 bash[23382]: audit 2026-03-10T07:29:16.942595+0000 mon.a (mon.0) 1446 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:18.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:17 vm03 bash[23382]: audit 2026-03-10T07:29:16.942595+0000 mon.a (mon.0) 1446 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:19.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:18 vm00 bash[28005]: audit 2026-03-10T07:29:17.863531+0000 mon.a (mon.0) 1447 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:19.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:18 vm00 bash[28005]: audit 2026-03-10T07:29:17.863531+0000 mon.a (mon.0) 1447 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:18 vm00 bash[28005]: audit 2026-03-10T07:29:17.938985+0000 mon.a (mon.0) 1448 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:18 vm00 bash[28005]: audit 2026-03-10T07:29:17.938985+0000 mon.a (mon.0) 1448 : audit [INF] from='client.? 
192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:18 vm00 bash[28005]: cluster 2026-03-10T07:29:17.949085+0000 mon.a (mon.0) 1449 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:18 vm00 bash[28005]: cluster 2026-03-10T07:29:17.949085+0000 mon.a (mon.0) 1449 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:18 vm00 bash[28005]: audit 2026-03-10T07:29:17.951304+0000 mon.a (mon.0) 1450 : audit [INF] from='client.? 192.168.123.100:0/353319515' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59637-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:18 vm00 bash[28005]: audit 2026-03-10T07:29:17.951304+0000 mon.a (mon.0) 1450 : audit [INF] from='client.? 192.168.123.100:0/353319515' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59637-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:18 vm00 bash[28005]: audit 2026-03-10T07:29:17.952082+0000 mon.a (mon.0) 1451 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:18 vm00 bash[28005]: audit 2026-03-10T07:29:17.952082+0000 mon.a (mon.0) 1451 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:18 vm00 bash[28005]: audit 2026-03-10T07:29:17.952995+0000 mon.a (mon.0) 1452 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:18 vm00 bash[28005]: audit 2026-03-10T07:29:17.952995+0000 mon.a (mon.0) 1452 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:18 vm00 bash[28005]: audit 2026-03-10T07:29:17.960511+0000 mon.b (mon.1) 152 : audit [INF] from='client.? 192.168.123.100:0/987614573' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59629-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:18 vm00 bash[28005]: audit 2026-03-10T07:29:17.960511+0000 mon.b (mon.1) 152 : audit [INF] from='client.? 
192.168.123.100:0/987614573' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59629-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:18 vm00 bash[28005]: audit 2026-03-10T07:29:17.962240+0000 mon.a (mon.0) 1453 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59629-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:18 vm00 bash[28005]: audit 2026-03-10T07:29:17.962240+0000 mon.a (mon.0) 1453 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59629-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:18 vm00 bash[20701]: audit 2026-03-10T07:29:17.863531+0000 mon.a (mon.0) 1447 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:18 vm00 bash[20701]: audit 2026-03-10T07:29:17.863531+0000 mon.a (mon.0) 1447 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:18 vm00 bash[20701]: audit 2026-03-10T07:29:17.938985+0000 mon.a (mon.0) 1448 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:18 vm00 bash[20701]: audit 2026-03-10T07:29:17.938985+0000 mon.a (mon.0) 1448 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:18 vm00 bash[20701]: cluster 2026-03-10T07:29:17.949085+0000 mon.a (mon.0) 1449 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:18 vm00 bash[20701]: cluster 2026-03-10T07:29:17.949085+0000 mon.a (mon.0) 1449 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:18 vm00 bash[20701]: audit 2026-03-10T07:29:17.951304+0000 mon.a (mon.0) 1450 : audit [INF] from='client.? 192.168.123.100:0/353319515' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59637-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:18 vm00 bash[20701]: audit 2026-03-10T07:29:17.951304+0000 mon.a (mon.0) 1450 : audit [INF] from='client.? 192.168.123.100:0/353319515' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59637-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:18 vm00 bash[20701]: audit 2026-03-10T07:29:17.952082+0000 mon.a (mon.0) 1451 : audit [INF] from='client.? 
192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:18 vm00 bash[20701]: audit 2026-03-10T07:29:17.952082+0000 mon.a (mon.0) 1451 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:18 vm00 bash[20701]: audit 2026-03-10T07:29:17.952995+0000 mon.a (mon.0) 1452 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:18 vm00 bash[20701]: audit 2026-03-10T07:29:17.952995+0000 mon.a (mon.0) 1452 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:18 vm00 bash[20701]: audit 2026-03-10T07:29:17.960511+0000 mon.b (mon.1) 152 : audit [INF] from='client.? 192.168.123.100:0/987614573' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59629-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:18 vm00 bash[20701]: audit 2026-03-10T07:29:17.960511+0000 mon.b (mon.1) 152 : audit [INF] from='client.? 192.168.123.100:0/987614573' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59629-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:18 vm00 bash[20701]: audit 2026-03-10T07:29:17.962240+0000 mon.a (mon.0) 1453 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59629-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:18 vm00 bash[20701]: audit 2026-03-10T07:29:17.962240+0000 mon.a (mon.0) 1453 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59629-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:19.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:18 vm03 bash[23382]: audit 2026-03-10T07:29:17.863531+0000 mon.a (mon.0) 1447 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:19.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:18 vm03 bash[23382]: audit 2026-03-10T07:29:17.863531+0000 mon.a (mon.0) 1447 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:19.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:18 vm03 bash[23382]: audit 2026-03-10T07:29:17.938985+0000 mon.a (mon.0) 1448 : audit [INF] from='client.? 
192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:19.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:18 vm03 bash[23382]: audit 2026-03-10T07:29:17.938985+0000 mon.a (mon.0) 1448 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm00-59776-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:19.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:18 vm03 bash[23382]: cluster 2026-03-10T07:29:17.949085+0000 mon.a (mon.0) 1449 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-10T07:29:19.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:18 vm03 bash[23382]: cluster 2026-03-10T07:29:17.949085+0000 mon.a (mon.0) 1449 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-10T07:29:19.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:18 vm03 bash[23382]: audit 2026-03-10T07:29:17.951304+0000 mon.a (mon.0) 1450 : audit [INF] from='client.? 192.168.123.100:0/353319515' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59637-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:19.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:18 vm03 bash[23382]: audit 2026-03-10T07:29:17.951304+0000 mon.a (mon.0) 1450 : audit [INF] from='client.? 192.168.123.100:0/353319515' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59637-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:19.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:18 vm03 bash[23382]: audit 2026-03-10T07:29:17.952082+0000 mon.a (mon.0) 1451 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-10T07:29:19.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:18 vm03 bash[23382]: audit 2026-03-10T07:29:17.952082+0000 mon.a (mon.0) 1451 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-10T07:29:19.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:18 vm03 bash[23382]: audit 2026-03-10T07:29:17.952995+0000 mon.a (mon.0) 1452 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T07:29:19.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:18 vm03 bash[23382]: audit 2026-03-10T07:29:17.952995+0000 mon.a (mon.0) 1452 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T07:29:19.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:18 vm03 bash[23382]: audit 2026-03-10T07:29:17.960511+0000 mon.b (mon.1) 152 : audit [INF] from='client.? 
192.168.123.100:0/987614573' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59629-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:19.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:18 vm03 bash[23382]: audit 2026-03-10T07:29:17.960511+0000 mon.b (mon.1) 152 : audit [INF] from='client.? 192.168.123.100:0/987614573' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59629-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:19.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:18 vm03 bash[23382]: audit 2026-03-10T07:29:17.962240+0000 mon.a (mon.0) 1453 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59629-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:19.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:18 vm03 bash[23382]: audit 2026-03-10T07:29:17.962240+0000 mon.a (mon.0) 1453 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59629-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:19 vm00 bash[28005]: cluster 2026-03-10T07:29:18.593214+0000 mgr.y (mgr.24407) 181 : cluster [DBG] pgmap v172: 460 pgs: 64 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:19 vm00 bash[28005]: cluster 2026-03-10T07:29:18.593214+0000 mgr.y (mgr.24407) 181 : cluster [DBG] pgmap v172: 460 pgs: 64 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:19 vm00 bash[28005]: audit 2026-03-10T07:29:18.864243+0000 mon.a (mon.0) 1454 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:19 vm00 bash[28005]: audit 2026-03-10T07:29:18.864243+0000 mon.a (mon.0) 1454 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:19 vm00 bash[28005]: cluster 2026-03-10T07:29:18.939000+0000 mon.a (mon.0) 1455 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:19 vm00 bash[28005]: cluster 2026-03-10T07:29:18.939000+0000 mon.a (mon.0) 1455 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:19 vm00 bash[28005]: audit 2026-03-10T07:29:18.948081+0000 mon.a (mon.0) 1456 : audit [INF] from='client.? 192.168.123.100:0/353319515' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59637-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:19 vm00 bash[28005]: audit 2026-03-10T07:29:18.948081+0000 mon.a (mon.0) 1456 : audit [INF] from='client.? 
192.168.123.100:0/353319515' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59637-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:19 vm00 bash[28005]: audit 2026-03-10T07:29:18.948538+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:19 vm00 bash[28005]: audit 2026-03-10T07:29:18.948538+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:19 vm00 bash[28005]: audit 2026-03-10T07:29:18.948670+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59629-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:19 vm00 bash[28005]: audit 2026-03-10T07:29:18.948670+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59629-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:19 vm00 bash[28005]: audit 2026-03-10T07:29:18.968771+0000 mon.b (mon.1) 153 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:19 vm00 bash[28005]: audit 2026-03-10T07:29:18.968771+0000 mon.b (mon.1) 153 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:19 vm00 bash[28005]: cluster 2026-03-10T07:29:18.969498+0000 mon.a (mon.0) 1459 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:19 vm00 bash[28005]: cluster 2026-03-10T07:29:18.969498+0000 mon.a (mon.0) 1459 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:19 vm00 bash[28005]: audit 2026-03-10T07:29:18.986602+0000 mon.a (mon.0) 1460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:19 vm00 bash[28005]: audit 2026-03-10T07:29:18.986602+0000 mon.a (mon.0) 1460 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:19 vm00 bash[28005]: audit 2026-03-10T07:29:18.992122+0000 mon.a (mon.0) 1461 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:19 vm00 bash[28005]: audit 2026-03-10T07:29:18.992122+0000 mon.a (mon.0) 1461 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:19 vm00 bash[20701]: cluster 2026-03-10T07:29:18.593214+0000 mgr.y (mgr.24407) 181 : cluster [DBG] pgmap v172: 460 pgs: 64 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:19 vm00 bash[20701]: cluster 2026-03-10T07:29:18.593214+0000 mgr.y (mgr.24407) 181 : cluster [DBG] pgmap v172: 460 pgs: 64 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:19 vm00 bash[20701]: audit 2026-03-10T07:29:18.864243+0000 mon.a (mon.0) 1454 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:19 vm00 bash[20701]: audit 2026-03-10T07:29:18.864243+0000 mon.a (mon.0) 1454 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:19 vm00 bash[20701]: cluster 2026-03-10T07:29:18.939000+0000 mon.a (mon.0) 1455 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:19 vm00 bash[20701]: cluster 2026-03-10T07:29:18.939000+0000 mon.a (mon.0) 1455 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:19 vm00 bash[20701]: audit 2026-03-10T07:29:18.948081+0000 mon.a (mon.0) 1456 : audit [INF] from='client.? 192.168.123.100:0/353319515' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59637-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:19 vm00 bash[20701]: audit 2026-03-10T07:29:18.948081+0000 mon.a (mon.0) 1456 : audit [INF] from='client.? 192.168.123.100:0/353319515' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59637-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:19 vm00 bash[20701]: audit 2026-03-10T07:29:18.948538+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? 
192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:19 vm00 bash[20701]: audit 2026-03-10T07:29:18.948538+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:19 vm00 bash[20701]: audit 2026-03-10T07:29:18.948670+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59629-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:19 vm00 bash[20701]: audit 2026-03-10T07:29:18.948670+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59629-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:19 vm00 bash[20701]: audit 2026-03-10T07:29:18.968771+0000 mon.b (mon.1) 153 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:19 vm00 bash[20701]: audit 2026-03-10T07:29:18.968771+0000 mon.b (mon.1) 153 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:19 vm00 bash[20701]: cluster 2026-03-10T07:29:18.969498+0000 mon.a (mon.0) 1459 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:19 vm00 bash[20701]: cluster 2026-03-10T07:29:18.969498+0000 mon.a (mon.0) 1459 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:19 vm00 bash[20701]: audit 2026-03-10T07:29:18.986602+0000 mon.a (mon.0) 1460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:19 vm00 bash[20701]: audit 2026-03-10T07:29:18.986602+0000 mon.a (mon.0) 1460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:19 vm00 bash[20701]: audit 2026-03-10T07:29:18.992122+0000 mon.a (mon.0) 1461 : audit [INF] from='client.? 
192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T07:29:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:19 vm00 bash[20701]: audit 2026-03-10T07:29:18.992122+0000 mon.a (mon.0) 1461 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T07:29:20.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:19 vm03 bash[23382]: cluster 2026-03-10T07:29:18.593214+0000 mgr.y (mgr.24407) 181 : cluster [DBG] pgmap v172: 460 pgs: 64 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:20.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:19 vm03 bash[23382]: cluster 2026-03-10T07:29:18.593214+0000 mgr.y (mgr.24407) 181 : cluster [DBG] pgmap v172: 460 pgs: 64 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:20.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:19 vm03 bash[23382]: audit 2026-03-10T07:29:18.864243+0000 mon.a (mon.0) 1454 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:20.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:19 vm03 bash[23382]: audit 2026-03-10T07:29:18.864243+0000 mon.a (mon.0) 1454 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:20.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:19 vm03 bash[23382]: cluster 2026-03-10T07:29:18.939000+0000 mon.a (mon.0) 1455 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:20.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:19 vm03 bash[23382]: cluster 2026-03-10T07:29:18.939000+0000 mon.a (mon.0) 1455 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:20.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:19 vm03 bash[23382]: audit 2026-03-10T07:29:18.948081+0000 mon.a (mon.0) 1456 : audit [INF] from='client.? 192.168.123.100:0/353319515' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59637-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:20.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:19 vm03 bash[23382]: audit 2026-03-10T07:29:18.948081+0000 mon.a (mon.0) 1456 : audit [INF] from='client.? 192.168.123.100:0/353319515' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm00-59637-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:20.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:19 vm03 bash[23382]: audit 2026-03-10T07:29:18.948538+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-10T07:29:20.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:19 vm03 bash[23382]: audit 2026-03-10T07:29:18.948538+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? 
192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key1","value":"value1"}]': finished
2026-03-10T07:29:20.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:19 vm03 bash[23382]: audit 2026-03-10T07:29:18.948670+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm00-59629-25","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:20.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:19 vm03 bash[23382]: audit 2026-03-10T07:29:18.968771+0000 mon.b (mon.1) 153 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch
2026-03-10T07:29:20.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:19 vm03 bash[23382]: cluster 2026-03-10T07:29:18.969498+0000 mon.a (mon.0) 1459 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in
2026-03-10T07:29:20.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:19 vm03 bash[23382]: audit 2026-03-10T07:29:18.986602+0000 mon.a (mon.0) 1460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch
2026-03-10T07:29:20.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:19 vm03 bash[23382]: audit 2026-03-10T07:29:18.992122+0000 mon.a (mon.0) 1461 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key2","value":"value2"}]: dispatch
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ snapshots: Running main() from gmock_main.cc
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [==========] Running 11 tests from 2 test suites.
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [----------] Global test environment set-up.
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [----------] 5 tests from NeoRadosSnapshots
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapList
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapList (5062 ms)
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapRemove
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapRemove (5430 ms)
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSnapshots.Rollback
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSnapshots.Rollback (3597 ms)
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapGetName
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapGetName (5082 ms)
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapCreateRemove
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapCreateRemove (6921 ms)
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [----------] 5 tests from NeoRadosSnapshots (26092 ms total)
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots:
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [----------] 6 tests from NeoRadosSelfManagedSnaps
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.Snap
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.Snap (5088 ms)
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.Rollback
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.Rollback (5860 ms)
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.SnapOverlap
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.SnapOverlap (8397 ms)
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.Bug11677
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.Bug11677 (6644 ms)
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.OrderSnap
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.OrderSnap (4039 ms)
2026-03-10T07:29:21.000 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.ReusePurgedSnap
2026-03-10T07:29:21.001 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: Deleting snap 3 in pool ReusePurgedSnapvm00-60638-11.
2026-03-10T07:29:21.001 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: Waiting for snaps to purge.
2026-03-10T07:29:21.001 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.ReusePurgedSnap (19991 ms)
2026-03-10T07:29:21.001 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [----------] 6 tests from NeoRadosSelfManagedSnaps (50019 ms total)
2026-03-10T07:29:21.001 INFO:tasks.workunit.client.0.vm00.stdout: snapshots:
2026-03-10T07:29:21.001 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [----------] Global test environment tear-down
2026-03-10T07:29:21.001 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [==========] 11 tests from 2 test suites ran. (76111 ms total)
2026-03-10T07:29:21.001 INFO:tasks.workunit.client.0.vm00.stdout: snapshots: [ PASSED ] 11 tests.
2026-03-10T07:29:21.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:20 vm00 bash[28005]: audit 2026-03-10T07:29:19.864891+0000 mon.a (mon.0) 1462 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:21.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:20 vm00 bash[28005]: audit 2026-03-10T07:29:19.952212+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]': finished
2026-03-10T07:29:21.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:20 vm00 bash[28005]: audit 2026-03-10T07:29:19.952260+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key2","value":"value2"}]': finished
2026-03-10T07:29:21.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:20 vm00 bash[28005]: audit 2026-03-10T07:29:19.961826+0000 mon.b (mon.1) 154 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch
2026-03-10T07:29:21.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:20 vm00 bash[28005]: cluster 2026-03-10T07:29:19.965674+0000 mon.a (mon.0) 1465 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in
2026-03-10T07:29:21.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:20 vm00 bash[28005]: audit 2026-03-10T07:29:19.973827+0000 mon.a (mon.0) 1466 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch
2026-03-10T07:29:21.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:20 vm00 bash[28005]: audit 2026-03-10T07:29:19.974591+0000 mon.a (mon.0) 1467 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key1"}]: dispatch
2026-03-10T07:29:21.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:20 vm00 bash[28005]: audit 2026-03-10T07:29:20.593996+0000 mon.c (mon.2) 167 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-59704-1", "var": "pgp_num_actual", "val": "31"}]: dispatch
2026-03-10T07:29:21.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:20 vm00 bash[28005]: audit 2026-03-10T07:29:20.594732+0000 mon.a (mon.0) 1468 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-59704-1", "var": "pgp_num_actual", "val": "31"}]: dispatch
2026-03-10T07:29:21.131 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:29:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:29:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:29:21.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:20 vm00 bash[20701]: audit 2026-03-10T07:29:19.864891+0000 mon.a (mon.0) 1462 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:21.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:20 vm00 bash[20701]: audit 2026-03-10T07:29:19.952212+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]': finished
2026-03-10T07:29:21.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:20 vm00 bash[20701]: audit 2026-03-10T07:29:19.952260+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key2","value":"value2"}]': finished
2026-03-10T07:29:21.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:20 vm00 bash[20701]: audit 2026-03-10T07:29:19.961826+0000 mon.b (mon.1) 154 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch
2026-03-10T07:29:21.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:20 vm00 bash[20701]: cluster 2026-03-10T07:29:19.965674+0000 mon.a (mon.0) 1465 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in
2026-03-10T07:29:21.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:20 vm00 bash[20701]: audit 2026-03-10T07:29:19.973827+0000 mon.a (mon.0) 1466 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch
2026-03-10T07:29:21.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:20 vm00 bash[20701]: audit 2026-03-10T07:29:19.974591+0000 mon.a (mon.0) 1467 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key1"}]: dispatch
2026-03-10T07:29:21.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:20 vm00 bash[20701]: audit 2026-03-10T07:29:20.593996+0000 mon.c (mon.2) 167 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-59704-1", "var": "pgp_num_actual", "val": "31"}]: dispatch
2026-03-10T07:29:21.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:20 vm00 bash[20701]: audit 2026-03-10T07:29:20.594732+0000 mon.a (mon.0) 1468 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-59704-1", "var": "pgp_num_actual", "val": "31"}]: dispatch
2026-03-10T07:29:21.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:20 vm03 bash[23382]: audit 2026-03-10T07:29:19.864891+0000 mon.a (mon.0) 1462 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:21.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:20 vm03 bash[23382]: audit 2026-03-10T07:29:19.952212+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm00-59956-16"}]': finished
2026-03-10T07:29:21.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:20 vm03 bash[23382]: audit 2026-03-10T07:29:19.952260+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key2","value":"value2"}]': finished
2026-03-10T07:29:21.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:20 vm03 bash[23382]: audit 2026-03-10T07:29:19.961826+0000 mon.b (mon.1) 154 : audit [INF] from='client.? 192.168.123.100:0/2128165283' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch
2026-03-10T07:29:21.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:20 vm03 bash[23382]: cluster 2026-03-10T07:29:19.965674+0000 mon.a (mon.0) 1465 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in
2026-03-10T07:29:21.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:20 vm03 bash[23382]: audit 2026-03-10T07:29:19.973827+0000 mon.a (mon.0) 1466 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-59956-16"}]: dispatch
2026-03-10T07:29:21.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:20 vm03 bash[23382]: audit 2026-03-10T07:29:19.974591+0000 mon.a (mon.0) 1467 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key1"}]: dispatch
2026-03-10T07:29:21.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:20 vm03 bash[23382]: audit 2026-03-10T07:29:20.593996+0000 mon.c (mon.2) 167 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-59704-1", "var": "pgp_num_actual", "val": "31"}]: dispatch
2026-03-10T07:29:21.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:20 vm03 bash[23382]: audit 2026-03-10T07:29:20.594732+0000 mon.a (mon.0) 1468 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-59704-1", "var": "pgp_num_actual", "val": "31"}]: dispatch
2026-03-10T07:29:21.975 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [==========] Running 31 tests from 7 test suites.
2026-03-10T07:29:21.975 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] Global test environment set-up.
2026-03-10T07:29:21.975 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscVersion
2026-03-10T07:29:21.975 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscVersion.VersionPP
2026-03-10T07:29:21.975 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscVersion.VersionPP (0 ms)
2026-03-10T07:29:21.975 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscVersion (0 ms total)
2026-03-10T07:29:21.975 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp:
2026-03-10T07:29:21.975 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 22 tests from LibRadosMiscPP
2026-03-10T07:29:21.975 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: seed 59776
2026-03-10T07:29:21.975 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.WaitOSDMapPP
2026-03-10T07:29:21.975 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.WaitOSDMapPP (4 ms)
2026-03-10T07:29:21.975 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongNamePP
2026-03-10T07:29:21.975 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongNamePP (658 ms)
2026-03-10T07:29:21.975 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongLocatorPP
2026-03-10T07:29:21.975 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongLocatorPP (48 ms)
2026-03-10T07:29:21.975 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongNSpacePP
2026-03-10T07:29:21.975 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongNSpacePP (51 ms)
2026-03-10T07:29:21.975 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongAttrNamePP
2026-03-10T07:29:21.975 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongAttrNamePP (211 ms)
2026-03-10T07:29:21.975 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.ExecPP
2026-03-10T07:29:21.975 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.ExecPP (39 ms)
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.BadFlagsPP
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.BadFlagsPP (17 ms)
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Operate1PP
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Operate1PP (29 ms)
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Operate2PP
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Operate2PP (9 ms)
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.BigObjectPP
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.BigObjectPP (459 ms)
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.AioOperatePP
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.AioOperatePP (15 ms)
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.AssertExistsPP
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.AssertExistsPP (96 ms)
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.AssertVersionPP
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.AssertVersionPP (14 ms)
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.BigAttrPP
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: osd_max_attr_size = 0
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: osd_max_attr_size == 0; skipping test
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.BigAttrPP (5213 ms)
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.CopyPP
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.CopyPP (839 ms)
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.CopyScrubPP
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: waiting for initial deep scrubs...
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: done waiting, doing copies
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: waiting for final deep scrubs...
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: done waiting
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.CopyScrubPP (62151 ms)
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.WriteSamePP
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.WriteSamePP (7 ms)
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.CmpExtPP
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.CmpExtPP (2 ms)
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Applications
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Applications (4378 ms)
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.MinCompatOSD
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.MinCompatOSD (0 ms)
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.MinCompatClient
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.MinCompatClient (0 ms)
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Conf
2026-03-10T07:29:21.976 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Conf (0 ms)
2026-03-10T07:29:22.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:21 vm00 bash[28005]: cluster 2026-03-10T07:29:20.593652+0000 mgr.y (mgr.24407) 182 : cluster [DBG] pgmap v175: 388 pgs: 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T07:29:22.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:21 vm00 bash[28005]: audit 2026-03-10T07:29:20.865636+0000 mon.a (mon.0) 1469 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:22.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:21 vm00 bash[28005]: audit 2026-03-10T07:29:20.962318+0000 mon.a (mon.0) 1470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-59956-16"}]': finished
2026-03-10T07:29:22.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:21 vm00 bash[28005]: audit 2026-03-10T07:29:20.962419+0000 mon.a (mon.0) 1471 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key1"}]': finished
2026-03-10T07:29:22.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:21 vm00 bash[28005]: audit 2026-03-10T07:29:20.962496+0000 mon.a (mon.0) 1472 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-59704-1", "var": "pgp_num_actual", "val": "31"}]': finished
2026-03-10T07:29:22.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:21 vm00 bash[28005]: cluster 2026-03-10T07:29:20.996337+0000 mon.a (mon.0) 1473 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in
2026-03-10T07:29:22.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:21 vm00 bash[28005]: audit 2026-03-10T07:29:20.996672+0000 mon.b (mon.1) 155 : audit [INF] from='client.? 192.168.123.100:0/2472244431' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59637-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:22.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:21 vm00 bash[28005]: audit 2026-03-10T07:29:20.997138+0000 mon.b (mon.1) 156 : audit [INF] from='client.? 192.168.123.100:0/1935512706' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59629-26","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:22.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:21 vm00 bash[28005]: audit 2026-03-10T07:29:21.020204+0000 mon.a (mon.0) 1474 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59637-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:22.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:21 vm00 bash[28005]: audit 2026-03-10T07:29:21.020291+0000 mon.a (mon.0) 1475 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59629-26","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:22.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:21 vm00 bash[28005]: audit 2026-03-10T07:29:21.021367+0000 mon.a (mon.0) 1476 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]: dispatch
2026-03-10T07:29:22.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:21 vm00 bash[28005]: audit 2026-03-10T07:29:21.023320+0000 mon.a (mon.0) 1477 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]: dispatch
2026-03-10T07:29:22.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:21 vm00 bash[28005]: audit 2026-03-10T07:29:21.023517+0000 mon.a (mon.0) 1478 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:22.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:21 vm00 bash[20701]: cluster 2026-03-10T07:29:20.593652+0000 mgr.y (mgr.24407) 182 : cluster [DBG] pgmap v175: 388 pgs: 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T07:29:22.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:21 vm00 bash[20701]: audit 2026-03-10T07:29:20.865636+0000 mon.a (mon.0) 1469 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:22.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:21 vm00 bash[20701]: audit 2026-03-10T07:29:20.962318+0000 mon.a (mon.0) 1470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-59956-16"}]': finished
2026-03-10T07:29:22.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:21 vm00 bash[20701]: audit 2026-03-10T07:29:20.962419+0000 mon.a (mon.0) 1471 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key1"}]': finished
2026-03-10T07:29:22.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:21 vm00 bash[20701]: audit 2026-03-10T07:29:20.962496+0000 mon.a (mon.0) 1472 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-59704-1", "var": "pgp_num_actual", "val": "31"}]': finished
2026-03-10T07:29:22.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:21 vm00 bash[20701]: cluster 2026-03-10T07:29:20.996337+0000 mon.a (mon.0) 1473 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in
2026-03-10T07:29:22.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:21 vm00 bash[20701]: audit 2026-03-10T07:29:20.996672+0000 mon.b (mon.1) 155 : audit [INF] from='client.? 192.168.123.100:0/2472244431' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59637-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:22.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:21 vm00 bash[20701]: audit 2026-03-10T07:29:20.997138+0000 mon.b (mon.1) 156 : audit [INF] from='client.? 192.168.123.100:0/1935512706' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59629-26","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:22.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:21 vm00 bash[20701]: audit 2026-03-10T07:29:21.020204+0000 mon.a (mon.0) 1474 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59637-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:22.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:21 vm00 bash[20701]: audit 2026-03-10T07:29:21.020291+0000 mon.a (mon.0) 1475 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59629-26","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:22.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:21 vm00 bash[20701]: audit 2026-03-10T07:29:21.021367+0000 mon.a (mon.0) 1476 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]: dispatch
2026-03-10T07:29:22.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:21 vm00 bash[20701]: audit 2026-03-10T07:29:21.023320+0000 mon.a (mon.0) 1477 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]: dispatch
2026-03-10T07:29:22.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:21 vm00 bash[20701]: audit 2026-03-10T07:29:21.023517+0000 mon.a (mon.0) 1478 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:22.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:21 vm03 bash[23382]: cluster 2026-03-10T07:29:20.593652+0000 mgr.y (mgr.24407) 182 : cluster [DBG] pgmap v175: 388 pgs: 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T07:29:22.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:21 vm03 bash[23382]: audit 2026-03-10T07:29:20.865636+0000 mon.a (mon.0) 1469 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:22.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:21 vm03 bash[23382]: audit 2026-03-10T07:29:20.962318+0000 mon.a (mon.0) 1470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm00-59956-16"}]': finished
2026-03-10T07:29:22.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:21 vm03 bash[23382]: audit 2026-03-10T07:29:20.962419+0000 mon.a (mon.0) 1471 : audit [INF] from='client.? 192.168.123.100:0/2226096751' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm00-59776-1","app":"app1","key":"key1"}]': finished
2026-03-10T07:29:22.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:21 vm03 bash[23382]: audit 2026-03-10T07:29:20.962496+0000 mon.a (mon.0) 1472 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm00-59704-1", "var": "pgp_num_actual", "val": "31"}]': finished
2026-03-10T07:29:22.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:21 vm03 bash[23382]: cluster 2026-03-10T07:29:20.996337+0000 mon.a (mon.0) 1473 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in
2026-03-10T07:29:22.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:21 vm03 bash[23382]: audit 2026-03-10T07:29:20.996672+0000 mon.b (mon.1) 155 : audit [INF] from='client.? 192.168.123.100:0/2472244431' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59637-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:22.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:21 vm03 bash[23382]: audit 2026-03-10T07:29:20.997138+0000 mon.b (mon.1) 156 : audit [INF] from='client.? 192.168.123.100:0/1935512706' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59629-26","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:22.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:21 vm03 bash[23382]: audit 2026-03-10T07:29:21.020204+0000 mon.a (mon.0) 1474 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59637-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:22.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:21 vm03 bash[23382]: audit 2026-03-10T07:29:21.020291+0000 mon.a (mon.0) 1475 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59629-26","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:22.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:21 vm03 bash[23382]: audit 2026-03-10T07:29:21.021367+0000 mon.a (mon.0) 1476 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]: dispatch
2026-03-10T07:29:22.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:21 vm03 bash[23382]: audit 2026-03-10T07:29:21.023320+0000 mon.a (mon.0) 1477 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]: dispatch
2026-03-10T07:29:22.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:21 vm03 bash[23382]: audit 2026-03-10T07:29:21.023517+0000 mon.a (mon.0) 1478 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:23.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:22 vm00 bash[28005]: audit 2026-03-10T07:29:21.866431+0000 mon.a (mon.0) 1479 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:23.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:22 vm00 bash[28005]: audit 2026-03-10T07:29:21.866863+0000 mon.a (mon.0) 1480 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pg_num","val":"11"}]: dispatch
2026-03-10T07:29:23.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:22 vm00 bash[28005]: audit 2026-03-10T07:29:21.966801+0000 mon.a (mon.0) 1481 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59637-21","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:23.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:22 vm00 bash[28005]: audit 2026-03-10T07:29:21.966874+0000 mon.a (mon.0) 1482 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59629-26","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:23.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:22 vm00 bash[28005]: audit 2026-03-10T07:29:21.966923+0000 mon.a (mon.0) 1483 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:29:23.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:22 vm00 bash[28005]: audit 2026-03-10T07:29:21.966956+0000 mon.a (mon.0) 1484 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pg_num","val":"11"}]': finished
2026-03-10T07:29:23.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:22 vm00 bash[28005]: cluster 2026-03-10T07:29:21.974109+0000 mon.a (mon.0) 1485 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in
2026-03-10T07:29:23.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:22 vm00 bash[28005]: audit 2026-03-10T07:29:21.975034+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-59956-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]: dispatch
2026-03-10T07:29:23.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:22 vm00 bash[28005]: audit 2026-03-10T07:29:21.975193+0000 mon.a (mon.0) 1487 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:23.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:22 vm00 bash[28005]: audit 2026-03-10T07:29:22.005349+0000 mon.a (mon.0) 1488 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:23.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:22 vm00 bash[28005]: audit 2026-03-10T07:29:22.007872+0000 mon.a (mon.0) 1489 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pgp_num","val":"11"}]: dispatch
2026-03-10T07:29:23.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:22 vm00 bash[28005]: audit 2026-03-10T07:29:22.017004+0000 mon.a (mon.0) 1490 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59776-24"}]: dispatch
2026-03-10T07:29:23.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:22 vm00 bash[28005]: audit 2026-03-10T07:29:22.020288+0000 mon.a (mon.0) 1491 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59776-24"}]: dispatch
2026-03-10T07:29:23.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:22 vm00 bash[28005]: audit 2026-03-10T07:29:22.022012+0000 mon.a (mon.0) 1492 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59776-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:23.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: audit 2026-03-10T07:29:21.866431+0000 mon.a (mon.0) 1479 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T07:29:23.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: audit 2026-03-10T07:29:21.866863+0000 mon.a (mon.0) 1480 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pg_num","val":"11"}]: dispatch
2026-03-10T07:29:23.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: audit 2026-03-10T07:29:21.966801+0000 mon.a (mon.0) 1481 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59637-21","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:23.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: audit 2026-03-10T07:29:21.966874+0000 mon.a (mon.0) 1482 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59629-26","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:23.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: audit 2026-03-10T07:29:21.966923+0000 mon.a (mon.0) 1483 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:29:23.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: audit 2026-03-10T07:29:21.966956+0000 mon.a (mon.0) 1484 : audit [INF] from='client.? 
192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pg_num","val":"11"}]': finished 2026-03-10T07:29:23.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: cluster 2026-03-10T07:29:21.974109+0000 mon.a (mon.0) 1485 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-10T07:29:23.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: cluster 2026-03-10T07:29:21.974109+0000 mon.a (mon.0) 1485 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-10T07:29:23.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: audit 2026-03-10T07:29:21.975034+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-59956-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]: dispatch 2026-03-10T07:29:23.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: audit 2026-03-10T07:29:21.975034+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-59956-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]: dispatch 2026-03-10T07:29:23.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: audit 2026-03-10T07:29:21.975193+0000 mon.a (mon.0) 1487 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:23.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: audit 2026-03-10T07:29:21.975193+0000 mon.a (mon.0) 1487 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:23.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: audit 2026-03-10T07:29:22.005349+0000 mon.a (mon.0) 1488 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:23.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: audit 2026-03-10T07:29:22.005349+0000 mon.a (mon.0) 1488 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:23.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: audit 2026-03-10T07:29:22.007872+0000 mon.a (mon.0) 1489 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T07:29:23.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: audit 2026-03-10T07:29:22.007872+0000 mon.a (mon.0) 1489 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T07:29:23.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: audit 2026-03-10T07:29:22.017004+0000 mon.a (mon.0) 1490 : audit [INF] from='client.? 
192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59776-24"}]: dispatch 2026-03-10T07:29:23.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: audit 2026-03-10T07:29:22.017004+0000 mon.a (mon.0) 1490 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59776-24"}]: dispatch 2026-03-10T07:29:23.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: audit 2026-03-10T07:29:22.020288+0000 mon.a (mon.0) 1491 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59776-24"}]: dispatch 2026-03-10T07:29:23.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: audit 2026-03-10T07:29:22.020288+0000 mon.a (mon.0) 1491 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59776-24"}]: dispatch 2026-03-10T07:29:23.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: audit 2026-03-10T07:29:22.022012+0000 mon.a (mon.0) 1492 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59776-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:23.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:22 vm00 bash[20701]: audit 2026-03-10T07:29:22.022012+0000 mon.a (mon.0) 1492 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59776-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:23.265 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:29:23 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:21.866431+0000 mon.a (mon.0) 1479 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:21.866431+0000 mon.a (mon.0) 1479 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:21.866863+0000 mon.a (mon.0) 1480 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:21.866863+0000 mon.a (mon.0) 1480 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:21.966801+0000 mon.a (mon.0) 1481 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59637-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:21.966801+0000 mon.a (mon.0) 1481 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm00-59637-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:21.966874+0000 mon.a (mon.0) 1482 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59629-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:21.966874+0000 mon.a (mon.0) 1482 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm00-59629-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:21.966923+0000 mon.a (mon.0) 1483 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:21.966923+0000 mon.a (mon.0) 1483 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:21.966956+0000 mon.a (mon.0) 1484 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pg_num","val":"11"}]': finished 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:21.966956+0000 mon.a (mon.0) 1484 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pg_num","val":"11"}]': finished 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: cluster 2026-03-10T07:29:21.974109+0000 mon.a (mon.0) 1485 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: cluster 2026-03-10T07:29:21.974109+0000 mon.a (mon.0) 1485 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:21.975034+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? 
192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-59956-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]: dispatch 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:21.975034+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-59956-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]: dispatch 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:21.975193+0000 mon.a (mon.0) 1487 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:21.975193+0000 mon.a (mon.0) 1487 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:22.005349+0000 mon.a (mon.0) 1488 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:22.005349+0000 mon.a (mon.0) 1488 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:22.007872+0000 mon.a (mon.0) 1489 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:22.007872+0000 mon.a (mon.0) 1489 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:22.017004+0000 mon.a (mon.0) 1490 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59776-24"}]: dispatch 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:22.017004+0000 mon.a (mon.0) 1490 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59776-24"}]: dispatch 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:22.020288+0000 mon.a (mon.0) 1491 : audit [INF] from='client.? 
192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59776-24"}]: dispatch 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:22.020288+0000 mon.a (mon.0) 1491 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59776-24"}]: dispatch 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:22.022012+0000 mon.a (mon.0) 1492 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59776-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:23.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:22 vm03 bash[23382]: audit 2026-03-10T07:29:22.022012+0000 mon.a (mon.0) 1492 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59776-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:23 vm03 bash[23382]: cluster 2026-03-10T07:29:22.594203+0000 mgr.y (mgr.24407) 183 : cluster [DBG] pgmap v178: 388 pgs: 64 unknown, 324 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:24.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:23 vm03 bash[23382]: cluster 2026-03-10T07:29:22.594203+0000 mgr.y (mgr.24407) 183 : cluster [DBG] pgmap v178: 388 pgs: 64 unknown, 324 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:24.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:23 vm03 bash[23382]: audit 2026-03-10T07:29:22.970978+0000 mon.a (mon.0) 1493 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pgp_num","val":"11"}]': finished 2026-03-10T07:29:24.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:23 vm03 bash[23382]: audit 2026-03-10T07:29:22.970978+0000 mon.a (mon.0) 1493 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pgp_num","val":"11"}]': finished 2026-03-10T07:29:24.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:23 vm03 bash[23382]: audit 2026-03-10T07:29:22.971234+0000 mon.a (mon.0) 1494 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59776-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:24.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:23 vm03 bash[23382]: audit 2026-03-10T07:29:22.971234+0000 mon.a (mon.0) 1494 : audit [INF] from='client.? 
192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59776-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:24.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:23 vm03 bash[23382]: cluster 2026-03-10T07:29:22.998774+0000 mon.a (mon.0) 1495 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-10T07:29:24.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:23 vm03 bash[23382]: cluster 2026-03-10T07:29:22.998774+0000 mon.a (mon.0) 1495 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-10T07:29:24.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:23 vm03 bash[23382]: audit 2026-03-10T07:29:23.000414+0000 mon.a (mon.0) 1496 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:24.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:23 vm03 bash[23382]: audit 2026-03-10T07:29:23.000414+0000 mon.a (mon.0) 1496 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:24.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:23 vm03 bash[23382]: audit 2026-03-10T07:29:23.000808+0000 mon.a (mon.0) 1497 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59776-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59776-24"}]: dispatch 2026-03-10T07:29:24.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:23 vm03 bash[23382]: audit 2026-03-10T07:29:23.000808+0000 mon.a (mon.0) 1497 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59776-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59776-24"}]: dispatch 2026-03-10T07:29:24.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:23 vm03 bash[23382]: audit 2026-03-10T07:29:23.018186+0000 mgr.y (mgr.24407) 184 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:29:24.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:23 vm03 bash[23382]: audit 2026-03-10T07:29:23.018186+0000 mgr.y (mgr.24407) 184 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:29:24.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:23 vm03 bash[23382]: audit 2026-03-10T07:29:23.031032+0000 mon.a (mon.0) 1498 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59629-27"}]: dispatch 2026-03-10T07:29:24.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:23 vm03 bash[23382]: audit 2026-03-10T07:29:23.031032+0000 mon.a (mon.0) 1498 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59629-27"}]: dispatch 2026-03-10T07:29:24.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:23 vm03 bash[23382]: audit 2026-03-10T07:29:23.040200+0000 mon.a (mon.0) 1499 : audit [INF] from='client.? 
192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59629-27"}]: dispatch 2026-03-10T07:29:24.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:23 vm03 bash[23382]: audit 2026-03-10T07:29:23.040200+0000 mon.a (mon.0) 1499 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59629-27"}]: dispatch 2026-03-10T07:29:24.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:23 vm03 bash[23382]: audit 2026-03-10T07:29:23.040748+0000 mon.a (mon.0) 1500 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59629-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:24.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:23 vm03 bash[23382]: audit 2026-03-10T07:29:23.040748+0000 mon.a (mon.0) 1500 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59629-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:23 vm00 bash[28005]: cluster 2026-03-10T07:29:22.594203+0000 mgr.y (mgr.24407) 183 : cluster [DBG] pgmap v178: 388 pgs: 64 unknown, 324 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:23 vm00 bash[28005]: cluster 2026-03-10T07:29:22.594203+0000 mgr.y (mgr.24407) 183 : cluster [DBG] pgmap v178: 388 pgs: 64 unknown, 324 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:23 vm00 bash[28005]: audit 2026-03-10T07:29:22.970978+0000 mon.a (mon.0) 1493 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pgp_num","val":"11"}]': finished 2026-03-10T07:29:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:23 vm00 bash[28005]: audit 2026-03-10T07:29:22.970978+0000 mon.a (mon.0) 1493 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pgp_num","val":"11"}]': finished 2026-03-10T07:29:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:23 vm00 bash[28005]: audit 2026-03-10T07:29:22.971234+0000 mon.a (mon.0) 1494 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59776-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:23 vm00 bash[28005]: audit 2026-03-10T07:29:22.971234+0000 mon.a (mon.0) 1494 : audit [INF] from='client.? 
192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59776-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:23 vm00 bash[28005]: cluster 2026-03-10T07:29:22.998774+0000 mon.a (mon.0) 1495 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-10T07:29:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:23 vm00 bash[28005]: cluster 2026-03-10T07:29:22.998774+0000 mon.a (mon.0) 1495 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-10T07:29:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:23 vm00 bash[28005]: audit 2026-03-10T07:29:23.000414+0000 mon.a (mon.0) 1496 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:23 vm00 bash[28005]: audit 2026-03-10T07:29:23.000414+0000 mon.a (mon.0) 1496 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:23 vm00 bash[28005]: audit 2026-03-10T07:29:23.000808+0000 mon.a (mon.0) 1497 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59776-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59776-24"}]: dispatch 2026-03-10T07:29:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:23 vm00 bash[28005]: audit 2026-03-10T07:29:23.000808+0000 mon.a (mon.0) 1497 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59776-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59776-24"}]: dispatch 2026-03-10T07:29:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:23 vm00 bash[28005]: audit 2026-03-10T07:29:23.018186+0000 mgr.y (mgr.24407) 184 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:29:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:23 vm00 bash[28005]: audit 2026-03-10T07:29:23.018186+0000 mgr.y (mgr.24407) 184 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:29:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:23 vm00 bash[28005]: audit 2026-03-10T07:29:23.031032+0000 mon.a (mon.0) 1498 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59629-27"}]: dispatch 2026-03-10T07:29:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:23 vm00 bash[28005]: audit 2026-03-10T07:29:23.031032+0000 mon.a (mon.0) 1498 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59629-27"}]: dispatch 2026-03-10T07:29:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:23 vm00 bash[28005]: audit 2026-03-10T07:29:23.040200+0000 mon.a (mon.0) 1499 : audit [INF] from='client.? 
192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59629-27"}]: dispatch 2026-03-10T07:29:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:23 vm00 bash[28005]: audit 2026-03-10T07:29:23.040200+0000 mon.a (mon.0) 1499 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59629-27"}]: dispatch 2026-03-10T07:29:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:23 vm00 bash[28005]: audit 2026-03-10T07:29:23.040748+0000 mon.a (mon.0) 1500 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59629-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:23 vm00 bash[28005]: audit 2026-03-10T07:29:23.040748+0000 mon.a (mon.0) 1500 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59629-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:24.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:23 vm00 bash[20701]: cluster 2026-03-10T07:29:22.594203+0000 mgr.y (mgr.24407) 183 : cluster [DBG] pgmap v178: 388 pgs: 64 unknown, 324 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:24.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:23 vm00 bash[20701]: cluster 2026-03-10T07:29:22.594203+0000 mgr.y (mgr.24407) 183 : cluster [DBG] pgmap v178: 388 pgs: 64 unknown, 324 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:24.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:23 vm00 bash[20701]: audit 2026-03-10T07:29:22.970978+0000 mon.a (mon.0) 1493 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pgp_num","val":"11"}]': finished 2026-03-10T07:29:24.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:23 vm00 bash[20701]: audit 2026-03-10T07:29:22.970978+0000 mon.a (mon.0) 1493 : audit [INF] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm00-59704-1","var":"pgp_num","val":"11"}]': finished 2026-03-10T07:29:24.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:23 vm00 bash[20701]: audit 2026-03-10T07:29:22.971234+0000 mon.a (mon.0) 1494 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59776-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:24.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:23 vm00 bash[20701]: audit 2026-03-10T07:29:22.971234+0000 mon.a (mon.0) 1494 : audit [INF] from='client.? 
192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59776-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:24.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:23 vm00 bash[20701]: cluster 2026-03-10T07:29:22.998774+0000 mon.a (mon.0) 1495 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-10T07:29:24.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:23 vm00 bash[20701]: cluster 2026-03-10T07:29:22.998774+0000 mon.a (mon.0) 1495 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-10T07:29:24.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:23 vm00 bash[20701]: audit 2026-03-10T07:29:23.000414+0000 mon.a (mon.0) 1496 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:24.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:23 vm00 bash[20701]: audit 2026-03-10T07:29:23.000414+0000 mon.a (mon.0) 1496 : audit [DBG] from='client.? 192.168.123.100:0/2386836633' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T07:29:24.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:23 vm00 bash[20701]: audit 2026-03-10T07:29:23.000808+0000 mon.a (mon.0) 1497 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59776-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59776-24"}]: dispatch 2026-03-10T07:29:24.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:23 vm00 bash[20701]: audit 2026-03-10T07:29:23.000808+0000 mon.a (mon.0) 1497 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59776-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59776-24"}]: dispatch 2026-03-10T07:29:24.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:23 vm00 bash[20701]: audit 2026-03-10T07:29:23.018186+0000 mgr.y (mgr.24407) 184 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:29:24.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:23 vm00 bash[20701]: audit 2026-03-10T07:29:23.018186+0000 mgr.y (mgr.24407) 184 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:29:24.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:23 vm00 bash[20701]: audit 2026-03-10T07:29:23.031032+0000 mon.a (mon.0) 1498 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59629-27"}]: dispatch 2026-03-10T07:29:24.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:23 vm00 bash[20701]: audit 2026-03-10T07:29:23.031032+0000 mon.a (mon.0) 1498 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59629-27"}]: dispatch 2026-03-10T07:29:24.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:23 vm00 bash[20701]: audit 2026-03-10T07:29:23.040200+0000 mon.a (mon.0) 1499 : audit [INF] from='client.? 
192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59629-27"}]: dispatch 2026-03-10T07:29:24.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:23 vm00 bash[20701]: audit 2026-03-10T07:29:23.040200+0000 mon.a (mon.0) 1499 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59629-27"}]: dispatch 2026-03-10T07:29:24.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:23 vm00 bash[20701]: audit 2026-03-10T07:29:23.040748+0000 mon.a (mon.0) 1500 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59629-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:24.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:23 vm00 bash[20701]: audit 2026-03-10T07:29:23.040748+0000 mon.a (mon.0) 1500 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59629-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:23.986177+0000 mon.a (mon.0) 1501 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-59956-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]': finished 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:23.986177+0000 mon.a (mon.0) 1501 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-59956-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]': finished 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:23.986713+0000 mon.a (mon.0) 1502 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59629-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:23.986713+0000 mon.a (mon.0) 1502 : audit [INF] from='client.? 
192.168.123.100:0/3855854101' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59629-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: cluster 2026-03-10T07:29:24.022887+0000 mon.a (mon.0) 1503 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: cluster 2026-03-10T07:29:24.022887+0000 mon.a (mon.0) 1503 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:24.024438+0000 mon.a (mon.0) 1504 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59629-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59629-27"}]: dispatch 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:24.024438+0000 mon.a (mon.0) 1504 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59629-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59629-27"}]: dispatch 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:24.046038+0000 mon.a (mon.0) 1505 : audit [INF] from='client.? 192.168.123.100:0/3451782877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59637-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:24.046038+0000 mon.a (mon.0) 1505 : audit [INF] from='client.? 192.168.123.100:0/3451782877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59637-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:24.046534+0000 mon.c (mon.2) 168 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:24.046534+0000 mon.c (mon.2) 168 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:24.051451+0000 mon.a (mon.0) 1506 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:24.051451+0000 mon.a (mon.0) 1506 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:24.052701+0000 mon.c (mon.2) 169 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:24.052701+0000 mon.c (mon.2) 169 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:24.053619+0000 mon.a (mon.0) 1507 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:24.053619+0000 mon.a (mon.0) 1507 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:24.054276+0000 mon.c (mon.2) 170 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-59704-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:24.054276+0000 mon.c (mon.2) 170 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-59704-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:24.054854+0000 mon.a (mon.0) 1508 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-59704-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:24.054854+0000 mon.a (mon.0) 1508 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-59704-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:24.206487+0000 mon.c (mon.2) 171 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:29:25.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:25 vm03 bash[23382]: audit 2026-03-10T07:29:24.206487+0000 mon.c (mon.2) 171 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:23.986177+0000 mon.a (mon.0) 1501 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-59956-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]': finished 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:23.986177+0000 mon.a (mon.0) 1501 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-59956-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]': finished 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:23.986713+0000 mon.a (mon.0) 1502 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59629-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:23.986713+0000 mon.a (mon.0) 1502 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59629-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: cluster 2026-03-10T07:29:24.022887+0000 mon.a (mon.0) 1503 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: cluster 2026-03-10T07:29:24.022887+0000 mon.a (mon.0) 1503 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:24.024438+0000 mon.a (mon.0) 1504 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59629-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59629-27"}]: dispatch 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:24.024438+0000 mon.a (mon.0) 1504 : audit [INF] from='client.? 
192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59629-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59629-27"}]: dispatch 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:24.046038+0000 mon.a (mon.0) 1505 : audit [INF] from='client.? 192.168.123.100:0/3451782877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59637-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:24.046038+0000 mon.a (mon.0) 1505 : audit [INF] from='client.? 192.168.123.100:0/3451782877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59637-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:24.046534+0000 mon.c (mon.2) 168 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:24.046534+0000 mon.c (mon.2) 168 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:24.051451+0000 mon.a (mon.0) 1506 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:24.051451+0000 mon.a (mon.0) 1506 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:24.052701+0000 mon.c (mon.2) 169 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:24.052701+0000 mon.c (mon.2) 169 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:24.053619+0000 mon.a (mon.0) 1507 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:24.053619+0000 mon.a (mon.0) 1507 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:24.054276+0000 mon.c (mon.2) 170 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-59704-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:24.054276+0000 mon.c (mon.2) 170 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-59704-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:24.054854+0000 mon.a (mon.0) 1508 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-59704-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:24.054854+0000 mon.a (mon.0) 1508 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-59704-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:24.206487+0000 mon.c (mon.2) 171 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:25 vm00 bash[28005]: audit 2026-03-10T07:29:24.206487+0000 mon.c (mon.2) 171 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:23.986177+0000 mon.a (mon.0) 1501 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-59956-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]': finished 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:23.986177+0000 mon.a (mon.0) 1501 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm00-59956-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]': finished 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:23.986713+0000 mon.a (mon.0) 1502 : audit [INF] from='client.? 
192.168.123.100:0/3855854101' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59629-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:23.986713+0000 mon.a (mon.0) 1502 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm00-59629-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: cluster 2026-03-10T07:29:24.022887+0000 mon.a (mon.0) 1503 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: cluster 2026-03-10T07:29:24.022887+0000 mon.a (mon.0) 1503 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-10T07:29:25.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:24.024438+0000 mon.a (mon.0) 1504 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59629-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59629-27"}]: dispatch 2026-03-10T07:29:25.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:24.024438+0000 mon.a (mon.0) 1504 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59629-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59629-27"}]: dispatch 2026-03-10T07:29:25.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:24.046038+0000 mon.a (mon.0) 1505 : audit [INF] from='client.? 192.168.123.100:0/3451782877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59637-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:25.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:24.046038+0000 mon.a (mon.0) 1505 : audit [INF] from='client.? 192.168.123.100:0/3451782877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59637-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:25.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:24.046534+0000 mon.c (mon.2) 168 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:24.046534+0000 mon.c (mon.2) 168 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:24.051451+0000 mon.a (mon.0) 1506 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:24.051451+0000 mon.a (mon.0) 1506 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:24.052701+0000 mon.c (mon.2) 169 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:24.052701+0000 mon.c (mon.2) 169 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:24.053619+0000 mon.a (mon.0) 1507 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:24.053619+0000 mon.a (mon.0) 1507 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:25.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:24.054276+0000 mon.c (mon.2) 170 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-59704-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:25.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:24.054276+0000 mon.c (mon.2) 170 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-59704-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:25.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:24.054854+0000 mon.a (mon.0) 1508 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-59704-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:25.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:24.054854+0000 mon.a (mon.0) 1508 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-59704-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:25.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:24.206487+0000 mon.c (mon.2) 171 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:29:25.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:25 vm00 bash[20701]: audit 2026-03-10T07:29:24.206487+0000 mon.c (mon.2) 171 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:29:26.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:26 vm03 bash[23382]: cluster 2026-03-10T07:29:24.594638+0000 mgr.y (mgr.24407) 185 : cluster [DBG] pgmap v181: 332 pgs: 40 unknown, 292 active+clean; 462 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:26.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:26 vm03 bash[23382]: cluster 2026-03-10T07:29:24.594638+0000 mgr.y (mgr.24407) 185 : cluster [DBG] pgmap v181: 332 pgs: 40 unknown, 292 active+clean; 462 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:26.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:26 vm03 bash[23382]: cluster 2026-03-10T07:29:24.986215+0000 mon.a (mon.0) 1509 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:26.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:26 vm03 bash[23382]: cluster 2026-03-10T07:29:24.986215+0000 mon.a (mon.0) 1509 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:26.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:26 vm03 bash[23382]: audit 2026-03-10T07:29:25.156170+0000 mon.a (mon.0) 1510 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59776-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59776-24"}]': finished 2026-03-10T07:29:26.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:26 vm03 bash[23382]: audit 2026-03-10T07:29:25.156170+0000 mon.a (mon.0) 1510 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59776-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59776-24"}]': finished 2026-03-10T07:29:26.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:26 vm03 bash[23382]: audit 2026-03-10T07:29:25.156273+0000 mon.a (mon.0) 1511 : audit [INF] from='client.? 192.168.123.100:0/3451782877' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59637-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:26.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:26 vm03 bash[23382]: audit 2026-03-10T07:29:25.156273+0000 mon.a (mon.0) 1511 : audit [INF] from='client.? 
192.168.123.100:0/3451782877' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59637-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:26.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:26 vm03 bash[23382]: audit 2026-03-10T07:29:25.156586+0000 mon.a (mon.0) 1512 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-59704-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:26.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:26 vm03 bash[23382]: audit 2026-03-10T07:29:25.156586+0000 mon.a (mon.0) 1512 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-59704-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:26.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:26 vm03 bash[23382]: audit 2026-03-10T07:29:25.200262+0000 mon.c (mon.2) 172 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-59704-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:26.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:26 vm03 bash[23382]: audit 2026-03-10T07:29:25.200262+0000 mon.c (mon.2) 172 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-59704-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:26.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:26 vm03 bash[23382]: cluster 2026-03-10T07:29:25.215098+0000 mon.a (mon.0) 1513 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-10T07:29:26.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:26 vm03 bash[23382]: cluster 2026-03-10T07:29:25.215098+0000 mon.a (mon.0) 1513 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-10T07:29:26.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:26 vm03 bash[23382]: audit 2026-03-10T07:29:25.219506+0000 mon.a (mon.0) 1514 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-59704-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:26.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:26 vm03 bash[23382]: audit 2026-03-10T07:29:25.219506+0000 mon.a (mon.0) 1514 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-59704-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:26.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:26 vm00 bash[28005]: cluster 2026-03-10T07:29:24.594638+0000 mgr.y (mgr.24407) 185 : cluster [DBG] pgmap v181: 332 pgs: 40 unknown, 292 active+clean; 462 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:26.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:26 vm00 bash[28005]: cluster 2026-03-10T07:29:24.594638+0000 mgr.y (mgr.24407) 185 : cluster [DBG] pgmap v181: 332 pgs: 40 unknown, 292 active+clean; 462 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:26.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:26 vm00 bash[28005]: cluster 2026-03-10T07:29:24.986215+0000 mon.a (mon.0) 1509 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:26.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:26 vm00 bash[28005]: cluster 2026-03-10T07:29:24.986215+0000 mon.a (mon.0) 1509 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:26.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:26 vm00 bash[28005]: audit 2026-03-10T07:29:25.156170+0000 mon.a (mon.0) 1510 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59776-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59776-24"}]': finished 2026-03-10T07:29:26.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:26 vm00 bash[28005]: audit 2026-03-10T07:29:25.156170+0000 mon.a (mon.0) 1510 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59776-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59776-24"}]': finished 2026-03-10T07:29:26.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:26 vm00 bash[28005]: audit 2026-03-10T07:29:25.156273+0000 mon.a (mon.0) 1511 : audit [INF] from='client.? 192.168.123.100:0/3451782877' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59637-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:26.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:26 vm00 bash[28005]: audit 2026-03-10T07:29:25.156273+0000 mon.a (mon.0) 1511 : audit [INF] from='client.? 192.168.123.100:0/3451782877' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59637-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:26.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:26 vm00 bash[28005]: audit 2026-03-10T07:29:25.156586+0000 mon.a (mon.0) 1512 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-59704-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:26.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:26 vm00 bash[28005]: audit 2026-03-10T07:29:25.156586+0000 mon.a (mon.0) 1512 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-59704-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:26.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:26 vm00 bash[28005]: audit 2026-03-10T07:29:25.200262+0000 mon.c (mon.2) 172 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-59704-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:26.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:26 vm00 bash[28005]: audit 2026-03-10T07:29:25.200262+0000 mon.c (mon.2) 172 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-59704-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:26.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:26 vm00 bash[28005]: cluster 2026-03-10T07:29:25.215098+0000 mon.a (mon.0) 1513 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-10T07:29:26.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:26 vm00 bash[28005]: cluster 2026-03-10T07:29:25.215098+0000 mon.a (mon.0) 1513 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-10T07:29:26.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:26 vm00 bash[28005]: audit 2026-03-10T07:29:25.219506+0000 mon.a (mon.0) 1514 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-59704-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:26.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:26 vm00 bash[28005]: audit 2026-03-10T07:29:25.219506+0000 mon.a (mon.0) 1514 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-59704-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:26.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:26 vm00 bash[20701]: cluster 2026-03-10T07:29:24.594638+0000 mgr.y (mgr.24407) 185 : cluster [DBG] pgmap v181: 332 pgs: 40 unknown, 292 active+clean; 462 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:26.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:26 vm00 bash[20701]: cluster 2026-03-10T07:29:24.594638+0000 mgr.y (mgr.24407) 185 : cluster [DBG] pgmap v181: 332 pgs: 40 unknown, 292 active+clean; 462 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:26.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:26 vm00 bash[20701]: cluster 2026-03-10T07:29:24.986215+0000 mon.a (mon.0) 1509 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:26.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:26 vm00 bash[20701]: cluster 2026-03-10T07:29:24.986215+0000 mon.a (mon.0) 1509 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:26.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:26 vm00 bash[20701]: audit 2026-03-10T07:29:25.156170+0000 mon.a (mon.0) 1510 : audit [INF] from='client.? 
192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59776-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59776-24"}]': finished 2026-03-10T07:29:26.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:26 vm00 bash[20701]: audit 2026-03-10T07:29:25.156170+0000 mon.a (mon.0) 1510 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59776-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59776-24"}]': finished 2026-03-10T07:29:26.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:26 vm00 bash[20701]: audit 2026-03-10T07:29:25.156273+0000 mon.a (mon.0) 1511 : audit [INF] from='client.? 192.168.123.100:0/3451782877' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59637-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:26.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:26 vm00 bash[20701]: audit 2026-03-10T07:29:25.156273+0000 mon.a (mon.0) 1511 : audit [INF] from='client.? 192.168.123.100:0/3451782877' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm00-59637-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:26.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:26 vm00 bash[20701]: audit 2026-03-10T07:29:25.156586+0000 mon.a (mon.0) 1512 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-59704-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:26.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:26 vm00 bash[20701]: audit 2026-03-10T07:29:25.156586+0000 mon.a (mon.0) 1512 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm00-59704-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:26.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:26 vm00 bash[20701]: audit 2026-03-10T07:29:25.200262+0000 mon.c (mon.2) 172 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-59704-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:26.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:26 vm00 bash[20701]: audit 2026-03-10T07:29:25.200262+0000 mon.c (mon.2) 172 : audit [INF] from='client.? 
192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-59704-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:26.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:26 vm00 bash[20701]: cluster 2026-03-10T07:29:25.215098+0000 mon.a (mon.0) 1513 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-10T07:29:26.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:26 vm00 bash[20701]: cluster 2026-03-10T07:29:25.215098+0000 mon.a (mon.0) 1513 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-10T07:29:26.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:26 vm00 bash[20701]: audit 2026-03-10T07:29:25.219506+0000 mon.a (mon.0) 1514 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-59704-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:26.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:26 vm00 bash[20701]: audit 2026-03-10T07:29:25.219506+0000 mon.a (mon.0) 1514 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-59704-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:27.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:27 vm03 bash[23382]: audit 2026-03-10T07:29:26.174926+0000 mon.a (mon.0) 1515 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59629-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59629-27"}]': finished 2026-03-10T07:29:27.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:27 vm03 bash[23382]: audit 2026-03-10T07:29:26.174926+0000 mon.a (mon.0) 1515 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59629-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59629-27"}]': finished 2026-03-10T07:29:27.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:27 vm03 bash[23382]: cluster 2026-03-10T07:29:26.216241+0000 mon.a (mon.0) 1516 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-10T07:29:27.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:27 vm03 bash[23382]: cluster 2026-03-10T07:29:26.216241+0000 mon.a (mon.0) 1516 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-10T07:29:27.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:27 vm03 bash[23382]: audit 2026-03-10T07:29:26.219425+0000 mon.a (mon.0) 1517 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59776-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:27.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:27 vm03 bash[23382]: audit 2026-03-10T07:29:26.219425+0000 mon.a (mon.0) 1517 : audit [INF] from='client.? 
192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59776-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:27.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:27 vm00 bash[28005]: audit 2026-03-10T07:29:26.174926+0000 mon.a (mon.0) 1515 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59629-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59629-27"}]': finished 2026-03-10T07:29:27.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:27 vm00 bash[28005]: audit 2026-03-10T07:29:26.174926+0000 mon.a (mon.0) 1515 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59629-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59629-27"}]': finished 2026-03-10T07:29:27.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:27 vm00 bash[28005]: cluster 2026-03-10T07:29:26.216241+0000 mon.a (mon.0) 1516 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-10T07:29:27.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:27 vm00 bash[28005]: cluster 2026-03-10T07:29:26.216241+0000 mon.a (mon.0) 1516 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-10T07:29:27.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:27 vm00 bash[28005]: audit 2026-03-10T07:29:26.219425+0000 mon.a (mon.0) 1517 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59776-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:27.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:27 vm00 bash[28005]: audit 2026-03-10T07:29:26.219425+0000 mon.a (mon.0) 1517 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59776-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:27.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:27 vm00 bash[20701]: audit 2026-03-10T07:29:26.174926+0000 mon.a (mon.0) 1515 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59629-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59629-27"}]': finished 2026-03-10T07:29:27.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:27 vm00 bash[20701]: audit 2026-03-10T07:29:26.174926+0000 mon.a (mon.0) 1515 : audit [INF] from='client.? 
192.168.123.100:0/3855854101' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm00-59629-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm00-59629-27"}]': finished 2026-03-10T07:29:27.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:27 vm00 bash[20701]: cluster 2026-03-10T07:29:26.216241+0000 mon.a (mon.0) 1516 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-10T07:29:27.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:27 vm00 bash[20701]: cluster 2026-03-10T07:29:26.216241+0000 mon.a (mon.0) 1516 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-10T07:29:27.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:27 vm00 bash[20701]: audit 2026-03-10T07:29:26.219425+0000 mon.a (mon.0) 1517 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59776-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:27.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:27 vm00 bash[20701]: audit 2026-03-10T07:29:26.219425+0000 mon.a (mon.0) 1517 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59776-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 22 tests from LibRad 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=7:62a1935d:::14:head expected=7:62a1935d:::14:head 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:62a1935d:::14:head -> 14 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=14 expected=14 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:5c6b0b28:::7:head 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=7:5c6b0b28:::7:head expected=7:5c6b0b28:::7:head 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:5c6b0b28:::7:head -> 7 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=7 expected=7 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:89d3ae78:::11:head 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=7:89d3ae78:::11:head expected=7:89d3ae78:::11:head 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:89d3ae78:::11:head -> 11 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=11 expected=11 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:de5d7c5f:::12:head 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=7:de5d7c5f:::12:head expected=7:de5d7c5f:::12:head 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:de5d7c5f:::12:head -> 12 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=12 expected=12 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:d83876eb:::4:head 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=7:d83876eb:::4:head expected=7:d83876eb:::4:head 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:d83876eb:::4:head -> 4 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: 
api_list: : entry=4 expected=4 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:cfc208b3:::3:head 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=7:cfc208b3:::3:head expected=7:cfc208b3:::3:head 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:cfc208b3:::3:head -> 3 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=3 expected=3 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:c4fdafeb:::6:head 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=7:c4fdafeb:::6:head expected=7:c4fdafeb:::6:head 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:c4fdafeb:::6:head -> 6 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=6 expected=6 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : seek to 7:02547ec2:::1:head 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : cursor()=7:02547ec2:::1:head expected=7:02547ec2:::1:head 2026-03-10T07:29:28.462 INFO:tasks.workunit.client.0.vm00.stdout: api_list: > 7:02547ec2:::1:head -> 1 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: : entry=1 expected=1 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ OK ] LibRadosList.ListObjectsCursor (190 ms) 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN ] LibRadosList.EnumerateObjects 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ OK ] LibRadosList.EnumerateObjects (65403 ms) 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN ] LibRadosList.EnumerateObjectsSplit 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: split 0/5 -> MIN 7:33333333::::head 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: split 1/5 -> 7:33333333::::head 7:66666666::::head 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: split 2/5 -> 7:66666666::::head 7:99999999::::head 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: split 3/5 -> 7:99999999::::head 7:cccccccc::::head 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: split 4/5 -> 7:cccccccc::::head MAX 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ OK ] LibRadosList.EnumerateObjectsSplit (11378 ms) 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [----------] 7 tests from LibRadosList (77480 ms total) 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [----------] 3 tests from LibRadosListEC 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN ] LibRadosListEC.ListObjects 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ OK ] LibRadosListEC.ListObjects (1087 ms) 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN ] LibRadosListEC.ListObjectsNS 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: myset foo1,foo2,foo3 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo1 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo2 2026-03-10T07:29:28.463 
INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo3 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: myset foo1,foo4,foo5 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo4 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo5 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo1 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: myset foo6,foo7 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo7 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: foo6 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: myset :foo1,:foo2,:foo3,ns1:foo1,ns1:foo4,ns1:foo5,ns2:foo6,ns2:foo7 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: ns1:foo4 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: ns1:foo5 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: ns2:foo7 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: ns2:foo6 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: ns1:foo1 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: :foo1 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: :foo2 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: :foo3 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ OK ] LibRadosListEC.ListObjectsNS (120 ms) 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN ] LibRadosListEC.ListObjectsStart 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 1 0 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 10 0 2026-03-10T07:29:28.463 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 13 0 2026-03-10T07:29:28.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:28 vm03 bash[23382]: cluster 2026-03-10T07:29:26.594978+0000 mgr.y (mgr.24407) 186 : cluster [DBG] pgmap v184: 348 pgs: 48 unknown, 8 creating+peering, 292 active+clean; 462 KiB data, 665 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:29:28.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:28 vm03 bash[23382]: cluster 2026-03-10T07:29:26.594978+0000 mgr.y (mgr.24407) 186 : cluster [DBG] pgmap v184: 348 pgs: 48 unknown, 8 creating+peering, 292 active+clean; 462 KiB data, 665 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:29:28.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:28 vm03 bash[23382]: audit 2026-03-10T07:29:27.187145+0000 mon.a (mon.0) 1518 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-59704-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-59704-2"}]': finished 2026-03-10T07:29:28.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:28 vm03 bash[23382]: audit 2026-03-10T07:29:27.187145+0000 mon.a (mon.0) 1518 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-59704-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-59704-2"}]': finished 2026-03-10T07:29:28.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:28 vm03 bash[23382]: audit 2026-03-10T07:29:27.187202+0000 mon.a (mon.0) 1519 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59776-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:28.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:28 vm03 bash[23382]: audit 2026-03-10T07:29:27.187202+0000 mon.a (mon.0) 1519 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59776-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:28.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:28 vm03 bash[23382]: cluster 2026-03-10T07:29:27.223872+0000 mon.a (mon.0) 1520 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-10T07:29:28.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:28 vm03 bash[23382]: cluster 2026-03-10T07:29:27.223872+0000 mon.a (mon.0) 1520 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-10T07:29:28.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:28 vm03 bash[23382]: audit 2026-03-10T07:29:27.226318+0000 mon.a (mon.0) 1521 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59776-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:28.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:28 vm03 bash[23382]: audit 2026-03-10T07:29:27.226318+0000 mon.a (mon.0) 1521 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59776-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:28.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:28 vm03 bash[23382]: audit 2026-03-10T07:29:27.232478+0000 mon.c (mon.2) 173 : audit [INF] from='client.? 192.168.123.100:0/3224757085' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59637-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:28.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:28 vm03 bash[23382]: audit 2026-03-10T07:29:27.232478+0000 mon.c (mon.2) 173 : audit [INF] from='client.? 192.168.123.100:0/3224757085' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59637-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:28.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:28 vm03 bash[23382]: audit 2026-03-10T07:29:27.234553+0000 mon.a (mon.0) 1522 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59637-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:28.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:28 vm03 bash[23382]: audit 2026-03-10T07:29:27.234553+0000 mon.a (mon.0) 1522 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59637-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:28.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:28 vm00 bash[28005]: cluster 2026-03-10T07:29:26.594978+0000 mgr.y (mgr.24407) 186 : cluster [DBG] pgmap v184: 348 pgs: 48 unknown, 8 creating+peering, 292 active+clean; 462 KiB data, 665 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:29:28.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:28 vm00 bash[28005]: cluster 2026-03-10T07:29:26.594978+0000 mgr.y (mgr.24407) 186 : cluster [DBG] pgmap v184: 348 pgs: 48 unknown, 8 creating+peering, 292 active+clean; 462 KiB data, 665 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:29:28.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:28 vm00 bash[28005]: audit 2026-03-10T07:29:27.187145+0000 mon.a (mon.0) 1518 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-59704-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-59704-2"}]': finished 2026-03-10T07:29:28.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:28 vm00 bash[28005]: audit 2026-03-10T07:29:27.187145+0000 mon.a (mon.0) 1518 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-59704-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-59704-2"}]': finished 2026-03-10T07:29:28.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:28 vm00 bash[28005]: audit 2026-03-10T07:29:27.187202+0000 mon.a (mon.0) 1519 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59776-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:28.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:28 vm00 bash[28005]: audit 2026-03-10T07:29:27.187202+0000 mon.a (mon.0) 1519 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59776-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:28.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:28 vm00 bash[28005]: cluster 2026-03-10T07:29:27.223872+0000 mon.a (mon.0) 1520 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-10T07:29:28.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:28 vm00 bash[28005]: cluster 2026-03-10T07:29:27.223872+0000 mon.a (mon.0) 1520 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-10T07:29:28.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:28 vm00 bash[28005]: audit 2026-03-10T07:29:27.226318+0000 mon.a (mon.0) 1521 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59776-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:28.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:28 vm00 bash[28005]: audit 2026-03-10T07:29:27.226318+0000 mon.a (mon.0) 1521 : audit [INF] from='client.? 
192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59776-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:28.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:28 vm00 bash[28005]: audit 2026-03-10T07:29:27.232478+0000 mon.c (mon.2) 173 : audit [INF] from='client.? 192.168.123.100:0/3224757085' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59637-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:28.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:28 vm00 bash[28005]: audit 2026-03-10T07:29:27.232478+0000 mon.c (mon.2) 173 : audit [INF] from='client.? 192.168.123.100:0/3224757085' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59637-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:28.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:28 vm00 bash[28005]: audit 2026-03-10T07:29:27.234553+0000 mon.a (mon.0) 1522 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59637-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:28.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:28 vm00 bash[28005]: audit 2026-03-10T07:29:27.234553+0000 mon.a (mon.0) 1522 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59637-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:28.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:28 vm00 bash[20701]: cluster 2026-03-10T07:29:26.594978+0000 mgr.y (mgr.24407) 186 : cluster [DBG] pgmap v184: 348 pgs: 48 unknown, 8 creating+peering, 292 active+clean; 462 KiB data, 665 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:29:28.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:28 vm00 bash[20701]: cluster 2026-03-10T07:29:26.594978+0000 mgr.y (mgr.24407) 186 : cluster [DBG] pgmap v184: 348 pgs: 48 unknown, 8 creating+peering, 292 active+clean; 462 KiB data, 665 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:29:28.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:28 vm00 bash[20701]: audit 2026-03-10T07:29:27.187145+0000 mon.a (mon.0) 1518 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-59704-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-59704-2"}]': finished 2026-03-10T07:29:28.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:28 vm00 bash[20701]: audit 2026-03-10T07:29:27.187145+0000 mon.a (mon.0) 1518 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm00-59704-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm00-59704-2"}]': finished 2026-03-10T07:29:28.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:28 vm00 bash[20701]: audit 2026-03-10T07:29:27.187202+0000 mon.a (mon.0) 1519 : audit [INF] from='client.? 
192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59776-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:28.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:28 vm00 bash[20701]: audit 2026-03-10T07:29:27.187202+0000 mon.a (mon.0) 1519 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59776-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:28.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:28 vm00 bash[20701]: cluster 2026-03-10T07:29:27.223872+0000 mon.a (mon.0) 1520 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-10T07:29:28.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:28 vm00 bash[20701]: cluster 2026-03-10T07:29:27.223872+0000 mon.a (mon.0) 1520 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-10T07:29:28.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:28 vm00 bash[20701]: audit 2026-03-10T07:29:27.226318+0000 mon.a (mon.0) 1521 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59776-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:28.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:28 vm00 bash[20701]: audit 2026-03-10T07:29:27.226318+0000 mon.a (mon.0) 1521 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59776-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:28.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:28 vm00 bash[20701]: audit 2026-03-10T07:29:27.232478+0000 mon.c (mon.2) 173 : audit [INF] from='client.? 192.168.123.100:0/3224757085' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59637-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:28.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:28 vm00 bash[20701]: audit 2026-03-10T07:29:27.232478+0000 mon.c (mon.2) 173 : audit [INF] from='client.? 192.168.123.100:0/3224757085' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59637-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:28.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:28 vm00 bash[20701]: audit 2026-03-10T07:29:27.234553+0000 mon.a (mon.0) 1522 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59637-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:28.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:28 vm00 bash[20701]: audit 2026-03-10T07:29:27.234553+0000 mon.a (mon.0) 1522 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59637-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:29.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:29 vm03 bash[23382]: audit 2026-03-10T07:29:28.191120+0000 mon.a (mon.0) 1523 : audit [INF] from='client.? 
192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59776-25","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:29.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:29 vm03 bash[23382]: audit 2026-03-10T07:29:28.191191+0000 mon.a (mon.0) 1524 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59637-23","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:29.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:29 vm03 bash[23382]: cluster 2026-03-10T07:29:28.197000+0000 mon.a (mon.0) 1525 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in
2026-03-10T07:29:29.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:29 vm03 bash[23382]: audit 2026-03-10T07:29:28.205215+0000 mon.a (mon.0) 1526 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59629-27"}]: dispatch
2026-03-10T07:29:29.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:29 vm03 bash[23382]: audit 2026-03-10T07:29:29.194476+0000 mon.a (mon.0) 1527 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59629-27"}]': finished
2026-03-10T07:29:29.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:29 vm03 bash[23382]: cluster 2026-03-10T07:29:29.199049+0000 mon.a (mon.0) 1528 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in
2026-03-10T07:29:29.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:29 vm03 bash[23382]: audit 2026-03-10T07:29:29.202555+0000 mon.a (mon.0) 1529 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59629-27"}]: dispatch
2026-03-10T07:29:29.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:29 vm03 bash[23382]: audit 2026-03-10T07:29:29.207732+0000 mon.c (mon.2) 174 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch
2026-03-10T07:29:29.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:29 vm03 bash[23382]: audit 2026-03-10T07:29:29.211564+0000 mon.a (mon.0) 1530 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch
2026-03-10T07:29:29.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:29 vm00 bash[28005]: audit 2026-03-10T07:29:28.191120+0000 mon.a (mon.0) 1523 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59776-25","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:29.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:29 vm00 bash[28005]: audit 2026-03-10T07:29:28.191191+0000 mon.a (mon.0) 1524 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59637-23","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:29.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:29 vm00 bash[28005]: cluster 2026-03-10T07:29:28.197000+0000 mon.a (mon.0) 1525 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in
2026-03-10T07:29:29.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:29 vm00 bash[28005]: audit 2026-03-10T07:29:28.205215+0000 mon.a (mon.0) 1526 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59629-27"}]: dispatch
2026-03-10T07:29:29.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:29 vm00 bash[28005]: audit 2026-03-10T07:29:29.194476+0000 mon.a (mon.0) 1527 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59629-27"}]': finished
2026-03-10T07:29:29.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:29 vm00 bash[28005]: cluster 2026-03-10T07:29:29.199049+0000 mon.a (mon.0) 1528 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in
2026-03-10T07:29:29.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:29 vm00 bash[28005]: audit 2026-03-10T07:29:29.202555+0000 mon.a (mon.0) 1529 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59629-27"}]: dispatch
2026-03-10T07:29:29.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:29 vm00 bash[28005]: audit 2026-03-10T07:29:29.207732+0000 mon.c (mon.2) 174 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch
2026-03-10T07:29:29.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:29 vm00 bash[28005]: audit 2026-03-10T07:29:29.211564+0000 mon.a (mon.0) 1530 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch
2026-03-10T07:29:29.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:29 vm00 bash[20701]: audit 2026-03-10T07:29:28.191120+0000 mon.a (mon.0) 1523 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59776-25","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:29.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:29 vm00 bash[20701]: audit 2026-03-10T07:29:28.191191+0000 mon.a (mon.0) 1524 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm00-59637-23","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:29.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:29 vm00 bash[20701]: cluster 2026-03-10T07:29:28.197000+0000 mon.a (mon.0) 1525 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in
2026-03-10T07:29:29.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:29 vm00 bash[20701]: audit 2026-03-10T07:29:28.205215+0000 mon.a (mon.0) 1526 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59629-27"}]: dispatch
2026-03-10T07:29:29.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:29 vm00 bash[20701]: audit 2026-03-10T07:29:29.194476+0000 mon.a (mon.0) 1527 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm00-59629-27"}]': finished
2026-03-10T07:29:29.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:29 vm00 bash[20701]: cluster 2026-03-10T07:29:29.199049+0000 mon.a (mon.0) 1528 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in
2026-03-10T07:29:29.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:29 vm00 bash[20701]: audit 2026-03-10T07:29:29.202555+0000 mon.a (mon.0) 1529 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59629-27"}]: dispatch
2026-03-10T07:29:29.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:29 vm00 bash[20701]: audit 2026-03-10T07:29:29.207732+0000 mon.c (mon.2) 174 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch
2026-03-10T07:29:29.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:29 vm00 bash[20701]: audit 2026-03-10T07:29:29.211564+0000 mon.a (mon.0) 1530 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-59704-2"}]: dispatch
2026-03-10T07:29:30.475 INFO:tasks.workunit.client.0.vm00.stdout: api_lis api_aio: Running main() from gmock_main.cc
2026-03-10T07:29:30.475 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [==========] Running 42 tests from 2 test suites.
2026-03-10T07:29:30.475 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [----------] Global test environment set-up.
2026-03-10T07:29:30.475 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [----------] 26 tests from LibRadosAio
2026-03-10T07:29:30.475 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.TooBig
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.TooBig (2788 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.SimpleWrite
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.SimpleWrite (3216 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.WaitForSafe
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.WaitForSafe (3132 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.RoundTrip
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.RoundTrip (2802 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.RoundTrip2
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.RoundTrip2 (3070 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.RoundTrip3
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.RoundTrip3 (2935 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.RoundTripAppend
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.RoundTripAppend (3382 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.RemoveTest
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.RemoveTest (3174 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.XattrsRoundTrip
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.XattrsRoundTrip (2492 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.RmXattr
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.RmXattr (3087 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.XattrIter
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.XattrIter (3029 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.IsComplete
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.IsComplete (3115 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.IsSafe
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.IsSafe (2827 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.ReturnValue
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.ReturnValue (3129 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.Flush
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.Flush (4195 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.FlushAsync
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.FlushAsync (2723 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.RoundTripWriteFull
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.RoundTripWriteFull (2875 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.RoundTripWriteSame
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.RoundTripWriteSame (3103 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.SimpleStat
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.SimpleStat (3198 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.OperateMtime
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.OperateMtime (3035 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.Operate2Mtime
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.Operate2Mtime (3239 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.SimpleStatNS
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.SimpleStatNS (3133 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.StatRemove
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.StatRemove (3142 ms)
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.ExecuteClass
2026-03-10T07:29:30.476 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.ExecuteClass (2161 ms)
2026-03-10T07:29:30.477 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.MultiWrite
2026-03-10T07:29:30.477 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.MultiWrite (3018 ms)
2026-03-10T07:29:30.477 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAio.AioUnlock
2026-03-10T07:29:30.477 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAio.AioUnlock (3023 ms)
2026-03-10T07:29:30.477 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [----------] 26 tests from LibRadosAio (79023 ms total)
2026-03-10T07:29:30.477 INFO:tasks.workunit.client.0.vm00.stdout: api_aio:
2026-03-10T07:29:30.477 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [----------] 16 tests from LibRadosAioEC
2026-03-10T07:29:30.477 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.SimpleWrite
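The LibRadosAio cases above all exercise the same asynchronous I/O pattern in the librados C API: create a completion, issue the operation, block on the completion, then check its return value. A minimal sketch of that pattern follows, assuming a reachable cluster, the default client.admin credentials and ceph.conf search path, and a pre-existing pool; the pool name "rbd" and object name "foo" are illustrative placeholders, not the per-test pools (e.g. SimpleWrite_vm00-59629-27) that the suite creates and tears down around each case.

/* Illustrative sketch only, not the actual test source. Build with -lrados. */
#include <rados/librados.h>
#include <stdio.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    rados_completion_t comp;
    const char buf[] = "hello aio";
    int rc;

    /* NULL id connects as client.admin; NULL path uses the default conf search */
    if (rados_create(&cluster, NULL) < 0)
        return 1;
    rados_conf_read_file(cluster, NULL);
    if (rados_connect(cluster) < 0)
        return 1;
    if (rados_ioctx_create(cluster, "rbd", &io) < 0) {  /* placeholder pool name */
        rados_shutdown(cluster);
        return 1;
    }

    /* completion with no callbacks: the caller blocks on it instead */
    rados_aio_create_completion(NULL, NULL, NULL, &comp);
    rados_aio_write(io, "foo", comp, buf, sizeof(buf), 0); /* async write at offset 0 */
    rados_aio_wait_for_complete(comp);                     /* returns once the write is acked */
    rc = rados_aio_get_return_value(comp);
    printf("aio write rc=%d\n", rc);
    rados_aio_release(comp);

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return rc < 0;
}

The per-test pool creation and teardown around each such case is what drives the osd pool create / crush rule rm / erasure-code-profile rm traffic interleaved in the monitor audit log above and below.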
2026-03-10T07:29:30.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:30 vm03 bash[23382]: cluster 2026-03-10T07:29:28.595531+0000 mgr.y (mgr.24407) 187 : cluster [DBG] pgmap v187: 380 pgs: 80 unknown, 8 creating+peering, 292 active+clean; 462 KiB data, 665 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:29:30.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:30 vm00 bash[28005]: cluster 2026-03-10T07:29:28.595531+0000 mgr.y (mgr.24407) 187 : cluster [DBG] pgmap v187: 380 pgs: 80 unknown, 8 creating+peering, 292 active+clean; 462 KiB data, 665 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:29:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:30 vm00 bash[20701]: cluster 2026-03-10T07:29:28.595531+0000 mgr.y (mgr.24407) 187 : cluster [DBG] pgmap v187: 380 pgs: 80 unknown, 8 creating+peering, 292 active+clean; 462 KiB data, 665 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:29:31.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:29:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:29:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:29:31.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:30.423773+0000 mon.a (mon.0) 1531 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59629-27"}]': finished
2026-03-10T07:29:31.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:30.423893+0000 mon.a (mon.0) 1532 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-59704-2"}]': finished
2026-03-10T07:29:31.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: cluster 2026-03-10T07:29:30.435277+0000 mon.a (mon.0) 1533 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in
2026-03-10T07:29:31.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:30.443487+0000 mon.c (mon.2) 175 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]: dispatch
2026-03-10T07:29:31.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:30.445390+0000 mon.c (mon.2) 176 : audit [INF] from='client.? 192.168.123.100:0/548076437' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59637-24","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:31.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:30.473488+0000 mon.a (mon.0) 1534 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]: dispatch
2026-03-10T07:29:31.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:30.473789+0000 mon.a (mon.0) 1535 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59637-24","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:31.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:30.478528+0000 mon.a (mon.0) 1536 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59776-24"}]: dispatch
2026-03-10T07:29:31.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:30.521911+0000 mon.c (mon.2) 177 : audit [INF] from='client.? 192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:31.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:30.522952+0000 mon.a (mon.0) 1537 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:31.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:30.523456+0000 mon.c (mon.2) 178 : audit [INF] from='client.? 192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:31.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:30.523875+0000 mon.a (mon.0) 1538 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:31.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:30.524257+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59629-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:31.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:30.524556+0000 mon.a (mon.0) 1539 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59629-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:31.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: cluster 2026-03-10T07:29:30.596076+0000 mgr.y (mgr.24407) 188 : cluster [DBG] pgmap v190: 332 pgs: 8 active+clean+snaptrim, 29 active+clean+snaptrim_wait, 32 unknown, 263 active+clean; 484 KiB data, 665 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 10 KiB/s wr, 3 op/s
2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:30.801001+0000 mon.b (mon.1) 157 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:30.803087+0000 mon.a (mon.0) 1540 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:31.428680+0000 mon.a (mon.0) 1541 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]': finished
2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:31.428823+0000 mon.a (mon.0) 1542 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59637-24","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:31.428934+0000 mon.a (mon.0) 1543 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59776-24"}]': finished
2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:31.429016+0000 mon.a (mon.0) 1544 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59629-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:31.431054+0000 mon.a (mon.0) 1545 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:31.435455+0000 mon.b (mon.1) 158 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13"}]: dispatch
2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: cluster 2026-03-10T07:29:31.439693+0000 mon.a (mon.0) 1546 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in
2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:31.454506+0000 mon.a (mon.0) 1547 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59776-24"}]: dispatch
2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:31.454742+0000 mon.a (mon.0) 1548 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13"}]: dispatch
2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:31.461677+0000 mon.c (mon.2) 180 : audit [INF] from='client.? 192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59629-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:31 vm00 bash[28005]: audit 2026-03-10T07:29:31.466461+0000 mon.a (mon.0) 1549 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59629-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59629-28"}]: dispatch
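Each audit pair above ("dispatch" when a monitor receives a command, "finished" once the resulting map change commits) corresponds to one JSON mon command sent by the test client. A sketch of issuing the same erasure-code-profile command through the librados C API, assuming a reachable cluster and the default client.admin config; the profile name "testprofile" is a placeholder, while k=2, m=1 and crush-failure-domain=osd mirror the profile in the log.

/* Illustrative sketch only. Build with -lrados. */
#include <rados/librados.h>
#include <stdio.h>

int main(void)
{
    rados_t cluster;
    /* same JSON shape as the audit entries above; "testprofile" is a placeholder */
    const char *cmd[1] = {
        "{\"prefix\": \"osd erasure-code-profile set\","
        " \"name\": \"testprofile\","
        " \"profile\": [\"k=2\", \"m=1\", \"crush-failure-domain=osd\"]}"
    };
    char *outbuf = NULL, *outs = NULL;
    size_t outbuf_len = 0, outs_len = 0;
    int rc;

    if (rados_create(&cluster, NULL) < 0)
        return 1;
    rados_conf_read_file(cluster, NULL);
    if (rados_connect(cluster) < 0)
        return 1;

    /* the monitor logs "dispatch" on receipt and "finished" after the map
       update commits, which is the pairing visible in the audit log */
    rc = rados_mon_command(cluster, cmd, 1, NULL, 0,
                           &outbuf, &outbuf_len, &outs, &outs_len);
    printf("rc=%d status=%.*s\n", rc, (int)outs_len, outs ? outs : "");

    rados_buffer_free(outbuf);
    rados_buffer_free(outs);
    rados_shutdown(cluster);
    return rc < 0;
}

With k=2 and m=1 each object is striped across two data chunks plus one coding chunk, so the 8-PG erasure pools created for these cases place three chunks per PG across the eight OSDs, with OSDs as the failure domain.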
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:30.473488+0000 mon.a (mon.0) 1534 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]: dispatch 2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:30.473789+0000 mon.a (mon.0) 1535 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59637-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:30.473789+0000 mon.a (mon.0) 1535 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59637-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:30.478528+0000 mon.a (mon.0) 1536 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59776-24"}]: dispatch 2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:30.478528+0000 mon.a (mon.0) 1536 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59776-24"}]: dispatch 2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:30.521911+0000 mon.c (mon.2) 177 : audit [INF] from='client.? 192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59629-28"}]: dispatch 2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:30.521911+0000 mon.c (mon.2) 177 : audit [INF] from='client.? 192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59629-28"}]: dispatch 2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:30.522952+0000 mon.a (mon.0) 1537 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59629-28"}]: dispatch 2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:30.522952+0000 mon.a (mon.0) 1537 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59629-28"}]: dispatch 2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:30.523456+0000 mon.c (mon.2) 178 : audit [INF] from='client.? 
192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59629-28"}]: dispatch 2026-03-10T07:29:31.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:30.523456+0000 mon.c (mon.2) 178 : audit [INF] from='client.? 192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59629-28"}]: dispatch 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:30.523875+0000 mon.a (mon.0) 1538 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59629-28"}]: dispatch 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:30.523875+0000 mon.a (mon.0) 1538 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59629-28"}]: dispatch 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:30.524257+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59629-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:30.524257+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59629-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:30.524556+0000 mon.a (mon.0) 1539 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59629-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:30.524556+0000 mon.a (mon.0) 1539 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59629-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: cluster 2026-03-10T07:29:30.596076+0000 mgr.y (mgr.24407) 188 : cluster [DBG] pgmap v190: 332 pgs: 8 active+clean+snaptrim, 29 active+clean+snaptrim_wait, 32 unknown, 263 active+clean; 484 KiB data, 665 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 10 KiB/s wr, 3 op/s 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: cluster 2026-03-10T07:29:30.596076+0000 mgr.y (mgr.24407) 188 : cluster [DBG] pgmap v190: 332 pgs: 8 active+clean+snaptrim, 29 active+clean+snaptrim_wait, 32 unknown, 263 active+clean; 484 KiB data, 665 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 10 KiB/s wr, 3 op/s 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:30.801001+0000 mon.b (mon.1) 157 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:30.801001+0000 mon.b (mon.1) 157 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:30.803087+0000 mon.a (mon.0) 1540 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:30.803087+0000 mon.a (mon.0) 1540 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:31.428680+0000 mon.a (mon.0) 1541 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]': finished 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:31.428680+0000 mon.a (mon.0) 1541 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]': finished 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:31.428823+0000 mon.a (mon.0) 1542 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59637-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:31.428823+0000 mon.a (mon.0) 1542 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59637-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:31.428934+0000 mon.a (mon.0) 1543 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59776-24"}]': finished 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:31.428934+0000 mon.a (mon.0) 1543 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59776-24"}]': finished 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:31.429016+0000 mon.a (mon.0) 1544 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59629-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:31.429016+0000 mon.a (mon.0) 1544 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59629-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:31.431054+0000 mon.a (mon.0) 1545 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:31.431054+0000 mon.a (mon.0) 1545 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:31.435455+0000 mon.b (mon.1) 158 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13"}]: dispatch 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:31.435455+0000 mon.b (mon.1) 158 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13"}]: dispatch 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: cluster 2026-03-10T07:29:31.439693+0000 mon.a (mon.0) 1546 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: cluster 2026-03-10T07:29:31.439693+0000 mon.a (mon.0) 1546 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:31.454506+0000 mon.a (mon.0) 1547 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59776-24"}]: dispatch 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:31.454506+0000 mon.a (mon.0) 1547 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59776-24"}]: dispatch 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:31.454742+0000 mon.a (mon.0) 1548 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13"}]: dispatch 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:31.454742+0000 mon.a (mon.0) 1548 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13"}]: dispatch 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:31.461677+0000 mon.c (mon.2) 180 : audit [INF] from='client.? 192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59629-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59629-28"}]: dispatch 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:31.461677+0000 mon.c (mon.2) 180 : audit [INF] from='client.? 192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59629-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59629-28"}]: dispatch 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:31.466461+0000 mon.a (mon.0) 1549 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59629-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59629-28"}]: dispatch 2026-03-10T07:29:31.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:31 vm00 bash[20701]: audit 2026-03-10T07:29:31.466461+0000 mon.a (mon.0) 1549 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59629-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59629-28"}]: dispatch 2026-03-10T07:29:32.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:30.423773+0000 mon.a (mon.0) 1531 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59629-27"}]': finished 2026-03-10T07:29:32.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:30.423773+0000 mon.a (mon.0) 1531 : audit [INF] from='client.? 192.168.123.100:0/3855854101' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm00-59629-27"}]': finished 2026-03-10T07:29:32.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:30.423893+0000 mon.a (mon.0) 1532 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm00-59704-2"}]': finished 2026-03-10T07:29:32.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:30.423893+0000 mon.a (mon.0) 1532 : audit [INF] from='client.? 
2026-03-10T07:29:32.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: cluster 2026-03-10T07:29:30.435277+0000 mon.a (mon.0) 1533 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in
2026-03-10T07:29:32.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:30.443487+0000 mon.c (mon.2) 175 : audit [INF] from='client.? 192.168.123.100:0/1220566742' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]: dispatch
2026-03-10T07:29:32.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:30.445390+0000 mon.c (mon.2) 176 : audit [INF] from='client.? 192.168.123.100:0/548076437' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59637-24","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:32.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:30.473488+0000 mon.a (mon.0) 1534 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]: dispatch
2026-03-10T07:29:32.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:30.473789+0000 mon.a (mon.0) 1535 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59637-24","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:32.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:30.478528+0000 mon.a (mon.0) 1536 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59776-24"}]: dispatch
2026-03-10T07:29:32.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:30.521911+0000 mon.c (mon.2) 177 : audit [INF] from='client.? 192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:32.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:30.522952+0000 mon.a (mon.0) 1537 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:32.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:30.523456+0000 mon.c (mon.2) 178 : audit [INF] from='client.? 192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:32.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:30.523875+0000 mon.a (mon.0) 1538 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:32.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:30.524257+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59629-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:32.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:30.524556+0000 mon.a (mon.0) 1539 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59629-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: cluster 2026-03-10T07:29:30.596076+0000 mgr.y (mgr.24407) 188 : cluster [DBG] pgmap v190: 332 pgs: 8 active+clean+snaptrim, 29 active+clean+snaptrim_wait, 32 unknown, 263 active+clean; 484 KiB data, 665 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 10 KiB/s wr, 3 op/s
2026-03-10T07:29:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:30.801001+0000 mon.b (mon.1) 157 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:29:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:30.803087+0000 mon.a (mon.0) 1540 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:29:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:31.428680+0000 mon.a (mon.0) 1541 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm00-59704-2"}]': finished
2026-03-10T07:29:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:31.428823+0000 mon.a (mon.0) 1542 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm00-59637-24","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:31.428934+0000 mon.a (mon.0) 1543 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59776-24"}]': finished
2026-03-10T07:29:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:31.429016+0000 mon.a (mon.0) 1544 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm00-59629-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:29:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:31.431054+0000 mon.a (mon.0) 1545 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:29:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:31.435455+0000 mon.b (mon.1) 158 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13"}]: dispatch
2026-03-10T07:29:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: cluster 2026-03-10T07:29:31.439693+0000 mon.a (mon.0) 1546 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in
2026-03-10T07:29:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:31.454506+0000 mon.a (mon.0) 1547 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59776-24"}]: dispatch
2026-03-10T07:29:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:31.454742+0000 mon.a (mon.0) 1548 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13"}]: dispatch
2026-03-10T07:29:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:31.461677+0000 mon.c (mon.2) 180 : audit [INF] from='client.? 192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59629-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:32.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:31 vm03 bash[23382]: audit 2026-03-10T07:29:31.466461+0000 mon.a (mon.0) 1549 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59629-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59629-28"}]: dispatch
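Note: the tier remove-overlay / tier remove pair above is the cache-tier teardown between the test-rados-api base and cache pools. The approximate CLI form of those two mon commands would be:

    ceph osd tier remove-overlay test-rados-api-vm00-59782-6
    ceph osd tier remove test-rados-api-vm00-59782-6 test-rados-api-vm00-59782-13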
2026-03-10T07:29:32.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:32 vm00 bash[28005]: cluster 2026-03-10T07:29:32.429211+0000 mon.a (mon.0) 1550 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:29:32.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:32 vm00 bash[28005]: audit 2026-03-10T07:29:32.432744+0000 mon.a (mon.0) 1551 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59776-24"}]': finished
2026-03-10T07:29:32.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:32 vm00 bash[28005]: audit 2026-03-10T07:29:32.432860+0000 mon.a (mon.0) 1552 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13"}]': finished
2026-03-10T07:29:32.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:32 vm00 bash[28005]: cluster 2026-03-10T07:29:32.469537+0000 mon.a (mon.0) 1553 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in
2026-03-10T07:29:32.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:32 vm00 bash[28005]: audit 2026-03-10T07:29:32.484213+0000 mon.b (mon.1) 159 : audit [INF] from='client.? 192.168.123.100:0/3013049239' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59704-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:32.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:32 vm00 bash[28005]: audit 2026-03-10T07:29:32.487680+0000 mon.a (mon.0) 1554 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59704-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:32.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:32 vm00 bash[20701]: cluster 2026-03-10T07:29:32.429211+0000 mon.a (mon.0) 1550 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:29:32.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:32 vm00 bash[20701]: audit 2026-03-10T07:29:32.432744+0000 mon.a (mon.0) 1551 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59776-24"}]': finished
2026-03-10T07:29:32.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:32 vm00 bash[20701]: audit 2026-03-10T07:29:32.432860+0000 mon.a (mon.0) 1552 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13"}]': finished
2026-03-10T07:29:32.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:32 vm00 bash[20701]: cluster 2026-03-10T07:29:32.469537+0000 mon.a (mon.0) 1553 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in
2026-03-10T07:29:32.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:32 vm00 bash[20701]: audit 2026-03-10T07:29:32.484213+0000 mon.b (mon.1) 159 : audit [INF] from='client.? 192.168.123.100:0/3013049239' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59704-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:32.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:32 vm00 bash[20701]: audit 2026-03-10T07:29:32.487680+0000 mon.a (mon.0) 1554 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59704-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:33.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:32 vm03 bash[23382]: cluster 2026-03-10T07:29:32.429211+0000 mon.a (mon.0) 1550 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:29:33.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:32 vm03 bash[23382]: audit 2026-03-10T07:29:32.432744+0000 mon.a (mon.0) 1551 : audit [INF] from='client.? 192.168.123.100:0/3499728306' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59776-24"}]': finished
2026-03-10T07:29:33.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:32 vm03 bash[23382]: audit 2026-03-10T07:29:32.432860+0000 mon.a (mon.0) 1552 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-13"}]': finished
2026-03-10T07:29:33.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:32 vm03 bash[23382]: cluster 2026-03-10T07:29:32.469537+0000 mon.a (mon.0) 1553 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in
2026-03-10T07:29:33.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:32 vm03 bash[23382]: audit 2026-03-10T07:29:32.484213+0000 mon.b (mon.1) 159 : audit [INF] from='client.? 192.168.123.100:0/3013049239' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59704-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:33.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:32 vm03 bash[23382]: audit 2026-03-10T07:29:32.487680+0000 mon.a (mon.0) 1554 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59704-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:33.514 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:29:33 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:29:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:33 vm00 bash[28005]: cluster 2026-03-10T07:29:32.596515+0000 mgr.y (mgr.24407) 189 : cluster [DBG] pgmap v193: 332 pgs: 8 active+clean+snaptrim, 29 active+clean+snaptrim_wait, 32 unknown, 263 active+clean; 484 KiB data, 665 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 10 KiB/s wr, 3 op/s
2026-03-10T07:29:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:33 vm00 bash[28005]: audit 2026-03-10T07:29:33.021320+0000 mgr.y (mgr.24407) 190 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:29:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:33 vm00 bash[28005]: audit 2026-03-10T07:29:33.438107+0000 mon.a (mon.0) 1555 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59629-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59629-28"}]': finished
2026-03-10T07:29:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:33 vm00 bash[28005]: audit 2026-03-10T07:29:33.438185+0000 mon.a (mon.0) 1556 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59704-3","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:33 vm00 bash[28005]: audit 2026-03-10T07:29:33.473499+0000 mon.b (mon.1) 160 : audit [INF] from='client.? 192.168.123.100:0/1013955204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59637-25","app": "rados","yes_i_really_mean_it": true}]: dispatch
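Note: the recurring "osd pool application enable" entries with "app": "rados" are the test helpers tagging each freshly created pool; the "yes_i_really_mean_it": true field in the JSON maps to the CLI override flag. A rough CLI equivalent (pool name is a placeholder):

    ceph osd pool application enable <pool> rados --yes-i-really-mean-it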
2026-03-10T07:29:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:33 vm00 bash[28005]: cluster 2026-03-10T07:29:33.474443+0000 mon.a (mon.0) 1557 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in
2026-03-10T07:29:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:33 vm00 bash[28005]: audit 2026-03-10T07:29:33.474685+0000 mon.b (mon.1) 161 : audit [INF] from='client.? 192.168.123.100:0/1753803713' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-59776-27","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:33 vm00 bash[28005]: audit 2026-03-10T07:29:33.486842+0000 mon.a (mon.0) 1558 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59637-25","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:33.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:33 vm00 bash[28005]: audit 2026-03-10T07:29:33.494005+0000 mon.a (mon.0) 1559 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-59776-27","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:33.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:33 vm00 bash[28005]: audit 2026-03-10T07:29:33.518723+0000 mon.b (mon.1) 162 : audit [INF] from='client.? 192.168.123.100:0/3013049239' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-59704-3","pool2":"test-rados-api-vm00-59704-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch
2026-03-10T07:29:33.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:33 vm00 bash[28005]: audit 2026-03-10T07:29:33.521073+0000 mon.a (mon.0) 1560 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-59704-3","pool2":"test-rados-api-vm00-59704-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch
2026-03-10T07:29:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:33 vm00 bash[20701]: cluster 2026-03-10T07:29:32.596515+0000 mgr.y (mgr.24407) 189 : cluster [DBG] pgmap v193: 332 pgs: 8 active+clean+snaptrim, 29 active+clean+snaptrim_wait, 32 unknown, 263 active+clean; 484 KiB data, 665 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 10 KiB/s wr, 3 op/s
2026-03-10T07:29:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:33 vm00 bash[20701]: audit 2026-03-10T07:29:33.021320+0000 mgr.y (mgr.24407) 190 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:29:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:33 vm00 bash[20701]: audit 2026-03-10T07:29:33.438107+0000 mon.a (mon.0) 1555 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59629-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59629-28"}]': finished
2026-03-10T07:29:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:33 vm00 bash[20701]: audit 2026-03-10T07:29:33.438185+0000 mon.a (mon.0) 1556 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59704-3","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:33 vm00 bash[20701]: audit 2026-03-10T07:29:33.473499+0000 mon.b (mon.1) 160 : audit [INF] from='client.? 192.168.123.100:0/1013955204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59637-25","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:33 vm00 bash[20701]: cluster 2026-03-10T07:29:33.474443+0000 mon.a (mon.0) 1557 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in
2026-03-10T07:29:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:33 vm00 bash[20701]: audit 2026-03-10T07:29:33.474685+0000 mon.b (mon.1) 161 : audit [INF] from='client.? 192.168.123.100:0/1753803713' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-59776-27","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:33 vm00 bash[20701]: audit 2026-03-10T07:29:33.486842+0000 mon.a (mon.0) 1558 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59637-25","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:33 vm00 bash[20701]: audit 2026-03-10T07:29:33.494005+0000 mon.a (mon.0) 1559 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-59776-27","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:33 vm00 bash[20701]: audit 2026-03-10T07:29:33.518723+0000 mon.b (mon.1) 162 : audit [INF] from='client.? 192.168.123.100:0/3013049239' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-59704-3","pool2":"test-rados-api-vm00-59704-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch
2026-03-10T07:29:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:33 vm00 bash[20701]: audit 2026-03-10T07:29:33.521073+0000 mon.a (mon.0) 1560 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-59704-3","pool2":"test-rados-api-vm00-59704-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch
2026-03-10T07:29:34.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:33 vm03 bash[23382]: cluster 2026-03-10T07:29:32.596515+0000 mgr.y (mgr.24407) 189 : cluster [DBG] pgmap v193: 332 pgs: 8 active+clean+snaptrim, 29 active+clean+snaptrim_wait, 32 unknown, 263 active+clean; 484 KiB data, 665 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 10 KiB/s wr, 3 op/s
2026-03-10T07:29:34.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:33 vm03 bash[23382]: audit 2026-03-10T07:29:33.021320+0000 mgr.y (mgr.24407) 190 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:29:34.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:33 vm03 bash[23382]: audit 2026-03-10T07:29:33.438107+0000 mon.a (mon.0) 1555 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm00-59629-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm00-59629-28"}]': finished
2026-03-10T07:29:34.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:33 vm03 bash[23382]: audit 2026-03-10T07:29:33.438185+0000 mon.a (mon.0) 1556 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59704-3","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:34.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:33 vm03 bash[23382]: audit 2026-03-10T07:29:33.473499+0000 mon.b (mon.1) 160 : audit [INF] from='client.? 192.168.123.100:0/1013955204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59637-25","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:34.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:33 vm03 bash[23382]: cluster 2026-03-10T07:29:33.474443+0000 mon.a (mon.0) 1557 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in
2026-03-10T07:29:34.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:33 vm03 bash[23382]: audit 2026-03-10T07:29:33.474685+0000 mon.b (mon.1) 161 : audit [INF] from='client.? 192.168.123.100:0/1753803713' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-59776-27","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:34.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:33 vm03 bash[23382]: audit 2026-03-10T07:29:33.486842+0000 mon.a (mon.0) 1558 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59637-25","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:34.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:33 vm03 bash[23382]: audit 2026-03-10T07:29:33.494005+0000 mon.a (mon.0) 1559 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-59776-27","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:34.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:33 vm03 bash[23382]: audit 2026-03-10T07:29:33.518723+0000 mon.b (mon.1) 162 : audit [INF] from='client.? 192.168.123.100:0/3013049239' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-59704-3","pool2":"test-rados-api-vm00-59704-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch
2026-03-10T07:29:34.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:33 vm03 bash[23382]: audit 2026-03-10T07:29:33.521073+0000 mon.a (mon.0) 1560 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-59704-3","pool2":"test-rados-api-vm00-59704-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch
2026-03-10T07:29:34.469 INFO:tasks.workunit.client.0.vm00.stdout: t: 7 0
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 14 0
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 0 0
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 15 0
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 11 0
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 5 0
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 8 0
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 6 0
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 3 0
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 4 0
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 12 0
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 9 0
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: 2 0
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: have 1 expect one of 0,1,10,11,12,13,14,15,2,3,4,5,6,7,8,9
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ OK ] LibRadosListEC.ListObjectsStart (59 ms)
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [----------] 3 tests from LibRadosListEC (1266 ms total)
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list:
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [----------] 1 test from LibRadosListNP
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ RUN ] LibRadosListNP.ListObjectsError
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ OK ] LibRadosListNP.ListObjectsError (3008 ms)
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [----------] 1 test from LibRadosListNP (3008 ms total)
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list:
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [----------] Global test environment tear-down
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [==========] 11 tests from 3 test suites ran. (90445 ms total)
2026-03-10T07:29:34.470 INFO:tasks.workunit.client.0.vm00.stdout: api_list: [ PASSED ] 11 tests.
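Note: "api_list" is the rados/test.sh workunit's per-test output prefix (most likely the ceph_test_rados_api_list binary), and the gtest trailer above is the verdict teuthology cares about. With this much journalctl noise interleaved, one convenient way to pull just the verdict lines out of a log like this is something along the lines of:

    grep -E 'api_[a-z_]+: \[ *(OK|FAILED|PASSED)' teuthology.log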
2026-03-10T07:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:35 vm00 bash[28005]: audit 2026-03-10T07:29:34.487952+0000 mon.a (mon.0) 1565 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-15","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:35 vm00 bash[28005]: cluster 2026-03-10T07:29:34.597203+0000 mgr.y (mgr.24407) 191 : cluster [DBG] pgmap v196: 372 pgs: 6 active+clean+snaptrim, 15 active+clean+snaptrim_wait, 104 unknown, 247 active+clean; 482 KiB data, 665 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:29:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:35 vm00 bash[20701]: audit 2026-03-10T07:29:34.442758+0000 mon.a (mon.0) 1561 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59637-25","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:35 vm00 bash[20701]: audit 2026-03-10T07:29:34.442857+0000 mon.a (mon.0) 1562 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-59776-27","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:35 vm00 bash[20701]: audit 2026-03-10T07:29:34.442903+0000 mon.a (mon.0) 1563 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-59704-3","pool2":"test-rados-api-vm00-59704-3","yes_i_really_really_mean_it_not_faking": true}]': finished
2026-03-10T07:29:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:35 vm00 bash[20701]: cluster 2026-03-10T07:29:34.476504+0000 mon.a (mon.0) 1564 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in
2026-03-10T07:29:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:35 vm00 bash[20701]: audit 2026-03-10T07:29:34.485081+0000 mon.b (mon.1) 163 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-15","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:35 vm00 bash[20701]: audit 2026-03-10T07:29:34.487952+0000 mon.a (mon.0) 1565 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-15","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:35 vm00 bash[20701]: cluster 2026-03-10T07:29:34.597203+0000 mgr.y (mgr.24407) 191 : cluster [DBG] pgmap v196: 372 pgs: 6 active+clean+snaptrim, 15 active+clean+snaptrim_wait, 104 unknown, 247 active+clean; 482 KiB data, 665 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:29:36.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:35 vm03 bash[23382]: audit 2026-03-10T07:29:34.442758+0000 mon.a (mon.0) 1561 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm00-59637-25","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:36.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:35 vm03 bash[23382]: audit 2026-03-10T07:29:34.442857+0000 mon.a (mon.0) 1562 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm00-59776-27","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:36.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:35 vm03 bash[23382]: audit 2026-03-10T07:29:34.442903+0000 mon.a (mon.0) 1563 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm00-59704-3","pool2":"test-rados-api-vm00-59704-3","yes_i_really_really_mean_it_not_faking": true}]': finished
2026-03-10T07:29:36.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:35 vm03 bash[23382]: cluster 2026-03-10T07:29:34.476504+0000 mon.a (mon.0) 1564 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in
2026-03-10T07:29:36.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:35 vm03 bash[23382]: audit 2026-03-10T07:29:34.485081+0000 mon.b (mon.1) 163 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-15","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:36.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:35 vm03 bash[23382]: audit 2026-03-10T07:29:34.487952+0000 mon.a (mon.0) 1565 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-15","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:36.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:35 vm03 bash[23382]: cluster 2026-03-10T07:29:34.597203+0000 mgr.y (mgr.24407) 191 : cluster [DBG] pgmap v196: 372 pgs: 6 active+clean+snaptrim, 15 active+clean+snaptrim_wait, 104 unknown, 247 active+clean; 482 KiB data, 665 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:29:37.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:36 vm03 bash[23382]: cluster 2026-03-10T07:29:35.451181+0000 mon.a (mon.0) 1566 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:29:37.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:36 vm03 bash[23382]: audit 2026-03-10T07:29:35.587512+0000 mon.a (mon.0) 1567 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-15","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:37.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:36 vm03 bash[23382]: cluster 2026-03-10T07:29:35.591856+0000 mon.a (mon.0) 1568 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in
2026-03-10T07:29:37.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:36 vm03 bash[23382]: audit 2026-03-10T07:29:35.614846+0000 mon.c (mon.2) 181 : audit [INF] from='client.? 192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:37.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:36 vm03 bash[23382]: audit 2026-03-10T07:29:35.687846+0000 mon.a (mon.0) 1569 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:37.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:36 vm03 bash[23382]: audit 2026-03-10T07:29:36.592664+0000 mon.a (mon.0) 1570 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59629-28"}]': finished
2026-03-10T07:29:37.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:36 vm03 bash[23382]: audit 2026-03-10T07:29:36.612688+0000 mon.b (mon.1) 164 : audit [INF] from='client.? 192.168.123.100:0/3369643737' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-59776-30","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:37.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:36 vm03 bash[23382]: cluster 2026-03-10T07:29:36.612982+0000 mon.a (mon.0) 1571 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in
2026-03-10T07:29:37.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:36 vm03 bash[23382]: audit 2026-03-10T07:29:36.615558+0000 mon.c (mon.2) 182 : audit [INF] from='client.? 192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:37.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:36 vm03 bash[23382]: audit 2026-03-10T07:29:36.617187+0000 mon.c (mon.2) 183 : audit [INF] from='client.? 192.168.123.100:0/2154043111' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59637-26","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:37.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:36 vm03 bash[23382]: audit 2026-03-10T07:29:36.624853+0000 mon.a (mon.0) 1572 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-59776-30","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:37.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:36 vm03 bash[23382]: audit 2026-03-10T07:29:36.624954+0000 mon.a (mon.0) 1573 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:37.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:36 vm00 bash[28005]: cluster 2026-03-10T07:29:35.451181+0000 mon.a (mon.0) 1566 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:29:37.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:36 vm00 bash[28005]: audit 2026-03-10T07:29:35.587512+0000 mon.a (mon.0) 1567 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-15","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:37.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:36 vm00 bash[28005]: cluster 2026-03-10T07:29:35.591856+0000 mon.a (mon.0) 1568 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in
2026-03-10T07:29:37.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:36 vm00 bash[28005]: audit 2026-03-10T07:29:35.614846+0000 mon.c (mon.2) 181 : audit [INF] from='client.? 192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:37.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:36 vm00 bash[28005]: audit 2026-03-10T07:29:35.687846+0000 mon.a (mon.0) 1569 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:37.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:36 vm00 bash[28005]: audit 2026-03-10T07:29:36.592664+0000 mon.a (mon.0) 1570 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59629-28"}]': finished
2026-03-10T07:29:37.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:36 vm00 bash[28005]: audit 2026-03-10T07:29:36.612688+0000 mon.b (mon.1) 164 : audit [INF] from='client.? 192.168.123.100:0/3369643737' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-59776-30","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:37.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:36 vm00 bash[28005]: cluster 2026-03-10T07:29:36.612982+0000 mon.a (mon.0) 1571 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in
2026-03-10T07:29:37.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:36 vm00 bash[28005]: audit 2026-03-10T07:29:36.615558+0000 mon.c (mon.2) 182 : audit [INF] from='client.? 192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:37.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:36 vm00 bash[28005]: audit 2026-03-10T07:29:36.617187+0000 mon.c (mon.2) 183 : audit [INF] from='client.? 192.168.123.100:0/2154043111' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59637-26","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:37.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:36 vm00 bash[28005]: audit 2026-03-10T07:29:36.624853+0000 mon.a (mon.0) 1572 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-59776-30","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:37.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:36 vm00 bash[28005]: audit 2026-03-10T07:29:36.624954+0000 mon.a (mon.0) 1573 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:37.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:36 vm00 bash[20701]: cluster 2026-03-10T07:29:35.451181+0000 mon.a (mon.0) 1566 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:29:37.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:36 vm00 bash[20701]: audit 2026-03-10T07:29:35.587512+0000 mon.a (mon.0) 1567 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-15","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:37.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:36 vm00 bash[20701]: cluster 2026-03-10T07:29:35.591856+0000 mon.a (mon.0) 1568 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in
2026-03-10T07:29:37.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:36 vm00 bash[20701]: audit 2026-03-10T07:29:35.614846+0000 mon.c (mon.2) 181 : audit [INF] from='client.? 192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:37.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:36 vm00 bash[20701]: audit 2026-03-10T07:29:35.687846+0000 mon.a (mon.0) 1569 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:37.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:36 vm00 bash[20701]: audit 2026-03-10T07:29:36.592664+0000 mon.a (mon.0) 1570 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm00-59629-28"}]': finished
2026-03-10T07:29:37.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:36 vm00 bash[20701]: audit 2026-03-10T07:29:36.612688+0000 mon.b (mon.1) 164 : audit [INF] from='client.? 192.168.123.100:0/3369643737' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-59776-30","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:37.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:36 vm00 bash[20701]: cluster 2026-03-10T07:29:36.612982+0000 mon.a (mon.0) 1571 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in
2026-03-10T07:29:37.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:36 vm00 bash[20701]: audit 2026-03-10T07:29:36.615558+0000 mon.c (mon.2) 182 : audit [INF] from='client.? 192.168.123.100:0/1724680225' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:37.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:36 vm00 bash[20701]: audit 2026-03-10T07:29:36.617187+0000 mon.c (mon.2) 183 : audit [INF] from='client.? 192.168.123.100:0/2154043111' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59637-26","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:37.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:36 vm00 bash[20701]: audit 2026-03-10T07:29:36.624853+0000 mon.a (mon.0) 1572 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-59776-30","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:37.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:36 vm00 bash[20701]: audit 2026-03-10T07:29:36.624954+0000 mon.a (mon.0) 1573 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59629-28"}]: dispatch
2026-03-10T07:29:38.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:37 vm03 bash[23382]: cluster 2026-03-10T07:29:36.597683+0000 mgr.y (mgr.24407) 192 : cluster [DBG] pgmap v198: 300 pgs: 32 creating+peering, 6 active+clean+snaptrim, 13 active+clean+snaptrim_wait, 249 active+clean; 490 KiB data, 674 MiB used, 159 GiB / 160 GiB avail; 3.8 KiB/s wr, 0 op/s
2026-03-10T07:29:38.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:37 vm03 bash[23382]: audit 2026-03-10T07:29:36.625111+0000 mon.a (mon.0) 1574 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59637-26","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:38.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:37 vm03 bash[23382]: audit 2026-03-10T07:29:36.645859+0000 mon.b (mon.1) 165 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-15", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:29:38.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:37 vm03 bash[23382]: audit 2026-03-10T07:29:36.657397+0000 mon.a (mon.0) 1575 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-15", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:29:38.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:37 vm03 bash[23382]: audit 2026-03-10T07:29:37.625437+0000 mon.a (mon.0) 1576 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-59776-30","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:38.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:37 vm03 bash[23382]: audit 2026-03-10T07:29:37.625503+0000 mon.a (mon.0) 1577 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59629-28"}]': finished
2026-03-10T07:29:38.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:37 vm03 bash[23382]: audit 2026-03-10T07:29:37.625565+0000 mon.a (mon.0) 1578 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59637-26","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:38.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:37 vm03 bash[23382]: audit 2026-03-10T07:29:37.625674+0000 mon.a (mon.0) 1579 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-15", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:29:38.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:37 vm03 bash[23382]: cluster 2026-03-10T07:29:37.630381+0000 mon.a (mon.0) 1580 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in
2026-03-10T07:29:38.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:37 vm03 bash[23382]: audit 2026-03-10T07:29:37.632035+0000 mon.a (mon.0) 1581 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]: dispatch
2026-03-10T07:29:38.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:37 vm00 bash[28005]: cluster 2026-03-10T07:29:36.597683+0000 mgr.y (mgr.24407) 192 : cluster [DBG] pgmap v198: 300 pgs: 32 creating+peering, 6 active+clean+snaptrim, 13 active+clean+snaptrim_wait, 249 active+clean; 490 KiB data, 674 MiB used, 159 GiB / 160 GiB avail; 3.8 KiB/s wr, 0 op/s
2026-03-10T07:29:38.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:37 vm00 bash[28005]: audit 2026-03-10T07:29:36.625111+0000 mon.a (mon.0) 1574 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59637-26","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:38.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:37 vm00 bash[28005]: audit 2026-03-10T07:29:36.645859+0000 mon.b (mon.1) 165 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-15", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:29:38.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:37 vm00 bash[28005]: audit 2026-03-10T07:29:36.657397+0000 mon.a (mon.0) 1575 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-15", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:29:38.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:37 vm00 bash[28005]: audit 2026-03-10T07:29:37.625437+0000 mon.a (mon.0) 1576 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-59776-30","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:38.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:37 vm00 bash[28005]: audit 2026-03-10T07:29:37.625503+0000 mon.a (mon.0) 1577 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59629-28"}]': finished
2026-03-10T07:29:38.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:37 vm00 bash[28005]: audit 2026-03-10T07:29:37.625565+0000 mon.a (mon.0) 1578 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59637-26","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:38.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:37 vm00 bash[28005]: audit 2026-03-10T07:29:37.625674+0000 mon.a (mon.0) 1579 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-15", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:29:38.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:37 vm00 bash[28005]: cluster 2026-03-10T07:29:37.630381+0000 mon.a (mon.0) 1580 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in
2026-03-10T07:29:38.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:37 vm00 bash[28005]: audit 2026-03-10T07:29:37.632035+0000 mon.a (mon.0) 1581 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]: dispatch
2026-03-10T07:29:38.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:37 vm00 bash[20701]: cluster 2026-03-10T07:29:36.597683+0000 mgr.y (mgr.24407) 192 : cluster [DBG] pgmap v198: 300 pgs: 32 creating+peering, 6 active+clean+snaptrim, 13 active+clean+snaptrim_wait, 249 active+clean; 490 KiB data, 674 MiB used, 159 GiB / 160 GiB avail; 3.8 KiB/s wr, 0 op/s
2026-03-10T07:29:38.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:37 vm00 bash[20701]: audit 2026-03-10T07:29:36.625111+0000 mon.a (mon.0) 1574 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59637-26","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:38.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:37 vm00 bash[20701]: audit 2026-03-10T07:29:36.645859+0000 mon.b (mon.1) 165 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-15", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:29:38.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:37 vm00 bash[20701]: audit 2026-03-10T07:29:36.657397+0000 mon.a (mon.0) 1575 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-15", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:29:38.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:37 vm00 bash[20701]: audit 2026-03-10T07:29:37.625437+0000 mon.a (mon.0) 1576 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm00-59776-30","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:38.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:37 vm00 bash[20701]: audit 2026-03-10T07:29:37.625503+0000 mon.a (mon.0) 1577 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm00-59629-28"}]': finished
2026-03-10T07:29:38.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:37 vm00 bash[20701]: audit 2026-03-10T07:29:37.625565+0000 mon.a (mon.0) 1578 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm00-59637-26","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:38.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:37 vm00 bash[20701]: audit 2026-03-10T07:29:37.625674+0000 mon.a (mon.0) 1579 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-15", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:29:38.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:37 vm00 bash[20701]: cluster 2026-03-10T07:29:37.630381+0000 mon.a (mon.0) 1580 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in
2026-03-10T07:29:38.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:37 vm00 bash[20701]: audit 2026-03-10T07:29:37.632035+0000 mon.a (mon.0) 1581 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]: dispatch
2026-03-10T07:29:39.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:38 vm03 bash[23382]: audit 2026-03-10T07:29:37.659332+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-15"}]: dispatch
2026-03-10T07:29:39.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:38 vm03 bash[23382]: audit 2026-03-10T07:29:37.663957+0000 mon.a (mon.0) 1582 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-15"}]: dispatch
2026-03-10T07:29:39.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:38 vm03 bash[23382]: audit 2026-03-10T07:29:37.683062+0000 mon.a (mon.0) 1583 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59629-29"}]: dispatch
2026-03-10T07:29:39.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:38 vm03 bash[23382]: audit 2026-03-10T07:29:37.684723+0000 mon.a (mon.0) 1584 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59629-29"}]: dispatch
2026-03-10T07:29:39.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:38 vm03 bash[23382]: audit 2026-03-10T07:29:37.687809+0000 mon.a (mon.0) 1585 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59629-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:39.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:38 vm03 bash[23382]: audit 2026-03-10T07:29:38.629317+0000 mon.a (mon.0) 1586 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]': finished
2026-03-10T07:29:39.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:38 vm03 bash[23382]: audit 2026-03-10T07:29:38.629369+0000 mon.a (mon.0) 1587 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-15"}]': finished
2026-03-10T07:29:39.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:38 vm03 bash[23382]: audit 2026-03-10T07:29:38.629719+0000 mon.a (mon.0) 1588 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59629-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:29:39.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:38 vm03 bash[23382]: audit 2026-03-10T07:29:38.636195+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-15", "mode": "writeback"}]: dispatch
2026-03-10T07:29:39.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:38 vm03 bash[23382]: cluster 2026-03-10T07:29:38.638148+0000 mon.a (mon.0) 1589 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in
2026-03-10T07:29:39.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:38 vm03 bash[23382]: audit 2026-03-10T07:29:38.649853+0000 mon.a (mon.0) 1590 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]: dispatch
2026-03-10T07:29:39.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:38 vm03 bash[23382]: audit 2026-03-10T07:29:38.649978+0000 mon.a (mon.0) 1591 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59629-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59629-29"}]: dispatch
2026-03-10T07:29:39.016 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:38 vm03 bash[23382]: audit 2026-03-10T07:29:38.650163+0000 mon.a (mon.0) 1592 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-15", "mode": "writeback"}]: dispatch
2026-03-10T07:29:39.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:37.659332+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-15"}]: dispatch
2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:37.659332+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-15"}]: dispatch 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:37.663957+0000 mon.a (mon.0) 1582 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-15"}]: dispatch 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:37.663957+0000 mon.a (mon.0) 1582 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-15"}]: dispatch 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:37.683062+0000 mon.a (mon.0) 1583 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59629-29"}]: dispatch 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:37.683062+0000 mon.a (mon.0) 1583 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59629-29"}]: dispatch 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:37.684723+0000 mon.a (mon.0) 1584 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59629-29"}]: dispatch 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:37.684723+0000 mon.a (mon.0) 1584 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59629-29"}]: dispatch 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:37.687809+0000 mon.a (mon.0) 1585 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59629-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:37.687809+0000 mon.a (mon.0) 1585 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59629-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:38.629317+0000 mon.a (mon.0) 1586 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]': finished 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:38.629317+0000 mon.a (mon.0) 1586 : audit [INF] from='client.? 
192.168.123.100:0/2790551399' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]': finished 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:38.629369+0000 mon.a (mon.0) 1587 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-15"}]': finished 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:38.629369+0000 mon.a (mon.0) 1587 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-15"}]': finished 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:38.629719+0000 mon.a (mon.0) 1588 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59629-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:38.629719+0000 mon.a (mon.0) 1588 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59629-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:38.636195+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-15", "mode": "writeback"}]: dispatch 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:38.636195+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-15", "mode": "writeback"}]: dispatch 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: cluster 2026-03-10T07:29:38.638148+0000 mon.a (mon.0) 1589 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: cluster 2026-03-10T07:29:38.638148+0000 mon.a (mon.0) 1589 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:38.649853+0000 mon.a (mon.0) 1590 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]: dispatch 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:38.649853+0000 mon.a (mon.0) 1590 : audit [INF] from='client.? 
192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]: dispatch 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:38.649978+0000 mon.a (mon.0) 1591 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59629-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59629-29"}]: dispatch 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:38.649978+0000 mon.a (mon.0) 1591 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59629-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59629-29"}]: dispatch 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:38.650163+0000 mon.a (mon.0) 1592 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-15", "mode": "writeback"}]: dispatch 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:38 vm00 bash[28005]: audit 2026-03-10T07:29:38.650163+0000 mon.a (mon.0) 1592 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-15", "mode": "writeback"}]: dispatch 2026-03-10T07:29:39.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:37.659332+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-15"}]: dispatch 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:37.659332+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-15"}]: dispatch 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:37.663957+0000 mon.a (mon.0) 1582 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-15"}]: dispatch 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:37.663957+0000 mon.a (mon.0) 1582 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-15"}]: dispatch 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:37.683062+0000 mon.a (mon.0) 1583 : audit [INF] from='client.? 
192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59629-29"}]: dispatch 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:37.683062+0000 mon.a (mon.0) 1583 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59629-29"}]: dispatch 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:37.684723+0000 mon.a (mon.0) 1584 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59629-29"}]: dispatch 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:37.684723+0000 mon.a (mon.0) 1584 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59629-29"}]: dispatch 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:37.687809+0000 mon.a (mon.0) 1585 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59629-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:37.687809+0000 mon.a (mon.0) 1585 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59629-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:38.629317+0000 mon.a (mon.0) 1586 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]': finished 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:38.629317+0000 mon.a (mon.0) 1586 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]': finished 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:38.629369+0000 mon.a (mon.0) 1587 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-15"}]': finished 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:38.629369+0000 mon.a (mon.0) 1587 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-15"}]': finished 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:38.629719+0000 mon.a (mon.0) 1588 : audit [INF] from='client.? 
192.168.123.100:0/2597238638' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59629-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:38.629719+0000 mon.a (mon.0) 1588 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm00-59629-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:38.636195+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-15", "mode": "writeback"}]: dispatch 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:38.636195+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-15", "mode": "writeback"}]: dispatch 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: cluster 2026-03-10T07:29:38.638148+0000 mon.a (mon.0) 1589 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: cluster 2026-03-10T07:29:38.638148+0000 mon.a (mon.0) 1589 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:38.649853+0000 mon.a (mon.0) 1590 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]: dispatch 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:38.649853+0000 mon.a (mon.0) 1590 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]: dispatch 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:38.649978+0000 mon.a (mon.0) 1591 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59629-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59629-29"}]: dispatch 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:38.649978+0000 mon.a (mon.0) 1591 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59629-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59629-29"}]: dispatch 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:38.650163+0000 mon.a (mon.0) 1592 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-15", "mode": "writeback"}]: dispatch 2026-03-10T07:29:39.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:38 vm00 bash[20701]: audit 2026-03-10T07:29:38.650163+0000 mon.a (mon.0) 1592 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-15", "mode": "writeback"}]: dispatch 2026-03-10T07:29:39.667 INFO:tasks.workunit.client.0.vm00.stdout:RUN ] LibRadosSnapshotsECPP.SnapGetNamePP 2026-03-10T07:29:39.668 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.SnapGetNamePP (2008 ms) 2026-03-10T07:29:39.668 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] 4 tests from LibRadosSnapshotsECPP (8324 ms total) 2026-03-10T07:29:39.668 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: 2026-03-10T07:29:39.668 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] 3 tests from LibRadosSnapshotsSelfManagedECPP 2026-03-10T07:29:39.668 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedECPP.SnapPP 2026-03-10T07:29:39.668 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedECPP.SnapPP (4240 ms) 2026-03-10T07:29:39.668 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedECPP.RollbackPP 2026-03-10T07:29:39.668 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedECPP.RollbackPP (4215 ms) 2026-03-10T07:29:39.668 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedECPP.Bug11677 2026-03-10T07:29:39.668 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedECPP.Bug11677 (4141 ms) 2026-03-10T07:29:39.668 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] 3 tests from LibRadosSnapshotsSelfManagedECPP (12596 ms total) 2026-03-10T07:29:39.668 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: 2026-03-10T07:29:39.668 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [----------] Global test environment tear-down 2026-03-10T07:29:39.668 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [==========] 21 tests from 5 test suites ran. (95379 ms total) 2026-03-10T07:29:39.668 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ PASSED ] 20 tests. 
2026-03-10T07:29:39.668 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ SKIPPED ] 1 test, listed below: 2026-03-10T07:29:39.668 INFO:tasks.workunit.client.0.vm00.stdout: api_snapshots_pp: [ SKIPPED ] LibRadosSnapshotsSelfManagedPP.WriteRollback 2026-03-10T07:29:40.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:40 vm00 bash[28005]: cluster 2026-03-10T07:29:38.598101+0000 mgr.y (mgr.24407) 193 : cluster [DBG] pgmap v201: 356 pgs: 64 unknown, 32 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 458 KiB data, 674 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 op/s 2026-03-10T07:29:40.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:40 vm00 bash[28005]: cluster 2026-03-10T07:29:38.598101+0000 mgr.y (mgr.24407) 193 : cluster [DBG] pgmap v201: 356 pgs: 64 unknown, 32 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 458 KiB data, 674 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 op/s 2026-03-10T07:29:40.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:40 vm00 bash[28005]: audit 2026-03-10T07:29:39.219797+0000 mon.a (mon.0) 1593 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:29:40.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:40 vm00 bash[28005]: audit 2026-03-10T07:29:39.219797+0000 mon.a (mon.0) 1593 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:40 vm00 bash[28005]: audit 2026-03-10T07:29:39.223315+0000 mon.c (mon.2) 184 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:40 vm00 bash[28005]: audit 2026-03-10T07:29:39.223315+0000 mon.c (mon.2) 184 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:40 vm00 bash[28005]: cluster 2026-03-10T07:29:39.630124+0000 mon.a (mon.0) 1594 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:40 vm00 bash[28005]: cluster 2026-03-10T07:29:39.630124+0000 mon.a (mon.0) 1594 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:40 vm00 bash[28005]: audit 2026-03-10T07:29:39.634272+0000 mon.a (mon.0) 1595 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]': finished 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:40 vm00 bash[28005]: audit 2026-03-10T07:29:39.634272+0000 mon.a (mon.0) 1595 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]': finished 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:40 vm00 bash[28005]: audit 2026-03-10T07:29:39.634316+0000 mon.a (mon.0) 1596 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-15", "mode": "writeback"}]': finished 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:40 vm00 bash[28005]: audit 2026-03-10T07:29:39.634316+0000 mon.a (mon.0) 1596 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-15", "mode": "writeback"}]': finished 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:40 vm00 bash[28005]: cluster 2026-03-10T07:29:39.665867+0000 mon.a (mon.0) 1597 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:40 vm00 bash[28005]: cluster 2026-03-10T07:29:39.665867+0000 mon.a (mon.0) 1597 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:40 vm00 bash[28005]: audit 2026-03-10T07:29:39.666046+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.100:0/455670512' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-59776-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:40 vm00 bash[28005]: audit 2026-03-10T07:29:39.666046+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.100:0/455670512' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-59776-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:40 vm00 bash[28005]: audit 2026-03-10T07:29:39.670159+0000 mon.a (mon.0) 1598 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-59776-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:40 vm00 bash[28005]: audit 2026-03-10T07:29:39.670159+0000 mon.a (mon.0) 1598 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-59776-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:40 vm00 bash[28005]: audit 2026-03-10T07:29:39.670336+0000 mon.a (mon.0) 1599 : audit [INF] from='client.? 192.168.123.100:0/3140200981' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59637-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:40 vm00 bash[28005]: audit 2026-03-10T07:29:39.670336+0000 mon.a (mon.0) 1599 : audit [INF] from='client.? 
192.168.123.100:0/3140200981' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59637-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:40 vm00 bash[20701]: cluster 2026-03-10T07:29:38.598101+0000 mgr.y (mgr.24407) 193 : cluster [DBG] pgmap v201: 356 pgs: 64 unknown, 32 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 458 KiB data, 674 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 op/s 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:40 vm00 bash[20701]: cluster 2026-03-10T07:29:38.598101+0000 mgr.y (mgr.24407) 193 : cluster [DBG] pgmap v201: 356 pgs: 64 unknown, 32 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 458 KiB data, 674 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 op/s 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:40 vm00 bash[20701]: audit 2026-03-10T07:29:39.219797+0000 mon.a (mon.0) 1593 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:40 vm00 bash[20701]: audit 2026-03-10T07:29:39.219797+0000 mon.a (mon.0) 1593 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:40 vm00 bash[20701]: audit 2026-03-10T07:29:39.223315+0000 mon.c (mon.2) 184 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:40 vm00 bash[20701]: audit 2026-03-10T07:29:39.223315+0000 mon.c (mon.2) 184 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:40 vm00 bash[20701]: cluster 2026-03-10T07:29:39.630124+0000 mon.a (mon.0) 1594 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:40 vm00 bash[20701]: cluster 2026-03-10T07:29:39.630124+0000 mon.a (mon.0) 1594 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:40 vm00 bash[20701]: audit 2026-03-10T07:29:39.634272+0000 mon.a (mon.0) 1595 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]': finished 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:40 vm00 bash[20701]: audit 2026-03-10T07:29:39.634272+0000 mon.a (mon.0) 1595 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]': finished 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:40 vm00 bash[20701]: audit 2026-03-10T07:29:39.634316+0000 mon.a (mon.0) 1596 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-15", "mode": "writeback"}]': finished 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:40 vm00 bash[20701]: audit 2026-03-10T07:29:39.634316+0000 mon.a (mon.0) 1596 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-15", "mode": "writeback"}]': finished 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:40 vm00 bash[20701]: cluster 2026-03-10T07:29:39.665867+0000 mon.a (mon.0) 1597 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:40 vm00 bash[20701]: cluster 2026-03-10T07:29:39.665867+0000 mon.a (mon.0) 1597 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:40 vm00 bash[20701]: audit 2026-03-10T07:29:39.666046+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.100:0/455670512' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-59776-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:40 vm00 bash[20701]: audit 2026-03-10T07:29:39.666046+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.100:0/455670512' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-59776-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:40.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:40 vm00 bash[20701]: audit 2026-03-10T07:29:39.670159+0000 mon.a (mon.0) 1598 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-59776-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:40.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:40 vm00 bash[20701]: audit 2026-03-10T07:29:39.670159+0000 mon.a (mon.0) 1598 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-59776-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:40.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:40 vm00 bash[20701]: audit 2026-03-10T07:29:39.670336+0000 mon.a (mon.0) 1599 : audit [INF] from='client.? 192.168.123.100:0/3140200981' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59637-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:40.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:40 vm00 bash[20701]: audit 2026-03-10T07:29:39.670336+0000 mon.a (mon.0) 1599 : audit [INF] from='client.? 
192.168.123.100:0/3140200981' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59637-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:40.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:40 vm03 bash[23382]: cluster 2026-03-10T07:29:38.598101+0000 mgr.y (mgr.24407) 193 : cluster [DBG] pgmap v201: 356 pgs: 64 unknown, 32 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 458 KiB data, 674 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 op/s 2026-03-10T07:29:40.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:40 vm03 bash[23382]: cluster 2026-03-10T07:29:38.598101+0000 mgr.y (mgr.24407) 193 : cluster [DBG] pgmap v201: 356 pgs: 64 unknown, 32 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 458 KiB data, 674 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 op/s 2026-03-10T07:29:40.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:40 vm03 bash[23382]: audit 2026-03-10T07:29:39.219797+0000 mon.a (mon.0) 1593 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:29:40.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:40 vm03 bash[23382]: audit 2026-03-10T07:29:39.219797+0000 mon.a (mon.0) 1593 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:29:40.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:40 vm03 bash[23382]: audit 2026-03-10T07:29:39.223315+0000 mon.c (mon.2) 184 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:29:40.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:40 vm03 bash[23382]: audit 2026-03-10T07:29:39.223315+0000 mon.c (mon.2) 184 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:29:40.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:40 vm03 bash[23382]: cluster 2026-03-10T07:29:39.630124+0000 mon.a (mon.0) 1594 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:29:40.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:40 vm03 bash[23382]: cluster 2026-03-10T07:29:39.630124+0000 mon.a (mon.0) 1594 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:29:40.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:40 vm03 bash[23382]: audit 2026-03-10T07:29:39.634272+0000 mon.a (mon.0) 1595 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]': finished 2026-03-10T07:29:40.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:40 vm03 bash[23382]: audit 2026-03-10T07:29:39.634272+0000 mon.a (mon.0) 1595 : audit [INF] from='client.? 192.168.123.100:0/2790551399' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm00-59956-21"}]': finished 2026-03-10T07:29:40.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:40 vm03 bash[23382]: audit 2026-03-10T07:29:39.634316+0000 mon.a (mon.0) 1596 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-15", "mode": "writeback"}]': finished 2026-03-10T07:29:40.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:40 vm03 bash[23382]: audit 2026-03-10T07:29:39.634316+0000 mon.a (mon.0) 1596 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-15", "mode": "writeback"}]': finished 2026-03-10T07:29:40.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:40 vm03 bash[23382]: cluster 2026-03-10T07:29:39.665867+0000 mon.a (mon.0) 1597 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-10T07:29:40.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:40 vm03 bash[23382]: cluster 2026-03-10T07:29:39.665867+0000 mon.a (mon.0) 1597 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-10T07:29:40.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:40 vm03 bash[23382]: audit 2026-03-10T07:29:39.666046+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.100:0/455670512' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-59776-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:40.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:40 vm03 bash[23382]: audit 2026-03-10T07:29:39.666046+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.100:0/455670512' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-59776-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:40.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:40 vm03 bash[23382]: audit 2026-03-10T07:29:39.670159+0000 mon.a (mon.0) 1598 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-59776-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:40.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:40 vm03 bash[23382]: audit 2026-03-10T07:29:39.670159+0000 mon.a (mon.0) 1598 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-59776-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:40.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:40 vm03 bash[23382]: audit 2026-03-10T07:29:39.670336+0000 mon.a (mon.0) 1599 : audit [INF] from='client.? 192.168.123.100:0/3140200981' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59637-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:40.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:40 vm03 bash[23382]: audit 2026-03-10T07:29:39.670336+0000 mon.a (mon.0) 1599 : audit [INF] from='client.? 
192.168.123.100:0/3140200981' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59637-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:41.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:29:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:29:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:29:42.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:41 vm03 bash[23382]: cluster 2026-03-10T07:29:40.598857+0000 mgr.y (mgr.24407) 194 : cluster [DBG] pgmap v204: 356 pgs: 32 creating+peering, 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 679 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:42.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:41 vm03 bash[23382]: cluster 2026-03-10T07:29:40.598857+0000 mgr.y (mgr.24407) 194 : cluster [DBG] pgmap v204: 356 pgs: 32 creating+peering, 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 679 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:42.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:41 vm03 bash[23382]: audit 2026-03-10T07:29:40.651996+0000 mon.a (mon.0) 1600 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59629-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59629-29"}]': finished 2026-03-10T07:29:42.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:41 vm03 bash[23382]: audit 2026-03-10T07:29:40.651996+0000 mon.a (mon.0) 1600 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59629-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59629-29"}]': finished 2026-03-10T07:29:42.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:41 vm03 bash[23382]: audit 2026-03-10T07:29:40.652213+0000 mon.a (mon.0) 1601 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-59776-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:42.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:41 vm03 bash[23382]: audit 2026-03-10T07:29:40.652213+0000 mon.a (mon.0) 1601 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-59776-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:42.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:41 vm03 bash[23382]: audit 2026-03-10T07:29:40.652553+0000 mon.a (mon.0) 1602 : audit [INF] from='client.? 192.168.123.100:0/3140200981' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59637-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:42.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:41 vm03 bash[23382]: audit 2026-03-10T07:29:40.652553+0000 mon.a (mon.0) 1602 : audit [INF] from='client.? 
192.168.123.100:0/3140200981' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59637-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:42.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:41 vm03 bash[23382]: cluster 2026-03-10T07:29:40.667600+0000 mon.a (mon.0) 1603 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-10T07:29:42.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:41 vm03 bash[23382]: cluster 2026-03-10T07:29:40.667600+0000 mon.a (mon.0) 1603 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-10T07:29:42.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:41 vm03 bash[23382]: audit 2026-03-10T07:29:40.755206+0000 mon.b (mon.1) 169 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:29:42.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:41 vm03 bash[23382]: audit 2026-03-10T07:29:40.755206+0000 mon.b (mon.1) 169 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:29:42.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:41 vm03 bash[23382]: audit 2026-03-10T07:29:40.757223+0000 mon.a (mon.0) 1604 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:29:42.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:41 vm03 bash[23382]: audit 2026-03-10T07:29:40.757223+0000 mon.a (mon.0) 1604 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:29:42.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:41 vm03 bash[23382]: cluster 2026-03-10T07:29:40.926732+0000 mon.a (mon.0) 1605 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:42.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:41 vm03 bash[23382]: cluster 2026-03-10T07:29:40.926732+0000 mon.a (mon.0) 1605 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:42.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:41 vm00 bash[28005]: cluster 2026-03-10T07:29:40.598857+0000 mgr.y (mgr.24407) 194 : cluster [DBG] pgmap v204: 356 pgs: 32 creating+peering, 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 679 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:42.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:41 vm00 bash[28005]: cluster 2026-03-10T07:29:40.598857+0000 mgr.y (mgr.24407) 194 : cluster [DBG] pgmap v204: 356 pgs: 32 creating+peering, 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 679 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:29:42.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:41 vm00 bash[28005]: audit 2026-03-10T07:29:40.651996+0000 mon.a (mon.0) 1600 : audit [INF] from='client.? 
192.168.123.100:0/2597238638' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59629-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59629-29"}]': finished
2026-03-10T07:29:42.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:41 vm00 bash[28005]: audit 2026-03-10T07:29:40.652213+0000 mon.a (mon.0) 1601 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-59776-33","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:42.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:41 vm00 bash[28005]: audit 2026-03-10T07:29:40.652553+0000 mon.a (mon.0) 1602 : audit [INF] from='client.? 192.168.123.100:0/3140200981' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59637-27","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:42.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:41 vm00 bash[28005]: cluster 2026-03-10T07:29:40.667600+0000 mon.a (mon.0) 1603 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in
2026-03-10T07:29:42.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:41 vm00 bash[28005]: audit 2026-03-10T07:29:40.755206+0000 mon.b (mon.1) 169 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:29:42.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:41 vm00 bash[28005]: audit 2026-03-10T07:29:40.757223+0000 mon.a (mon.0) 1604 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:29:42.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:41 vm00 bash[28005]: cluster 2026-03-10T07:29:40.926732+0000 mon.a (mon.0) 1605 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:29:42.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:41 vm00 bash[20701]: cluster 2026-03-10T07:29:40.598857+0000 mgr.y (mgr.24407) 194 : cluster [DBG] pgmap v204: 356 pgs: 32 creating+peering, 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 679 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:29:42.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:41 vm00 bash[20701]: audit 2026-03-10T07:29:40.651996+0000 mon.a (mon.0) 1600 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm00-59629-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm00-59629-29"}]': finished
2026-03-10T07:29:42.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:41 vm00 bash[20701]: audit 2026-03-10T07:29:40.652213+0000 mon.a (mon.0) 1601 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm00-59776-33","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:42.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:41 vm00 bash[20701]: audit 2026-03-10T07:29:40.652553+0000 mon.a (mon.0) 1602 : audit [INF] from='client.? 192.168.123.100:0/3140200981' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm00-59637-27","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:42.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:41 vm00 bash[20701]: cluster 2026-03-10T07:29:40.667600+0000 mon.a (mon.0) 1603 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in
2026-03-10T07:29:42.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:41 vm00 bash[20701]: audit 2026-03-10T07:29:40.755206+0000 mon.b (mon.1) 169 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:29:42.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:41 vm00 bash[20701]: audit 2026-03-10T07:29:40.757223+0000 mon.a (mon.0) 1604 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:29:42.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:41 vm00 bash[20701]: cluster 2026-03-10T07:29:40.926732+0000 mon.a (mon.0) 1605 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:29:43.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:42 vm03 bash[23382]: audit 2026-03-10T07:29:41.680259+0000 mon.a (mon.0) 1606 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:29:43.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:42 vm03 bash[23382]: audit 2026-03-10T07:29:41.690538+0000 mon.b (mon.1) 170 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-15"}]: dispatch
2026-03-10T07:29:43.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:42 vm03 bash[23382]: cluster 2026-03-10T07:29:41.691054+0000 mon.a (mon.0) 1607 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in
2026-03-10T07:29:43.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:42 vm03 bash[23382]: audit 2026-03-10T07:29:41.708875+0000 mon.a (mon.0) 1608 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-15"}]: dispatch
2026-03-10T07:29:43.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:42 vm03 bash[23382]: audit 2026-03-10T07:29:41.738285+0000 mon.b (mon.1) 171 : audit [INF] from='client.? 192.168.123.100:0/1021763534' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:43.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:42 vm03 bash[23382]: audit 2026-03-10T07:29:41.768402+0000 mon.b (mon.1) 172 : audit [INF] from='client.? 192.168.123.100:0/1021763534' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:43.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:42 vm03 bash[23382]: audit 2026-03-10T07:29:41.768556+0000 mon.a (mon.0) 1609 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:43.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:42 vm03 bash[23382]: audit 2026-03-10T07:29:41.770905+0000 mon.a (mon.0) 1610 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:43.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:42 vm03 bash[23382]: audit 2026-03-10T07:29:41.774493+0000 mon.b (mon.1) 173 : audit [INF] from='client.? 192.168.123.100:0/1021763534' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:43.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:42 vm03 bash[23382]: audit 2026-03-10T07:29:41.779448+0000 mon.a (mon.0) 1611 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:43.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:42 vm03 bash[23382]: cluster 2026-03-10T07:29:42.680699+0000 mon.a (mon.0) 1612 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:29:43.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:42 vm03 bash[23382]: audit 2026-03-10T07:29:42.691748+0000 mon.a (mon.0) 1613 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-15"}]': finished
2026-03-10T07:29:43.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:42 vm03 bash[23382]: audit 2026-03-10T07:29:42.692078+0000 mon.a (mon.0) 1614 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:29:43.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:42 vm03 bash[23382]: audit 2026-03-10T07:29:42.698131+0000 mon.b (mon.1) 174 : audit [INF] from='client.? 192.168.123.100:0/1021763534' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-59776-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:43.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:42 vm03 bash[23382]: cluster 2026-03-10T07:29:42.698711+0000 mon.a (mon.0) 1615 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in
2026-03-10T07:29:43.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:42 vm03 bash[23382]: audit 2026-03-10T07:29:42.700745+0000 mon.a (mon.0) 1616 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-59776-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:43.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:42 vm03 bash[23382]: audit 2026-03-10T07:29:42.701108+0000 mon.a (mon.0) 1617 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59629-29"}]: dispatch
2026-03-10T07:29:43.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:42 vm03 bash[23382]: audit 2026-03-10T07:29:42.701148+0000 mon.a (mon.0) 1618 : audit [INF] from='client.? 192.168.123.100:0/251251247' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59637-28","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:43.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:42 vm00 bash[28005]: audit 2026-03-10T07:29:41.680259+0000 mon.a (mon.0) 1606 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:29:43.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:42 vm00 bash[28005]: audit 2026-03-10T07:29:41.690538+0000 mon.b (mon.1) 170 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-15"}]: dispatch
2026-03-10T07:29:43.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:42 vm00 bash[28005]: cluster 2026-03-10T07:29:41.691054+0000 mon.a (mon.0) 1607 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in
2026-03-10T07:29:43.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:42 vm00 bash[28005]: audit 2026-03-10T07:29:41.708875+0000 mon.a (mon.0) 1608 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-15"}]: dispatch
2026-03-10T07:29:43.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:42 vm00 bash[28005]: audit 2026-03-10T07:29:41.738285+0000 mon.b (mon.1) 171 : audit [INF] from='client.? 192.168.123.100:0/1021763534' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:42 vm00 bash[28005]: audit 2026-03-10T07:29:41.768402+0000 mon.b (mon.1) 172 : audit [INF] from='client.? 192.168.123.100:0/1021763534' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:42 vm00 bash[28005]: audit 2026-03-10T07:29:41.768556+0000 mon.a (mon.0) 1609 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:42 vm00 bash[28005]: audit 2026-03-10T07:29:41.770905+0000 mon.a (mon.0) 1610 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:42 vm00 bash[28005]: audit 2026-03-10T07:29:41.774493+0000 mon.b (mon.1) 173 : audit [INF] from='client.? 192.168.123.100:0/1021763534' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:42 vm00 bash[28005]: audit 2026-03-10T07:29:41.779448+0000 mon.a (mon.0) 1611 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:42 vm00 bash[28005]: cluster 2026-03-10T07:29:42.680699+0000 mon.a (mon.0) 1612 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:42 vm00 bash[28005]: audit 2026-03-10T07:29:42.691748+0000 mon.a (mon.0) 1613 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-15"}]': finished
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:42 vm00 bash[28005]: audit 2026-03-10T07:29:42.692078+0000 mon.a (mon.0) 1614 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:42 vm00 bash[28005]: audit 2026-03-10T07:29:42.698131+0000 mon.b (mon.1) 174 : audit [INF] from='client.? 192.168.123.100:0/1021763534' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-59776-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:42 vm00 bash[28005]: cluster 2026-03-10T07:29:42.698711+0000 mon.a (mon.0) 1615 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:42 vm00 bash[28005]: audit 2026-03-10T07:29:42.700745+0000 mon.a (mon.0) 1616 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-59776-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:42 vm00 bash[28005]: audit 2026-03-10T07:29:42.701108+0000 mon.a (mon.0) 1617 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59629-29"}]: dispatch
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:42 vm00 bash[28005]: audit 2026-03-10T07:29:42.701148+0000 mon.a (mon.0) 1618 : audit [INF] from='client.? 192.168.123.100:0/251251247' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59637-28","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:42 vm00 bash[20701]: audit 2026-03-10T07:29:41.680259+0000 mon.a (mon.0) 1606 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:42 vm00 bash[20701]: audit 2026-03-10T07:29:41.690538+0000 mon.b (mon.1) 170 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-15"}]: dispatch
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:42 vm00 bash[20701]: cluster 2026-03-10T07:29:41.691054+0000 mon.a (mon.0) 1607 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:42 vm00 bash[20701]: audit 2026-03-10T07:29:41.708875+0000 mon.a (mon.0) 1608 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-15"}]: dispatch
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:42 vm00 bash[20701]: audit 2026-03-10T07:29:41.738285+0000 mon.b (mon.1) 171 : audit [INF] from='client.? 192.168.123.100:0/1021763534' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:42 vm00 bash[20701]: audit 2026-03-10T07:29:41.768402+0000 mon.b (mon.1) 172 : audit [INF] from='client.? 192.168.123.100:0/1021763534' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:42 vm00 bash[20701]: audit 2026-03-10T07:29:41.768556+0000 mon.a (mon.0) 1609 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:42 vm00 bash[20701]: audit 2026-03-10T07:29:41.770905+0000 mon.a (mon.0) 1610 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:42 vm00 bash[20701]: audit 2026-03-10T07:29:41.774493+0000 mon.b (mon.1) 173 : audit [INF] from='client.? 192.168.123.100:0/1021763534' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:43.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:42 vm00 bash[20701]: audit 2026-03-10T07:29:41.779448+0000 mon.a (mon.0) 1611 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:43.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:42 vm00 bash[20701]: cluster 2026-03-10T07:29:42.680699+0000 mon.a (mon.0) 1612 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:29:43.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:42 vm00 bash[20701]: audit 2026-03-10T07:29:42.691748+0000 mon.a (mon.0) 1613 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-15"}]': finished
2026-03-10T07:29:43.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:42 vm00 bash[20701]: audit 2026-03-10T07:29:42.692078+0000 mon.a (mon.0) 1614 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:29:43.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:42 vm00 bash[20701]: audit 2026-03-10T07:29:42.698131+0000 mon.b (mon.1) 174 : audit [INF] from='client.? 192.168.123.100:0/1021763534' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-59776-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:43.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:42 vm00 bash[20701]: cluster 2026-03-10T07:29:42.698711+0000 mon.a (mon.0) 1615 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in
2026-03-10T07:29:43.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:42 vm00 bash[20701]: audit 2026-03-10T07:29:42.700745+0000 mon.a (mon.0) 1616 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-59776-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:43.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:42 vm00 bash[20701]: audit 2026-03-10T07:29:42.701108+0000 mon.a (mon.0) 1617 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59629-29"}]: dispatch
2026-03-10T07:29:43.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:42 vm00 bash[20701]: audit 2026-03-10T07:29:42.701148+0000 mon.a (mon.0) 1618 : audit [INF] from='client.? 192.168.123.100:0/251251247' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59637-28","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:43.514 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:29:43 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:29:44.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:43 vm00 bash[28005]: cluster 2026-03-10T07:29:42.599336+0000 mgr.y (mgr.24407) 195 : cluster [DBG] pgmap v207: 300 pgs: 8 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T07:29:44.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:43 vm00 bash[28005]: audit 2026-03-10T07:29:43.028432+0000 mgr.y (mgr.24407) 196 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:29:44.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:43 vm00 bash[28005]: audit 2026-03-10T07:29:43.696462+0000 mon.a (mon.0) 1619 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59629-29"}]': finished
2026-03-10T07:29:44.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:43 vm00 bash[28005]: audit 2026-03-10T07:29:43.696661+0000 mon.a (mon.0) 1620 : audit [INF] from='client.? 192.168.123.100:0/251251247' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59637-28","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:44.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:43 vm00 bash[28005]: cluster 2026-03-10T07:29:43.746097+0000 mon.a (mon.0) 1621 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in
2026-03-10T07:29:44.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:43 vm00 bash[28005]: audit 2026-03-10T07:29:43.750012+0000 mon.a (mon.0) 1622 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59629-29"}]: dispatch
2026-03-10T07:29:44.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:43 vm00 bash[20701]: cluster 2026-03-10T07:29:42.599336+0000 mgr.y (mgr.24407) 195 : cluster [DBG] pgmap v207: 300 pgs: 8 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T07:29:44.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:43 vm00 bash[20701]: audit 2026-03-10T07:29:43.028432+0000 mgr.y (mgr.24407) 196 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:29:44.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:43 vm00 bash[20701]: audit 2026-03-10T07:29:43.696462+0000 mon.a (mon.0) 1619 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59629-29"}]': finished
2026-03-10T07:29:44.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:43 vm00 bash[20701]: audit 2026-03-10T07:29:43.696661+0000 mon.a (mon.0) 1620 : audit [INF] from='client.? 192.168.123.100:0/251251247' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59637-28","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:44.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:43 vm00 bash[20701]: cluster 2026-03-10T07:29:43.746097+0000 mon.a (mon.0) 1621 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in
2026-03-10T07:29:44.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:43 vm00 bash[20701]: audit 2026-03-10T07:29:43.750012+0000 mon.a (mon.0) 1622 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59629-29"}]: dispatch
2026-03-10T07:29:44.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:43 vm03 bash[23382]: cluster 2026-03-10T07:29:42.599336+0000 mgr.y (mgr.24407) 195 : cluster [DBG] pgmap v207: 300 pgs: 8 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T07:29:44.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:43 vm03 bash[23382]: audit 2026-03-10T07:29:43.028432+0000 mgr.y (mgr.24407) 196 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:29:44.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:43 vm03 bash[23382]: audit 2026-03-10T07:29:43.696462+0000 mon.a (mon.0) 1619 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm00-59629-29"}]': finished
2026-03-10T07:29:44.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:43 vm03 bash[23382]: audit 2026-03-10T07:29:43.696661+0000 mon.a (mon.0) 1620 : audit [INF] from='client.? 192.168.123.100:0/251251247' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm00-59637-28","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:44.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:43 vm03 bash[23382]: cluster 2026-03-10T07:29:43.746097+0000 mon.a (mon.0) 1621 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in
2026-03-10T07:29:44.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:43 vm03 bash[23382]: audit 2026-03-10T07:29:43.750012+0000 mon.a (mon.0) 1622 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59629-29"}]: dispatch
2026-03-10T07:29:46.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:45 vm03 bash[23382]: cluster 2026-03-10T07:29:44.599676+0000 mgr.y (mgr.24407) 197 : cluster [DBG] pgmap v210: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 679 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:29:46.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:45 vm03 bash[23382]: audit 2026-03-10T07:29:44.700077+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-59776-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-59776-36"}]': finished
2026-03-10T07:29:46.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:45 vm03 bash[23382]: audit 2026-03-10T07:29:44.700137+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59629-29"}]': finished
2026-03-10T07:29:46.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:45 vm03 bash[23382]: cluster 2026-03-10T07:29:44.704543+0000 mon.a (mon.0) 1625 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in
2026-03-10T07:29:46.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:45 vm03 bash[23382]: audit 2026-03-10T07:29:44.709667+0000 mon.b (mon.1) 175 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:46.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:45 vm03 bash[23382]: audit 2026-03-10T07:29:44.731589+0000 mon.b (mon.1) 176 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59629-30"}]: dispatch
2026-03-10T07:29:46.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:45 vm03 bash[23382]: audit 2026-03-10T07:29:44.734860+0000 mon.a (mon.0) 1626 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:46.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:45 vm03 bash[23382]: audit 2026-03-10T07:29:44.742242+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59629-30"}]: dispatch
2026-03-10T07:29:46.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:45 vm03 bash[23382]: audit 2026-03-10T07:29:44.743930+0000 mon.b (mon.1) 177 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59629-30"}]: dispatch
2026-03-10T07:29:46.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:45 vm03 bash[23382]: audit 2026-03-10T07:29:44.746363+0000 mon.b (mon.1) 178 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59629-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:46.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:45 vm03 bash[23382]: audit 2026-03-10T07:29:44.747195+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59629-30"}]: dispatch
2026-03-10T07:29:46.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:45 vm03 bash[23382]: audit 2026-03-10T07:29:44.752631+0000 mon.a (mon.0) 1629 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59629-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:46.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:45 vm00 bash[28005]: cluster 2026-03-10T07:29:44.599676+0000 mgr.y (mgr.24407) 197 : cluster [DBG] pgmap v210: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 679 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:29:46.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:45 vm00 bash[28005]: audit 2026-03-10T07:29:44.700077+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-59776-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-59776-36"}]': finished
2026-03-10T07:29:46.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:45 vm00 bash[28005]: audit 2026-03-10T07:29:44.700137+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59629-29"}]': finished
2026-03-10T07:29:46.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:45 vm00 bash[28005]: cluster 2026-03-10T07:29:44.704543+0000 mon.a (mon.0) 1625 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in
2026-03-10T07:29:46.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:45 vm00 bash[28005]: audit 2026-03-10T07:29:44.709667+0000 mon.b (mon.1) 175 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:46.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:45 vm00 bash[28005]: audit 2026-03-10T07:29:44.731589+0000 mon.b (mon.1) 176 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59629-30"}]: dispatch
2026-03-10T07:29:46.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:45 vm00 bash[28005]: audit 2026-03-10T07:29:44.734860+0000 mon.a (mon.0) 1626 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:46.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:45 vm00 bash[28005]: audit 2026-03-10T07:29:44.742242+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59629-30"}]: dispatch
2026-03-10T07:29:46.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:45 vm00 bash[28005]: audit 2026-03-10T07:29:44.743930+0000 mon.b (mon.1) 177 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59629-30"}]: dispatch
2026-03-10T07:29:46.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:45 vm00 bash[28005]: audit 2026-03-10T07:29:44.746363+0000 mon.b (mon.1) 178 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59629-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:46.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:45 vm00 bash[28005]: audit 2026-03-10T07:29:44.747195+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59629-30"}]: dispatch
2026-03-10T07:29:46.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:45 vm00 bash[28005]: audit 2026-03-10T07:29:44.752631+0000 mon.a (mon.0) 1629 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59629-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: cluster 2026-03-10T07:29:44.599676+0000 mgr.y (mgr.24407) 197 : cluster [DBG] pgmap v210: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 679 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: audit 2026-03-10T07:29:44.700077+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-59776-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-59776-36"}]': finished 2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: audit 2026-03-10T07:29:44.700077+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm00-59776-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm00-59776-36"}]': finished 2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: audit 2026-03-10T07:29:44.700137+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59629-29"}]': finished 2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: audit 2026-03-10T07:29:44.700137+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? 192.168.123.100:0/2597238638' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm00-59629-29"}]': finished 2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: cluster 2026-03-10T07:29:44.704543+0000 mon.a (mon.0) 1625 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: cluster 2026-03-10T07:29:44.704543+0000 mon.a (mon.0) 1625 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: audit 2026-03-10T07:29:44.709667+0000 mon.b (mon.1) 175 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: audit 2026-03-10T07:29:44.709667+0000 mon.b (mon.1) 175 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: audit 2026-03-10T07:29:44.731589+0000 mon.b (mon.1) 176 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59629-30"}]: dispatch 2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: audit 2026-03-10T07:29:44.731589+0000 mon.b (mon.1) 176 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59629-30"}]: dispatch 2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: audit 2026-03-10T07:29:44.734860+0000 mon.a (mon.0) 1626 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: audit 2026-03-10T07:29:44.734860+0000 mon.a (mon.0) 1626 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: audit 2026-03-10T07:29:44.742242+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59629-30"}]: dispatch 2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: audit 2026-03-10T07:29:44.742242+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59629-30"}]: dispatch 2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: audit 2026-03-10T07:29:44.743930+0000 mon.b (mon.1) 177 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59629-30"}]: dispatch 2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: audit 2026-03-10T07:29:44.743930+0000 mon.b (mon.1) 177 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59629-30"}]: dispatch 2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: audit 2026-03-10T07:29:44.746363+0000 mon.b (mon.1) 178 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59629-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: audit 2026-03-10T07:29:44.746363+0000 mon.b (mon.1) 178 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59629-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: audit 2026-03-10T07:29:44.747195+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59629-30"}]: dispatch 2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: audit 2026-03-10T07:29:44.747195+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59629-30"}]: dispatch 2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: audit 2026-03-10T07:29:44.752631+0000 mon.a (mon.0) 1629 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59629-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:45 vm00 bash[20701]: audit 2026-03-10T07:29:44.752631+0000 mon.a (mon.0) 1629 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59629-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:47.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: audit 2026-03-10T07:29:45.705602+0000 mon.a (mon.0) 1630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:47.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: audit 2026-03-10T07:29:45.705602+0000 mon.a (mon.0) 1630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:47.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: audit 2026-03-10T07:29:45.705759+0000 mon.a (mon.0) 1631 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59629-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:47.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: audit 2026-03-10T07:29:45.705759+0000 mon.a (mon.0) 1631 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59629-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:47.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: cluster 2026-03-10T07:29:45.727646+0000 mon.a (mon.0) 1632 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-10T07:29:47.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: cluster 2026-03-10T07:29:45.727646+0000 mon.a (mon.0) 1632 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-10T07:29:47.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: audit 2026-03-10T07:29:45.739203+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59629-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59629-30"}]: dispatch 2026-03-10T07:29:47.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: audit 2026-03-10T07:29:45.739203+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59629-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59629-30"}]: dispatch 2026-03-10T07:29:47.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: audit 2026-03-10T07:29:45.744653+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59629-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59629-30"}]: dispatch 2026-03-10T07:29:47.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: audit 2026-03-10T07:29:45.744653+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59629-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59629-30"}]: dispatch 2026-03-10T07:29:47.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: audit 2026-03-10T07:29:45.746731+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.100:0/2777303953' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59637-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:47.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: audit 2026-03-10T07:29:45.746731+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.100:0/2777303953' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59637-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:47.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: audit 2026-03-10T07:29:45.770299+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:47.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: audit 2026-03-10T07:29:45.770299+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:47.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: audit 2026-03-10T07:29:45.780924+0000 mon.a (mon.0) 1634 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59637-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:47.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: audit 2026-03-10T07:29:45.780924+0000 mon.a (mon.0) 1634 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59637-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:47.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: audit 2026-03-10T07:29:45.783914+0000 mon.a (mon.0) 1635 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: audit 2026-03-10T07:29:45.783914+0000 mon.a (mon.0) 1635 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: audit 2026-03-10T07:29:46.709841+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59637-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: audit 2026-03-10T07:29:46.709841+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59637-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: audit 2026-03-10T07:29:46.709941+0000 mon.a (mon.0) 1637 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:46 vm00 bash[28005]: audit 2026-03-10T07:29:46.709941+0000 mon.a (mon.0) 1637 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: audit 2026-03-10T07:29:45.705602+0000 mon.a (mon.0) 1630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: audit 2026-03-10T07:29:45.705602+0000 mon.a (mon.0) 1630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: audit 2026-03-10T07:29:45.705759+0000 mon.a (mon.0) 1631 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59629-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: audit 2026-03-10T07:29:45.705759+0000 mon.a (mon.0) 1631 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59629-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: cluster 2026-03-10T07:29:45.727646+0000 mon.a (mon.0) 1632 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: cluster 2026-03-10T07:29:45.727646+0000 mon.a (mon.0) 1632 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: audit 2026-03-10T07:29:45.739203+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59629-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59629-30"}]: dispatch 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: audit 2026-03-10T07:29:45.739203+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59629-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59629-30"}]: dispatch 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: audit 2026-03-10T07:29:45.744653+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59629-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59629-30"}]: dispatch 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: audit 2026-03-10T07:29:45.744653+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59629-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59629-30"}]: dispatch 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: audit 2026-03-10T07:29:45.746731+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.100:0/2777303953' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59637-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: audit 2026-03-10T07:29:45.746731+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.100:0/2777303953' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59637-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: audit 2026-03-10T07:29:45.770299+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: audit 2026-03-10T07:29:45.770299+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: audit 2026-03-10T07:29:45.780924+0000 mon.a (mon.0) 1634 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59637-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: audit 2026-03-10T07:29:45.780924+0000 mon.a (mon.0) 1634 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59637-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: audit 2026-03-10T07:29:45.783914+0000 mon.a (mon.0) 1635 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: audit 2026-03-10T07:29:45.783914+0000 mon.a (mon.0) 1635 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: audit 2026-03-10T07:29:46.709841+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59637-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: audit 2026-03-10T07:29:46.709841+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59637-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: audit 2026-03-10T07:29:46.709941+0000 mon.a (mon.0) 1637 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:29:47.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:46 vm00 bash[20701]: audit 2026-03-10T07:29:46.709941+0000 mon.a (mon.0) 1637 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:29:47.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: audit 2026-03-10T07:29:45.705602+0000 mon.a (mon.0) 1630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:47.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: audit 2026-03-10T07:29:45.705602+0000 mon.a (mon.0) 1630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:47.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: audit 2026-03-10T07:29:45.705759+0000 mon.a (mon.0) 1631 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59629-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:47.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: audit 2026-03-10T07:29:45.705759+0000 mon.a (mon.0) 1631 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm00-59629-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:47.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: cluster 2026-03-10T07:29:45.727646+0000 mon.a (mon.0) 1632 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-10T07:29:47.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: cluster 2026-03-10T07:29:45.727646+0000 mon.a (mon.0) 1632 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-10T07:29:47.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: audit 2026-03-10T07:29:45.739203+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59629-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59629-30"}]: dispatch 2026-03-10T07:29:47.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: audit 2026-03-10T07:29:45.739203+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59629-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59629-30"}]: dispatch 2026-03-10T07:29:47.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: audit 2026-03-10T07:29:45.744653+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59629-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59629-30"}]: dispatch 2026-03-10T07:29:47.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: audit 2026-03-10T07:29:45.744653+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59629-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59629-30"}]: dispatch 2026-03-10T07:29:47.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: audit 2026-03-10T07:29:45.746731+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.100:0/2777303953' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59637-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:47.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: audit 2026-03-10T07:29:45.746731+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.100:0/2777303953' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59637-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:47.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: audit 2026-03-10T07:29:45.770299+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:47.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: audit 2026-03-10T07:29:45.770299+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:47.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: audit 2026-03-10T07:29:45.780924+0000 mon.a (mon.0) 1634 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59637-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:47.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: audit 2026-03-10T07:29:45.780924+0000 mon.a (mon.0) 1634 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59637-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:47.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: audit 2026-03-10T07:29:45.783914+0000 mon.a (mon.0) 1635 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:47.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: audit 2026-03-10T07:29:45.783914+0000 mon.a (mon.0) 1635 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:47.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: audit 2026-03-10T07:29:46.709841+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59637-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:47.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: audit 2026-03-10T07:29:46.709841+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm00-59637-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:47.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: audit 2026-03-10T07:29:46.709941+0000 mon.a (mon.0) 1637 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:29:47.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:46 vm03 bash[23382]: audit 2026-03-10T07:29:46.709941+0000 mon.a (mon.0) 1637 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: Running main() from gmock_main.cc 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [==========] Running 57 tests from 4 test suites. 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [----------] Global test environment set-up. 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [----------] 32 tests from LibRadosAio 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.TooBigPP 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.TooBigPP (3897 ms) 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.PoolQuotaPP 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.PoolQuotaPP (20602 ms) 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.SimpleWritePP 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.SimpleWritePP (5541 ms) 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.WaitForSafePP 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.WaitForSafePP (3163 ms) 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripPP 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripPP (3051 ms) 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripPP2 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripPP2 (2745 ms) 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripPP3 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripPP3 (3265 ms) 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripSparseReadPP 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: 
api_aio_pp: [ OK ] LibRadosAio.RoundTripSparseReadPP (3087 ms) 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.IsCompletePP 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.IsCompletePP (2699 ms) 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.IsSafePP 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.IsSafePP (3169 ms) 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.ReturnValuePP 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.ReturnValuePP (3846 ms) 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.FlushPP 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.FlushPP (3218 ms) 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.FlushAsyncPP 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.FlushAsyncPP (2975 ms) 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteFullPP 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteFullPP (3258 ms) 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteFullPP2 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteFullPP2 (3171 ms) 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteSamePP 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteSamePP (3124 ms) 2026-03-10T07:29:47.743 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteSamePP2 2026-03-10T07:29:47.744 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteSamePP2 (2160 ms) 2026-03-10T07:29:47.744 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.SimpleStatPPNS 2026-03-10T07:29:47.744 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.SimpleStatPPNS (3019 ms) 2026-03-10T07:29:47.744 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.SimpleStatPP 2026-03-10T07:29:47.744 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.SimpleStatPP (3039 ms) 2026-03-10T07:29:47.744 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.OperateMtime 2026-03-10T07:29:47.744 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.OperateMtime (3213 ms) 2026-03-10T07:29:47.744 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.OperateMtime2 2026-03-10T07:29:47.744 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.OperateMtime2 (2988 ms) 2026-03-10T07:29:47.744 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.StatRemovePP 2026-03-10T07:29:47.744 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.StatRemovePP (3265 ms) 2026-03-10T07:29:47.744 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.ExecuteClassPP 2026-03-10T07:29:47.744 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] 
LibRadosAio.ExecuteClassPP (3131 ms) 2026-03-10T07:29:47.744 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.OmapPP 2026-03-10T07:29:47.744 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.OmapPP (3043 ms) 2026-03-10T07:29:47.744 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.MultiWritePP 2026-03-10T07:29:47.744 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.MultiWritePP (3070 ms) 2026-03-10T07:29:47.744 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.AioUnlockPP 2026-03-10T07:29:47.744 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.AioUnlockPP (3021 ms) 2026-03-10T07:29:47.744 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripAppendPP 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: cluster 2026-03-10T07:29:46.600000+0000 mgr.y (mgr.24407) 198 : cluster [DBG] pgmap v213: 332 pgs: 72 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: cluster 2026-03-10T07:29:46.600000+0000 mgr.y (mgr.24407) 198 : cluster [DBG] pgmap v213: 332 pgs: 72 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: audit 2026-03-10T07:29:46.721012+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-17"}]: dispatch 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: audit 2026-03-10T07:29:46.721012+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-17"}]: dispatch 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: cluster 2026-03-10T07:29:46.755379+0000 mon.a (mon.0) 1638 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: cluster 2026-03-10T07:29:46.755379+0000 mon.a (mon.0) 1638 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: audit 2026-03-10T07:29:46.755721+0000 mon.b (mon.1) 183 : audit [INF] from='client.? 192.168.123.100:0/1021763534' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36"}]: dispatch 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: audit 2026-03-10T07:29:46.755721+0000 mon.b (mon.1) 183 : audit [INF] from='client.? 
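The tier add / set-overlay / cache-mode entries dispatched around mon.b 181, 182, and 185 are the standard three-step cache-tier setup that the test performs against its base and cache pools. A hedged sketch of the same sequence via the Python bindings follows; the pool names are taken from the surrounding audit entries, everything else (paths, client name) is assumed:

    import json
    import rados

    # Base and cache pool names as they appear in the audit entries here.
    BASE = 'test-rados-api-vm00-59782-6'
    CACHE = 'test-rados-api-vm00-59782-17'

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
    cluster.connect()
    for cmd in (
        # 'osd tier add' with --force-nonempty, as in the mon.b 181 entry.
        {'prefix': 'osd tier add', 'pool': BASE, 'tierpool': CACHE,
         'force_nonempty': '--force-nonempty'},
        # 'osd tier set-overlay', as in the mon.b 182 entry.
        {'prefix': 'osd tier set-overlay', 'pool': BASE, 'overlaypool': CACHE},
        # 'osd tier cache-mode ... writeback', as in the mon.b 185 entry.
        {'prefix': 'osd tier cache-mode', 'pool': CACHE, 'mode': 'writeback'},
    ):
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
        assert ret == 0, errs
    cluster.shutdown()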
192.168.123.100:0/1021763534' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36"}]: dispatch 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: cluster 2026-03-10T07:29:46.755823+0000 mon.a (mon.0) 1639 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: cluster 2026-03-10T07:29:46.755823+0000 mon.a (mon.0) 1639 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: audit 2026-03-10T07:29:46.763138+0000 mon.a (mon.0) 1640 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-17"}]: dispatch 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: audit 2026-03-10T07:29:46.763138+0000 mon.a (mon.0) 1640 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-17"}]: dispatch 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: audit 2026-03-10T07:29:46.763702+0000 mon.a (mon.0) 1641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36"}]: dispatch 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: audit 2026-03-10T07:29:46.763702+0000 mon.a (mon.0) 1641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36"}]: dispatch 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: audit 2026-03-10T07:29:47.713548+0000 mon.a (mon.0) 1642 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59629-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59629-30"}]': finished 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: audit 2026-03-10T07:29:47.713548+0000 mon.a (mon.0) 1642 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59629-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59629-30"}]': finished 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: audit 2026-03-10T07:29:47.713692+0000 mon.a (mon.0) 1643 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-17"}]': finished 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: audit 2026-03-10T07:29:47.713692+0000 mon.a (mon.0) 1643 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-17"}]': finished 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: audit 2026-03-10T07:29:47.713913+0000 mon.a (mon.0) 1644 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36"}]': finished 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: audit 2026-03-10T07:29:47.713913+0000 mon.a (mon.0) 1644 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36"}]': finished 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: audit 2026-03-10T07:29:47.718630+0000 mon.b (mon.1) 184 : audit [INF] from='client.? 192.168.123.100:0/1021763534' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-59776-36"}]: dispatch 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: audit 2026-03-10T07:29:47.718630+0000 mon.b (mon.1) 184 : audit [INF] from='client.? 192.168.123.100:0/1021763534' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-59776-36"}]: dispatch 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: audit 2026-03-10T07:29:47.718879+0000 mon.b (mon.1) 185 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-17", "mode": "writeback"}]: dispatch 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: audit 2026-03-10T07:29:47.718879+0000 mon.b (mon.1) 185 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-17", "mode": "writeback"}]: dispatch 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: cluster 2026-03-10T07:29:47.730998+0000 mon.a (mon.0) 1645 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: cluster 2026-03-10T07:29:47.730998+0000 mon.a (mon.0) 1645 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: audit 2026-03-10T07:29:47.732944+0000 mon.a (mon.0) 1646 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-59776-36"}]: dispatch 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: audit 2026-03-10T07:29:47.732944+0000 mon.a (mon.0) 1646 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-59776-36"}]: dispatch 2026-03-10T07:29:48.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:47 vm00 bash[28005]: audit 2026-03-10T07:29:47.742485+0000 mon.a (mon.0) 1647 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-17", "mode": "writeback"}]: dispatch
2026-03-10T07:29:48.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:47 vm00 bash[20701]: cluster 2026-03-10T07:29:46.600000+0000 mgr.y (mgr.24407) 198 : cluster [DBG] pgmap v213: 332 pgs: 72 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:29:48.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:47 vm00 bash[20701]: audit 2026-03-10T07:29:46.721012+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-17"}]: dispatch
2026-03-10T07:29:48.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:47 vm00 bash[20701]: cluster 2026-03-10T07:29:46.755379+0000 mon.a (mon.0) 1638 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in
2026-03-10T07:29:48.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:47 vm00 bash[20701]: audit 2026-03-10T07:29:46.755721+0000 mon.b (mon.1) 183 : audit [INF] from='client.? 192.168.123.100:0/1021763534' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:48.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:47 vm00 bash[20701]: cluster 2026-03-10T07:29:46.755823+0000 mon.a (mon.0) 1639 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:29:48.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:47 vm00 bash[20701]: audit 2026-03-10T07:29:46.763138+0000 mon.a (mon.0) 1640 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-17"}]: dispatch
2026-03-10T07:29:48.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:47 vm00 bash[20701]: audit 2026-03-10T07:29:46.763702+0000 mon.a (mon.0) 1641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:48.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:47 vm00 bash[20701]: audit 2026-03-10T07:29:47.713548+0000 mon.a (mon.0) 1642 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59629-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59629-30"}]': finished
2026-03-10T07:29:48.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:47 vm00 bash[20701]: audit 2026-03-10T07:29:47.713692+0000 mon.a (mon.0) 1643 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-17"}]': finished
2026-03-10T07:29:48.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:47 vm00 bash[20701]: audit 2026-03-10T07:29:47.713913+0000 mon.a (mon.0) 1644 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36"}]': finished
2026-03-10T07:29:48.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:47 vm00 bash[20701]: audit 2026-03-10T07:29:47.718630+0000 mon.b (mon.1) 184 : audit [INF] from='client.? 192.168.123.100:0/1021763534' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:48.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:47 vm00 bash[20701]: audit 2026-03-10T07:29:47.718879+0000 mon.b (mon.1) 185 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-17", "mode": "writeback"}]: dispatch
2026-03-10T07:29:48.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:47 vm00 bash[20701]: cluster 2026-03-10T07:29:47.730998+0000 mon.a (mon.0) 1645 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in
2026-03-10T07:29:48.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:47 vm00 bash[20701]: audit 2026-03-10T07:29:47.732944+0000 mon.a (mon.0) 1646 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:48.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:47 vm00 bash[20701]: audit 2026-03-10T07:29:47.742485+0000 mon.a (mon.0) 1647 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-17", "mode": "writeback"}]: dispatch
2026-03-10T07:29:48.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:47 vm03 bash[23382]: cluster 2026-03-10T07:29:46.600000+0000 mgr.y (mgr.24407) 198 : cluster [DBG] pgmap v213: 332 pgs: 72 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:29:48.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:47 vm03 bash[23382]: audit 2026-03-10T07:29:46.721012+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-17"}]: dispatch
2026-03-10T07:29:48.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:47 vm03 bash[23382]: cluster 2026-03-10T07:29:46.755379+0000 mon.a (mon.0) 1638 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in
2026-03-10T07:29:48.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:47 vm03 bash[23382]: audit 2026-03-10T07:29:46.755721+0000 mon.b (mon.1) 183 : audit [INF] from='client.? 192.168.123.100:0/1021763534' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:48.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:47 vm03 bash[23382]: cluster 2026-03-10T07:29:46.755823+0000 mon.a (mon.0) 1639 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:29:48.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:47 vm03 bash[23382]: audit 2026-03-10T07:29:46.763138+0000 mon.a (mon.0) 1640 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-17"}]: dispatch
2026-03-10T07:29:48.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:47 vm03 bash[23382]: audit 2026-03-10T07:29:46.763702+0000 mon.a (mon.0) 1641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:48.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:47 vm03 bash[23382]: audit 2026-03-10T07:29:47.713548+0000 mon.a (mon.0) 1642 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm00-59629-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm00-59629-30"}]': finished
2026-03-10T07:29:48.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:47 vm03 bash[23382]: audit 2026-03-10T07:29:47.713692+0000 mon.a (mon.0) 1643 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-17"}]': finished
2026-03-10T07:29:48.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:47 vm03 bash[23382]: audit 2026-03-10T07:29:47.713913+0000 mon.a (mon.0) 1644 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm00-59776-36"}]': finished
2026-03-10T07:29:48.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:47 vm03 bash[23382]: audit 2026-03-10T07:29:47.718630+0000 mon.b (mon.1) 184 : audit [INF] from='client.? 192.168.123.100:0/1021763534' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:48.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:47 vm03 bash[23382]: audit 2026-03-10T07:29:47.718879+0000 mon.b (mon.1) 185 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-17", "mode": "writeback"}]: dispatch
2026-03-10T07:29:48.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:47 vm03 bash[23382]: cluster 2026-03-10T07:29:47.730998+0000 mon.a (mon.0) 1645 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in
2026-03-10T07:29:48.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:47 vm03 bash[23382]: audit 2026-03-10T07:29:47.732944+0000 mon.a (mon.0) 1646 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-59776-36"}]: dispatch
2026-03-10T07:29:48.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:47 vm03 bash[23382]: audit 2026-03-10T07:29:47.742485+0000 mon.a (mon.0) 1647 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-17", "mode": "writeback"}]: dispatch
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioosMiscPP (74240 ms total)
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp:
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 1 test from LibRadosTwoPoolsECPP
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosTwoPoolsECPP.CopyFrom
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosTwoPoolsECPP.CopyFrom (241 ms)
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 1 test from LibRadosTwoPoolsECPP (241 ms total)
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp:
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/0, where TypeParam = LibRadosChecksumParams<(rados_checksum_type_t)0, Checksummer::xxhash32, ceph_le >
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/0.Subset
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosChecksum/0.Subset (94 ms)
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/0.Chunked
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosChecksum/0.Chunked (3 ms)
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/0 (97 ms total)
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp:
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/1, where TypeParam = LibRadosChecksumParams<(rados_checksum_type_t)1, Checksummer::xxhash64, ceph_le >
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/1.Subset
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosChecksum/1.Subset (93 ms)
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/1.Chunked
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosChecksum/1.Chunked (12 ms)
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/1 (105 ms total)
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp:
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/2, where TypeParam = LibRadosChecksumParams<(rados_checksum_type_t)2, Checksummer::crc32c, ceph_le >
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/2.Subset
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosChecksum/2.Subset (55 ms)
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/2.Chunked
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosChecksum/2.Chunked (2 ms)
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/2 (57 ms total)
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp:
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscECPP
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ RUN ] LibRadosMiscECPP.CompareExtentRange
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ OK ] LibRadosMiscECPP.CompareExtentRange (1057 ms)
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscECPP (1057 ms total)
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp:
2026-03-10T07:29:48.751 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [----------] Global test environment tear-down
2026-03-10T07:29:48.752 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [==========] 31 tests from 7 test suites ran. (104637 ms total)
2026-03-10T07:29:48.752 INFO:tasks.workunit.client.0.vm00.stdout: api_misc_pp: [ PASSED ] 31 tests.
2026-03-10T07:29:49.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:48 vm00 bash[28005]: cluster 2026-03-10T07:29:48.714031+0000 mon.a (mon.0) 1648 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:29:49.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:48 vm00 bash[28005]: audit 2026-03-10T07:29:48.717966+0000 mon.a (mon.0) 1649 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-59776-36"}]': finished
2026-03-10T07:29:49.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:48 vm00 bash[28005]: audit 2026-03-10T07:29:48.718048+0000 mon.a (mon.0) 1650 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-17", "mode": "writeback"}]': finished
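The CACHE_POOL_NO_HIT_SET warning above is expected: the workunit has just put test-rados-api-vm00-59782-17 into writeback cache mode without configuring hit sets, and the check clears again at 07:29:50 below once the tier is torn down. On a real cluster the check is satisfied by giving the cache pool a HitSet configuration; a minimal sketch, with a hypothetical pool name and illustrative values:

    # "cache-pool" is a placeholder, not a pool from this run.
    ceph osd pool set cache-pool hit_set_type bloom    # track recent object access with bloom-filter HitSets
    ceph osd pool set cache-pool hit_set_count 8       # retain the 8 most recent HitSets
    ceph osd pool set cache-pool hit_set_period 3600   # each HitSet covers a 3600-second window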
2026-03-10T07:29:49.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:48 vm00 bash[28005]: cluster 2026-03-10T07:29:48.729427+0000 mon.a (mon.0) 1651 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in
2026-03-10T07:29:49.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:48 vm00 bash[28005]: audit 2026-03-10T07:29:48.751185+0000 mon.b (mon.1) 186 : audit [INF] from='client.? 192.168.123.100:0/167055590' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59637-30","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:49.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:48 vm00 bash[28005]: audit 2026-03-10T07:29:48.757981+0000 mon.a (mon.0) 1652 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59637-30","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:49.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:48 vm00 bash[20701]: cluster 2026-03-10T07:29:48.714031+0000 mon.a (mon.0) 1648 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:29:49.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:48 vm00 bash[20701]: audit 2026-03-10T07:29:48.717966+0000 mon.a (mon.0) 1649 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-59776-36"}]': finished
2026-03-10T07:29:49.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:48 vm00 bash[20701]: audit 2026-03-10T07:29:48.718048+0000 mon.a (mon.0) 1650 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-17", "mode": "writeback"}]': finished
2026-03-10T07:29:49.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:48 vm00 bash[20701]: cluster 2026-03-10T07:29:48.729427+0000 mon.a (mon.0) 1651 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in
2026-03-10T07:29:49.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:48 vm00 bash[20701]: audit 2026-03-10T07:29:48.751185+0000 mon.b (mon.1) 186 : audit [INF] from='client.? 192.168.123.100:0/167055590' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59637-30","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:49.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:48 vm00 bash[20701]: audit 2026-03-10T07:29:48.757981+0000 mon.a (mon.0) 1652 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59637-30","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:49.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:48 vm03 bash[23382]: cluster 2026-03-10T07:29:48.714031+0000 mon.a (mon.0) 1648 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:29:49.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:48 vm03 bash[23382]: audit 2026-03-10T07:29:48.717966+0000 mon.a (mon.0) 1649 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm00-59776-36"}]': finished
2026-03-10T07:29:49.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:48 vm03 bash[23382]: audit 2026-03-10T07:29:48.718048+0000 mon.a (mon.0) 1650 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-17", "mode": "writeback"}]': finished
2026-03-10T07:29:49.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:48 vm03 bash[23382]: cluster 2026-03-10T07:29:48.729427+0000 mon.a (mon.0) 1651 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in
2026-03-10T07:29:49.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:48 vm03 bash[23382]: audit 2026-03-10T07:29:48.751185+0000 mon.b (mon.1) 186 : audit [INF] from='client.? 192.168.123.100:0/167055590' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59637-30","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:49.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:48 vm03 bash[23382]: audit 2026-03-10T07:29:48.757981+0000 mon.a (mon.0) 1652 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59637-30","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:50.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:49 vm00 bash[28005]: cluster 2026-03-10T07:29:48.600340+0000 mgr.y (mgr.24407) 199 : cluster [DBG] pgmap v216: 300 pgs: 40 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:29:50.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:49 vm00 bash[28005]: audit 2026-03-10T07:29:48.803615+0000 mon.b (mon.1) 187 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:29:50.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:49 vm00 bash[28005]: audit 2026-03-10T07:29:48.805547+0000 mon.a (mon.0) 1653 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:29:50.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:49 vm00 bash[28005]: audit 2026-03-10T07:29:49.721415+0000 mon.a (mon.0) 1654 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59637-30","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:50.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:49 vm00 bash[28005]: audit 2026-03-10T07:29:49.721515+0000 mon.a (mon.0) 1655 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:29:50.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:49 vm00 bash[28005]: audit 2026-03-10T07:29:49.726213+0000 mon.b (mon.1) 188 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17"}]: dispatch
2026-03-10T07:29:50.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:49 vm00 bash[28005]: audit 2026-03-10T07:29:49.734507+0000 mon.b (mon.1) 189 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59629-30"}]: dispatch
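The osd pool application enable calls in this trail (dispatched as mon.b 186 / mon.a 1652, finished as 1654) are what shrink the POOL_APP_NOT_ENABLED count reported at 07:29:46. The equivalent CLI step, with a placeholder pool name in place of the suite's generated ones:

    # "my-pool" is a placeholder; the suite tags each pool with the "rados" application.
    ceph osd pool application enable my-pool rados --yes-i-really-mean-it
    ceph osd pool application get my-pool   # verify the tag took effect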
2026-03-10T07:29:50.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:49 vm00 bash[28005]: cluster 2026-03-10T07:29:49.734788+0000 mon.a (mon.0) 1656 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in
2026-03-10T07:29:50.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:49 vm00 bash[28005]: audit 2026-03-10T07:29:49.735654+0000 mon.a (mon.0) 1657 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17"}]: dispatch
2026-03-10T07:29:50.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:49 vm00 bash[28005]: audit 2026-03-10T07:29:49.736460+0000 mon.a (mon.0) 1658 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59629-30"}]: dispatch
2026-03-10T07:29:50.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:49 vm00 bash[20701]: cluster 2026-03-10T07:29:48.600340+0000 mgr.y (mgr.24407) 199 : cluster [DBG] pgmap v216: 300 pgs: 40 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:29:50.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:49 vm00 bash[20701]: audit 2026-03-10T07:29:48.803615+0000 mon.b (mon.1) 187 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:29:50.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:49 vm00 bash[20701]: audit 2026-03-10T07:29:48.805547+0000 mon.a (mon.0) 1653 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:29:50.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:49 vm00 bash[20701]: audit 2026-03-10T07:29:49.721415+0000 mon.a (mon.0) 1654 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59637-30","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:50.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:49 vm00 bash[20701]: audit 2026-03-10T07:29:49.721515+0000 mon.a (mon.0) 1655 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:29:50.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:49 vm00 bash[20701]: audit 2026-03-10T07:29:49.726213+0000 mon.b (mon.1) 188 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17"}]: dispatch
2026-03-10T07:29:50.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:49 vm00 bash[20701]: audit 2026-03-10T07:29:49.734507+0000 mon.b (mon.1) 189 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59629-30"}]: dispatch
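The RoundTrip2 records trace the usual erasure-coded pool lifecycle: the pool was created against a test profile (audit 1642, finished, above), and teardown now removes the profile and the CRUSH rule that pool creation generated (the 189/1658 dispatches here and the crush rule rm that follows). A sketch of the same sequence with placeholder names mirroring the suite's testprofile-* pattern:

    # Placeholder names; k/m values are illustrative.
    ceph osd erasure-code-profile set testprofile k=2 m=1
    ceph osd pool create ecpool 8 8 erasure testprofile
    # ... run I/O against ecpool ...
    ceph osd pool rm ecpool ecpool --yes-i-really-really-mean-it   # requires mon_allow_pool_delete
    ceph osd crush rule rm ecpool                # EC pool creation auto-generated this rule
    ceph osd erasure-code-profile rm testprofile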
2026-03-10T07:29:50.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:49 vm00 bash[20701]: cluster 2026-03-10T07:29:49.734788+0000 mon.a (mon.0) 1656 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in
2026-03-10T07:29:50.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:49 vm00 bash[20701]: audit 2026-03-10T07:29:49.735654+0000 mon.a (mon.0) 1657 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17"}]: dispatch
2026-03-10T07:29:50.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:49 vm00 bash[20701]: audit 2026-03-10T07:29:49.736460+0000 mon.a (mon.0) 1658 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59629-30"}]: dispatch
2026-03-10T07:29:50.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:49 vm03 bash[23382]: cluster 2026-03-10T07:29:48.600340+0000 mgr.y (mgr.24407) 199 : cluster [DBG] pgmap v216: 300 pgs: 40 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:29:50.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:49 vm03 bash[23382]: audit 2026-03-10T07:29:48.803615+0000 mon.b (mon.1) 187 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:29:50.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:49 vm03 bash[23382]: audit 2026-03-10T07:29:48.805547+0000 mon.a (mon.0) 1653 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:29:50.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:49 vm03 bash[23382]: audit 2026-03-10T07:29:49.721415+0000 mon.a (mon.0) 1654 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm00-59637-30","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:50.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:49 vm03 bash[23382]: audit 2026-03-10T07:29:49.721515+0000 mon.a (mon.0) 1655 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:29:50.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:49 vm03 bash[23382]: audit 2026-03-10T07:29:49.726213+0000 mon.b (mon.1) 188 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17"}]: dispatch
2026-03-10T07:29:50.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:49 vm03 bash[23382]: audit 2026-03-10T07:29:49.734507+0000 mon.b (mon.1) 189 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59629-30"}]: dispatch
2026-03-10T07:29:50.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:49 vm03 bash[23382]: cluster 2026-03-10T07:29:49.734788+0000 mon.a (mon.0) 1656 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in
2026-03-10T07:29:50.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:49 vm03 bash[23382]: audit 2026-03-10T07:29:49.735654+0000 mon.a (mon.0) 1657 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17"}]: dispatch
2026-03-10T07:29:50.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:49 vm03 bash[23382]: audit 2026-03-10T07:29:49.736460+0000 mon.a (mon.0) 1658 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59629-30"}]: dispatch
2026-03-10T07:29:51.079 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:50 vm00 bash[28005]: cluster 2026-03-10T07:29:50.721651+0000 mon.a (mon.0) 1659 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:29:51.079 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:50 vm00 bash[28005]: audit 2026-03-10T07:29:50.724676+0000 mon.a (mon.0) 1660 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17"}]': finished
2026-03-10T07:29:51.079 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:50 vm00 bash[28005]: audit 2026-03-10T07:29:50.724707+0000 mon.a (mon.0) 1661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59629-30"}]': finished
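With 1660 finished, the cache tier on test-rados-api-vm00-59782-6 is fully dismantled, and the hit_sets warning clears (1659). Read end to end, the audit trail records the standard tiering sequence; a sketch with placeholder pool names (the initial tier add happened before this excerpt and is assumed):

    # "base-pool"/"cache-pool" are placeholders for the test's generated pools.
    ceph osd tier add base-pool cache-pool           # assumed prerequisite, not shown in this excerpt
    ceph osd tier cache-mode cache-pool writeback    # dispatched/finished as 1647/1650 above
    ceph osd tier set-overlay base-pool cache-pool   # 1640/1643 above
    # ... client I/O is now redirected through cache-pool ...
    ceph osd tier remove-overlay base-pool           # 1653/1655 above
    ceph osd tier remove base-pool cache-pool        # 1657/1660 above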
2026-03-10T07:29:51.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:50 vm00 bash[28005]: audit 2026-03-10T07:29:50.727353+0000 mon.b (mon.1) 190 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59629-30"}]: dispatch
2026-03-10T07:29:51.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:50 vm00 bash[28005]: cluster 2026-03-10T07:29:50.728422+0000 mon.a (mon.0) 1662 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in
2026-03-10T07:29:51.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:50 vm00 bash[28005]: audit 2026-03-10T07:29:50.749156+0000 mon.a (mon.0) 1663 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59629-30"}]: dispatch
2026-03-10T07:29:51.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:50 vm00 bash[20701]: cluster 2026-03-10T07:29:50.721651+0000 mon.a (mon.0) 1659 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:29:51.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:50 vm00 bash[20701]: audit 2026-03-10T07:29:50.724676+0000 mon.a (mon.0) 1660 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17"}]': finished
2026-03-10T07:29:51.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:50 vm00 bash[20701]: audit 2026-03-10T07:29:50.724707+0000 mon.a (mon.0) 1661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59629-30"}]': finished
2026-03-10T07:29:51.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:50 vm00 bash[20701]: audit 2026-03-10T07:29:50.727353+0000 mon.b (mon.1) 190 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59629-30"}]: dispatch
2026-03-10T07:29:51.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:50 vm00 bash[20701]: cluster 2026-03-10T07:29:50.728422+0000 mon.a (mon.0) 1662 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in
2026-03-10T07:29:51.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:50 vm00 bash[20701]: audit 2026-03-10T07:29:50.749156+0000 mon.a (mon.0) 1663 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59629-30"}]: dispatch
2026-03-10T07:29:51.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:50 vm03 bash[23382]: cluster 2026-03-10T07:29:50.721651+0000 mon.a (mon.0) 1659 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:29:51.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:50 vm03 bash[23382]: audit 2026-03-10T07:29:50.724676+0000 mon.a (mon.0) 1660 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-17"}]': finished
2026-03-10T07:29:51.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:50 vm03 bash[23382]: audit 2026-03-10T07:29:50.724707+0000 mon.a (mon.0) 1661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm00-59629-30"}]': finished
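Every command in this trail appears twice, once as dispatch when a monitor accepts it and once as finished when the map change commits; the same audit and cluster channels that journalctl is echoing here can be read back directly from the monitors. A sketch (record counts arbitrary):

    ceph log last 20 info audit     # recent audit records (the dispatch/finished pairs)
    ceph log last 20 debug cluster  # recent cluster records (pgmap/osdmap updates)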
2026-03-10T07:29:51.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:50 vm03 bash[23382]: audit 2026-03-10T07:29:50.727353+0000 mon.b (mon.1) 190 : audit [INF] from='client.? 192.168.123.100:0/260322920' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59629-30"}]: dispatch
2026-03-10T07:29:51.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:50 vm03 bash[23382]: cluster 2026-03-10T07:29:50.728422+0000 mon.a (mon.0) 1662 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in
2026-03-10T07:29:51.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:50 vm03 bash[23382]: audit 2026-03-10T07:29:50.749156+0000 mon.a (mon.0) 1663 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59629-30"}]: dispatch
2026-03-10T07:29:51.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:29:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:29:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:29:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:51 vm00 bash[28005]: cluster 2026-03-10T07:29:50.600677+0000 mgr.y (mgr.24407) 200 : cluster [DBG] pgmap v219: 324 pgs: 32 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 3 op/s
2026-03-10T07:29:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:51 vm00 bash[28005]: audit 2026-03-10T07:29:51.781107+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59629-30"}]': finished
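The 503 on GET /metrics means mgr.y's Prometheus exporter could not serve the scrape at that moment, which is often transient while the mgr is busy (the pgmap above still shows 32 PGs creating+peering); Prometheus simply retries on its next interval. A quick way to probe the endpoint by hand, assuming the module's default port 9283:

    ceph mgr module ls | grep -i prometheus               # confirm the module is enabled
    curl -s http://vm00.local:9283/metrics | head -n 5    # metric lines appear once the exporter is ready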
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59629-30"}]': finished 2026-03-10T07:29:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:51 vm00 bash[28005]: cluster 2026-03-10T07:29:51.786800+0000 mon.a (mon.0) 1665 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-10T07:29:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:51 vm00 bash[28005]: cluster 2026-03-10T07:29:51.786800+0000 mon.a (mon.0) 1665 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-10T07:29:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:51 vm00 bash[28005]: audit 2026-03-10T07:29:51.802976+0000 mon.c (mon.2) 185 : audit [INF] from='client.? 192.168.123.100:0/3949222174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59637-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:51 vm00 bash[28005]: audit 2026-03-10T07:29:51.802976+0000 mon.c (mon.2) 185 : audit [INF] from='client.? 192.168.123.100:0/3949222174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59637-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:51 vm00 bash[28005]: audit 2026-03-10T07:29:51.819770+0000 mon.c (mon.2) 186 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:51 vm00 bash[28005]: audit 2026-03-10T07:29:51.819770+0000 mon.c (mon.2) 186 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:51 vm00 bash[28005]: audit 2026-03-10T07:29:51.830181+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59637-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:51 vm00 bash[28005]: audit 2026-03-10T07:29:51.830181+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59637-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:51 vm00 bash[28005]: audit 2026-03-10T07:29:51.830666+0000 mon.a (mon.0) 1667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:51 vm00 bash[28005]: audit 2026-03-10T07:29:51.830666+0000 mon.a (mon.0) 1667 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:51 vm00 bash[20701]: cluster 2026-03-10T07:29:50.600677+0000 mgr.y (mgr.24407) 200 : cluster [DBG] pgmap v219: 324 pgs: 32 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 3 op/s 2026-03-10T07:29:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:51 vm00 bash[20701]: cluster 2026-03-10T07:29:50.600677+0000 mgr.y (mgr.24407) 200 : cluster [DBG] pgmap v219: 324 pgs: 32 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 3 op/s 2026-03-10T07:29:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:51 vm00 bash[20701]: audit 2026-03-10T07:29:51.781107+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59629-30"}]': finished 2026-03-10T07:29:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:51 vm00 bash[20701]: audit 2026-03-10T07:29:51.781107+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59629-30"}]': finished 2026-03-10T07:29:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:51 vm00 bash[20701]: cluster 2026-03-10T07:29:51.786800+0000 mon.a (mon.0) 1665 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-10T07:29:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:51 vm00 bash[20701]: cluster 2026-03-10T07:29:51.786800+0000 mon.a (mon.0) 1665 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-10T07:29:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:51 vm00 bash[20701]: audit 2026-03-10T07:29:51.802976+0000 mon.c (mon.2) 185 : audit [INF] from='client.? 192.168.123.100:0/3949222174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59637-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:51 vm00 bash[20701]: audit 2026-03-10T07:29:51.802976+0000 mon.c (mon.2) 185 : audit [INF] from='client.? 192.168.123.100:0/3949222174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59637-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:51 vm00 bash[20701]: audit 2026-03-10T07:29:51.819770+0000 mon.c (mon.2) 186 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:51 vm00 bash[20701]: audit 2026-03-10T07:29:51.819770+0000 mon.c (mon.2) 186 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:51 vm00 bash[20701]: audit 2026-03-10T07:29:51.830181+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59637-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:51 vm00 bash[20701]: audit 2026-03-10T07:29:51.830181+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59637-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:51 vm00 bash[20701]: audit 2026-03-10T07:29:51.830666+0000 mon.a (mon.0) 1667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:51 vm00 bash[20701]: audit 2026-03-10T07:29:51.830666+0000 mon.a (mon.0) 1667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:52.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:51 vm03 bash[23382]: cluster 2026-03-10T07:29:50.600677+0000 mgr.y (mgr.24407) 200 : cluster [DBG] pgmap v219: 324 pgs: 32 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 3 op/s 2026-03-10T07:29:52.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:51 vm03 bash[23382]: cluster 2026-03-10T07:29:50.600677+0000 mgr.y (mgr.24407) 200 : cluster [DBG] pgmap v219: 324 pgs: 32 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 3 op/s 2026-03-10T07:29:52.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:51 vm03 bash[23382]: audit 2026-03-10T07:29:51.781107+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59629-30"}]': finished 2026-03-10T07:29:52.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:51 vm03 bash[23382]: audit 2026-03-10T07:29:51.781107+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm00-59629-30"}]': finished 2026-03-10T07:29:52.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:51 vm03 bash[23382]: cluster 2026-03-10T07:29:51.786800+0000 mon.a (mon.0) 1665 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-10T07:29:52.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:51 vm03 bash[23382]: cluster 2026-03-10T07:29:51.786800+0000 mon.a (mon.0) 1665 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-10T07:29:52.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:51 vm03 bash[23382]: audit 2026-03-10T07:29:51.802976+0000 mon.c (mon.2) 185 : audit [INF] from='client.? 192.168.123.100:0/3949222174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59637-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:52.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:51 vm03 bash[23382]: audit 2026-03-10T07:29:51.802976+0000 mon.c (mon.2) 185 : audit [INF] from='client.? 
192.168.123.100:0/3949222174' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59637-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:52.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:51 vm03 bash[23382]: audit 2026-03-10T07:29:51.819770+0000 mon.c (mon.2) 186 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:52.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:51 vm03 bash[23382]: audit 2026-03-10T07:29:51.819770+0000 mon.c (mon.2) 186 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:52.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:51 vm03 bash[23382]: audit 2026-03-10T07:29:51.830181+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59637-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:52.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:51 vm03 bash[23382]: audit 2026-03-10T07:29:51.830181+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59637-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:52.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:51 vm03 bash[23382]: audit 2026-03-10T07:29:51.830666+0000 mon.a (mon.0) 1667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:52.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:51 vm03 bash[23382]: audit 2026-03-10T07:29:51.830666+0000 mon.a (mon.0) 1667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: audit 2026-03-10T07:29:51.832376+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: audit 2026-03-10T07:29:51.832376+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: audit 2026-03-10T07:29:51.834549+0000 mon.a (mon.0) 1668 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: audit 2026-03-10T07:29:51.834549+0000 mon.a (mon.0) 1668 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: audit 2026-03-10T07:29:51.836047+0000 mon.c (mon.2) 188 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59629-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: audit 2026-03-10T07:29:51.836047+0000 mon.c (mon.2) 188 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59629-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: audit 2026-03-10T07:29:51.860691+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59629-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: audit 2026-03-10T07:29:51.860691+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59629-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: audit 2026-03-10T07:29:52.787729+0000 mon.a (mon.0) 1670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59637-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: audit 2026-03-10T07:29:52.787729+0000 mon.a (mon.0) 1670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59637-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: audit 2026-03-10T07:29:52.787801+0000 mon.a (mon.0) 1671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59629-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: audit 2026-03-10T07:29:52.787801+0000 mon.a (mon.0) 1671 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59629-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: cluster 2026-03-10T07:29:52.794512+0000 mon.a (mon.0) 1672 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: cluster 2026-03-10T07:29:52.794512+0000 mon.a (mon.0) 1672 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: audit 2026-03-10T07:29:52.795144+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: audit 2026-03-10T07:29:52.795144+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: audit 2026-03-10T07:29:52.807453+0000 mon.a (mon.0) 1673 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: audit 2026-03-10T07:29:52.807453+0000 mon.a (mon.0) 1673 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: audit 2026-03-10T07:29:52.810385+0000 mon.c (mon.2) 189 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59629-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: audit 2026-03-10T07:29:52.810385+0000 mon.c (mon.2) 189 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59629-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: audit 2026-03-10T07:29:52.810895+0000 mon.a (mon.0) 1674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59629-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:52 vm00 bash[28005]: audit 2026-03-10T07:29:52.810895+0000 mon.a (mon.0) 1674 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59629-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: audit 2026-03-10T07:29:51.832376+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: audit 2026-03-10T07:29:51.832376+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: audit 2026-03-10T07:29:51.834549+0000 mon.a (mon.0) 1668 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: audit 2026-03-10T07:29:51.834549+0000 mon.a (mon.0) 1668 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: audit 2026-03-10T07:29:51.836047+0000 mon.c (mon.2) 188 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59629-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: audit 2026-03-10T07:29:51.836047+0000 mon.c (mon.2) 188 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59629-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: audit 2026-03-10T07:29:51.860691+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59629-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: audit 2026-03-10T07:29:51.860691+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59629-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: audit 2026-03-10T07:29:52.787729+0000 mon.a (mon.0) 1670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59637-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: audit 2026-03-10T07:29:52.787729+0000 mon.a (mon.0) 1670 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59637-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: audit 2026-03-10T07:29:52.787801+0000 mon.a (mon.0) 1671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59629-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: audit 2026-03-10T07:29:52.787801+0000 mon.a (mon.0) 1671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59629-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: cluster 2026-03-10T07:29:52.794512+0000 mon.a (mon.0) 1672 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: cluster 2026-03-10T07:29:52.794512+0000 mon.a (mon.0) 1672 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: audit 2026-03-10T07:29:52.795144+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: audit 2026-03-10T07:29:52.795144+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: audit 2026-03-10T07:29:52.807453+0000 mon.a (mon.0) 1673 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: audit 2026-03-10T07:29:52.807453+0000 mon.a (mon.0) 1673 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: audit 2026-03-10T07:29:52.810385+0000 mon.c (mon.2) 189 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59629-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: audit 2026-03-10T07:29:52.810385+0000 mon.c (mon.2) 189 : audit [INF] from='client.? 
192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59629-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: audit 2026-03-10T07:29:52.810895+0000 mon.a (mon.0) 1674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59629-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:52 vm00 bash[20701]: audit 2026-03-10T07:29:52.810895+0000 mon.a (mon.0) 1674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59629-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.264 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:29:53 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: audit 2026-03-10T07:29:51.832376+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: audit 2026-03-10T07:29:51.832376+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: audit 2026-03-10T07:29:51.834549+0000 mon.a (mon.0) 1668 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: audit 2026-03-10T07:29:51.834549+0000 mon.a (mon.0) 1668 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: audit 2026-03-10T07:29:51.836047+0000 mon.c (mon.2) 188 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59629-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: audit 2026-03-10T07:29:51.836047+0000 mon.c (mon.2) 188 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59629-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: audit 2026-03-10T07:29:51.860691+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59629-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: audit 2026-03-10T07:29:51.860691+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59629-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: audit 2026-03-10T07:29:52.787729+0000 mon.a (mon.0) 1670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59637-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: audit 2026-03-10T07:29:52.787729+0000 mon.a (mon.0) 1670 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm00-59637-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: audit 2026-03-10T07:29:52.787801+0000 mon.a (mon.0) 1671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59629-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: audit 2026-03-10T07:29:52.787801+0000 mon.a (mon.0) 1671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm00-59629-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: cluster 2026-03-10T07:29:52.794512+0000 mon.a (mon.0) 1672 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: cluster 2026-03-10T07:29:52.794512+0000 mon.a (mon.0) 1672 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: audit 2026-03-10T07:29:52.795144+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: audit 2026-03-10T07:29:52.795144+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: audit 2026-03-10T07:29:52.807453+0000 mon.a (mon.0) 1673 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: audit 2026-03-10T07:29:52.807453+0000 mon.a (mon.0) 1673 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: audit 2026-03-10T07:29:52.810385+0000 mon.c (mon.2) 189 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59629-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: audit 2026-03-10T07:29:52.810385+0000 mon.c (mon.2) 189 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59629-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: audit 2026-03-10T07:29:52.810895+0000 mon.a (mon.0) 1674 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59629-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:53.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:52 vm03 bash[23382]: audit 2026-03-10T07:29:52.810895+0000 mon.a (mon.0) 1674 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59629-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:54.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:53 vm00 bash[28005]: cluster 2026-03-10T07:29:52.601095+0000 mgr.y (mgr.24407) 201 : cluster [DBG] pgmap v222: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T07:29:54.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:53 vm00 bash[28005]: cluster 2026-03-10T07:29:52.601095+0000 mgr.y (mgr.24407) 201 : cluster [DBG] pgmap v222: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T07:29:54.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:53 vm00 bash[28005]: cluster 2026-03-10T07:29:52.844011+0000 mon.a (mon.0) 1675 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:54.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:53 vm00 bash[28005]: cluster 2026-03-10T07:29:52.844011+0000 mon.a (mon.0) 1675 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:54.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:53 vm00 bash[28005]: audit 2026-03-10T07:29:53.036595+0000 mgr.y (mgr.24407) 202 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:29:54.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:53 vm00 bash[28005]: audit 2026-03-10T07:29:53.036595+0000 mgr.y (mgr.24407) 202 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:29:54.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:53 vm00 bash[28005]: audit 2026-03-10T07:29:53.792688+0000 mon.a (mon.0) 1676 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:54.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:53 vm00 bash[28005]: audit 2026-03-10T07:29:53.792688+0000 mon.a (mon.0) 1676 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:54.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:53 vm00 bash[28005]: audit 2026-03-10T07:29:53.805220+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:54.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:53 vm00 bash[28005]: audit 2026-03-10T07:29:53.805220+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:54.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:53 vm00 bash[28005]: cluster 2026-03-10T07:29:53.836016+0000 mon.a (mon.0) 1677 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-10T07:29:54.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:53 vm00 bash[28005]: cluster 2026-03-10T07:29:53.836016+0000 mon.a (mon.0) 1677 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-10T07:29:54.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:53 vm00 bash[28005]: audit 2026-03-10T07:29:53.837272+0000 mon.a (mon.0) 1678 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:54.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:53 vm00 bash[28005]: audit 2026-03-10T07:29:53.837272+0000 mon.a (mon.0) 1678 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:54.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:53 vm00 bash[20701]: cluster 2026-03-10T07:29:52.601095+0000 mgr.y (mgr.24407) 201 : cluster [DBG] pgmap v222: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T07:29:54.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:53 vm00 bash[20701]: cluster 2026-03-10T07:29:52.601095+0000 mgr.y (mgr.24407) 201 : cluster [DBG] pgmap v222: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T07:29:54.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:53 vm00 bash[20701]: cluster 2026-03-10T07:29:52.844011+0000 mon.a (mon.0) 1675 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:54.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:53 vm00 bash[20701]: cluster 2026-03-10T07:29:52.844011+0000 mon.a (mon.0) 1675 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:54.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:53 vm00 bash[20701]: audit 2026-03-10T07:29:53.036595+0000 mgr.y (mgr.24407) 202 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:29:54.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:53 vm00 bash[20701]: audit 2026-03-10T07:29:53.036595+0000 mgr.y (mgr.24407) 202 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:29:54.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:53 vm00 bash[20701]: audit 2026-03-10T07:29:53.792688+0000 mon.a (mon.0) 1676 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:54.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:53 vm00 bash[20701]: audit 2026-03-10T07:29:53.792688+0000 mon.a (mon.0) 1676 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:54.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:53 vm00 bash[20701]: audit 2026-03-10T07:29:53.805220+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:54.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:53 vm00 bash[20701]: audit 2026-03-10T07:29:53.805220+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:54.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:53 vm00 bash[20701]: cluster 2026-03-10T07:29:53.836016+0000 mon.a (mon.0) 1677 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-10T07:29:54.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:53 vm00 bash[20701]: cluster 2026-03-10T07:29:53.836016+0000 mon.a (mon.0) 1677 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-10T07:29:54.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:53 vm00 bash[20701]: audit 2026-03-10T07:29:53.837272+0000 mon.a (mon.0) 1678 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:54.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:53 vm00 bash[20701]: audit 2026-03-10T07:29:53.837272+0000 mon.a (mon.0) 1678 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:54.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:53 vm03 bash[23382]: cluster 2026-03-10T07:29:52.601095+0000 mgr.y (mgr.24407) 201 : cluster [DBG] pgmap v222: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T07:29:54.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:53 vm03 bash[23382]: cluster 2026-03-10T07:29:52.601095+0000 mgr.y (mgr.24407) 201 : cluster [DBG] pgmap v222: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T07:29:54.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:53 vm03 bash[23382]: cluster 2026-03-10T07:29:52.844011+0000 mon.a (mon.0) 1675 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:54.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:53 vm03 bash[23382]: cluster 2026-03-10T07:29:52.844011+0000 mon.a (mon.0) 1675 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:29:54.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:53 vm03 bash[23382]: audit 2026-03-10T07:29:53.036595+0000 mgr.y (mgr.24407) 202 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:29:54.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:53 vm03 bash[23382]: audit 2026-03-10T07:29:53.036595+0000 mgr.y (mgr.24407) 202 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:29:54.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:53 vm03 bash[23382]: audit 2026-03-10T07:29:53.792688+0000 mon.a (mon.0) 1676 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:54.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:53 vm03 bash[23382]: audit 2026-03-10T07:29:53.792688+0000 mon.a (mon.0) 1676 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:54.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:53 vm03 bash[23382]: audit 2026-03-10T07:29:53.805220+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:54.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:53 vm03 bash[23382]: audit 2026-03-10T07:29:53.805220+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:54.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:53 vm03 bash[23382]: cluster 2026-03-10T07:29:53.836016+0000 mon.a (mon.0) 1677 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-10T07:29:54.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:53 vm03 bash[23382]: cluster 2026-03-10T07:29:53.836016+0000 mon.a (mon.0) 1677 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-10T07:29:54.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:53 vm03 bash[23382]: audit 2026-03-10T07:29:53.837272+0000 mon.a (mon.0) 1678 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:54.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:53 vm03 bash[23382]: audit 2026-03-10T07:29:53.837272+0000 mon.a (mon.0) 1678 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:54 vm00 bash[28005]: audit 2026-03-10T07:29:54.234974+0000 mon.c (mon.2) 190 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:54 vm00 bash[28005]: audit 2026-03-10T07:29:54.234974+0000 mon.c (mon.2) 190 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:54 vm00 bash[28005]: audit 2026-03-10T07:29:54.806610+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59629-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59629-31"}]': finished 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:54 vm00 bash[28005]: audit 2026-03-10T07:29:54.806610+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59629-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59629-31"}]': finished 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:54 vm00 bash[28005]: audit 2026-03-10T07:29:54.806752+0000 mon.a (mon.0) 1680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:54 vm00 bash[28005]: audit 2026-03-10T07:29:54.806752+0000 mon.a (mon.0) 1680 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:54 vm00 bash[28005]: audit 2026-03-10T07:29:54.809570+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-19"}]: dispatch 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:54 vm00 bash[28005]: audit 2026-03-10T07:29:54.809570+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-19"}]: dispatch 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:54 vm00 bash[28005]: cluster 2026-03-10T07:29:54.833018+0000 mon.a (mon.0) 1681 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:54 vm00 bash[28005]: cluster 2026-03-10T07:29:54.833018+0000 mon.a (mon.0) 1681 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:54 vm00 bash[28005]: audit 2026-03-10T07:29:54.834462+0000 mon.a (mon.0) 1682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-19"}]: dispatch 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:54 vm00 bash[28005]: audit 2026-03-10T07:29:54.834462+0000 mon.a (mon.0) 1682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-19"}]: dispatch 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:54 vm00 bash[28005]: audit 2026-03-10T07:29:54.840819+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? 192.168.123.100:0/4234956519' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59637-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:54 vm00 bash[28005]: audit 2026-03-10T07:29:54.840819+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? 192.168.123.100:0/4234956519' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59637-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:54 vm00 bash[20701]: audit 2026-03-10T07:29:54.234974+0000 mon.c (mon.2) 190 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:54 vm00 bash[20701]: audit 2026-03-10T07:29:54.234974+0000 mon.c (mon.2) 190 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:54 vm00 bash[20701]: audit 2026-03-10T07:29:54.806610+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59629-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59629-31"}]': finished 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:54 vm00 bash[20701]: audit 2026-03-10T07:29:54.806610+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59629-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59629-31"}]': finished 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:54 vm00 bash[20701]: audit 2026-03-10T07:29:54.806752+0000 mon.a (mon.0) 1680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:54 vm00 bash[20701]: audit 2026-03-10T07:29:54.806752+0000 mon.a (mon.0) 1680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:54 vm00 bash[20701]: audit 2026-03-10T07:29:54.809570+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-19"}]: dispatch 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:54 vm00 bash[20701]: audit 2026-03-10T07:29:54.809570+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-19"}]: dispatch 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:54 vm00 bash[20701]: cluster 2026-03-10T07:29:54.833018+0000 mon.a (mon.0) 1681 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:54 vm00 bash[20701]: cluster 2026-03-10T07:29:54.833018+0000 mon.a (mon.0) 1681 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:54 vm00 bash[20701]: audit 2026-03-10T07:29:54.834462+0000 mon.a (mon.0) 1682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-19"}]: dispatch 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:54 vm00 bash[20701]: audit 2026-03-10T07:29:54.834462+0000 mon.a (mon.0) 1682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-19"}]: dispatch 2026-03-10T07:29:55.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:54 vm00 bash[20701]: audit 2026-03-10T07:29:54.840819+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? 
2026-03-10T07:29:55.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:54 vm03 bash[23382]: audit 2026-03-10T07:29:54.234974+0000 mon.c (mon.2) 190 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:29:55.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:54 vm03 bash[23382]: audit 2026-03-10T07:29:54.806610+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm00-59629-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm00-59629-31"}]': finished
2026-03-10T07:29:55.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:54 vm03 bash[23382]: audit 2026-03-10T07:29:54.806752+0000 mon.a (mon.0) 1680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:29:55.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:54 vm03 bash[23382]: audit 2026-03-10T07:29:54.809570+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-19"}]: dispatch
2026-03-10T07:29:55.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:54 vm03 bash[23382]: cluster 2026-03-10T07:29:54.833018+0000 mon.a (mon.0) 1681 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in
2026-03-10T07:29:55.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:54 vm03 bash[23382]: audit 2026-03-10T07:29:54.834462+0000 mon.a (mon.0) 1682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-19"}]: dispatch
2026-03-10T07:29:55.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:54 vm03 bash[23382]: audit 2026-03-10T07:29:54.840819+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? 192.168.123.100:0/4234956519' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59637-32","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:56.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:55 vm03 bash[23382]: cluster 2026-03-10T07:29:54.601490+0000 mgr.y (mgr.24407) 203 : cluster [DBG] pgmap v225: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 680 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:29:56.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:55 vm00 bash[28005]: cluster 2026-03-10T07:29:54.601490+0000 mgr.y (mgr.24407) 203 : cluster [DBG] pgmap v225: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 680 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:29:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:55 vm00 bash[20701]: cluster 2026-03-10T07:29:54.601490+0000 mgr.y (mgr.24407) 203 : cluster [DBG] pgmap v225: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 680 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:29:57.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:57 vm00 bash[28005]: audit 2026-03-10T07:29:55.918124+0000 mon.a (mon.0) 1684 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-19"}]': finished
2026-03-10T07:29:57.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:57 vm00 bash[28005]: audit 2026-03-10T07:29:55.918209+0000 mon.a (mon.0) 1685 : audit [INF] from='client.? 192.168.123.100:0/4234956519' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59637-32","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:57.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:57 vm00 bash[28005]: audit 2026-03-10T07:29:55.926273+0000 mon.b (mon.1) 194 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-19", "mode": "writeback"}]: dispatch
2026-03-10T07:29:57.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:57 vm00 bash[28005]: cluster 2026-03-10T07:29:55.939276+0000 mon.a (mon.0) 1686 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in
2026-03-10T07:29:57.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:57 vm00 bash[28005]: audit 2026-03-10T07:29:55.957277+0000 mon.a (mon.0) 1687 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-19", "mode": "writeback"}]: dispatch
2026-03-10T07:29:57.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:57 vm00 bash[20701]: audit 2026-03-10T07:29:55.918124+0000 mon.a (mon.0) 1684 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-19"}]': finished
2026-03-10T07:29:57.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:57 vm00 bash[20701]: audit 2026-03-10T07:29:55.918209+0000 mon.a (mon.0) 1685 : audit [INF] from='client.? 192.168.123.100:0/4234956519' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59637-32","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:57.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:57 vm00 bash[20701]: audit 2026-03-10T07:29:55.926273+0000 mon.b (mon.1) 194 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-19", "mode": "writeback"}]: dispatch
2026-03-10T07:29:57.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:57 vm00 bash[20701]: cluster 2026-03-10T07:29:55.939276+0000 mon.a (mon.0) 1686 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in
2026-03-10T07:29:57.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:57 vm00 bash[20701]: audit 2026-03-10T07:29:55.957277+0000 mon.a (mon.0) 1687 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-19", "mode": "writeback"}]: dispatch
2026-03-10T07:29:57.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:57 vm03 bash[23382]: audit 2026-03-10T07:29:55.918124+0000 mon.a (mon.0) 1684 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-19"}]': finished
2026-03-10T07:29:57.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:57 vm03 bash[23382]: audit 2026-03-10T07:29:55.918209+0000 mon.a (mon.0) 1685 : audit [INF] from='client.? 192.168.123.100:0/4234956519' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm00-59637-32","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:29:57.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:57 vm03 bash[23382]: audit 2026-03-10T07:29:55.926273+0000 mon.b (mon.1) 194 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-19", "mode": "writeback"}]: dispatch
2026-03-10T07:29:57.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:57 vm03 bash[23382]: cluster 2026-03-10T07:29:55.939276+0000 mon.a (mon.0) 1686 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in
2026-03-10T07:29:57.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:57 vm03 bash[23382]: audit 2026-03-10T07:29:55.957277+0000 mon.a (mon.0) 1687 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-19", "mode": "writeback"}]: dispatch
2026-03-10T07:29:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:58 vm00 bash[28005]: cluster 2026-03-10T07:29:56.601889+0000 mgr.y (mgr.24407) 204 : cluster [DBG] pgmap v228: 332 pgs: 40 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:29:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:58 vm00 bash[28005]: cluster 2026-03-10T07:29:56.917920+0000 mon.a (mon.0) 1688 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:29:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:58 vm00 bash[28005]: audit 2026-03-10T07:29:57.042684+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-19", "mode": "writeback"}]': finished
2026-03-10T07:29:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:58 vm00 bash[28005]: cluster 2026-03-10T07:29:57.058782+0000 mon.a (mon.0) 1690 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in
2026-03-10T07:29:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:58 vm00 bash[28005]: audit 2026-03-10T07:29:57.058803+0000 mon.c (mon.2) 191 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch
2026-03-10T07:29:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:58 vm00 bash[28005]: audit 2026-03-10T07:29:57.063667+0000 mon.a (mon.0) 1691 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch
2026-03-10T07:29:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:58 vm00 bash[28005]: audit 2026-03-10T07:29:57.137243+0000 mon.b (mon.1) 195 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:29:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:58 vm00 bash[28005]: audit 2026-03-10T07:29:57.139408+0000 mon.a (mon.0) 1692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:29:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:58 vm00 bash[28005]: audit 2026-03-10T07:29:58.046573+0000 mon.a (mon.0) 1693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]': finished
2026-03-10T07:29:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:58 vm00 bash[28005]: audit 2026-03-10T07:29:58.046629+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:29:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:58 vm00 bash[28005]: audit 2026-03-10T07:29:58.049220+0000 mon.b (mon.1) 196 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19"}]: dispatch
2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:58 vm00 bash[28005]: cluster 2026-03-10T07:29:58.050871+0000 mon.a (mon.0) 1695 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in
2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:58 vm00 bash[28005]: audit 2026-03-10T07:29:58.054074+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]: dispatch
2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:58 vm00 bash[28005]: audit 2026-03-10T07:29:58.055141+0000 mon.c (mon.2) 193 : audit [INF] from='client.? 192.168.123.100:0/2215633867' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59637-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:58 vm00 bash[28005]: audit 2026-03-10T07:29:58.055594+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]: dispatch
2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:58 vm00 bash[28005]: audit 2026-03-10T07:29:58.071098+0000 mon.a (mon.0) 1697 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19"}]: dispatch
2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:58 vm00 bash[28005]: audit 2026-03-10T07:29:58.071197+0000 mon.a (mon.0) 1698 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59637-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59637-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: cluster 2026-03-10T07:29:56.601889+0000 mgr.y (mgr.24407) 204 : cluster [DBG] pgmap v228: 332 pgs: 40 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: cluster 2026-03-10T07:29:56.601889+0000 mgr.y (mgr.24407) 204 : cluster [DBG] pgmap v228: 332 pgs: 40 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: cluster 2026-03-10T07:29:56.917920+0000 mon.a (mon.0) 1688 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: cluster 2026-03-10T07:29:56.917920+0000 mon.a (mon.0) 1688 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:57.042684+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-19", "mode": "writeback"}]': finished 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:57.042684+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-19", "mode": "writeback"}]': finished 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: cluster 2026-03-10T07:29:57.058782+0000 mon.a (mon.0) 1690 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: cluster 2026-03-10T07:29:57.058782+0000 mon.a (mon.0) 1690 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:57.058803+0000 mon.c (mon.2) 191 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:57.058803+0000 mon.c (mon.2) 191 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:57.063667+0000 mon.a (mon.0) 1691 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:57.063667+0000 mon.a (mon.0) 1691 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:57.137243+0000 mon.b (mon.1) 195 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:57.137243+0000 mon.b (mon.1) 195 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:57.139408+0000 mon.a (mon.0) 1692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:57.139408+0000 mon.a (mon.0) 1692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:58.046573+0000 mon.a (mon.0) 1693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]': finished 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:58.046573+0000 mon.a (mon.0) 1693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]': finished 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:58.046629+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:58.046629+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:58.049220+0000 mon.b (mon.1) 196 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19"}]: dispatch 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:58.049220+0000 mon.b (mon.1) 196 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19"}]: dispatch 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: cluster 2026-03-10T07:29:58.050871+0000 mon.a (mon.0) 1695 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: cluster 2026-03-10T07:29:58.050871+0000 mon.a (mon.0) 1695 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:58.054074+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:58.054074+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:58.055141+0000 mon.c (mon.2) 193 : audit [INF] from='client.? 192.168.123.100:0/2215633867' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59637-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:58.055141+0000 mon.c (mon.2) 193 : audit [INF] from='client.? 192.168.123.100:0/2215633867' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59637-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:58.055594+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:58.055594+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:58.071098+0000 mon.a (mon.0) 1697 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19"}]: dispatch 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:58.071098+0000 mon.a (mon.0) 1697 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19"}]: dispatch 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:58.071197+0000 mon.a (mon.0) 1698 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59637-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:58 vm00 bash[20701]: audit 2026-03-10T07:29:58.071197+0000 mon.a (mon.0) 1698 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59637-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:29:58.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: cluster 2026-03-10T07:29:56.601889+0000 mgr.y (mgr.24407) 204 : cluster [DBG] pgmap v228: 332 pgs: 40 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:29:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: cluster 2026-03-10T07:29:56.601889+0000 mgr.y (mgr.24407) 204 : cluster [DBG] pgmap v228: 332 pgs: 40 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:29:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: cluster 2026-03-10T07:29:56.917920+0000 mon.a (mon.0) 1688 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:29:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: cluster 2026-03-10T07:29:56.917920+0000 mon.a (mon.0) 1688 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:29:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: audit 2026-03-10T07:29:57.042684+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-19", "mode": "writeback"}]': finished 2026-03-10T07:29:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: audit 2026-03-10T07:29:57.042684+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-19", "mode": "writeback"}]': finished 2026-03-10T07:29:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: cluster 2026-03-10T07:29:57.058782+0000 mon.a (mon.0) 1690 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-10T07:29:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: cluster 2026-03-10T07:29:57.058782+0000 mon.a (mon.0) 1690 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-10T07:29:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: audit 2026-03-10T07:29:57.058803+0000 mon.c (mon.2) 191 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch 2026-03-10T07:29:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: audit 2026-03-10T07:29:57.058803+0000 mon.c (mon.2) 191 : audit [INF] from='client.? 
2026-03-10T07:29:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: audit 2026-03-10T07:29:57.063667+0000 mon.a (mon.0) 1691 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]: dispatch
2026-03-10T07:29:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: audit 2026-03-10T07:29:57.137243+0000 mon.b (mon.1) 195 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:29:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: audit 2026-03-10T07:29:57.139408+0000 mon.a (mon.0) 1692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:29:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: audit 2026-03-10T07:29:58.046573+0000 mon.a (mon.0) 1693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm00-59629-31"}]': finished
2026-03-10T07:29:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: audit 2026-03-10T07:29:58.046629+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:29:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: audit 2026-03-10T07:29:58.049220+0000 mon.b (mon.1) 196 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19"}]: dispatch
2026-03-10T07:29:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: cluster 2026-03-10T07:29:58.050871+0000 mon.a (mon.0) 1695 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in
2026-03-10T07:29:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: audit 2026-03-10T07:29:58.054074+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 192.168.123.100:0/3812040926' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]: dispatch
2026-03-10T07:29:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: audit 2026-03-10T07:29:58.055141+0000 mon.c (mon.2) 193 : audit [INF] from='client.? 192.168.123.100:0/2215633867' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59637-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: audit 2026-03-10T07:29:58.055594+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]: dispatch
2026-03-10T07:29:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: audit 2026-03-10T07:29:58.071098+0000 mon.a (mon.0) 1697 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19"}]: dispatch
2026-03-10T07:29:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:58 vm03 bash[23382]: audit 2026-03-10T07:29:58.071197+0000 mon.a (mon.0) 1698 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59637-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:29:59.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:59 vm00 bash[28005]: cluster 2026-03-10T07:29:59.047042+0000 mon.a (mon.0) 1699 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:29:59.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:59 vm00 bash[28005]: audit 2026-03-10T07:29:59.050405+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]': finished
2026-03-10T07:29:59.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:59 vm00 bash[28005]: audit 2026-03-10T07:29:59.050529+0000 mon.a (mon.0) 1701 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19"}]': finished
2026-03-10T07:29:59.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:59 vm00 bash[28005]: audit 2026-03-10T07:29:59.050622+0000 mon.a (mon.0) 1702 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59637-33","app": "rados","yes_i_really_mean_it": true}]': finished
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59637-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:59.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:59 vm00 bash[28005]: cluster 2026-03-10T07:29:59.066463+0000 mon.a (mon.0) 1703 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-10T07:29:59.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:29:59 vm00 bash[28005]: cluster 2026-03-10T07:29:59.066463+0000 mon.a (mon.0) 1703 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-10T07:29:59.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:59 vm00 bash[20701]: cluster 2026-03-10T07:29:59.047042+0000 mon.a (mon.0) 1699 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:29:59.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:59 vm00 bash[20701]: cluster 2026-03-10T07:29:59.047042+0000 mon.a (mon.0) 1699 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:29:59.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:59 vm00 bash[20701]: audit 2026-03-10T07:29:59.050405+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]': finished 2026-03-10T07:29:59.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:59 vm00 bash[20701]: audit 2026-03-10T07:29:59.050405+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]': finished 2026-03-10T07:29:59.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:59 vm00 bash[20701]: audit 2026-03-10T07:29:59.050529+0000 mon.a (mon.0) 1701 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19"}]': finished 2026-03-10T07:29:59.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:59 vm00 bash[20701]: audit 2026-03-10T07:29:59.050529+0000 mon.a (mon.0) 1701 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19"}]': finished 2026-03-10T07:29:59.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:59 vm00 bash[20701]: audit 2026-03-10T07:29:59.050622+0000 mon.a (mon.0) 1702 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59637-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:59.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:59 vm00 bash[20701]: audit 2026-03-10T07:29:59.050622+0000 mon.a (mon.0) 1702 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59637-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:59.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:59 vm00 bash[20701]: cluster 2026-03-10T07:29:59.066463+0000 mon.a (mon.0) 1703 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-10T07:29:59.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:29:59 vm00 bash[20701]: cluster 2026-03-10T07:29:59.066463+0000 mon.a (mon.0) 1703 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-10T07:29:59.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:59 vm03 bash[23382]: cluster 2026-03-10T07:29:59.047042+0000 mon.a (mon.0) 1699 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:29:59.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:59 vm03 bash[23382]: cluster 2026-03-10T07:29:59.047042+0000 mon.a (mon.0) 1699 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:29:59.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:59 vm03 bash[23382]: audit 2026-03-10T07:29:59.050405+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]': finished 2026-03-10T07:29:59.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:59 vm03 bash[23382]: audit 2026-03-10T07:29:59.050405+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm00-59629-31"}]': finished 2026-03-10T07:29:59.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:59 vm03 bash[23382]: audit 2026-03-10T07:29:59.050529+0000 mon.a (mon.0) 1701 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19"}]': finished 2026-03-10T07:29:59.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:59 vm03 bash[23382]: audit 2026-03-10T07:29:59.050529+0000 mon.a (mon.0) 1701 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-19"}]': finished 2026-03-10T07:29:59.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:59 vm03 bash[23382]: audit 2026-03-10T07:29:59.050622+0000 mon.a (mon.0) 1702 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59637-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:59.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:59 vm03 bash[23382]: audit 2026-03-10T07:29:59.050622+0000 mon.a (mon.0) 1702 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm00-59637-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:29:59.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:59 vm03 bash[23382]: cluster 2026-03-10T07:29:59.066463+0000 mon.a (mon.0) 1703 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-10T07:29:59.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:29:59 vm03 bash[23382]: cluster 2026-03-10T07:29:59.066463+0000 mon.a (mon.0) 1703 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-10T07:30:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: cluster 2026-03-10T07:29:58.602413+0000 mgr.y (mgr.24407) 205 : cluster [DBG] pgmap v231: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:30:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: cluster 2026-03-10T07:29:58.602413+0000 mgr.y (mgr.24407) 205 : cluster [DBG] pgmap v231: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:30:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.079218+0000 mon.c (mon.2) 194 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]: dispatch 2026-03-10T07:30:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.079218+0000 mon.c (mon.2) 194 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]: dispatch 2026-03-10T07:30:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.082098+0000 mon.a (mon.0) 1704 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]: dispatch 2026-03-10T07:30:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.082098+0000 mon.a (mon.0) 1704 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]: dispatch 2026-03-10T07:30:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.083023+0000 mon.c (mon.2) 195 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59629-32"}]: dispatch 2026-03-10T07:30:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.083023+0000 mon.c (mon.2) 195 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59629-32"}]: dispatch 2026-03-10T07:30:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.090719+0000 mon.a (mon.0) 1705 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59629-32"}]: dispatch 2026-03-10T07:30:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.090719+0000 mon.a (mon.0) 1705 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59629-32"}]: dispatch 2026-03-10T07:30:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.095306+0000 mon.c (mon.2) 196 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59629-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.095306+0000 mon.c (mon.2) 196 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59629-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.108996+0000 mon.a (mon.0) 1706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59629-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.108996+0000 mon.a (mon.0) 1706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59629-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.561947+0000 mon.c (mon.2) 197 : audit [INF] from='client.? 192.168.123.100:0/2215633867' entity='client.admin' cmd=[{ 2026-03-10T07:30:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.561947+0000 mon.c (mon.2) 197 : audit [INF] from='client.? 
192.168.123.100:0/2215633867' entity='client.admin' cmd=[{ 2026-03-10T07:30:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562041+0000 mon.c (mon.2) 198 : audit [INF] "prefix": "osd pool set", 2026-03-10T07:30:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562041+0000 mon.c (mon.2) 198 : audit [INF] "prefix": "osd pool set", 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562104+0000 mon.c (mon.2) 199 : audit [INF] "pool": "PoolEIOFlag_vm00-59637-33", 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562104+0000 mon.c (mon.2) 199 : audit [INF] "pool": "PoolEIOFlag_vm00-59637-33", 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562162+0000 mon.c (mon.2) 200 : audit [INF] "var": "eio", 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562162+0000 mon.c (mon.2) 200 : audit [INF] "var": "eio", 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562224+0000 mon.c (mon.2) 201 : audit [INF] "val": "true" 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562224+0000 mon.c (mon.2) 201 : audit [INF] "val": "true" 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562283+0000 mon.c (mon.2) 202 : audit [INF] }]: dispatch 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562283+0000 mon.c (mon.2) 202 : audit [INF] }]: dispatch 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562805+0000 mon.a (mon.0) 1707 : audit [INF] from='client.? ' entity='client.admin' cmd=[{ 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562805+0000 mon.a (mon.0) 1707 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{ 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562822+0000 mon.a (mon.0) 1708 : audit [INF] "prefix": "osd pool set", 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562822+0000 mon.a (mon.0) 1708 : audit [INF] "prefix": "osd pool set", 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562830+0000 mon.a (mon.0) 1709 : audit [INF] "pool": "PoolEIOFlag_vm00-59637-33", 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562830+0000 mon.a (mon.0) 1709 : audit [INF] "pool": "PoolEIOFlag_vm00-59637-33", 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562838+0000 mon.a (mon.0) 1710 : audit [INF] "var": "eio", 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562838+0000 mon.a (mon.0) 1710 : audit [INF] "var": "eio", 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562845+0000 mon.a (mon.0) 1711 : audit [INF] "val": "true" 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562845+0000 mon.a (mon.0) 1711 : audit [INF] "val": "true" 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562852+0000 mon.a (mon.0) 1712 : audit [INF] }]: dispatch 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:29:59.562852+0000 mon.a (mon.0) 1712 : audit [INF] }]: dispatch 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: cluster 2026-03-10T07:30:00.000136+0000 mon.a (mon.0) 1713 : cluster [WRN] Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: cluster 2026-03-10T07:30:00.000136+0000 mon.a (mon.0) 1713 : cluster [WRN] Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: cluster 2026-03-10T07:30:00.000166+0000 mon.a (mon.0) 1714 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: cluster 2026-03-10T07:30:00.000166+0000 mon.a (mon.0) 1714 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: cluster 2026-03-10T07:30:00.000182+0000 mon.a (mon.0) 1715 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: cluster 2026-03-10T07:30:00.000182+0000 mon.a (mon.0) 1715 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: cluster 2026-03-10T07:30:00.000187+0000 mon.a (mon.0) 1716 : cluster [WRN] application not enabled 
2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: cluster 2026-03-10T07:30:00.000193+0000 mon.a (mon.0) 1717 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60682-1'
2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: cluster 2026-03-10T07:30:00.000197+0000 mon.a (mon.0) 1718 : cluster [WRN] application not enabled on pool 'PoolEIOFlag_vm00-59637-33'
2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: cluster 2026-03-10T07:30:00.000203+0000 mon.a (mon.0) 1719 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:30:00.063465+0000 mon.a (mon.0) 1720 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59629-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:30:00.063580+0000 mon.a (mon.0) 1721 : audit [INF] from='client.? ' entity='client.admin' cmd='[{
2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:30:00.063626+0000 mon.a (mon.0) 1722 : audit [INF] "prefix": "osd pool set",
2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:30:00.063667+0000 mon.a (mon.0) 1723 : audit [INF] "pool": "PoolEIOFlag_vm00-59637-33",
2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:30:00.063710+0000 mon.a (mon.0) 1724 : audit [INF] "var": "eio",
2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:30:00.063751+0000 mon.a (mon.0) 1725 : audit [INF] "val": "true"
2026-03-10T07:30:00.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: cluster 2026-03-10T07:29:58.602413+0000 mgr.y (mgr.24407) 205 : cluster [DBG] pgmap v231: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:29:59.079218+0000 mon.c (mon.2) 194 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]: dispatch
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:29:59.082098+0000 mon.a (mon.0) 1704 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]: dispatch
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:29:59.083023+0000 mon.c (mon.2) 195 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59629-32"}]: dispatch
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:29:59.090719+0000 mon.a (mon.0) 1705 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59629-32"}]: dispatch
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:29:59.095306+0000 mon.c (mon.2) 196 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59629-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:29:59.108996+0000 mon.a (mon.0) 1706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59629-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:29:59.561947+0000 mon.c (mon.2) 197 : audit [INF] from='client.? 192.168.123.100:0/2215633867' entity='client.admin' cmd=[{
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:29:59.562041+0000 mon.c (mon.2) 198 : audit [INF] "prefix": "osd pool set",
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:29:59.562104+0000 mon.c (mon.2) 199 : audit [INF] "pool": "PoolEIOFlag_vm00-59637-33",
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:29:59.562162+0000 mon.c (mon.2) 200 : audit [INF] "var": "eio",
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:29:59.562224+0000 mon.c (mon.2) 201 : audit [INF] "val": "true"
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:29:59.562283+0000 mon.c (mon.2) 202 : audit [INF] }]: dispatch
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:29:59.562805+0000 mon.a (mon.0) 1707 : audit [INF] from='client.? ' entity='client.admin' cmd=[{
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:29:59.562822+0000 mon.a (mon.0) 1708 : audit [INF] "prefix": "osd pool set",
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:29:59.562830+0000 mon.a (mon.0) 1709 : audit [INF] "pool": "PoolEIOFlag_vm00-59637-33",
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:29:59.562838+0000 mon.a (mon.0) 1710 : audit [INF] "var": "eio",
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:29:59.562845+0000 mon.a (mon.0) 1711 : audit [INF] "val": "true"
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:29:59.562852+0000 mon.a (mon.0) 1712 : audit [INF] }]: dispatch
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: cluster 2026-03-10T07:30:00.000136+0000 mon.a (mon.0) 1713 : cluster [WRN] Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: cluster 2026-03-10T07:30:00.000166+0000 mon.a (mon.0) 1714 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: cluster 2026-03-10T07:30:00.000182+0000 mon.a (mon.0) 1715 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio'
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: cluster 2026-03-10T07:30:00.000187+0000 mon.a (mon.0) 1716 : cluster [WRN] application not enabled on pool 'WatchNotifyvm00-60651-1'
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: cluster 2026-03-10T07:30:00.000193+0000 mon.a (mon.0) 1717 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60682-1'
2026-03-10T07:30:00.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: cluster 2026-03-10T07:30:00.000197+0000 mon.a (mon.0) 1718 : cluster [WRN] application not enabled on pool 'PoolEIOFlag_vm00-59637-33'
2026-03-10T07:30:00.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: cluster 2026-03-10T07:30:00.000203+0000 mon.a (mon.0) 1719 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
2026-03-10T07:30:00.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:30:00.063465+0000 mon.a (mon.0) 1720 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59629-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:30:00.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:30:00.063580+0000 mon.a (mon.0) 1721 : audit [INF] from='client.? ' entity='client.admin' cmd='[{
2026-03-10T07:30:00.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:30:00.063626+0000 mon.a (mon.0) 1722 : audit [INF] "prefix": "osd pool set",
2026-03-10T07:30:00.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:30:00.063667+0000 mon.a (mon.0) 1723 : audit [INF] "pool": "PoolEIOFlag_vm00-59637-33",
2026-03-10T07:30:00.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:30:00.063710+0000 mon.a (mon.0) 1724 : audit [INF] "var": "eio",
2026-03-10T07:30:00.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:30:00.063751+0000 mon.a (mon.0) 1725 : audit [INF] "val": "true"
2026-03-10T07:30:00.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:30:00.063792+0000 mon.a (mon.0) 1726 : audit [INF] }]': finished
2026-03-10T07:30:00.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: cluster 2026-03-10T07:30:00.068278+0000 mon.a (mon.0) 1727 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in
2026-03-10T07:30:00.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:30:00.077041+0000 mon.c (mon.2) 203 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59629-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59629-32"}]: dispatch
2026-03-10T07:30:00.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:00 vm00 bash[28005]: audit 2026-03-10T07:30:00.088406+0000 mon.a (mon.0) 1728 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59629-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59629-32"}]: dispatch
2026-03-10T07:30:00.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:30:00.063792+0000 mon.a (mon.0) 1726 : audit [INF] }]': finished
2026-03-10T07:30:00.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: cluster 2026-03-10T07:30:00.068278+0000 mon.a (mon.0) 1727 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in
2026-03-10T07:30:00.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:30:00.077041+0000 mon.c (mon.2) 203 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59629-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59629-32"}]: dispatch
2026-03-10T07:30:00.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:00 vm00 bash[20701]: audit 2026-03-10T07:30:00.088406+0000 mon.a (mon.0) 1728 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59629-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59629-32"}]: dispatch
2026-03-10T07:30:00.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: cluster 2026-03-10T07:29:58.602413+0000 mgr.y (mgr.24407) 205 : cluster [DBG] pgmap v231: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:30:00.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:29:59.079218+0000 mon.c (mon.2) 194 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]: dispatch
2026-03-10T07:30:00.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:29:59.082098+0000 mon.a (mon.0) 1704 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]: dispatch
2026-03-10T07:30:00.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:29:59.083023+0000 mon.c (mon.2) 195 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59629-32"}]: dispatch
2026-03-10T07:30:00.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:29:59.090719+0000 mon.a (mon.0) 1705 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59629-32"}]: dispatch
2026-03-10T07:30:00.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:29:59.095306+0000 mon.c (mon.2) 196 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59629-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:29:59.108996+0000 mon.a (mon.0) 1706 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59629-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:29:59.561947+0000 mon.c (mon.2) 197 : audit [INF] from='client.? 192.168.123.100:0/2215633867' entity='client.admin' cmd=[{
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:29:59.562041+0000 mon.c (mon.2) 198 : audit [INF] "prefix": "osd pool set",
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:29:59.562104+0000 mon.c (mon.2) 199 : audit [INF] "pool": "PoolEIOFlag_vm00-59637-33",
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:29:59.562162+0000 mon.c (mon.2) 200 : audit [INF] "var": "eio",
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:29:59.562224+0000 mon.c (mon.2) 201 : audit [INF] "val": "true"
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:29:59.562283+0000 mon.c (mon.2) 202 : audit [INF] }]: dispatch
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:29:59.562805+0000 mon.a (mon.0) 1707 : audit [INF] from='client.? ' entity='client.admin' cmd=[{
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:29:59.562822+0000 mon.a (mon.0) 1708 : audit [INF] "prefix": "osd pool set",
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:29:59.562830+0000 mon.a (mon.0) 1709 : audit [INF] "pool": "PoolEIOFlag_vm00-59637-33",
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:29:59.562838+0000 mon.a (mon.0) 1710 : audit [INF] "var": "eio",
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:29:59.562845+0000 mon.a (mon.0) 1711 : audit [INF] "val": "true"
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:29:59.562852+0000 mon.a (mon.0) 1712 : audit [INF] }]: dispatch
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: cluster 2026-03-10T07:30:00.000136+0000 mon.a (mon.0) 1713 : cluster [WRN] Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: cluster 2026-03-10T07:30:00.000166+0000 mon.a (mon.0) 1714 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: cluster 2026-03-10T07:30:00.000182+0000 mon.a (mon.0) 1715 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio'
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: cluster 2026-03-10T07:30:00.000187+0000 mon.a (mon.0) 1716 : cluster [WRN] application not enabled on pool 'WatchNotifyvm00-60651-1'
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: cluster 2026-03-10T07:30:00.000193+0000 mon.a (mon.0) 1717 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60682-1'
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: cluster 2026-03-10T07:30:00.000197+0000 mon.a (mon.0) 1718 : cluster [WRN] application not enabled on pool 'PoolEIOFlag_vm00-59637-33'
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: cluster 2026-03-10T07:30:00.000203+0000 mon.a (mon.0) 1719 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:30:00.063465+0000 mon.a (mon.0) 1720 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm00-59629-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:30:00.063580+0000 mon.a (mon.0) 1721 : audit [INF] from='client.? ' entity='client.admin' cmd='[{
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:30:00.063626+0000 mon.a (mon.0) 1722 : audit [INF] "prefix": "osd pool set",
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:30:00.063667+0000 mon.a (mon.0) 1723 : audit [INF] "pool": "PoolEIOFlag_vm00-59637-33",
2026-03-10T07:30:00.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:30:00.063710+0000 mon.a (mon.0) 1724 : audit [INF] "var": "eio",
2026-03-10T07:30:00.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:30:00.063751+0000 mon.a (mon.0) 1725 : audit [INF] "val": "true"
2026-03-10T07:30:00.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:30:00.063792+0000 mon.a (mon.0) 1726 : audit [INF] }]': finished
2026-03-10T07:30:00.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: cluster 2026-03-10T07:30:00.068278+0000 mon.a (mon.0) 1727 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in
2026-03-10T07:30:00.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:30:00.077041+0000 mon.c (mon.2) 203 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59629-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59629-32"}]: dispatch
2026-03-10T07:30:00.516 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:00 vm03 bash[23382]: audit 2026-03-10T07:30:00.088406+0000 mon.a (mon.0) 1728 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59629-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59629-32"}]: dispatch
2026-03-10T07:30:01.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:01 vm00 bash[20701]: cluster 2026-03-10T07:30:01.099131+0000 mon.a (mon.0) 1729 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:30:01.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:01 vm00 bash[20701]: audit 2026-03-10T07:30:01.134343+0000 mon.b (mon.1) 197 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:01.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:01 vm00 bash[20701]: cluster 2026-03-10T07:30:01.147703+0000 mon.a (mon.0) 1730 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in
2026-03-10T07:30:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:01 vm00 bash[28005]: cluster 2026-03-10T07:30:01.099131+0000 mon.a (mon.0) 1729 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:30:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:01 vm00 bash[28005]: audit 2026-03-10T07:30:01.134343+0000 mon.b (mon.1) 197 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:01 vm00 bash[28005]: cluster 2026-03-10T07:30:01.147703+0000 mon.a (mon.0) 1730 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in
2026-03-10T07:30:01.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:30:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:30:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:30:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:01 vm03 bash[23382]: cluster 2026-03-10T07:30:01.099131+0000 mon.a (mon.0) 1729 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:30:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:01 vm03 bash[23382]: audit 2026-03-10T07:30:01.134343+0000 mon.b (mon.1) 197 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:01.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:01 vm03 bash[23382]: cluster 2026-03-10T07:30:01.147703+0000 mon.a (mon.0) 1730 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in
2026-03-10T07:30:02.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:02 vm03 bash[23382]: cluster 2026-03-10T07:30:00.602763+0000 mgr.y (mgr.24407) 206 : cluster [DBG] pgmap v234: 292 pgs: 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 16 KiB/s wr, 17 op/s
2026-03-10T07:30:02.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:02 vm03 bash[23382]: audit 2026-03-10T07:30:01.163509+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:02.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:02 vm03 bash[23382]: audit 2026-03-10T07:30:02.118470+0000 mon.a (mon.0) 1732 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59629-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59629-32"}]': finished
2026-03-10T07:30:02.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:02 vm03 bash[23382]: audit 2026-03-10T07:30:02.118638+0000 mon.a (mon.0) 1733 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-21","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:30:02.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:02 vm03 bash[23382]: cluster 2026-03-10T07:30:02.129946+0000 mon.a (mon.0) 1734 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in
2026-03-10T07:30:02.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:02 vm03 bash[23382]: audit 2026-03-10T07:30:02.156137+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 192.168.123.100:0/1404135922' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59637-34","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:02.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:02 vm03 bash[23382]: audit 2026-03-10T07:30:02.169341+0000 mon.a (mon.0) 1735 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59637-34","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:02.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:02 vm03 bash[23382]: audit 2026-03-10T07:30:02.173744+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:30:02.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:02 vm03 bash[23382]: audit 2026-03-10T07:30:02.175993+0000 mon.a (mon.0) 1736 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:02 vm00 bash[20701]: cluster 2026-03-10T07:30:00.602763+0000 mgr.y (mgr.24407) 206 : cluster [DBG] pgmap v234: 292 pgs: 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 16 KiB/s wr, 17 op/s
2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:02 vm00 bash[20701]: audit 2026-03-10T07:30:01.163509+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:02 vm00 bash[20701]: audit 2026-03-10T07:30:01.163509+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:02 vm00 bash[20701]: audit 2026-03-10T07:30:02.118470+0000 mon.a (mon.0) 1732 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59629-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59629-32"}]': finished 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:02 vm00 bash[20701]: audit 2026-03-10T07:30:02.118470+0000 mon.a (mon.0) 1732 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59629-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59629-32"}]': finished 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:02 vm00 bash[20701]: audit 2026-03-10T07:30:02.118638+0000 mon.a (mon.0) 1733 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:02 vm00 bash[20701]: audit 2026-03-10T07:30:02.118638+0000 mon.a (mon.0) 1733 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:02 vm00 bash[20701]: cluster 2026-03-10T07:30:02.129946+0000 mon.a (mon.0) 1734 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:02 vm00 bash[20701]: cluster 2026-03-10T07:30:02.129946+0000 mon.a (mon.0) 1734 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:02 vm00 bash[20701]: audit 2026-03-10T07:30:02.156137+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 192.168.123.100:0/1404135922' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59637-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:02 vm00 bash[20701]: audit 2026-03-10T07:30:02.156137+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 192.168.123.100:0/1404135922' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59637-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:02 vm00 bash[20701]: audit 2026-03-10T07:30:02.169341+0000 mon.a (mon.0) 1735 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59637-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:02 vm00 bash[20701]: audit 2026-03-10T07:30:02.169341+0000 mon.a (mon.0) 1735 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59637-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:02 vm00 bash[20701]: audit 2026-03-10T07:30:02.173744+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:02 vm00 bash[20701]: audit 2026-03-10T07:30:02.173744+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:02 vm00 bash[20701]: audit 2026-03-10T07:30:02.175993+0000 mon.a (mon.0) 1736 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:02 vm00 bash[20701]: audit 2026-03-10T07:30:02.175993+0000 mon.a (mon.0) 1736 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:02 vm00 bash[28005]: cluster 2026-03-10T07:30:00.602763+0000 mgr.y (mgr.24407) 206 : cluster [DBG] pgmap v234: 292 pgs: 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 16 KiB/s wr, 17 op/s 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:02 vm00 bash[28005]: cluster 2026-03-10T07:30:00.602763+0000 mgr.y (mgr.24407) 206 : cluster [DBG] pgmap v234: 292 pgs: 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 16 KiB/s wr, 17 op/s 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:02 vm00 bash[28005]: audit 2026-03-10T07:30:01.163509+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:02 vm00 bash[28005]: audit 2026-03-10T07:30:01.163509+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:02 vm00 bash[28005]: audit 2026-03-10T07:30:02.118470+0000 mon.a (mon.0) 1732 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59629-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59629-32"}]': finished 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:02 vm00 bash[28005]: audit 2026-03-10T07:30:02.118470+0000 mon.a (mon.0) 1732 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm00-59629-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm00-59629-32"}]': finished 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:02 vm00 bash[28005]: audit 2026-03-10T07:30:02.118638+0000 mon.a (mon.0) 1733 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:02 vm00 bash[28005]: audit 2026-03-10T07:30:02.118638+0000 mon.a (mon.0) 1733 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:02 vm00 bash[28005]: cluster 2026-03-10T07:30:02.129946+0000 mon.a (mon.0) 1734 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:02 vm00 bash[28005]: cluster 2026-03-10T07:30:02.129946+0000 mon.a (mon.0) 1734 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:02 vm00 bash[28005]: audit 2026-03-10T07:30:02.156137+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 192.168.123.100:0/1404135922' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59637-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:02 vm00 bash[28005]: audit 2026-03-10T07:30:02.156137+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 192.168.123.100:0/1404135922' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59637-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:02.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:02 vm00 bash[28005]: audit 2026-03-10T07:30:02.169341+0000 mon.a (mon.0) 1735 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59637-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:02.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:02 vm00 bash[28005]: audit 2026-03-10T07:30:02.169341+0000 mon.a (mon.0) 1735 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59637-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:02.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:02 vm00 bash[28005]: audit 2026-03-10T07:30:02.173744+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:02.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:02 vm00 bash[28005]: audit 2026-03-10T07:30:02.173744+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:02.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:02 vm00 bash[28005]: audit 2026-03-10T07:30:02.175993+0000 mon.a (mon.0) 1736 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:02.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:02 vm00 bash[28005]: audit 2026-03-10T07:30:02.175993+0000 mon.a (mon.0) 1736 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:03.514 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:30:03 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:30:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:04 vm03 bash[23382]: cluster 2026-03-10T07:30:02.603102+0000 mgr.y (mgr.24407) 207 : cluster [DBG] pgmap v237: 332 pgs: 72 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:30:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:04 vm03 bash[23382]: cluster 2026-03-10T07:30:02.603102+0000 mgr.y (mgr.24407) 207 : cluster [DBG] pgmap v237: 332 pgs: 72 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:30:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:04 vm03 bash[23382]: audit 2026-03-10T07:30:03.045871+0000 mgr.y (mgr.24407) 208 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:04 vm03 bash[23382]: audit 2026-03-10T07:30:03.045871+0000 mgr.y (mgr.24407) 208 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:04 vm03 bash[23382]: audit 2026-03-10T07:30:03.124761+0000 mon.a (mon.0) 1737 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59637-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:04 vm03 bash[23382]: audit 2026-03-10T07:30:03.124761+0000 mon.a (mon.0) 1737 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59637-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:04 vm03 bash[23382]: audit 2026-03-10T07:30:03.124924+0000 mon.a (mon.0) 1738 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:04 vm03 bash[23382]: audit 2026-03-10T07:30:03.124924+0000 mon.a (mon.0) 1738 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:04 vm03 bash[23382]: audit 2026-03-10T07:30:03.127832+0000 mon.b (mon.1) 200 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-21"}]: dispatch 2026-03-10T07:30:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:04 vm03 bash[23382]: audit 2026-03-10T07:30:03.127832+0000 mon.b (mon.1) 200 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-21"}]: dispatch 2026-03-10T07:30:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:04 vm03 bash[23382]: cluster 2026-03-10T07:30:03.145343+0000 mon.a (mon.0) 1739 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-10T07:30:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:04 vm03 bash[23382]: cluster 2026-03-10T07:30:03.145343+0000 mon.a (mon.0) 1739 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-10T07:30:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:04 vm03 bash[23382]: audit 2026-03-10T07:30:03.157305+0000 mon.a (mon.0) 1740 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-21"}]: dispatch 2026-03-10T07:30:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:04 vm03 bash[23382]: audit 2026-03-10T07:30:03.157305+0000 mon.a (mon.0) 1740 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-21"}]: dispatch 2026-03-10T07:30:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:04 vm03 bash[23382]: audit 2026-03-10T07:30:03.469087+0000 mon.c (mon.2) 204 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:30:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:04 vm03 bash[23382]: audit 2026-03-10T07:30:03.469087+0000 mon.c (mon.2) 204 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:30:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:04 vm03 bash[23382]: audit 2026-03-10T07:30:03.793340+0000 mon.a (mon.0) 1741 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:30:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:04 vm03 bash[23382]: audit 2026-03-10T07:30:03.793340+0000 mon.a (mon.0) 1741 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:30:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:04 vm03 bash[23382]: audit 2026-03-10T07:30:03.801098+0000 mon.a (mon.0) 1742 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:30:04.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:04 vm03 bash[23382]: audit 2026-03-10T07:30:03.801098+0000 mon.a (mon.0) 1742 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:04 vm00 bash[20701]: cluster 2026-03-10T07:30:02.603102+0000 mgr.y (mgr.24407) 207 : cluster [DBG] pgmap v237: 332 pgs: 72 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:04 vm00 bash[20701]: cluster 2026-03-10T07:30:02.603102+0000 mgr.y (mgr.24407) 207 : cluster [DBG] pgmap v237: 332 pgs: 72 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:04 vm00 bash[20701]: audit 2026-03-10T07:30:03.045871+0000 mgr.y (mgr.24407) 208 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:04 vm00 bash[20701]: audit 2026-03-10T07:30:03.045871+0000 mgr.y (mgr.24407) 208 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:04 vm00 bash[20701]: audit 2026-03-10T07:30:03.124761+0000 mon.a (mon.0) 1737 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59637-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:04 vm00 bash[20701]: audit 2026-03-10T07:30:03.124761+0000 mon.a (mon.0) 1737 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59637-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:04 vm00 bash[20701]: audit 2026-03-10T07:30:03.124924+0000 mon.a (mon.0) 1738 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:04 vm00 bash[20701]: audit 2026-03-10T07:30:03.124924+0000 mon.a (mon.0) 1738 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:04 vm00 bash[20701]: audit 2026-03-10T07:30:03.127832+0000 mon.b (mon.1) 200 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-21"}]: dispatch 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:04 vm00 bash[20701]: audit 2026-03-10T07:30:03.127832+0000 mon.b (mon.1) 200 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-21"}]: dispatch 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:04 vm00 bash[20701]: cluster 2026-03-10T07:30:03.145343+0000 mon.a (mon.0) 1739 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:04 vm00 bash[20701]: cluster 2026-03-10T07:30:03.145343+0000 mon.a (mon.0) 1739 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:04 vm00 bash[20701]: audit 2026-03-10T07:30:03.157305+0000 mon.a (mon.0) 1740 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-21"}]: dispatch 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:04 vm00 bash[20701]: audit 2026-03-10T07:30:03.157305+0000 mon.a (mon.0) 1740 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-21"}]: dispatch 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:04 vm00 bash[20701]: audit 2026-03-10T07:30:03.469087+0000 mon.c (mon.2) 204 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:04 vm00 bash[20701]: audit 2026-03-10T07:30:03.469087+0000 mon.c (mon.2) 204 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:04 vm00 bash[20701]: audit 2026-03-10T07:30:03.793340+0000 mon.a (mon.0) 1741 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:04 vm00 bash[20701]: audit 2026-03-10T07:30:03.793340+0000 mon.a (mon.0) 1741 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:04 vm00 bash[20701]: audit 2026-03-10T07:30:03.801098+0000 mon.a (mon.0) 1742 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:04 vm00 bash[20701]: audit 2026-03-10T07:30:03.801098+0000 mon.a (mon.0) 1742 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:04 vm00 bash[28005]: cluster 2026-03-10T07:30:02.603102+0000 mgr.y (mgr.24407) 207 : cluster [DBG] pgmap v237: 332 pgs: 72 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:04 vm00 bash[28005]: cluster 2026-03-10T07:30:02.603102+0000 mgr.y (mgr.24407) 207 : cluster [DBG] pgmap v237: 332 pgs: 72 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 686 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:04 vm00 bash[28005]: audit 2026-03-10T07:30:03.045871+0000 mgr.y (mgr.24407) 208 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:04 vm00 bash[28005]: audit 2026-03-10T07:30:03.045871+0000 mgr.y (mgr.24407) 208 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:04 vm00 bash[28005]: audit 2026-03-10T07:30:03.124761+0000 mon.a (mon.0) 1737 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59637-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:04.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:04 vm00 bash[28005]: audit 2026-03-10T07:30:03.124761+0000 mon.a (mon.0) 1737 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm00-59637-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:04.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:04 vm00 bash[28005]: audit 2026-03-10T07:30:03.124924+0000 mon.a (mon.0) 1738 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:04.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:04 vm00 bash[28005]: audit 2026-03-10T07:30:03.124924+0000 mon.a (mon.0) 1738 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:04.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:04 vm00 bash[28005]: audit 2026-03-10T07:30:03.127832+0000 mon.b (mon.1) 200 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-21"}]: dispatch 2026-03-10T07:30:04.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:04 vm00 bash[28005]: audit 2026-03-10T07:30:03.127832+0000 mon.b (mon.1) 200 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-21"}]: dispatch 2026-03-10T07:30:04.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:04 vm00 bash[28005]: cluster 2026-03-10T07:30:03.145343+0000 mon.a (mon.0) 1739 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-10T07:30:04.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:04 vm00 bash[28005]: cluster 2026-03-10T07:30:03.145343+0000 mon.a (mon.0) 1739 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-10T07:30:04.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:04 vm00 bash[28005]: audit 2026-03-10T07:30:03.157305+0000 mon.a (mon.0) 1740 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-21"}]: dispatch 2026-03-10T07:30:04.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:04 vm00 bash[28005]: audit 2026-03-10T07:30:03.157305+0000 mon.a (mon.0) 1740 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-21"}]: dispatch 2026-03-10T07:30:04.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:04 vm00 bash[28005]: audit 2026-03-10T07:30:03.469087+0000 mon.c (mon.2) 204 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:30:04.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:04 vm00 bash[28005]: audit 2026-03-10T07:30:03.469087+0000 mon.c (mon.2) 204 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:30:04.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:04 vm00 bash[28005]: audit 2026-03-10T07:30:03.793340+0000 mon.a (mon.0) 1741 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:30:04.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:04 vm00 bash[28005]: audit 2026-03-10T07:30:03.793340+0000 mon.a (mon.0) 1741 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:30:04.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:04 vm00 bash[28005]: audit 2026-03-10T07:30:03.801098+0000 mon.a (mon.0) 1742 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:30:04.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:04 vm00 bash[28005]: audit 2026-03-10T07:30:03.801098+0000 mon.a (mon.0) 1742 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:30:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:04.130273+0000 mon.a (mon.0) 1743 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-21"}]': finished 2026-03-10T07:30:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:04.130273+0000 mon.a (mon.0) 1743 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-21"}]': finished 2026-03-10T07:30:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: cluster 2026-03-10T07:30:04.135199+0000 mon.a (mon.0) 1744 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-10T07:30:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: cluster 2026-03-10T07:30:04.135199+0000 mon.a (mon.0) 1744 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-10T07:30:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:04.137126+0000 mon.b (mon.1) 201 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-21", "mode": "writeback"}]: dispatch 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:04.137126+0000 mon.b (mon.1) 201 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-21", "mode": "writeback"}]: dispatch 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:04.143465+0000 mon.a (mon.0) 1745 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-21", "mode": "writeback"}]: dispatch 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:04.143465+0000 mon.a (mon.0) 1745 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-21", "mode": "writeback"}]: dispatch 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:04.156904+0000 mon.c (mon.2) 205 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]: dispatch 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:04.156904+0000 mon.c (mon.2) 205 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]: dispatch 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:04.200240+0000 mon.a (mon.0) 1746 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]: dispatch 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:04.200240+0000 mon.a (mon.0) 1746 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]: dispatch 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:04.217936+0000 mon.c (mon.2) 206 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:04.217936+0000 mon.c (mon.2) 206 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:04.218637+0000 mon.c (mon.2) 207 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:04.218637+0000 mon.c (mon.2) 207 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:04.225428+0000 mon.a (mon.0) 1747 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:04.225428+0000 mon.a (mon.0) 1747 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: cluster 2026-03-10T07:30:05.130472+0000 mon.a (mon.0) 1748 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets 
(CACHE_POOL_NO_HIT_SET) 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: cluster 2026-03-10T07:30:05.130472+0000 mon.a (mon.0) 1748 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:05.134147+0000 mon.a (mon.0) 1749 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-21", "mode": "writeback"}]': finished 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:05.134147+0000 mon.a (mon.0) 1749 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-21", "mode": "writeback"}]': finished 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:05.134307+0000 mon.a (mon.0) 1750 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]': finished 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:05.134307+0000 mon.a (mon.0) 1750 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]': finished 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: cluster 2026-03-10T07:30:05.150603+0000 mon.a (mon.0) 1751 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: cluster 2026-03-10T07:30:05.150603+0000 mon.a (mon.0) 1751 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:05.151321+0000 mon.b (mon.1) 202 : audit [INF] from='client.? 192.168.123.100:0/1623935189' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59637-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:05.151321+0000 mon.b (mon.1) 202 : audit [INF] from='client.? 192.168.123.100:0/1623935189' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59637-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:05.153143+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59629-32"}]: dispatch 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:05.153143+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59629-32"}]: dispatch 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:05.155476+0000 mon.a (mon.0) 1752 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59637-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:05.155476+0000 mon.a (mon.0) 1752 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59637-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:05.155577+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59629-32"}]: dispatch 2026-03-10T07:30:05.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:05 vm03 bash[23382]: audit 2026-03-10T07:30:05.155577+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59629-32"}]: dispatch 2026-03-10T07:30:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:04.130273+0000 mon.a (mon.0) 1743 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-21"}]': finished 2026-03-10T07:30:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:04.130273+0000 mon.a (mon.0) 1743 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-21"}]': finished 2026-03-10T07:30:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: cluster 2026-03-10T07:30:04.135199+0000 mon.a (mon.0) 1744 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-10T07:30:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: cluster 2026-03-10T07:30:04.135199+0000 mon.a (mon.0) 1744 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-10T07:30:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:04.137126+0000 mon.b (mon.1) 201 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-21", "mode": "writeback"}]: dispatch 2026-03-10T07:30:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:04.137126+0000 mon.b (mon.1) 201 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-21", "mode": "writeback"}]: dispatch 2026-03-10T07:30:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:04.143465+0000 mon.a (mon.0) 1745 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-21", "mode": "writeback"}]: dispatch 2026-03-10T07:30:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:04.143465+0000 mon.a (mon.0) 1745 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-21", "mode": "writeback"}]: dispatch 2026-03-10T07:30:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:04.156904+0000 mon.c (mon.2) 205 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]: dispatch 2026-03-10T07:30:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:04.156904+0000 mon.c (mon.2) 205 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]: dispatch 2026-03-10T07:30:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:04.200240+0000 mon.a (mon.0) 1746 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]: dispatch 2026-03-10T07:30:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:04.200240+0000 mon.a (mon.0) 1746 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]: dispatch 2026-03-10T07:30:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:04.217936+0000 mon.c (mon.2) 206 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:30:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:04.217936+0000 mon.c (mon.2) 206 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:30:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:04.218637+0000 mon.c (mon.2) 207 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:30:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:04.218637+0000 mon.c (mon.2) 207 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:30:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:04.225428+0000 mon.a (mon.0) 1747 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:05 vm00 bash[28005]: audit 2026-03-10T07:30:04.130273+0000 mon.a (mon.0) 1743 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-21"}]': finished 2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:05 vm00 bash[28005]: audit 2026-03-10T07:30:04.130273+0000 mon.a (mon.0) 1743 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-21"}]': finished
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:05 vm00 bash[28005]: cluster 2026-03-10T07:30:04.135199+0000 mon.a (mon.0) 1744 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:05 vm00 bash[28005]: audit 2026-03-10T07:30:04.137126+0000 mon.b (mon.1) 201 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-21", "mode": "writeback"}]: dispatch
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:05 vm00 bash[28005]: audit 2026-03-10T07:30:04.143465+0000 mon.a (mon.0) 1745 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-21", "mode": "writeback"}]: dispatch
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:05 vm00 bash[28005]: audit 2026-03-10T07:30:04.156904+0000 mon.c (mon.2) 205 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]: dispatch
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:05 vm00 bash[28005]: audit 2026-03-10T07:30:04.200240+0000 mon.a (mon.0) 1746 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]: dispatch
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:05 vm00 bash[28005]: audit 2026-03-10T07:30:04.217936+0000 mon.c (mon.2) 206 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:05 vm00 bash[28005]: audit 2026-03-10T07:30:04.218637+0000 mon.c (mon.2) 207 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:05 vm00 bash[28005]: audit 2026-03-10T07:30:04.225428+0000 mon.a (mon.0) 1747 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:05 vm00 bash[28005]: cluster 2026-03-10T07:30:05.130472+0000 mon.a (mon.0) 1748 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:05 vm00 bash[28005]: audit 2026-03-10T07:30:05.134147+0000 mon.a (mon.0) 1749 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-21", "mode": "writeback"}]': finished
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:05 vm00 bash[28005]: audit 2026-03-10T07:30:05.134307+0000 mon.a (mon.0) 1750 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]': finished
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:05 vm00 bash[28005]: cluster 2026-03-10T07:30:05.150603+0000 mon.a (mon.0) 1751 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:05 vm00 bash[28005]: audit 2026-03-10T07:30:05.151321+0000 mon.b (mon.1) 202 : audit [INF] from='client.? 192.168.123.100:0/1623935189' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59637-35","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:05 vm00 bash[28005]: audit 2026-03-10T07:30:05.153143+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59629-32"}]: dispatch
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:05 vm00 bash[28005]: audit 2026-03-10T07:30:05.155476+0000 mon.a (mon.0) 1752 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59637-35","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:05 vm00 bash[28005]: audit 2026-03-10T07:30:05.155577+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59629-32"}]: dispatch
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:04.225428+0000 mon.a (mon.0) 1747 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: cluster 2026-03-10T07:30:05.130472+0000 mon.a (mon.0) 1748 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:05.134147+0000 mon.a (mon.0) 1749 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-21", "mode": "writeback"}]': finished
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:05.134307+0000 mon.a (mon.0) 1750 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm00-59629-32"}]': finished
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: cluster 2026-03-10T07:30:05.150603+0000 mon.a (mon.0) 1751 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:05.151321+0000 mon.b (mon.1) 202 : audit [INF] from='client.? 192.168.123.100:0/1623935189' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59637-35","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:05.153143+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 192.168.123.100:0/3117979847' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59629-32"}]: dispatch
2026-03-10T07:30:05.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:05.155476+0000 mon.a (mon.0) 1752 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59637-35","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:05.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:05 vm00 bash[20701]: audit 2026-03-10T07:30:05.155577+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59629-32"}]: dispatch
2026-03-10T07:30:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:06 vm03 bash[23382]: cluster 2026-03-10T07:30:04.603481+0000 mgr.y (mgr.24407) 209 : cluster [DBG] pgmap v240: 292 pgs: 28 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 248 active+clean; 457 KiB data, 690 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:30:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:06 vm03 bash[23382]: audit 2026-03-10T07:30:05.208674+0000 mon.b (mon.1) 203 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:30:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:06 vm03 bash[23382]: audit 2026-03-10T07:30:05.210854+0000 mon.a (mon.0) 1754 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:30:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:06 vm03 bash[23382]: audit 2026-03-10T07:30:06.161019+0000 mon.a (mon.0) 1755 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59637-35","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:30:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:06 vm03 bash[23382]: audit 2026-03-10T07:30:06.161073+0000 mon.a (mon.0) 1756 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59629-32"}]': finished
2026-03-10T07:30:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:06 vm03 bash[23382]: audit 2026-03-10T07:30:06.161147+0000 mon.a (mon.0) 1757 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:30:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:06 vm03 bash[23382]: cluster 2026-03-10T07:30:06.165783+0000 mon.a (mon.0) 1758 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in
2026-03-10T07:30:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:06 vm03 bash[23382]: audit 2026-03-10T07:30:06.169451+0000 mon.b (mon.1) 204 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21"}]: dispatch
2026-03-10T07:30:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:06 vm00 bash[20701]: cluster 2026-03-10T07:30:04.603481+0000 mgr.y (mgr.24407) 209 : cluster [DBG] pgmap v240: 292 pgs: 28 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 248 active+clean; 457 KiB data, 690 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:30:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:06 vm00 bash[20701]: audit 2026-03-10T07:30:05.208674+0000 mon.b (mon.1) 203 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:30:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:06 vm00 bash[20701]: audit 2026-03-10T07:30:05.210854+0000 mon.a (mon.0) 1754 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:30:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:06 vm00 bash[20701]: audit 2026-03-10T07:30:06.161019+0000 mon.a (mon.0) 1755 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59637-35","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:30:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:06 vm00 bash[20701]: audit 2026-03-10T07:30:06.161073+0000 mon.a (mon.0) 1756 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59629-32"}]': finished
2026-03-10T07:30:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:06 vm00 bash[20701]: audit 2026-03-10T07:30:06.161147+0000 mon.a (mon.0) 1757 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:30:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:06 vm00 bash[20701]: cluster 2026-03-10T07:30:06.165783+0000 mon.a (mon.0) 1758 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in
2026-03-10T07:30:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:06 vm00 bash[20701]: audit 2026-03-10T07:30:06.169451+0000 mon.b (mon.1) 204 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21"}]: dispatch
2026-03-10T07:30:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:06 vm00 bash[28005]: cluster 2026-03-10T07:30:04.603481+0000 mgr.y (mgr.24407) 209 : cluster [DBG] pgmap v240: 292 pgs: 28 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 248 active+clean; 457 KiB data, 690 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:30:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:06 vm00 bash[28005]: audit 2026-03-10T07:30:05.208674+0000 mon.b (mon.1) 203 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:30:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:06 vm00 bash[28005]: audit 2026-03-10T07:30:05.210854+0000 mon.a (mon.0) 1754 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:30:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:06 vm00 bash[28005]: audit 2026-03-10T07:30:06.161019+0000 mon.a (mon.0) 1755 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm00-59637-35","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:30:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:06 vm00 bash[28005]: audit 2026-03-10T07:30:06.161073+0000 mon.a (mon.0) 1756 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm00-59629-32"}]': finished
2026-03-10T07:30:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:06 vm00 bash[28005]: audit 2026-03-10T07:30:06.161147+0000 mon.a (mon.0) 1757 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:30:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:06 vm00 bash[28005]: cluster 2026-03-10T07:30:06.165783+0000 mon.a (mon.0) 1758 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in
2026-03-10T07:30:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:06 vm00 bash[28005]: audit 2026-03-10T07:30:06.169451+0000 mon.b (mon.1) 204 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21"}]: dispatch
2026-03-10T07:30:07.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:07 vm03 bash[23382]: audit 2026-03-10T07:30:06.184424+0000 mon.a (mon.0) 1759 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21"}]: dispatch
2026-03-10T07:30:07.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:07 vm03 bash[23382]: audit 2026-03-10T07:30:06.199643+0000 mon.b (mon.1) 205 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]: dispatch
2026-03-10T07:30:07.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:07 vm03 bash[23382]: audit 2026-03-10T07:30:06.215434+0000 mon.a (mon.0) 1760 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]: dispatch
2026-03-10T07:30:07.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:07 vm03 bash[23382]: audit 2026-03-10T07:30:06.216921+0000 mon.b (mon.1) 206 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]: dispatch
2026-03-10T07:30:07.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:07 vm03 bash[23382]: audit 2026-03-10T07:30:06.219088+0000 mon.b (mon.1) 207 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59629-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:07.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:07 vm03 bash[23382]: audit 2026-03-10T07:30:06.219847+0000 mon.a (mon.0) 1761 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]: dispatch
2026-03-10T07:30:07.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:07 vm03 bash[23382]: audit 2026-03-10T07:30:06.222132+0000 mon.a (mon.0) 1762 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59629-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:07.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:07 vm03 bash[23382]: cluster 2026-03-10T07:30:07.161259+0000 mon.a (mon.0) 1763 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:30:07.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:07 vm03 bash[23382]: audit 2026-03-10T07:30:07.164773+0000 mon.a (mon.0) 1764 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21"}]': finished
2026-03-10T07:30:07.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:07 vm03 bash[23382]: audit 2026-03-10T07:30:07.164942+0000 mon.a (mon.0) 1765 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59629-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:30:07.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:07 vm03 bash[23382]: cluster 2026-03-10T07:30:07.169116+0000 mon.a (mon.0) 1766 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in
2026-03-10T07:30:07.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:07 vm03 bash[23382]: audit 2026-03-10T07:30:07.181206+0000 mon.b (mon.1) 208 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59629-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59629-33"}]: dispatch
2026-03-10T07:30:07.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:07 vm03 bash[23382]: audit 2026-03-10T07:30:07.186245+0000 mon.a (mon.0) 1767 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59629-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59629-33"}]: dispatch
2026-03-10T07:30:07.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:07 vm00 bash[20701]: audit 2026-03-10T07:30:06.184424+0000 mon.a (mon.0) 1759 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21"}]: dispatch
2026-03-10T07:30:07.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:07 vm00 bash[20701]: audit 2026-03-10T07:30:06.199643+0000 mon.b (mon.1) 205 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]: dispatch
2026-03-10T07:30:07.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:07 vm00 bash[20701]: audit 2026-03-10T07:30:06.215434+0000 mon.a (mon.0) 1760 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]: dispatch
2026-03-10T07:30:07.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:07 vm00 bash[20701]: audit 2026-03-10T07:30:06.216921+0000 mon.b (mon.1) 206 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]: dispatch
2026-03-10T07:30:07.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:07 vm00 bash[20701]: audit 2026-03-10T07:30:06.219088+0000 mon.b (mon.1) 207 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59629-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:07.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:07 vm00 bash[20701]: audit 2026-03-10T07:30:06.219847+0000 mon.a (mon.0) 1761 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]: dispatch
2026-03-10T07:30:07.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:07 vm00 bash[20701]: audit 2026-03-10T07:30:06.222132+0000 mon.a (mon.0) 1762 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59629-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:07.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:07 vm00 bash[20701]: cluster 2026-03-10T07:30:07.161259+0000 mon.a (mon.0) 1763 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:30:07.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:07 vm00 bash[20701]: audit 2026-03-10T07:30:07.164773+0000 mon.a (mon.0) 1764 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21"}]': finished
2026-03-10T07:30:07.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:07 vm00 bash[20701]: audit 2026-03-10T07:30:07.164942+0000 mon.a (mon.0) 1765 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59629-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:30:07.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:07 vm00 bash[20701]: cluster 2026-03-10T07:30:07.169116+0000 mon.a (mon.0) 1766 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in
2026-03-10T07:30:07.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:07 vm00 bash[20701]: audit 2026-03-10T07:30:07.181206+0000 mon.b (mon.1) 208 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59629-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59629-33"}]: dispatch
2026-03-10T07:30:07.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:07 vm00 bash[20701]: audit 2026-03-10T07:30:07.186245+0000 mon.a (mon.0) 1767 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59629-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59629-33"}]: dispatch
2026-03-10T07:30:07.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:07 vm00 bash[28005]: audit 2026-03-10T07:30:06.184424+0000 mon.a (mon.0) 1759 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21"}]: dispatch
2026-03-10T07:30:07.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:07 vm00 bash[28005]: audit 2026-03-10T07:30:06.199643+0000 mon.b (mon.1) 205 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]: dispatch
2026-03-10T07:30:07.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:07 vm00 bash[28005]: audit 2026-03-10T07:30:06.215434+0000 mon.a (mon.0) 1760 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]: dispatch
2026-03-10T07:30:07.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:07 vm00 bash[28005]: audit 2026-03-10T07:30:06.216921+0000 mon.b (mon.1) 206 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]: dispatch
2026-03-10T07:30:07.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:07 vm00 bash[28005]: audit 2026-03-10T07:30:06.219088+0000 mon.b (mon.1) 207 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59629-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:07.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:07 vm00 bash[28005]: audit 2026-03-10T07:30:06.219847+0000 mon.a (mon.0) 1761 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]: dispatch
2026-03-10T07:30:07.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:07 vm00 bash[28005]: audit 2026-03-10T07:30:06.222132+0000 mon.a (mon.0) 1762 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59629-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:07.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:07 vm00 bash[28005]: cluster 2026-03-10T07:30:07.161259+0000 mon.a (mon.0) 1763 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:30:07.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:07 vm00 bash[28005]: audit 2026-03-10T07:30:07.164773+0000 mon.a (mon.0) 1764 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-21"}]': finished
2026-03-10T07:30:07.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:07 vm00 bash[28005]: audit 2026-03-10T07:30:07.164942+0000 mon.a (mon.0) 1765 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm00-59629-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:30:07.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:07 vm00 bash[28005]: cluster 2026-03-10T07:30:07.169116+0000 mon.a (mon.0) 1766 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in
2026-03-10T07:30:07.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:07 vm00 bash[28005]: audit 2026-03-10T07:30:07.181206+0000 mon.b (mon.1) 208 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59629-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59629-33"}]: dispatch
2026-03-10T07:30:07.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:07 vm00 bash[28005]: audit 2026-03-10T07:30:07.186245+0000 mon.a (mon.0) 1767 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59629-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59629-33"}]: dispatch
2026-03-10T07:30:08.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:08 vm00 bash[28005]: cluster 2026-03-10T07:30:06.603821+0000 mgr.y (mgr.24407) 210 : cluster [DBG] pgmap v243: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:30:08.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:08 vm00 bash[28005]: cluster 2026-03-10T07:30:08.215350+0000 mon.a (mon.0) 1768 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in
2026-03-10T07:30:08.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:08 vm00 bash[28005]: audit 2026-03-10T07:30:08.218113+0000 mon.a (mon.0) 1769 : audit [INF] from='client.? 192.168.123.100:0/4243676920' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-36","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:08 vm00 bash[20701]: cluster 2026-03-10T07:30:06.603821+0000 mgr.y (mgr.24407) 210 : cluster [DBG] pgmap v243: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:30:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:08 vm00 bash[20701]: cluster 2026-03-10T07:30:08.215350+0000 mon.a (mon.0) 1768 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in
2026-03-10T07:30:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:08 vm00 bash[20701]: audit 2026-03-10T07:30:08.218113+0000 mon.a (mon.0) 1769 : audit [INF] from='client.? 192.168.123.100:0/4243676920' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-36","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:08.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:08 vm03 bash[23382]: cluster 2026-03-10T07:30:06.603821+0000 mgr.y (mgr.24407) 210 : cluster [DBG] pgmap v243: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:30:08.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:08 vm03 bash[23382]: cluster 2026-03-10T07:30:08.215350+0000 mon.a (mon.0) 1768 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in
2026-03-10T07:30:08.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:08 vm03 bash[23382]: audit 2026-03-10T07:30:08.218113+0000 mon.a (mon.0) 1769 : audit [INF] from='client.? 192.168.123.100:0/4243676920' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-36","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:10 vm03 bash[23382]: cluster 2026-03-10T07:30:08.604130+0000 mgr.y (mgr.24407) 211 : cluster [DBG] pgmap v246: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:30:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:10 vm03 bash[23382]: audit 2026-03-10T07:30:09.174040+0000 mon.a (mon.0) 1770 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59629-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59629-33"}]': finished
2026-03-10T07:30:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:10 vm03 bash[23382]: audit 2026-03-10T07:30:09.174356+0000 mon.a (mon.0) 1771 : audit [INF] from='client.? 192.168.123.100:0/4243676920' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-36","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:30:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:10 vm03 bash[23382]: cluster 2026-03-10T07:30:09.202197+0000 mon.a (mon.0) 1772 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in
2026-03-10T07:30:10.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:10 vm03 bash[23382]: audit 2026-03-10T07:30:09.212454+0000 mon.b (mon.1) 209 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-23","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:10.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:10 vm03 bash[23382]: audit 2026-03-10T07:30:09.218108+0000 mon.a (mon.0) 1773 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-23","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:10.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:10 vm03 bash[23382]: audit 2026-03-10T07:30:09.246050+0000 mon.c (mon.2) 209 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:30:10.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:10 vm03 bash[23382]: cluster 2026-03-10T07:30:09.331619+0000 mon.a (mon.0) 1774 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:10 vm00 bash[28005]: cluster 2026-03-10T07:30:08.604130+0000 mgr.y (mgr.24407) 211 : cluster [DBG] pgmap v246: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:10 vm00 bash[28005]: audit 2026-03-10T07:30:09.174040+0000 mon.a (mon.0) 1770 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59629-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59629-33"}]': finished
2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:10 vm00 bash[28005]: audit 2026-03-10T07:30:09.174356+0000 mon.a (mon.0) 1771 : audit [INF] from='client.? 192.168.123.100:0/4243676920' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-36","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:10 vm00 bash[28005]: cluster 2026-03-10T07:30:09.202197+0000 mon.a (mon.0) 1772 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in
2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:10 vm00 bash[28005]: audit 2026-03-10T07:30:09.212454+0000 mon.b (mon.1) 209 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-23","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:10 vm00 bash[28005]: audit 2026-03-10T07:30:09.218108+0000 mon.a (mon.0) 1773 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-23","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:10 vm00 bash[28005]: audit 2026-03-10T07:30:09.218108+0000 mon.a (mon.0) 1773 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:10 vm00 bash[28005]: audit 2026-03-10T07:30:09.246050+0000 mon.c (mon.2) 209 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:10 vm00 bash[28005]: audit 2026-03-10T07:30:09.246050+0000 mon.c (mon.2) 209 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:10 vm00 bash[28005]: cluster 2026-03-10T07:30:09.331619+0000 mon.a (mon.0) 1774 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:10 vm00 bash[28005]: cluster 2026-03-10T07:30:09.331619+0000 mon.a (mon.0) 1774 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:10 vm00 bash[20701]: cluster 2026-03-10T07:30:08.604130+0000 mgr.y (mgr.24407) 211 : cluster [DBG] pgmap v246: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:10 vm00 bash[20701]: cluster 2026-03-10T07:30:08.604130+0000 mgr.y (mgr.24407) 211 : cluster [DBG] pgmap v246: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 698 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:10 vm00 bash[20701]: audit 2026-03-10T07:30:09.174040+0000 mon.a (mon.0) 1770 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59629-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59629-33"}]': finished 2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:10 vm00 bash[20701]: audit 2026-03-10T07:30:09.174040+0000 mon.a (mon.0) 1770 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm00-59629-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm00-59629-33"}]': finished 2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:10 vm00 bash[20701]: audit 2026-03-10T07:30:09.174356+0000 mon.a (mon.0) 1771 : audit [INF] from='client.? 192.168.123.100:0/4243676920' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:10 vm00 bash[20701]: audit 2026-03-10T07:30:09.174356+0000 mon.a (mon.0) 1771 : audit [INF] from='client.? 
192.168.123.100:0/4243676920' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:10 vm00 bash[20701]: cluster 2026-03-10T07:30:09.202197+0000 mon.a (mon.0) 1772 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:10 vm00 bash[20701]: cluster 2026-03-10T07:30:09.202197+0000 mon.a (mon.0) 1772 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:10 vm00 bash[20701]: audit 2026-03-10T07:30:09.212454+0000 mon.b (mon.1) 209 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:10 vm00 bash[20701]: audit 2026-03-10T07:30:09.212454+0000 mon.b (mon.1) 209 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:10 vm00 bash[20701]: audit 2026-03-10T07:30:09.218108+0000 mon.a (mon.0) 1773 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:10 vm00 bash[20701]: audit 2026-03-10T07:30:09.218108+0000 mon.a (mon.0) 1773 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:10 vm00 bash[20701]: audit 2026-03-10T07:30:09.246050+0000 mon.c (mon.2) 209 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:10 vm00 bash[20701]: audit 2026-03-10T07:30:09.246050+0000 mon.c (mon.2) 209 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:10 vm00 bash[20701]: cluster 2026-03-10T07:30:09.331619+0000 mon.a (mon.0) 1774 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:10 vm00 bash[20701]: cluster 2026-03-10T07:30:09.331619+0000 mon.a (mon.0) 1774 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:30:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:30:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:11 vm00 bash[28005]: audit 2026-03-10T07:30:10.180145+0000 mon.a (mon.0) 1775 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:11 vm00 bash[28005]: audit 2026-03-10T07:30:10.180145+0000 mon.a (mon.0) 1775 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:11 vm00 bash[28005]: cluster 2026-03-10T07:30:10.232846+0000 mon.a (mon.0) 1776 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:11 vm00 bash[28005]: cluster 2026-03-10T07:30:10.232846+0000 mon.a (mon.0) 1776 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:11 vm00 bash[28005]: audit 2026-03-10T07:30:10.255492+0000 mon.a (mon.0) 1777 : audit [INF] from='client.? 192.168.123.100:0/3834587606' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:11 vm00 bash[28005]: audit 2026-03-10T07:30:10.255492+0000 mon.a (mon.0) 1777 : audit [INF] from='client.? 192.168.123.100:0/3834587606' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:11 vm00 bash[28005]: audit 2026-03-10T07:30:11.183431+0000 mon.a (mon.0) 1778 : audit [INF] from='client.? 192.168.123.100:0/3834587606' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:11 vm00 bash[28005]: audit 2026-03-10T07:30:11.183431+0000 mon.a (mon.0) 1778 : audit [INF] from='client.? 192.168.123.100:0/3834587606' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:11 vm00 bash[28005]: cluster 2026-03-10T07:30:11.188169+0000 mon.a (mon.0) 1779 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:11 vm00 bash[28005]: cluster 2026-03-10T07:30:11.188169+0000 mon.a (mon.0) 1779 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:11 vm00 bash[28005]: audit 2026-03-10T07:30:11.201679+0000 mon.b (mon.1) 210 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:11 vm00 bash[28005]: audit 2026-03-10T07:30:11.201679+0000 mon.b (mon.1) 210 : audit [INF] from='client.? 
192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:11 vm00 bash[28005]: audit 2026-03-10T07:30:11.203859+0000 mon.a (mon.0) 1780 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:11 vm00 bash[28005]: audit 2026-03-10T07:30:11.203859+0000 mon.a (mon.0) 1780 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:11 vm00 bash[28005]: audit 2026-03-10T07:30:11.263917+0000 mon.b (mon.1) 211 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:11 vm00 bash[28005]: audit 2026-03-10T07:30:11.263917+0000 mon.b (mon.1) 211 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:11 vm00 bash[28005]: audit 2026-03-10T07:30:11.265796+0000 mon.a (mon.0) 1781 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:11 vm00 bash[28005]: audit 2026-03-10T07:30:11.265796+0000 mon.a (mon.0) 1781 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:11 vm00 bash[20701]: audit 2026-03-10T07:30:10.180145+0000 mon.a (mon.0) 1775 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:11 vm00 bash[20701]: audit 2026-03-10T07:30:10.180145+0000 mon.a (mon.0) 1775 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:11 vm00 bash[20701]: cluster 2026-03-10T07:30:10.232846+0000 mon.a (mon.0) 1776 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:11 vm00 bash[20701]: cluster 2026-03-10T07:30:10.232846+0000 mon.a (mon.0) 1776 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-10T07:30:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:11 vm00 bash[20701]: audit 2026-03-10T07:30:10.255492+0000 mon.a (mon.0) 1777 : audit [INF] from='client.? 192.168.123.100:0/3834587606' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:11 vm00 bash[20701]: audit 2026-03-10T07:30:10.255492+0000 mon.a (mon.0) 1777 : audit [INF] from='client.? 192.168.123.100:0/3834587606' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:11 vm00 bash[20701]: audit 2026-03-10T07:30:11.183431+0000 mon.a (mon.0) 1778 : audit [INF] from='client.? 192.168.123.100:0/3834587606' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:11 vm00 bash[20701]: audit 2026-03-10T07:30:11.183431+0000 mon.a (mon.0) 1778 : audit [INF] from='client.? 192.168.123.100:0/3834587606' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:11 vm00 bash[20701]: cluster 2026-03-10T07:30:11.188169+0000 mon.a (mon.0) 1779 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-10T07:30:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:11 vm00 bash[20701]: cluster 2026-03-10T07:30:11.188169+0000 mon.a (mon.0) 1779 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-10T07:30:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:11 vm00 bash[20701]: audit 2026-03-10T07:30:11.201679+0000 mon.b (mon.1) 210 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:11 vm00 bash[20701]: audit 2026-03-10T07:30:11.201679+0000 mon.b (mon.1) 210 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:11 vm00 bash[20701]: audit 2026-03-10T07:30:11.203859+0000 mon.a (mon.0) 1780 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:11 vm00 bash[20701]: audit 2026-03-10T07:30:11.203859+0000 mon.a (mon.0) 1780 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:11 vm00 bash[20701]: audit 2026-03-10T07:30:11.263917+0000 mon.b (mon.1) 211 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:11 vm00 bash[20701]: audit 2026-03-10T07:30:11.263917+0000 mon.b (mon.1) 211 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:11 vm00 bash[20701]: audit 2026-03-10T07:30:11.265796+0000 mon.a (mon.0) 1781 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:11 vm00 bash[20701]: audit 2026-03-10T07:30:11.265796+0000 mon.a (mon.0) 1781 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:11 vm03 bash[23382]: audit 2026-03-10T07:30:10.180145+0000 mon.a (mon.0) 1775 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:11 vm03 bash[23382]: audit 2026-03-10T07:30:10.180145+0000 mon.a (mon.0) 1775 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:11 vm03 bash[23382]: cluster 2026-03-10T07:30:10.232846+0000 mon.a (mon.0) 1776 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-10T07:30:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:11 vm03 bash[23382]: cluster 2026-03-10T07:30:10.232846+0000 mon.a (mon.0) 1776 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-10T07:30:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:11 vm03 bash[23382]: audit 2026-03-10T07:30:10.255492+0000 mon.a (mon.0) 1777 : audit [INF] from='client.? 
192.168.123.100:0/3834587606' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:11 vm03 bash[23382]: audit 2026-03-10T07:30:10.255492+0000 mon.a (mon.0) 1777 : audit [INF] from='client.? 192.168.123.100:0/3834587606' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:11 vm03 bash[23382]: audit 2026-03-10T07:30:11.183431+0000 mon.a (mon.0) 1778 : audit [INF] from='client.? 192.168.123.100:0/3834587606' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:11 vm03 bash[23382]: audit 2026-03-10T07:30:11.183431+0000 mon.a (mon.0) 1778 : audit [INF] from='client.? 192.168.123.100:0/3834587606' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:11 vm03 bash[23382]: cluster 2026-03-10T07:30:11.188169+0000 mon.a (mon.0) 1779 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-10T07:30:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:11 vm03 bash[23382]: cluster 2026-03-10T07:30:11.188169+0000 mon.a (mon.0) 1779 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-10T07:30:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:11 vm03 bash[23382]: audit 2026-03-10T07:30:11.201679+0000 mon.b (mon.1) 210 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:11 vm03 bash[23382]: audit 2026-03-10T07:30:11.201679+0000 mon.b (mon.1) 210 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:11 vm03 bash[23382]: audit 2026-03-10T07:30:11.203859+0000 mon.a (mon.0) 1780 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:11 vm03 bash[23382]: audit 2026-03-10T07:30:11.203859+0000 mon.a (mon.0) 1780 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:11 vm03 bash[23382]: audit 2026-03-10T07:30:11.263917+0000 mon.b (mon.1) 211 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:11 vm03 bash[23382]: audit 2026-03-10T07:30:11.263917+0000 mon.b (mon.1) 211 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:11 vm03 bash[23382]: audit 2026-03-10T07:30:11.265796+0000 mon.a (mon.0) 1781 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:11 vm03 bash[23382]: audit 2026-03-10T07:30:11.265796+0000 mon.a (mon.0) 1781 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:12 vm00 bash[20701]: cluster 2026-03-10T07:30:10.604481+0000 mgr.y (mgr.24407) 212 : cluster [DBG] pgmap v249: 364 pgs: 32 unknown, 40 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T07:30:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:12 vm00 bash[20701]: cluster 2026-03-10T07:30:10.604481+0000 mgr.y (mgr.24407) 212 : cluster [DBG] pgmap v249: 364 pgs: 32 unknown, 40 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T07:30:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:12 vm00 bash[20701]: audit 2026-03-10T07:30:12.187972+0000 mon.a (mon.0) 1782 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]': finished 2026-03-10T07:30:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:12 vm00 bash[20701]: audit 2026-03-10T07:30:12.187972+0000 mon.a (mon.0) 1782 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]': finished 2026-03-10T07:30:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:12 vm00 bash[20701]: audit 2026-03-10T07:30:12.188114+0000 mon.a (mon.0) 1783 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:12 vm00 bash[20701]: audit 2026-03-10T07:30:12.188114+0000 mon.a (mon.0) 1783 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:12 vm00 bash[20701]: audit 2026-03-10T07:30:12.192727+0000 mon.b (mon.1) 212 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:12 vm00 bash[20701]: audit 2026-03-10T07:30:12.192727+0000 mon.b (mon.1) 212 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:12 vm00 bash[20701]: audit 2026-03-10T07:30:12.196708+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:12 vm00 bash[20701]: audit 2026-03-10T07:30:12.196708+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:12 vm00 bash[20701]: cluster 2026-03-10T07:30:12.206677+0000 mon.a (mon.0) 1784 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-10T07:30:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:12 vm00 bash[20701]: cluster 2026-03-10T07:30:12.206677+0000 mon.a (mon.0) 1784 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-10T07:30:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:12 vm00 bash[20701]: audit 2026-03-10T07:30:12.209031+0000 mon.a (mon.0) 1785 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:12 vm00 bash[20701]: audit 2026-03-10T07:30:12.209031+0000 mon.a (mon.0) 1785 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:12 vm00 bash[20701]: audit 2026-03-10T07:30:12.209140+0000 mon.a (mon.0) 1786 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:12 vm00 bash[20701]: audit 2026-03-10T07:30:12.209140+0000 mon.a (mon.0) 1786 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:12 vm00 bash[20701]: audit 2026-03-10T07:30:12.212802+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 
192.168.123.100:0/980674915' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:12 vm00 bash[20701]: audit 2026-03-10T07:30:12.212802+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 192.168.123.100:0/980674915' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:12 vm00 bash[20701]: audit 2026-03-10T07:30:12.214983+0000 mon.a (mon.0) 1787 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:12 vm00 bash[20701]: audit 2026-03-10T07:30:12.214983+0000 mon.a (mon.0) 1787 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:12.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:12 vm00 bash[28005]: cluster 2026-03-10T07:30:10.604481+0000 mgr.y (mgr.24407) 212 : cluster [DBG] pgmap v249: 364 pgs: 32 unknown, 40 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T07:30:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:12 vm00 bash[28005]: cluster 2026-03-10T07:30:10.604481+0000 mgr.y (mgr.24407) 212 : cluster [DBG] pgmap v249: 364 pgs: 32 unknown, 40 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T07:30:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:12 vm00 bash[28005]: audit 2026-03-10T07:30:12.187972+0000 mon.a (mon.0) 1782 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]': finished 2026-03-10T07:30:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:12 vm00 bash[28005]: audit 2026-03-10T07:30:12.187972+0000 mon.a (mon.0) 1782 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]': finished 2026-03-10T07:30:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:12 vm00 bash[28005]: audit 2026-03-10T07:30:12.188114+0000 mon.a (mon.0) 1783 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:12 vm00 bash[28005]: audit 2026-03-10T07:30:12.188114+0000 mon.a (mon.0) 1783 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:12 vm00 bash[28005]: audit 2026-03-10T07:30:12.192727+0000 mon.b (mon.1) 212 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:12 vm00 bash[28005]: audit 2026-03-10T07:30:12.192727+0000 mon.b (mon.1) 212 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:12 vm00 bash[28005]: audit 2026-03-10T07:30:12.196708+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:12 vm00 bash[28005]: audit 2026-03-10T07:30:12.196708+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:12 vm00 bash[28005]: cluster 2026-03-10T07:30:12.206677+0000 mon.a (mon.0) 1784 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-10T07:30:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:12 vm00 bash[28005]: cluster 2026-03-10T07:30:12.206677+0000 mon.a (mon.0) 1784 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-10T07:30:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:12 vm00 bash[28005]: audit 2026-03-10T07:30:12.209031+0000 mon.a (mon.0) 1785 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:12 vm00 bash[28005]: audit 2026-03-10T07:30:12.209031+0000 mon.a (mon.0) 1785 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:12 vm00 bash[28005]: audit 2026-03-10T07:30:12.209140+0000 mon.a (mon.0) 1786 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:12 vm00 bash[28005]: audit 2026-03-10T07:30:12.209140+0000 mon.a (mon.0) 1786 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:12 vm00 bash[28005]: audit 2026-03-10T07:30:12.212802+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 
192.168.123.100:0/980674915' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:12 vm00 bash[28005]: audit 2026-03-10T07:30:12.212802+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 192.168.123.100:0/980674915' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:12 vm00 bash[28005]: audit 2026-03-10T07:30:12.214983+0000 mon.a (mon.0) 1787 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:12 vm00 bash[28005]: audit 2026-03-10T07:30:12.214983+0000 mon.a (mon.0) 1787 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:12 vm03 bash[23382]: cluster 2026-03-10T07:30:10.604481+0000 mgr.y (mgr.24407) 212 : cluster [DBG] pgmap v249: 364 pgs: 32 unknown, 40 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T07:30:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:12 vm03 bash[23382]: cluster 2026-03-10T07:30:10.604481+0000 mgr.y (mgr.24407) 212 : cluster [DBG] pgmap v249: 364 pgs: 32 unknown, 40 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T07:30:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:12 vm03 bash[23382]: audit 2026-03-10T07:30:12.187972+0000 mon.a (mon.0) 1782 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]': finished 2026-03-10T07:30:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:12 vm03 bash[23382]: audit 2026-03-10T07:30:12.187972+0000 mon.a (mon.0) 1782 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm00-59629-33"}]': finished 2026-03-10T07:30:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:12 vm03 bash[23382]: audit 2026-03-10T07:30:12.188114+0000 mon.a (mon.0) 1783 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:12 vm03 bash[23382]: audit 2026-03-10T07:30:12.188114+0000 mon.a (mon.0) 1783 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:12 vm03 bash[23382]: audit 2026-03-10T07:30:12.192727+0000 mon.b (mon.1) 212 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:12 vm03 bash[23382]: audit 2026-03-10T07:30:12.192727+0000 mon.b (mon.1) 212 : audit [INF] from='client.? 192.168.123.100:0/3601437479' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:12 vm03 bash[23382]: audit 2026-03-10T07:30:12.196708+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:12 vm03 bash[23382]: audit 2026-03-10T07:30:12.196708+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:12 vm03 bash[23382]: cluster 2026-03-10T07:30:12.206677+0000 mon.a (mon.0) 1784 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-10T07:30:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:12 vm03 bash[23382]: cluster 2026-03-10T07:30:12.206677+0000 mon.a (mon.0) 1784 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-10T07:30:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:12 vm03 bash[23382]: audit 2026-03-10T07:30:12.209031+0000 mon.a (mon.0) 1785 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:12 vm03 bash[23382]: audit 2026-03-10T07:30:12.209031+0000 mon.a (mon.0) 1785 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]: dispatch 2026-03-10T07:30:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:12 vm03 bash[23382]: audit 2026-03-10T07:30:12.209140+0000 mon.a (mon.0) 1786 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:12 vm03 bash[23382]: audit 2026-03-10T07:30:12.209140+0000 mon.a (mon.0) 1786 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:12 vm03 bash[23382]: audit 2026-03-10T07:30:12.212802+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 
192.168.123.100:0/980674915' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:12 vm03 bash[23382]: audit 2026-03-10T07:30:12.212802+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 192.168.123.100:0/980674915' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:12.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:12 vm03 bash[23382]: audit 2026-03-10T07:30:12.214983+0000 mon.a (mon.0) 1787 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:12.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:12 vm03 bash[23382]: audit 2026-03-10T07:30:12.214983+0000 mon.a (mon.0) 1787 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:13.514 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:30:13 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:30:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: cluster 2026-03-10T07:30:12.604909+0000 mgr.y (mgr.24407) 213 : cluster [DBG] pgmap v252: 388 pgs: 64 unknown, 32 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T07:30:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: cluster 2026-03-10T07:30:12.604909+0000 mgr.y (mgr.24407) 213 : cluster [DBG] pgmap v252: 388 pgs: 64 unknown, 32 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T07:30:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.056460+0000 mgr.y (mgr.24407) 214 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.056460+0000 mgr.y (mgr.24407) 214 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.192345+0000 mon.a (mon.0) 1788 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]': finished 2026-03-10T07:30:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.192345+0000 mon.a (mon.0) 1788 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]': finished 2026-03-10T07:30:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.192485+0000 mon.a (mon.0) 1789 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-23"}]': finished 2026-03-10T07:30:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.192485+0000 mon.a (mon.0) 1789 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-23"}]': finished 2026-03-10T07:30:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.192634+0000 mon.a (mon.0) 1790 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.192634+0000 mon.a (mon.0) 1790 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.200115+0000 mon.b (mon.1) 215 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-23", "mode": "writeback"}]: dispatch 2026-03-10T07:30:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.200115+0000 mon.b (mon.1) 215 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-23", "mode": "writeback"}]: dispatch 2026-03-10T07:30:14.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: cluster 2026-03-10T07:30:13.210413+0000 mon.a (mon.0) 1791 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-10T07:30:14.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: cluster 2026-03-10T07:30:13.210413+0000 mon.a (mon.0) 1791 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-10T07:30:14.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.225749+0000 mon.b (mon.1) 216 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.225749+0000 mon.b (mon.1) 216 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.226536+0000 mon.a (mon.0) 1792 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-23", "mode": "writeback"}]: dispatch 2026-03-10T07:30:14.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.226536+0000 mon.a (mon.0) 1792 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-23", "mode": "writeback"}]: dispatch 2026-03-10T07:30:14.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.229863+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.229863+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.230246+0000 mon.b (mon.1) 217 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.230246+0000 mon.b (mon.1) 217 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.231801+0000 mon.b (mon.1) 218 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59629-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:14.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.231801+0000 mon.b (mon.1) 218 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59629-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:14.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.232305+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.232305+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.235486+0000 mon.a (mon.0) 1795 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59629-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:14.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:14 vm03 bash[23382]: audit 2026-03-10T07:30:13.235486+0000 mon.a (mon.0) 1795 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59629-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: cluster 2026-03-10T07:30:12.604909+0000 mgr.y (mgr.24407) 213 : cluster [DBG] pgmap v252: 388 pgs: 64 unknown, 32 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: cluster 2026-03-10T07:30:12.604909+0000 mgr.y (mgr.24407) 213 : cluster [DBG] pgmap v252: 388 pgs: 64 unknown, 32 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.056460+0000 mgr.y (mgr.24407) 214 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.056460+0000 mgr.y (mgr.24407) 214 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.192345+0000 mon.a (mon.0) 1788 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]': finished 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.192345+0000 mon.a (mon.0) 1788 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]': finished 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.192485+0000 mon.a (mon.0) 1789 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-23"}]': finished 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.192485+0000 mon.a (mon.0) 1789 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-23"}]': finished 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.192634+0000 mon.a (mon.0) 1790 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.192634+0000 mon.a (mon.0) 1790 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.200115+0000 mon.b (mon.1) 215 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-23", "mode": "writeback"}]: dispatch 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.200115+0000 mon.b (mon.1) 215 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-23", "mode": "writeback"}]: dispatch 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: cluster 2026-03-10T07:30:13.210413+0000 mon.a (mon.0) 1791 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: cluster 2026-03-10T07:30:13.210413+0000 mon.a (mon.0) 1791 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.225749+0000 mon.b (mon.1) 216 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.225749+0000 mon.b (mon.1) 216 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.226536+0000 mon.a (mon.0) 1792 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-23", "mode": "writeback"}]: dispatch 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.226536+0000 mon.a (mon.0) 1792 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-23", "mode": "writeback"}]: dispatch 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.229863+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.229863+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.230246+0000 mon.b (mon.1) 217 : audit [INF] from='client.? 
192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.230246+0000 mon.b (mon.1) 217 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.231801+0000 mon.b (mon.1) 218 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59629-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.231801+0000 mon.b (mon.1) 218 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59629-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.232305+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.232305+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.235486+0000 mon.a (mon.0) 1795 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59629-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:14 vm00 bash[20701]: audit 2026-03-10T07:30:13.235486+0000 mon.a (mon.0) 1795 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59629-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: cluster 2026-03-10T07:30:12.604909+0000 mgr.y (mgr.24407) 213 : cluster [DBG] pgmap v252: 388 pgs: 64 unknown, 32 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: cluster 2026-03-10T07:30:12.604909+0000 mgr.y (mgr.24407) 213 : cluster [DBG] pgmap v252: 388 pgs: 64 unknown, 32 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.056460+0000 mgr.y (mgr.24407) 214 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.056460+0000 mgr.y (mgr.24407) 214 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.192345+0000 mon.a (mon.0) 1788 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]': finished 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.192345+0000 mon.a (mon.0) 1788 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm00-59629-33"}]': finished 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.192485+0000 mon.a (mon.0) 1789 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-23"}]': finished 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.192485+0000 mon.a (mon.0) 1789 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-23"}]': finished 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.192634+0000 mon.a (mon.0) 1790 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.192634+0000 mon.a (mon.0) 1790 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm00-59637-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.200115+0000 mon.b (mon.1) 215 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-23", "mode": "writeback"}]: dispatch 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.200115+0000 mon.b (mon.1) 215 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-23", "mode": "writeback"}]: dispatch 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: cluster 2026-03-10T07:30:13.210413+0000 mon.a (mon.0) 1791 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: cluster 2026-03-10T07:30:13.210413+0000 mon.a (mon.0) 1791 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.225749+0000 mon.b (mon.1) 216 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.225749+0000 mon.b (mon.1) 216 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.226536+0000 mon.a (mon.0) 1792 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-23", "mode": "writeback"}]: dispatch 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.226536+0000 mon.a (mon.0) 1792 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-23", "mode": "writeback"}]: dispatch 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.229863+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.229863+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.230246+0000 mon.b (mon.1) 217 : audit [INF] from='client.? 
192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.230246+0000 mon.b (mon.1) 217 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.231801+0000 mon.b (mon.1) 218 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59629-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.231801+0000 mon.b (mon.1) 218 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59629-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.232305+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.232305+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.235486+0000 mon.a (mon.0) 1795 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59629-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:14 vm00 bash[28005]: audit 2026-03-10T07:30:13.235486+0000 mon.a (mon.0) 1795 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59629-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:15 vm03 bash[23382]: cluster 2026-03-10T07:30:14.192290+0000 mon.a (mon.0) 1796 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:30:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:15 vm03 bash[23382]: cluster 2026-03-10T07:30:14.192290+0000 mon.a (mon.0) 1796 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:30:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:15 vm03 bash[23382]: audit 2026-03-10T07:30:14.195886+0000 mon.a (mon.0) 1797 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-23", "mode": "writeback"}]': finished 2026-03-10T07:30:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:15 vm03 bash[23382]: audit 2026-03-10T07:30:14.195886+0000 mon.a (mon.0) 1797 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-23", "mode": "writeback"}]': finished 2026-03-10T07:30:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:15 vm03 bash[23382]: audit 2026-03-10T07:30:14.195973+0000 mon.a (mon.0) 1798 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59629-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:15 vm03 bash[23382]: audit 2026-03-10T07:30:14.195973+0000 mon.a (mon.0) 1798 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59629-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:15 vm03 bash[23382]: cluster 2026-03-10T07:30:14.200893+0000 mon.a (mon.0) 1799 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-10T07:30:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:15 vm03 bash[23382]: cluster 2026-03-10T07:30:14.200893+0000 mon.a (mon.0) 1799 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-10T07:30:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:15 vm03 bash[23382]: audit 2026-03-10T07:30:14.207987+0000 mon.b (mon.1) 219 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59629-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:15 vm03 bash[23382]: audit 2026-03-10T07:30:14.207987+0000 mon.b (mon.1) 219 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59629-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:15 vm03 bash[23382]: audit 2026-03-10T07:30:14.234957+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59629-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:15 vm03 bash[23382]: audit 2026-03-10T07:30:14.234957+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59629-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:15 vm03 bash[23382]: cluster 2026-03-10T07:30:15.211020+0000 mon.a (mon.0) 1801 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-10T07:30:15.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:15 vm03 bash[23382]: cluster 2026-03-10T07:30:15.211020+0000 mon.a (mon.0) 1801 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:15 vm00 bash[20701]: cluster 2026-03-10T07:30:14.192290+0000 mon.a (mon.0) 1796 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:15 vm00 bash[20701]: cluster 2026-03-10T07:30:14.192290+0000 mon.a (mon.0) 1796 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:15 vm00 bash[20701]: audit 2026-03-10T07:30:14.195886+0000 mon.a (mon.0) 1797 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-23", "mode": "writeback"}]': finished 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:15 vm00 bash[20701]: audit 2026-03-10T07:30:14.195886+0000 mon.a (mon.0) 1797 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-23", "mode": "writeback"}]': finished 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:15 vm00 bash[20701]: audit 2026-03-10T07:30:14.195973+0000 mon.a (mon.0) 1798 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59629-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:15 vm00 bash[20701]: audit 2026-03-10T07:30:14.195973+0000 mon.a (mon.0) 1798 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59629-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:15 vm00 bash[20701]: cluster 2026-03-10T07:30:14.200893+0000 mon.a (mon.0) 1799 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:15 vm00 bash[20701]: cluster 2026-03-10T07:30:14.200893+0000 mon.a (mon.0) 1799 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:15 vm00 bash[20701]: audit 2026-03-10T07:30:14.207987+0000 mon.b (mon.1) 219 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59629-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:15 vm00 bash[20701]: audit 2026-03-10T07:30:14.207987+0000 mon.b (mon.1) 219 : audit [INF] from='client.? 
192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59629-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:15 vm00 bash[20701]: audit 2026-03-10T07:30:14.234957+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59629-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:15 vm00 bash[20701]: audit 2026-03-10T07:30:14.234957+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59629-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:15 vm00 bash[20701]: cluster 2026-03-10T07:30:15.211020+0000 mon.a (mon.0) 1801 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:15 vm00 bash[20701]: cluster 2026-03-10T07:30:15.211020+0000 mon.a (mon.0) 1801 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:15 vm00 bash[28005]: cluster 2026-03-10T07:30:14.192290+0000 mon.a (mon.0) 1796 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:15 vm00 bash[28005]: cluster 2026-03-10T07:30:14.192290+0000 mon.a (mon.0) 1796 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:15 vm00 bash[28005]: audit 2026-03-10T07:30:14.195886+0000 mon.a (mon.0) 1797 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-23", "mode": "writeback"}]': finished 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:15 vm00 bash[28005]: audit 2026-03-10T07:30:14.195886+0000 mon.a (mon.0) 1797 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-23", "mode": "writeback"}]': finished 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:15 vm00 bash[28005]: audit 2026-03-10T07:30:14.195973+0000 mon.a (mon.0) 1798 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59629-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:15 vm00 bash[28005]: audit 2026-03-10T07:30:14.195973+0000 mon.a (mon.0) 1798 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm00-59629-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:15 vm00 bash[28005]: cluster 2026-03-10T07:30:14.200893+0000 mon.a (mon.0) 1799 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:15 vm00 bash[28005]: cluster 2026-03-10T07:30:14.200893+0000 mon.a (mon.0) 1799 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:15 vm00 bash[28005]: audit 2026-03-10T07:30:14.207987+0000 mon.b (mon.1) 219 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59629-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:15 vm00 bash[28005]: audit 2026-03-10T07:30:14.207987+0000 mon.b (mon.1) 219 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59629-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:15 vm00 bash[28005]: audit 2026-03-10T07:30:14.234957+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59629-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:15 vm00 bash[28005]: audit 2026-03-10T07:30:14.234957+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59629-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:15 vm00 bash[28005]: cluster 2026-03-10T07:30:15.211020+0000 mon.a (mon.0) 1801 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-10T07:30:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:15 vm00 bash[28005]: cluster 2026-03-10T07:30:15.211020+0000 mon.a (mon.0) 1801 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-10T07:30:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:16 vm03 bash[23382]: cluster 2026-03-10T07:30:14.605562+0000 mgr.y (mgr.24407) 215 : cluster [DBG] pgmap v255: 356 pgs: 24 unknown, 27 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 289 active+clean; 457 KiB data, 699 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T07:30:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:16 vm03 bash[23382]: cluster 2026-03-10T07:30:14.605562+0000 mgr.y (mgr.24407) 215 : cluster [DBG] pgmap v255: 356 pgs: 24 unknown, 27 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 289 active+clean; 457 KiB data, 699 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T07:30:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:16 vm03 bash[23382]: cluster 2026-03-10T07:30:15.213196+0000 mon.a (mon.0) 1802 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:16 vm03 bash[23382]: cluster 2026-03-10T07:30:15.213196+0000 mon.a (mon.0) 1802 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:16 vm03 bash[23382]: audit 2026-03-10T07:30:15.282935+0000 mon.b (mon.1) 220 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:16 vm03 bash[23382]: audit 2026-03-10T07:30:15.282935+0000 mon.b (mon.1) 220 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:16 vm03 bash[23382]: audit 2026-03-10T07:30:15.285029+0000 mon.a (mon.0) 1803 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:16 vm03 bash[23382]: audit 2026-03-10T07:30:15.285029+0000 mon.a (mon.0) 1803 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:16 vm03 bash[23382]: audit 2026-03-10T07:30:16.204364+0000 mon.a (mon.0) 1804 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59629-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59629-34"}]': finished 2026-03-10T07:30:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:16 vm03 bash[23382]: audit 2026-03-10T07:30:16.204364+0000 mon.a (mon.0) 1804 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59629-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59629-34"}]': finished 2026-03-10T07:30:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:16 vm03 bash[23382]: audit 2026-03-10T07:30:16.204421+0000 mon.a (mon.0) 1805 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:16 vm03 bash[23382]: audit 2026-03-10T07:30:16.204421+0000 mon.a (mon.0) 1805 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:16 vm03 bash[23382]: cluster 2026-03-10T07:30:16.207279+0000 mon.a (mon.0) 1806 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-10T07:30:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:16 vm03 bash[23382]: cluster 2026-03-10T07:30:16.207279+0000 mon.a (mon.0) 1806 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-10T07:30:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:16 vm03 bash[23382]: audit 2026-03-10T07:30:16.212371+0000 mon.b (mon.1) 221 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:16 vm03 bash[23382]: audit 2026-03-10T07:30:16.212371+0000 mon.b (mon.1) 221 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:16 vm03 bash[23382]: audit 2026-03-10T07:30:16.215491+0000 mon.a (mon.0) 1807 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:16.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:16 vm03 bash[23382]: audit 2026-03-10T07:30:16.215491+0000 mon.a (mon.0) 1807 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:16 vm00 bash[20701]: cluster 2026-03-10T07:30:14.605562+0000 mgr.y (mgr.24407) 215 : cluster [DBG] pgmap v255: 356 pgs: 24 unknown, 27 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 289 active+clean; 457 KiB data, 699 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:16 vm00 bash[20701]: cluster 2026-03-10T07:30:14.605562+0000 mgr.y (mgr.24407) 215 : cluster [DBG] pgmap v255: 356 pgs: 24 unknown, 27 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 289 active+clean; 457 KiB data, 699 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:16 vm00 bash[20701]: cluster 2026-03-10T07:30:15.213196+0000 mon.a (mon.0) 1802 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:16 vm00 bash[20701]: cluster 2026-03-10T07:30:15.213196+0000 mon.a (mon.0) 1802 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:16 vm00 bash[20701]: audit 2026-03-10T07:30:15.282935+0000 mon.b (mon.1) 220 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:16 vm00 bash[20701]: audit 2026-03-10T07:30:15.282935+0000 mon.b (mon.1) 220 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:16 vm00 bash[20701]: audit 2026-03-10T07:30:15.285029+0000 mon.a (mon.0) 1803 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:16 vm00 bash[20701]: audit 2026-03-10T07:30:15.285029+0000 mon.a (mon.0) 1803 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:16 vm00 bash[20701]: audit 2026-03-10T07:30:16.204364+0000 mon.a (mon.0) 1804 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59629-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59629-34"}]': finished 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:16 vm00 bash[20701]: audit 2026-03-10T07:30:16.204364+0000 mon.a (mon.0) 1804 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59629-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59629-34"}]': finished 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:16 vm00 bash[20701]: audit 2026-03-10T07:30:16.204421+0000 mon.a (mon.0) 1805 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:16 vm00 bash[20701]: audit 2026-03-10T07:30:16.204421+0000 mon.a (mon.0) 1805 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:16 vm00 bash[20701]: cluster 2026-03-10T07:30:16.207279+0000 mon.a (mon.0) 1806 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:16 vm00 bash[20701]: cluster 2026-03-10T07:30:16.207279+0000 mon.a (mon.0) 1806 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:16 vm00 bash[20701]: audit 2026-03-10T07:30:16.212371+0000 mon.b (mon.1) 221 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:16 vm00 bash[20701]: audit 2026-03-10T07:30:16.212371+0000 mon.b (mon.1) 221 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:16 vm00 bash[20701]: audit 2026-03-10T07:30:16.215491+0000 mon.a (mon.0) 1807 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:16 vm00 bash[20701]: audit 2026-03-10T07:30:16.215491+0000 mon.a (mon.0) 1807 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:16 vm00 bash[28005]: cluster 2026-03-10T07:30:14.605562+0000 mgr.y (mgr.24407) 215 : cluster [DBG] pgmap v255: 356 pgs: 24 unknown, 27 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 289 active+clean; 457 KiB data, 699 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:16 vm00 bash[28005]: cluster 2026-03-10T07:30:14.605562+0000 mgr.y (mgr.24407) 215 : cluster [DBG] pgmap v255: 356 pgs: 24 unknown, 27 creating+peering, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 289 active+clean; 457 KiB data, 699 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:16 vm00 bash[28005]: cluster 2026-03-10T07:30:15.213196+0000 mon.a (mon.0) 1802 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:16 vm00 bash[28005]: cluster 2026-03-10T07:30:15.213196+0000 mon.a (mon.0) 1802 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:16 vm00 bash[28005]: audit 2026-03-10T07:30:15.282935+0000 mon.b (mon.1) 220 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:16 vm00 bash[28005]: audit 2026-03-10T07:30:15.282935+0000 mon.b (mon.1) 220 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:16 vm00 bash[28005]: audit 2026-03-10T07:30:15.285029+0000 mon.a (mon.0) 1803 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:16 vm00 bash[28005]: audit 2026-03-10T07:30:15.285029+0000 mon.a (mon.0) 1803 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:16 vm00 bash[28005]: audit 2026-03-10T07:30:16.204364+0000 mon.a (mon.0) 1804 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59629-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59629-34"}]': finished 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:16 vm00 bash[28005]: audit 2026-03-10T07:30:16.204364+0000 mon.a (mon.0) 1804 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm00-59629-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm00-59629-34"}]': finished 2026-03-10T07:30:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:16 vm00 bash[28005]: audit 2026-03-10T07:30:16.204421+0000 mon.a (mon.0) 1805 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:16.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:16 vm00 bash[28005]: audit 2026-03-10T07:30:16.204421+0000 mon.a (mon.0) 1805 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:16.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:16 vm00 bash[28005]: cluster 2026-03-10T07:30:16.207279+0000 mon.a (mon.0) 1806 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-10T07:30:16.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:16 vm00 bash[28005]: cluster 2026-03-10T07:30:16.207279+0000 mon.a (mon.0) 1806 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-10T07:30:16.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:16 vm00 bash[28005]: audit 2026-03-10T07:30:16.212371+0000 mon.b (mon.1) 221 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:16.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:16 vm00 bash[28005]: audit 2026-03-10T07:30:16.212371+0000 mon.b (mon.1) 221 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:16.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:16 vm00 bash[28005]: audit 2026-03-10T07:30:16.215491+0000 mon.a (mon.0) 1807 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:16.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:16 vm00 bash[28005]: audit 2026-03-10T07:30:16.215491+0000 mon.a (mon.0) 1807 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23"}]: dispatch 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:17 vm00 bash[28005]: cluster 2026-03-10T07:30:16.605917+0000 mgr.y (mgr.24407) 216 : cluster [DBG] pgmap v258: 300 pgs: 8 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 1.5 KiB/s wr, 6 op/s 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:17 vm00 bash[28005]: cluster 2026-03-10T07:30:16.605917+0000 mgr.y (mgr.24407) 216 : cluster [DBG] pgmap v258: 300 pgs: 8 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 1.5 KiB/s wr, 6 op/s 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:17 vm00 bash[28005]: cluster 2026-03-10T07:30:17.204936+0000 mon.a (mon.0) 1808 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:17 vm00 bash[28005]: cluster 2026-03-10T07:30:17.204936+0000 mon.a (mon.0) 1808 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:17 vm00 bash[28005]: audit 2026-03-10T07:30:17.287955+0000 mon.a (mon.0) 1809 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23"}]': finished 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:17 vm00 bash[28005]: audit 2026-03-10T07:30:17.287955+0000 mon.a (mon.0) 1809 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23"}]': finished 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:17 vm00 bash[28005]: cluster 2026-03-10T07:30:17.300305+0000 mon.a (mon.0) 1810 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:17 vm00 bash[28005]: cluster 2026-03-10T07:30:17.300305+0000 mon.a (mon.0) 1810 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:17 vm00 bash[28005]: audit 2026-03-10T07:30:17.306349+0000 mon.b (mon.1) 222 : audit [INF] from='client.? 192.168.123.100:0/3088402442' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:17 vm00 bash[28005]: audit 2026-03-10T07:30:17.306349+0000 mon.b (mon.1) 222 : audit [INF] from='client.? 192.168.123.100:0/3088402442' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:17 vm00 bash[28005]: audit 2026-03-10T07:30:17.331598+0000 mon.a (mon.0) 1811 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:17 vm00 bash[28005]: audit 2026-03-10T07:30:17.331598+0000 mon.a (mon.0) 1811 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:17 vm00 bash[20701]: cluster 2026-03-10T07:30:16.605917+0000 mgr.y (mgr.24407) 216 : cluster [DBG] pgmap v258: 300 pgs: 8 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 1.5 KiB/s wr, 6 op/s 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:17 vm00 bash[20701]: cluster 2026-03-10T07:30:16.605917+0000 mgr.y (mgr.24407) 216 : cluster [DBG] pgmap v258: 300 pgs: 8 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 1.5 KiB/s wr, 6 op/s 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:17 vm00 bash[20701]: cluster 2026-03-10T07:30:17.204936+0000 mon.a (mon.0) 1808 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:17 vm00 bash[20701]: cluster 2026-03-10T07:30:17.204936+0000 mon.a (mon.0) 1808 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:17 vm00 bash[20701]: audit 2026-03-10T07:30:17.287955+0000 mon.a (mon.0) 1809 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23"}]': finished 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:17 vm00 bash[20701]: audit 2026-03-10T07:30:17.287955+0000 mon.a (mon.0) 1809 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23"}]': finished 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:17 vm00 bash[20701]: cluster 2026-03-10T07:30:17.300305+0000 mon.a (mon.0) 1810 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:17 vm00 bash[20701]: cluster 2026-03-10T07:30:17.300305+0000 mon.a (mon.0) 1810 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:17 vm00 bash[20701]: audit 2026-03-10T07:30:17.306349+0000 mon.b (mon.1) 222 : audit [INF] from='client.? 192.168.123.100:0/3088402442' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:17 vm00 bash[20701]: audit 2026-03-10T07:30:17.306349+0000 mon.b (mon.1) 222 : audit [INF] from='client.? 
192.168.123.100:0/3088402442' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:17 vm00 bash[20701]: audit 2026-03-10T07:30:17.331598+0000 mon.a (mon.0) 1811 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:17.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:17 vm00 bash[20701]: audit 2026-03-10T07:30:17.331598+0000 mon.a (mon.0) 1811 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:18.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:17 vm03 bash[23382]: cluster 2026-03-10T07:30:16.605917+0000 mgr.y (mgr.24407) 216 : cluster [DBG] pgmap v258: 300 pgs: 8 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 1.5 KiB/s wr, 6 op/s 2026-03-10T07:30:18.026 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:17 vm03 bash[23382]: cluster 2026-03-10T07:30:16.605917+0000 mgr.y (mgr.24407) 216 : cluster [DBG] pgmap v258: 300 pgs: 8 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 1.5 KiB/s wr, 6 op/s 2026-03-10T07:30:18.026 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:17 vm03 bash[23382]: cluster 2026-03-10T07:30:17.204936+0000 mon.a (mon.0) 1808 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:18.026 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:17 vm03 bash[23382]: cluster 2026-03-10T07:30:17.204936+0000 mon.a (mon.0) 1808 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:18.026 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:17 vm03 bash[23382]: audit 2026-03-10T07:30:17.287955+0000 mon.a (mon.0) 1809 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23"}]': finished 2026-03-10T07:30:18.026 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:17 vm03 bash[23382]: audit 2026-03-10T07:30:17.287955+0000 mon.a (mon.0) 1809 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-23"}]': finished 2026-03-10T07:30:18.026 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:17 vm03 bash[23382]: cluster 2026-03-10T07:30:17.300305+0000 mon.a (mon.0) 1810 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-10T07:30:18.026 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:17 vm03 bash[23382]: cluster 2026-03-10T07:30:17.300305+0000 mon.a (mon.0) 1810 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-10T07:30:18.026 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:17 vm03 bash[23382]: audit 2026-03-10T07:30:17.306349+0000 mon.b (mon.1) 222 : audit [INF] from='client.? 
192.168.123.100:0/3088402442' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:18.026 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:17 vm03 bash[23382]: audit 2026-03-10T07:30:17.306349+0000 mon.b (mon.1) 222 : audit [INF] from='client.? 192.168.123.100:0/3088402442' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:18.026 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:17 vm03 bash[23382]: audit 2026-03-10T07:30:17.331598+0000 mon.a (mon.0) 1811 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:18.026 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:17 vm03 bash[23382]: audit 2026-03-10T07:30:17.331598+0000 mon.a (mon.0) 1811 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:19.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:19 vm00 bash[20701]: audit 2026-03-10T07:30:18.291574+0000 mon.a (mon.0) 1812 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:19.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:19 vm00 bash[20701]: audit 2026-03-10T07:30:18.291574+0000 mon.a (mon.0) 1812 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:19.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:19 vm00 bash[20701]: cluster 2026-03-10T07:30:18.300439+0000 mon.a (mon.0) 1813 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-10T07:30:19.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:19 vm00 bash[20701]: cluster 2026-03-10T07:30:18.300439+0000 mon.a (mon.0) 1813 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-10T07:30:19.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:19 vm00 bash[20701]: audit 2026-03-10T07:30:18.301354+0000 mon.b (mon.1) 223 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:19.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:19 vm00 bash[20701]: audit 2026-03-10T07:30:18.301354+0000 mon.b (mon.1) 223 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:19.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:19 vm00 bash[20701]: audit 2026-03-10T07:30:18.306169+0000 mon.a (mon.0) 1814 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:19.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:19 vm00 bash[20701]: audit 2026-03-10T07:30:18.306169+0000 mon.a (mon.0) 1814 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:19.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:19 vm00 bash[28005]: audit 2026-03-10T07:30:18.291574+0000 mon.a (mon.0) 1812 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:19.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:19 vm00 bash[28005]: audit 2026-03-10T07:30:18.291574+0000 mon.a (mon.0) 1812 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:19.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:19 vm00 bash[28005]: cluster 2026-03-10T07:30:18.300439+0000 mon.a (mon.0) 1813 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-10T07:30:19.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:19 vm00 bash[28005]: cluster 2026-03-10T07:30:18.300439+0000 mon.a (mon.0) 1813 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-10T07:30:19.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:19 vm00 bash[28005]: audit 2026-03-10T07:30:18.301354+0000 mon.b (mon.1) 223 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:19.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:19 vm00 bash[28005]: audit 2026-03-10T07:30:18.301354+0000 mon.b (mon.1) 223 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:19.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:19 vm00 bash[28005]: audit 2026-03-10T07:30:18.306169+0000 mon.a (mon.0) 1814 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:19.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:19 vm00 bash[28005]: audit 2026-03-10T07:30:18.306169+0000 mon.a (mon.0) 1814 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:19.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:19 vm03 bash[23382]: audit 2026-03-10T07:30:18.291574+0000 mon.a (mon.0) 1812 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:19.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:19 vm03 bash[23382]: audit 2026-03-10T07:30:18.291574+0000 mon.a (mon.0) 1812 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:19.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:19 vm03 bash[23382]: cluster 2026-03-10T07:30:18.300439+0000 mon.a (mon.0) 1813 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-10T07:30:19.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:19 vm03 bash[23382]: cluster 2026-03-10T07:30:18.300439+0000 mon.a (mon.0) 1813 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-10T07:30:19.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:19 vm03 bash[23382]: audit 2026-03-10T07:30:18.301354+0000 mon.b (mon.1) 223 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:19.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:19 vm03 bash[23382]: audit 2026-03-10T07:30:18.301354+0000 mon.b (mon.1) 223 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:19.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:19 vm03 bash[23382]: audit 2026-03-10T07:30:18.306169+0000 mon.a (mon.0) 1814 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:19.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:19 vm03 bash[23382]: audit 2026-03-10T07:30:18.306169+0000 mon.a (mon.0) 1814 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: cluster 2026-03-10T07:30:18.606278+0000 mgr.y (mgr.24407) 217 : cluster [DBG] pgmap v261: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: cluster 2026-03-10T07:30:18.606278+0000 mgr.y (mgr.24407) 217 : cluster [DBG] pgmap v261: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: audit 2026-03-10T07:30:19.296177+0000 mon.a (mon.0) 1815 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]': finished 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: audit 2026-03-10T07:30:19.296177+0000 mon.a (mon.0) 1815 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]': finished 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: audit 2026-03-10T07:30:19.300384+0000 mon.b (mon.1) 224 : audit [INF] from='client.? 
192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: audit 2026-03-10T07:30:19.300384+0000 mon.b (mon.1) 224 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: cluster 2026-03-10T07:30:19.306287+0000 mon.a (mon.0) 1816 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: cluster 2026-03-10T07:30:19.306287+0000 mon.a (mon.0) 1816 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: audit 2026-03-10T07:30:19.311556+0000 mon.b (mon.1) 225 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: audit 2026-03-10T07:30:19.311556+0000 mon.b (mon.1) 225 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: audit 2026-03-10T07:30:19.311776+0000 mon.b (mon.1) 226 : audit [INF] from='client.? 192.168.123.100:0/1924667098' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: audit 2026-03-10T07:30:19.311776+0000 mon.b (mon.1) 226 : audit [INF] from='client.? 192.168.123.100:0/1924667098' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: audit 2026-03-10T07:30:19.313686+0000 mon.a (mon.0) 1817 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: audit 2026-03-10T07:30:19.313686+0000 mon.a (mon.0) 1817 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: audit 2026-03-10T07:30:19.314182+0000 mon.a (mon.0) 1818 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: cluster 2026-03-10T07:30:18.606278+0000 mgr.y (mgr.24407) 217 : cluster [DBG] pgmap v261: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: cluster 2026-03-10T07:30:18.606278+0000 mgr.y (mgr.24407) 217 : cluster [DBG] pgmap v261: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: audit 2026-03-10T07:30:19.296177+0000 mon.a (mon.0) 1815 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]': finished 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: audit 2026-03-10T07:30:19.296177+0000 mon.a (mon.0) 1815 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]': finished 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: audit 2026-03-10T07:30:19.300384+0000 mon.b (mon.1) 224 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: audit 2026-03-10T07:30:19.300384+0000 mon.b (mon.1) 224 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: cluster 2026-03-10T07:30:19.306287+0000 mon.a (mon.0) 1816 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: cluster 2026-03-10T07:30:19.306287+0000 mon.a (mon.0) 1816 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: audit 2026-03-10T07:30:19.311556+0000 mon.b (mon.1) 225 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: audit 2026-03-10T07:30:19.311556+0000 mon.b (mon.1) 225 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: audit 2026-03-10T07:30:19.311776+0000 mon.b (mon.1) 226 : audit [INF] from='client.? 
192.168.123.100:0/1924667098' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: audit 2026-03-10T07:30:19.311776+0000 mon.b (mon.1) 226 : audit [INF] from='client.? 192.168.123.100:0/1924667098' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: audit 2026-03-10T07:30:19.313686+0000 mon.a (mon.0) 1817 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: audit 2026-03-10T07:30:19.313686+0000 mon.a (mon.0) 1817 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: audit 2026-03-10T07:30:19.314182+0000 mon.a (mon.0) 1818 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: audit 2026-03-10T07:30:19.314182+0000 mon.a (mon.0) 1818 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: audit 2026-03-10T07:30:19.314280+0000 mon.a (mon.0) 1819 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: audit 2026-03-10T07:30:19.314280+0000 mon.a (mon.0) 1819 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: audit 2026-03-10T07:30:20.301073+0000 mon.a (mon.0) 1820 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]': finished 2026-03-10T07:30:20.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: audit 2026-03-10T07:30:20.301073+0000 mon.a (mon.0) 1820 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]': finished 2026-03-10T07:30:20.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: audit 2026-03-10T07:30:20.301247+0000 mon.a (mon.0) 1821 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:20.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: audit 2026-03-10T07:30:20.301247+0000 mon.a (mon.0) 1821 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:20.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: audit 2026-03-10T07:30:20.301523+0000 mon.a (mon.0) 1822 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:20.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: audit 2026-03-10T07:30:20.301523+0000 mon.a (mon.0) 1822 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:20.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: cluster 2026-03-10T07:30:20.321532+0000 mon.a (mon.0) 1823 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-10T07:30:20.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:20 vm00 bash[28005]: cluster 2026-03-10T07:30:20.321532+0000 mon.a (mon.0) 1823 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-10T07:30:20.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: audit 2026-03-10T07:30:19.314182+0000 mon.a (mon.0) 1818 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: audit 2026-03-10T07:30:19.314280+0000 mon.a (mon.0) 1819 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: audit 2026-03-10T07:30:19.314280+0000 mon.a (mon.0) 1819 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: audit 2026-03-10T07:30:20.301073+0000 mon.a (mon.0) 1820 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]': finished 2026-03-10T07:30:20.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: audit 2026-03-10T07:30:20.301073+0000 mon.a (mon.0) 1820 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]': finished 2026-03-10T07:30:20.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: audit 2026-03-10T07:30:20.301247+0000 mon.a (mon.0) 1821 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:20.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: audit 2026-03-10T07:30:20.301247+0000 mon.a (mon.0) 1821 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:20.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: audit 2026-03-10T07:30:20.301523+0000 mon.a (mon.0) 1822 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:20.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: audit 2026-03-10T07:30:20.301523+0000 mon.a (mon.0) 1822 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:20.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: cluster 2026-03-10T07:30:20.321532+0000 mon.a (mon.0) 1823 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-10T07:30:20.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:20 vm00 bash[20701]: cluster 2026-03-10T07:30:20.321532+0000 mon.a (mon.0) 1823 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-10T07:30:20.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: cluster 2026-03-10T07:30:18.606278+0000 mgr.y (mgr.24407) 217 : cluster [DBG] pgmap v261: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T07:30:20.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: cluster 2026-03-10T07:30:18.606278+0000 mgr.y (mgr.24407) 217 : cluster [DBG] pgmap v261: 292 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T07:30:20.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: audit 2026-03-10T07:30:19.296177+0000 mon.a (mon.0) 1815 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]': finished 2026-03-10T07:30:20.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: audit 2026-03-10T07:30:19.296177+0000 mon.a (mon.0) 1815 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm00-59629-34"}]': finished 2026-03-10T07:30:20.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: audit 2026-03-10T07:30:19.300384+0000 mon.b (mon.1) 224 : audit [INF] from='client.? 192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:20.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: audit 2026-03-10T07:30:19.300384+0000 mon.b (mon.1) 224 : audit [INF] from='client.? 
192.168.123.100:0/3675719372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:20.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: cluster 2026-03-10T07:30:19.306287+0000 mon.a (mon.0) 1816 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-10T07:30:20.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: cluster 2026-03-10T07:30:19.306287+0000 mon.a (mon.0) 1816 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-10T07:30:20.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: audit 2026-03-10T07:30:19.311556+0000 mon.b (mon.1) 225 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: audit 2026-03-10T07:30:19.311556+0000 mon.b (mon.1) 225 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: audit 2026-03-10T07:30:19.311776+0000 mon.b (mon.1) 226 : audit [INF] from='client.? 192.168.123.100:0/1924667098' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: audit 2026-03-10T07:30:19.311776+0000 mon.b (mon.1) 226 : audit [INF] from='client.? 192.168.123.100:0/1924667098' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: audit 2026-03-10T07:30:19.313686+0000 mon.a (mon.0) 1817 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:20.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: audit 2026-03-10T07:30:19.313686+0000 mon.a (mon.0) 1817 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]: dispatch 2026-03-10T07:30:20.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: audit 2026-03-10T07:30:19.314182+0000 mon.a (mon.0) 1818 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: audit 2026-03-10T07:30:19.314182+0000 mon.a (mon.0) 1818 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: audit 2026-03-10T07:30:19.314280+0000 mon.a (mon.0) 1819 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: audit 2026-03-10T07:30:19.314280+0000 mon.a (mon.0) 1819 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:20.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: audit 2026-03-10T07:30:20.301073+0000 mon.a (mon.0) 1820 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]': finished 2026-03-10T07:30:20.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: audit 2026-03-10T07:30:20.301073+0000 mon.a (mon.0) 1820 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm00-59629-34"}]': finished 2026-03-10T07:30:20.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: audit 2026-03-10T07:30:20.301247+0000 mon.a (mon.0) 1821 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:20.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: audit 2026-03-10T07:30:20.301247+0000 mon.a (mon.0) 1821 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:20.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: audit 2026-03-10T07:30:20.301523+0000 mon.a (mon.0) 1822 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:20.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: audit 2026-03-10T07:30:20.301523+0000 mon.a (mon.0) 1822 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:20.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: cluster 2026-03-10T07:30:20.321532+0000 mon.a (mon.0) 1823 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-10T07:30:20.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:20 vm03 bash[23382]: cluster 2026-03-10T07:30:20.321532+0000 mon.a (mon.0) 1823 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-10T07:30:21.333 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:30:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:30:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:20.355252+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 
192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:20.355252+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:20.382610+0000 mon.a (mon.0) 1824 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:20.382610+0000 mon.a (mon.0) 1824 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:20.386220+0000 mon.c (mon.2) 211 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:20.386220+0000 mon.c (mon.2) 211 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:20.389925+0000 mon.a (mon.0) 1825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:20.389925+0000 mon.a (mon.0) 1825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:20.390416+0000 mon.c (mon.2) 212 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59629-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:20.390416+0000 mon.c (mon.2) 212 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59629-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:20.404858+0000 mon.a (mon.0) 1826 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59629-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:20.404858+0000 mon.a (mon.0) 1826 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59629-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: cluster 2026-03-10T07:30:20.606749+0000 mgr.y (mgr.24407) 218 : cluster [DBG] pgmap v264: 356 pgs: 64 creating+peering, 7 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 275 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: cluster 2026-03-10T07:30:20.606749+0000 mgr.y (mgr.24407) 218 : cluster [DBG] pgmap v264: 356 pgs: 64 creating+peering, 7 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 275 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:20.965048+0000 mon.a (mon.0) 1827 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59629-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:20.965048+0000 mon.a (mon.0) 1827 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59629-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:20.988001+0000 mon.c (mon.2) 213 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm00-59629-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:20.988001+0000 mon.c (mon.2) 213 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm00-59629-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: cluster 2026-03-10T07:30:21.000940+0000 mon.a (mon.0) 1828 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: cluster 2026-03-10T07:30:21.000940+0000 mon.a (mon.0) 1828 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:21.003307+0000 mon.a (mon.0) 1829 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm00-59629-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:21.003307+0000 mon.a (mon.0) 1829 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm00-59629-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:21.005527+0000 mon.c (mon.2) 214 : audit [INF] from='client.? 192.168.123.100:0/3860071936' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:21.005527+0000 mon.c (mon.2) 214 : audit [INF] from='client.? 192.168.123.100:0/3860071936' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:21.006496+0000 mon.b (mon.1) 227 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:21.006496+0000 mon.b (mon.1) 227 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:21.008620+0000 mon.a (mon.0) 1830 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:21.008620+0000 mon.a (mon.0) 1830 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:21.009222+0000 mon.a (mon.0) 1831 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:21 vm00 bash[20701]: audit 2026-03-10T07:30:21.009222+0000 mon.a (mon.0) 1831 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: audit 2026-03-10T07:30:20.355252+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: audit 2026-03-10T07:30:20.355252+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:21.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: audit 2026-03-10T07:30:20.382610+0000 mon.a (mon.0) 1824 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: audit 2026-03-10T07:30:20.382610+0000 mon.a (mon.0) 1824 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: audit 2026-03-10T07:30:20.386220+0000 mon.c (mon.2) 211 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: audit 2026-03-10T07:30:20.386220+0000 mon.c (mon.2) 211 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: audit 2026-03-10T07:30:20.389925+0000 mon.a (mon.0) 1825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: audit 2026-03-10T07:30:20.389925+0000 mon.a (mon.0) 1825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: audit 2026-03-10T07:30:20.390416+0000 mon.c (mon.2) 212 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59629-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: audit 2026-03-10T07:30:20.390416+0000 mon.c (mon.2) 212 : audit [INF] from='client.? 
192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59629-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: audit 2026-03-10T07:30:20.404858+0000 mon.a (mon.0) 1826 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59629-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: audit 2026-03-10T07:30:20.404858+0000 mon.a (mon.0) 1826 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59629-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: cluster 2026-03-10T07:30:20.606749+0000 mgr.y (mgr.24407) 218 : cluster [DBG] pgmap v264: 356 pgs: 64 creating+peering, 7 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 275 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: cluster 2026-03-10T07:30:20.606749+0000 mgr.y (mgr.24407) 218 : cluster [DBG] pgmap v264: 356 pgs: 64 creating+peering, 7 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 275 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: audit 2026-03-10T07:30:20.965048+0000 mon.a (mon.0) 1827 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59629-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: audit 2026-03-10T07:30:20.965048+0000 mon.a (mon.0) 1827 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59629-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: audit 2026-03-10T07:30:20.988001+0000 mon.c (mon.2) 213 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm00-59629-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: audit 2026-03-10T07:30:20.988001+0000 mon.c (mon.2) 213 : audit [INF] from='client.? 
192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm00-59629-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59629-35"}]: dispatch
2026-03-10T07:30:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: cluster 2026-03-10T07:30:21.000940+0000 mon.a (mon.0) 1828 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in
2026-03-10T07:30:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: audit 2026-03-10T07:30:21.003307+0000 mon.a (mon.0) 1829 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm00-59629-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59629-35"}]: dispatch
2026-03-10T07:30:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: audit 2026-03-10T07:30:21.005527+0000 mon.c (mon.2) 214 : audit [INF] from='client.? 192.168.123.100:0/3860071936' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-41","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: audit 2026-03-10T07:30:21.006496+0000 mon.b (mon.1) 227 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-25", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:30:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: audit 2026-03-10T07:30:21.008620+0000 mon.a (mon.0) 1830 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-25", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:30:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:21 vm00 bash[28005]: audit 2026-03-10T07:30:21.009222+0000 mon.a (mon.0) 1831 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-41","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:21 vm03 bash[23382]: audit 2026-03-10T07:30:20.355252+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59629-35"}]: dispatch
2026-03-10T07:30:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:21 vm03 bash[23382]: audit 2026-03-10T07:30:20.382610+0000 mon.a (mon.0) 1824 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59629-35"}]: dispatch
2026-03-10T07:30:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:21 vm03 bash[23382]: audit 2026-03-10T07:30:20.386220+0000 mon.c (mon.2) 211 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]: dispatch
2026-03-10T07:30:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:21 vm03 bash[23382]: audit 2026-03-10T07:30:20.389925+0000 mon.a (mon.0) 1825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]: dispatch
2026-03-10T07:30:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:21 vm03 bash[23382]: audit 2026-03-10T07:30:20.390416+0000 mon.c (mon.2) 212 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59629-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:21 vm03 bash[23382]: audit 2026-03-10T07:30:20.404858+0000 mon.a (mon.0) 1826 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59629-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:21 vm03 bash[23382]: cluster 2026-03-10T07:30:20.606749+0000 mgr.y (mgr.24407) 218 : cluster [DBG] pgmap v264: 356 pgs: 64 creating+peering, 7 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 275 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:30:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:21 vm03 bash[23382]: audit 2026-03-10T07:30:20.965048+0000 mon.a (mon.0) 1827 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm00-59629-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:30:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:21 vm03 bash[23382]: audit 2026-03-10T07:30:20.988001+0000 mon.c (mon.2) 213 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm00-59629-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59629-35"}]: dispatch
2026-03-10T07:30:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:21 vm03 bash[23382]: cluster 2026-03-10T07:30:21.000940+0000 mon.a (mon.0) 1828 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in
2026-03-10T07:30:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:21 vm03 bash[23382]: audit 2026-03-10T07:30:21.003307+0000 mon.a (mon.0) 1829 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm00-59629-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59629-35"}]: dispatch
2026-03-10T07:30:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:21 vm03 bash[23382]: audit 2026-03-10T07:30:21.005527+0000 mon.c (mon.2) 214 : audit [INF] from='client.? 192.168.123.100:0/3860071936' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-41","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:21 vm03 bash[23382]: audit 2026-03-10T07:30:21.006496+0000 mon.b (mon.1) 227 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-25", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:30:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:21 vm03 bash[23382]: audit 2026-03-10T07:30:21.008620+0000 mon.a (mon.0) 1830 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-25", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:30:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:21 vm03 bash[23382]: audit 2026-03-10T07:30:21.009222+0000 mon.a (mon.0) 1831 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-41","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:23.264 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:30:23 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:30:23.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:22 vm03 bash[23382]: audit 2026-03-10T07:30:21.969110+0000 mon.a (mon.0) 1832 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-25", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:30:23.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:22 vm03 bash[23382]: audit 2026-03-10T07:30:21.969236+0000 mon.a (mon.0) 1833 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-41","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:30:23.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:22 vm03 bash[23382]: audit 2026-03-10T07:30:21.974150+0000 mon.b (mon.1) 228 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-25"}]: dispatch
2026-03-10T07:30:23.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:22 vm03 bash[23382]: cluster 2026-03-10T07:30:21.987210+0000 mon.a (mon.0) 1834 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in
2026-03-10T07:30:23.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:22 vm03 bash[23382]: audit 2026-03-10T07:30:21.989907+0000 mon.a (mon.0) 1835 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-25"}]: dispatch
2026-03-10T07:30:23.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:23 vm00 bash[28005]: audit 2026-03-10T07:30:21.969110+0000 mon.a (mon.0) 1832 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-25", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:30:23.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:23 vm00 bash[28005]: audit 2026-03-10T07:30:21.969236+0000 mon.a (mon.0) 1833 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-41","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:30:23.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:23 vm00 bash[28005]: audit 2026-03-10T07:30:21.974150+0000 mon.b (mon.1) 228 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-25"}]: dispatch
2026-03-10T07:30:23.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:23 vm00 bash[28005]: cluster 2026-03-10T07:30:21.987210+0000 mon.a (mon.0) 1834 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in
2026-03-10T07:30:23.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:23 vm00 bash[28005]: audit 2026-03-10T07:30:21.989907+0000 mon.a (mon.0) 1835 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-25"}]: dispatch
2026-03-10T07:30:23.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:22 vm00 bash[20701]: audit 2026-03-10T07:30:21.969110+0000 mon.a (mon.0) 1832 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-25", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:30:23.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:22 vm00 bash[20701]: audit 2026-03-10T07:30:21.969236+0000 mon.a (mon.0) 1833 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-41","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:30:23.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:22 vm00 bash[20701]: audit 2026-03-10T07:30:21.974150+0000 mon.b (mon.1) 228 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-25"}]: dispatch
2026-03-10T07:30:23.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:22 vm00 bash[20701]: cluster 2026-03-10T07:30:21.987210+0000 mon.a (mon.0) 1834 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in
2026-03-10T07:30:23.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:22 vm00 bash[20701]: audit 2026-03-10T07:30:21.989907+0000 mon.a (mon.0) 1835 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-25"}]: dispatch
2026-03-10T07:30:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:24 vm03 bash[23382]: cluster 2026-03-10T07:30:22.607113+0000 mgr.y (mgr.24407) 219 : cluster [DBG] pgmap v267: 388 pgs: 32 unknown, 64 creating+peering, 7 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 275 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:30:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:24 vm03 bash[23382]: audit 2026-03-10T07:30:22.978488+0000 mon.a (mon.0) 1836 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm00-59629-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59629-35"}]': finished
2026-03-10T07:30:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:24 vm03 bash[23382]: audit 2026-03-10T07:30:22.978605+0000 mon.a (mon.0) 1837 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-25"}]': finished
2026-03-10T07:30:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:24 vm03 bash[23382]: audit 2026-03-10T07:30:22.992523+0000 mon.b (mon.1) 229 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-25", "mode": "writeback"}]: dispatch
2026-03-10T07:30:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:24 vm03 bash[23382]: cluster 2026-03-10T07:30:22.996950+0000 mon.a (mon.0) 1838 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in
2026-03-10T07:30:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:24 vm03 bash[23382]: audit 2026-03-10T07:30:23.007743+0000 mon.b (mon.1) 230 : audit [INF] from='client.? 192.168.123.100:0/856029291' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-42","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:24 vm03 bash[23382]: audit 2026-03-10T07:30:23.014393+0000 mon.a (mon.0) 1839 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-25", "mode": "writeback"}]: dispatch
2026-03-10T07:30:24.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:24 vm03 bash[23382]: audit 2026-03-10T07:30:23.016807+0000 mon.a (mon.0) 1840 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-42","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:24.265 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:24 vm03 bash[23382]: audit 2026-03-10T07:30:23.059138+0000 mgr.y (mgr.24407) 220 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:30:24.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:24 vm00 bash[20701]: cluster 2026-03-10T07:30:22.607113+0000 mgr.y (mgr.24407) 219 : cluster [DBG] pgmap v267: 388 pgs: 32 unknown, 64 creating+peering, 7 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 275 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:30:24.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:24 vm00 bash[20701]: audit 2026-03-10T07:30:22.978488+0000 mon.a (mon.0) 1836 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm00-59629-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59629-35"}]': finished
2026-03-10T07:30:24.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:24 vm00 bash[20701]: audit 2026-03-10T07:30:22.978605+0000 mon.a (mon.0) 1837 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-25"}]': finished
2026-03-10T07:30:24.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:24 vm00 bash[20701]: audit 2026-03-10T07:30:22.992523+0000 mon.b (mon.1) 229 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-25", "mode": "writeback"}]: dispatch
2026-03-10T07:30:24.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:24 vm00 bash[20701]: cluster 2026-03-10T07:30:22.996950+0000 mon.a (mon.0) 1838 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in
2026-03-10T07:30:24.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:24 vm00 bash[20701]: audit 2026-03-10T07:30:23.007743+0000 mon.b (mon.1) 230 : audit [INF] from='client.? 192.168.123.100:0/856029291' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-42","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:24.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:24 vm00 bash[20701]: audit 2026-03-10T07:30:23.014393+0000 mon.a (mon.0) 1839 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-25", "mode": "writeback"}]: dispatch
2026-03-10T07:30:24.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:24 vm00 bash[20701]: audit 2026-03-10T07:30:23.016807+0000 mon.a (mon.0) 1840 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-42","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:24.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:24 vm00 bash[20701]: audit 2026-03-10T07:30:23.059138+0000 mgr.y (mgr.24407) 220 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:30:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:24 vm00 bash[28005]: cluster 2026-03-10T07:30:22.607113+0000 mgr.y (mgr.24407) 219 : cluster [DBG] pgmap v267: 388 pgs: 32 unknown, 64 creating+peering, 7 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 275 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:30:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:24 vm00 bash[28005]: audit 2026-03-10T07:30:22.978488+0000 mon.a (mon.0) 1836 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm00-59629-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm00-59629-35"}]': finished
2026-03-10T07:30:24.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:24 vm00 bash[28005]: audit 2026-03-10T07:30:22.978605+0000 mon.a (mon.0) 1837 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-25"}]': finished
2026-03-10T07:30:24.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:24 vm00 bash[28005]: audit 2026-03-10T07:30:22.992523+0000 mon.b (mon.1) 229 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-25", "mode": "writeback"}]: dispatch
2026-03-10T07:30:24.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:24 vm00 bash[28005]: cluster 2026-03-10T07:30:22.996950+0000 mon.a (mon.0) 1838 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in
2026-03-10T07:30:24.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:24 vm00 bash[28005]: audit 2026-03-10T07:30:23.007743+0000 mon.b (mon.1) 230 : audit [INF] from='client.? 192.168.123.100:0/856029291' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-42","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:24.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:24 vm00 bash[28005]: audit 2026-03-10T07:30:23.014393+0000 mon.a (mon.0) 1839 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-25", "mode": "writeback"}]: dispatch
2026-03-10T07:30:24.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:24 vm00 bash[28005]: audit 2026-03-10T07:30:23.016807+0000 mon.a (mon.0) 1840 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-42","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:24.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:24 vm00 bash[28005]: audit 2026-03-10T07:30:23.059138+0000 mgr.y (mgr.24407) 220 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:30:25.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:25 vm00 bash[28005]: cluster 2026-03-10T07:30:23.978981+0000 mon.a (mon.0) 1841 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:30:25.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:25 vm00 bash[28005]: audit 2026-03-10T07:30:23.982606+0000 mon.a (mon.0) 1842 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-25", "mode": "writeback"}]': finished
2026-03-10T07:30:25.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:25 vm00 bash[28005]: audit 2026-03-10T07:30:23.982708+0000 mon.a (mon.0) 1843 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-42","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:30:25.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:25 vm00 bash[28005]: cluster 2026-03-10T07:30:23.990505+0000 mon.a (mon.0) 1844 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in
2026-03-10T07:30:25.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:25 vm00 bash[28005]: audit 2026-03-10T07:30:24.074095+0000 mon.b (mon.1) 231 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:30:25.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:25 vm00 bash[28005]: audit 2026-03-10T07:30:24.076293+0000 mon.a (mon.0) 1845 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:30:25.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:25 vm00 bash[28005]: audit 2026-03-10T07:30:24.278408+0000 mon.a (mon.0) 1846 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:30:25.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:25 vm00 bash[28005]: audit 2026-03-10T07:30:24.279351+0000 mon.c (mon.2) 215 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:30:25.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:25 vm00 bash[20701]: cluster 2026-03-10T07:30:23.978981+0000 mon.a (mon.0) 1841 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:30:25.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:25 vm00 bash[20701]: audit 2026-03-10T07:30:23.982606+0000 mon.a (mon.0) 1842 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-25", "mode": "writeback"}]': finished
2026-03-10T07:30:25.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:25 vm00 bash[20701]: audit 2026-03-10T07:30:23.982708+0000 mon.a (mon.0) 1843 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-42","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:30:25.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:25 vm00 bash[20701]: cluster 2026-03-10T07:30:23.990505+0000 mon.a (mon.0) 1844 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in
2026-03-10T07:30:25.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:25 vm00 bash[20701]: audit 2026-03-10T07:30:24.074095+0000 mon.b (mon.1) 231 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:30:25.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:25 vm00 bash[20701]: audit 2026-03-10T07:30:24.076293+0000 mon.a (mon.0) 1845 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:30:25.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:25 vm00 bash[20701]: audit 2026-03-10T07:30:24.278408+0000 mon.a (mon.0) 1846 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:30:25.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:25 vm00 bash[20701]: audit 2026-03-10T07:30:24.279351+0000 mon.c (mon.2) 215 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:30:25.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:25 vm03 bash[23382]: cluster 2026-03-10T07:30:23.978981+0000 mon.a (mon.0) 1841 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:30:25.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:25 vm03 bash[23382]: audit 2026-03-10T07:30:23.982606+0000 mon.a (mon.0) 1842 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-25", "mode": "writeback"}]': finished
2026-03-10T07:30:25.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:25 vm03 bash[23382]: audit 2026-03-10T07:30:23.982708+0000 mon.a (mon.0) 1843 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-42","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:30:25.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:25 vm03 bash[23382]: cluster 2026-03-10T07:30:23.990505+0000 mon.a (mon.0) 1844 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in
2026-03-10T07:30:25.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:25 vm03 bash[23382]: audit 2026-03-10T07:30:24.074095+0000 mon.b (mon.1) 231 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:30:25.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:25 vm03 bash[23382]: audit 2026-03-10T07:30:24.076293+0000 mon.a (mon.0) 1845 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:30:25.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:25 vm03 bash[23382]: audit 2026-03-10T07:30:24.278408+0000 mon.a (mon.0) 1846 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:30:25.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:25 vm03 bash[23382]: audit 2026-03-10T07:30:24.279351+0000 mon.c (mon.2) 215 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:30:26.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:26 vm00 bash[20701]: cluster 2026-03-10T07:30:24.607614+0000 mgr.y (mgr.24407) 221 : cluster [DBG] pgmap v270: 428 pgs: 56 unknown, 48 creating+peering, 7 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 307 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 0 op/s
2026-03-10T07:30:26.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:26 vm00 bash[20701]: audit 2026-03-10T07:30:25.016092+0000 mon.a (mon.0) 1847 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:30:26.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:26 vm00 bash[20701]: audit 2026-03-10T07:30:25.019059+0000 mon.b (mon.1) 232 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-25"}]: dispatch
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-25"}]: dispatch
2026-03-10T07:30:26.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:26 vm00 bash[20701]: cluster 2026-03-10T07:30:25.021123+0000 mon.a (mon.0) 1848 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in
2026-03-10T07:30:26.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:26 vm00 bash[20701]: audit 2026-03-10T07:30:25.024651+0000 mon.c (mon.2) 216 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59629-35"}]: dispatch
2026-03-10T07:30:26.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:26 vm00 bash[20701]: audit 2026-03-10T07:30:25.032436+0000 mon.c (mon.2) 217 : audit [INF] from='client.? 192.168.123.100:0/1805561761' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-43","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:26.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:26 vm00 bash[20701]: audit 2026-03-10T07:30:25.033984+0000 mon.a (mon.0) 1849 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-25"}]: dispatch
2026-03-10T07:30:26.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:26 vm00 bash[20701]: audit 2026-03-10T07:30:25.054088+0000 mon.a (mon.0) 1850 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59629-35"}]: dispatch
2026-03-10T07:30:26.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:26 vm00 bash[28005]: cluster 2026-03-10T07:30:24.607614+0000 mgr.y (mgr.24407) 221 : cluster [DBG] pgmap v270: 428 pgs: 56 unknown, 48 creating+peering, 7 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 307 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 0 op/s
2026-03-10T07:30:26.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:26 vm00 bash[28005]: audit 2026-03-10T07:30:25.016092+0000 mon.a (mon.0) 1847 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:30:26.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:26 vm00 bash[28005]: audit 2026-03-10T07:30:25.019059+0000 mon.b (mon.1) 232 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-25"}]: dispatch
2026-03-10T07:30:26.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:26 vm00 bash[28005]: cluster 2026-03-10T07:30:25.021123+0000 mon.a (mon.0) 1848 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in
2026-03-10T07:30:26.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:26 vm00 bash[28005]: audit 2026-03-10T07:30:25.024651+0000 mon.c (mon.2) 216 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59629-35"}]: dispatch
2026-03-10T07:30:26.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:26 vm00 bash[28005]: audit 2026-03-10T07:30:25.032436+0000 mon.c (mon.2) 217 : audit [INF] from='client.? 192.168.123.100:0/1805561761' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-43","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:26.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:26 vm00 bash[28005]: audit 2026-03-10T07:30:25.033984+0000 mon.a (mon.0) 1849 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-25"}]: dispatch
2026-03-10T07:30:26.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:26 vm00 bash[28005]: audit 2026-03-10T07:30:25.054088+0000 mon.a (mon.0) 1850 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59629-35"}]: dispatch
2026-03-10T07:30:26.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:26 vm00 bash[28005]: audit 2026-03-10T07:30:25.054199+0000 mon.a (mon.0) 1851 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-43","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:26.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:26 vm00 bash[28005]: cluster 2026-03-10T07:30:25.274595+0000 mon.a (mon.0) 1852 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:30:26.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:26 vm00 bash[28005]: cluster 2026-03-10T07:30:26.016339+0000 mon.a (mon.0) 1853 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:30:26.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:26 vm00 bash[28005]: audit 2026-03-10T07:30:26.025425+0000 mon.a (mon.0) 1854 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-25"}]': finished
2026-03-10T07:30:26.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:26 vm00 bash[28005]: audit 2026-03-10T07:30:26.025683+0000 mon.a (mon.0) 1855 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59629-35"}]': finished
2026-03-10T07:30:26.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:26 vm00 bash[28005]: audit 2026-03-10T07:30:26.025749+0000 mon.a (mon.0) 1856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-43","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:30:26.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:26 vm00 bash[28005]: audit 2026-03-10T07:30:26.035704+0000 mon.c (mon.2) 218 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]: dispatch
2026-03-10T07:30:26.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:26 vm00 bash[28005]: cluster 2026-03-10T07:30:26.047800+0000 mon.a (mon.0) 1857 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in
2026-03-10T07:30:26.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:26 vm00 bash[20701]: audit 2026-03-10T07:30:25.054199+0000 mon.a (mon.0) 1851 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-43","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:26.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:26 vm00 bash[20701]: cluster 2026-03-10T07:30:25.274595+0000 mon.a (mon.0) 1852 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:30:26.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:26 vm00 bash[20701]: cluster 2026-03-10T07:30:26.016339+0000 mon.a (mon.0) 1853 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:30:26.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:26 vm00 bash[20701]: audit 2026-03-10T07:30:26.025425+0000 mon.a (mon.0) 1854 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-25"}]': finished
2026-03-10T07:30:26.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:26 vm00 bash[20701]: audit 2026-03-10T07:30:26.025683+0000 mon.a (mon.0) 1855 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59629-35"}]': finished
2026-03-10T07:30:26.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:26 vm00 bash[20701]: audit 2026-03-10T07:30:26.025749+0000 mon.a (mon.0) 1856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-43","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:30:26.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:26 vm00 bash[20701]: audit 2026-03-10T07:30:26.035704+0000 mon.c (mon.2) 218 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]: dispatch
2026-03-10T07:30:26.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:26 vm00 bash[20701]: cluster 2026-03-10T07:30:26.047800+0000 mon.a (mon.0) 1857 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in
2026-03-10T07:30:26.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:26 vm03 bash[23382]: cluster 2026-03-10T07:30:24.607614+0000 mgr.y (mgr.24407) 221 : cluster [DBG] pgmap v270: 428 pgs: 56 unknown, 48 creating+peering, 7 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 307 active+clean; 457 KiB data, 700 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 0 op/s
2026-03-10T07:30:26.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:26 vm03 bash[23382]: audit 2026-03-10T07:30:25.016092+0000 mon.a (mon.0) 1847 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:30:26.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:26 vm03 bash[23382]: audit 2026-03-10T07:30:25.019059+0000 mon.b (mon.1) 232 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-25"}]: dispatch
2026-03-10T07:30:26.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:26 vm03 bash[23382]: cluster 2026-03-10T07:30:25.021123+0000 mon.a (mon.0) 1848 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in
2026-03-10T07:30:26.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:26 vm03 bash[23382]: audit 2026-03-10T07:30:25.024651+0000 mon.c (mon.2) 216 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59629-35"}]: dispatch
2026-03-10T07:30:26.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:26 vm03 bash[23382]: audit 2026-03-10T07:30:25.032436+0000 mon.c (mon.2) 217 : audit [INF] from='client.? 192.168.123.100:0/1805561761' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-43","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:26.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:26 vm03 bash[23382]: audit 2026-03-10T07:30:25.033984+0000 mon.a (mon.0) 1849 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-25"}]: dispatch
2026-03-10T07:30:26.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:26 vm03 bash[23382]: audit 2026-03-10T07:30:25.054088+0000 mon.a (mon.0) 1850 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59629-35"}]: dispatch
2026-03-10T07:30:26.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:26 vm03 bash[23382]: audit 2026-03-10T07:30:25.054199+0000 mon.a (mon.0) 1851 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-43","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:26.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:26 vm03 bash[23382]: cluster 2026-03-10T07:30:25.274595+0000 mon.a (mon.0) 1852 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:30:26.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:26 vm03 bash[23382]: cluster 2026-03-10T07:30:26.016339+0000 mon.a (mon.0) 1853 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:30:26.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:26 vm03 bash[23382]: audit 2026-03-10T07:30:26.025425+0000 mon.a (mon.0) 1854 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-25"}]': finished
2026-03-10T07:30:26.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:26 vm03 bash[23382]: audit 2026-03-10T07:30:26.025683+0000 mon.a (mon.0) 1855 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm00-59629-35"}]': finished
2026-03-10T07:30:26.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:26 vm03 bash[23382]: audit 2026-03-10T07:30:26.025749+0000 mon.a (mon.0) 1856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm00-59637-43","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:30:26.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:26 vm03 bash[23382]: audit 2026-03-10T07:30:26.035704+0000 mon.c (mon.2) 218 : audit [INF] from='client.? 192.168.123.100:0/506825491' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]: dispatch
2026-03-10T07:30:26.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:26 vm03 bash[23382]: cluster 2026-03-10T07:30:26.047800+0000 mon.a (mon.0) 1857 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in
2026-03-10T07:30:27.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:27 vm03 bash[23382]: audit 2026-03-10T07:30:26.049666+0000 mon.a (mon.0) 1858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]: dispatch
2026-03-10T07:30:27.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:27 vm00 bash[28005]: audit 2026-03-10T07:30:26.049666+0000 mon.a (mon.0) 1858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]: dispatch
2026-03-10T07:30:27.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:27 vm00 bash[20701]: audit 2026-03-10T07:30:26.049666+0000 mon.a (mon.0) 1858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:27.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:27 vm00 bash[20701]: audit 2026-03-10T07:30:26.049666+0000 mon.a (mon.0) 1858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]: dispatch 2026-03-10T07:30:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:28 vm03 bash[23382]: cluster 2026-03-10T07:30:26.608045+0000 mgr.y (mgr.24407) 222 : cluster [DBG] pgmap v273: 452 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 404 active+clean; 457 KiB data, 701 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 3 op/s 2026-03-10T07:30:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:28 vm03 bash[23382]: cluster 2026-03-10T07:30:26.608045+0000 mgr.y (mgr.24407) 222 : cluster [DBG] pgmap v273: 452 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 404 active+clean; 457 KiB data, 701 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 3 op/s 2026-03-10T07:30:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:28 vm03 bash[23382]: audit 2026-03-10T07:30:27.177983+0000 mon.a (mon.0) 1859 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]': finished 2026-03-10T07:30:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:28 vm03 bash[23382]: audit 2026-03-10T07:30:27.177983+0000 mon.a (mon.0) 1859 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]': finished 2026-03-10T07:30:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:28 vm03 bash[23382]: cluster 2026-03-10T07:30:27.190001+0000 mon.a (mon.0) 1860 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in 2026-03-10T07:30:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:28 vm03 bash[23382]: cluster 2026-03-10T07:30:27.190001+0000 mon.a (mon.0) 1860 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in 2026-03-10T07:30:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:28 vm03 bash[23382]: audit 2026-03-10T07:30:27.215358+0000 mon.a (mon.0) 1861 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59629-36"}]: dispatch 2026-03-10T07:30:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:28 vm03 bash[23382]: audit 2026-03-10T07:30:27.215358+0000 mon.a (mon.0) 1861 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59629-36"}]: dispatch 2026-03-10T07:30:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:28 vm03 bash[23382]: audit 2026-03-10T07:30:27.223312+0000 mon.a (mon.0) 1862 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59629-36"}]: dispatch 2026-03-10T07:30:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:28 vm03 bash[23382]: audit 2026-03-10T07:30:27.223312+0000 mon.a (mon.0) 1862 : audit [INF] from='client.? 
192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59629-36"}]: dispatch 2026-03-10T07:30:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:28 vm03 bash[23382]: audit 2026-03-10T07:30:27.223897+0000 mon.a (mon.0) 1863 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59629-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:28 vm03 bash[23382]: audit 2026-03-10T07:30:27.223897+0000 mon.a (mon.0) 1863 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59629-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:28 vm03 bash[23382]: audit 2026-03-10T07:30:28.183961+0000 mon.a (mon.0) 1864 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59629-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:28 vm03 bash[23382]: audit 2026-03-10T07:30:28.183961+0000 mon.a (mon.0) 1864 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59629-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:28 vm03 bash[23382]: audit 2026-03-10T07:30:28.207676+0000 mon.b (mon.1) 233 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:28 vm03 bash[23382]: audit 2026-03-10T07:30:28.207676+0000 mon.b (mon.1) 233 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:28.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:28 vm03 bash[23382]: cluster 2026-03-10T07:30:28.208143+0000 mon.a (mon.0) 1865 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in 2026-03-10T07:30:28.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:28 vm03 bash[23382]: cluster 2026-03-10T07:30:28.208143+0000 mon.a (mon.0) 1865 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in 2026-03-10T07:30:28.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:28 vm03 bash[23382]: audit 2026-03-10T07:30:28.215380+0000 mon.a (mon.0) 1866 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59629-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59629-36"}]: dispatch 2026-03-10T07:30:28.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:28 vm03 bash[23382]: audit 2026-03-10T07:30:28.215380+0000 mon.a (mon.0) 1866 : audit [INF] from='client.? 
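[annotation] The FlushAsync case rebuilds its erasure-coded fixture: any leftover profile and CRUSH rule are removed, the profile is recreated with k=2, m=1 and an osd failure domain, and a pool is created against it. A minimal CLI sketch of the same steps, assuming the stock ceph tool (names taken from the log):
    ceph osd erasure-code-profile set testprofile-FlushAsync_vm00-59629-36 k=2 m=1 crush-failure-domain=osd
    ceph osd pool create FlushAsync_vm00-59629-36 8 8 erasure testprofile-FlushAsync_vm00-59629-36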
2026-03-10T07:30:28.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:28 vm00 bash[28005]: cluster 2026-03-10T07:30:26.608045+0000 mgr.y (mgr.24407) 222 : cluster [DBG] pgmap v273: 452 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 404 active+clean; 457 KiB data, 701 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 3 op/s
2026-03-10T07:30:28.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:28 vm00 bash[28005]: audit 2026-03-10T07:30:27.177983+0000 mon.a (mon.0) 1859 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]': finished
2026-03-10T07:30:28.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:28 vm00 bash[28005]: cluster 2026-03-10T07:30:27.190001+0000 mon.a (mon.0) 1860 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in
2026-03-10T07:30:28.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:28 vm00 bash[28005]: audit 2026-03-10T07:30:27.215358+0000 mon.a (mon.0) 1861 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59629-36"}]: dispatch
2026-03-10T07:30:28.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:28 vm00 bash[28005]: audit 2026-03-10T07:30:27.223312+0000 mon.a (mon.0) 1862 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59629-36"}]: dispatch
2026-03-10T07:30:28.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:28 vm00 bash[28005]: audit 2026-03-10T07:30:27.223897+0000 mon.a (mon.0) 1863 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59629-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:28.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:28 vm00 bash[28005]: audit 2026-03-10T07:30:28.183961+0000 mon.a (mon.0) 1864 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59629-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:30:28.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:28 vm00 bash[28005]: audit 2026-03-10T07:30:28.207676+0000 mon.b (mon.1) 233 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-27","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:28.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:28 vm00 bash[28005]: cluster 2026-03-10T07:30:28.208143+0000 mon.a (mon.0) 1865 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in
2026-03-10T07:30:28.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:28 vm00 bash[28005]: audit 2026-03-10T07:30:28.215380+0000 mon.a (mon.0) 1866 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59629-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59629-36"}]: dispatch
2026-03-10T07:30:28.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:28 vm00 bash[20701]: cluster 2026-03-10T07:30:26.608045+0000 mgr.y (mgr.24407) 222 : cluster [DBG] pgmap v273: 452 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 404 active+clean; 457 KiB data, 701 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 3 op/s
2026-03-10T07:30:28.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:28 vm00 bash[20701]: audit 2026-03-10T07:30:27.177983+0000 mon.a (mon.0) 1859 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm00-59629-35"}]': finished
2026-03-10T07:30:28.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:28 vm00 bash[20701]: cluster 2026-03-10T07:30:27.190001+0000 mon.a (mon.0) 1860 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in
2026-03-10T07:30:28.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:28 vm00 bash[20701]: audit 2026-03-10T07:30:27.215358+0000 mon.a (mon.0) 1861 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59629-36"}]: dispatch
2026-03-10T07:30:28.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:28 vm00 bash[20701]: audit 2026-03-10T07:30:27.223312+0000 mon.a (mon.0) 1862 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59629-36"}]: dispatch
2026-03-10T07:30:28.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:28 vm00 bash[20701]: audit 2026-03-10T07:30:27.223897+0000 mon.a (mon.0) 1863 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59629-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:28.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:28 vm00 bash[20701]: audit 2026-03-10T07:30:28.183961+0000 mon.a (mon.0) 1864 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm00-59629-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:30:28.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:28 vm00 bash[20701]: audit 2026-03-10T07:30:28.207676+0000 mon.b (mon.1) 233 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-27","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:28.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:28 vm00 bash[20701]: cluster 2026-03-10T07:30:28.208143+0000 mon.a (mon.0) 1865 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in
2026-03-10T07:30:28.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:28 vm00 bash[20701]: audit 2026-03-10T07:30:28.215380+0000 mon.a (mon.0) 1866 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59629-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59629-36"}]: dispatch
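[annotation] The mgr.y pgmap lines (v270, v273, v276, ...) are the periodic PG summary the active mgr publishes to the cluster log; the unknown and creating+peering counts shrink as the pools the tests create become active+clean. The same summary can be read on demand with, for example:
    ceph pg stat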
2026-03-10T07:30:29.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:29 vm03 bash[23382]: audit 2026-03-10T07:30:28.217244+0000 mon.a (mon.0) 1867 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-27","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:29.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:29 vm00 bash[28005]: audit 2026-03-10T07:30:28.217244+0000 mon.a (mon.0) 1867 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-27","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:29.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:29 vm00 bash[20701]: audit 2026-03-10T07:30:28.217244+0000 mon.a (mon.0) 1867 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-27","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:30 vm03 bash[23382]: cluster 2026-03-10T07:30:28.608518+0000 mgr.y (mgr.24407) 223 : cluster [DBG] pgmap v276: 388 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 340 active+clean; 457 KiB data, 701 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T07:30:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:30 vm03 bash[23382]: audit 2026-03-10T07:30:29.350584+0000 mon.a (mon.0) 1868 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-27","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:30:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:30 vm03 bash[23382]: cluster 2026-03-10T07:30:29.372878+0000 mon.a (mon.0) 1869 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in
2026-03-10T07:30:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:30 vm03 bash[23382]: audit 2026-03-10T07:30:30.353907+0000 mon.a (mon.0) 1870 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59629-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59629-36"}]': finished
2026-03-10T07:30:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:30 vm03 bash[23382]: cluster 2026-03-10T07:30:30.358175+0000 mon.a (mon.0) 1871 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in
2026-03-10T07:30:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:30 vm00 bash[28005]: cluster 2026-03-10T07:30:28.608518+0000 mgr.y (mgr.24407) 223 : cluster [DBG] pgmap v276: 388 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 340 active+clean; 457 KiB data, 701 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T07:30:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:30 vm00 bash[28005]: audit 2026-03-10T07:30:29.350584+0000 mon.a (mon.0) 1868 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-27","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:30:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:30 vm00 bash[28005]: cluster 2026-03-10T07:30:29.372878+0000 mon.a (mon.0) 1869 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in
2026-03-10T07:30:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:30 vm00 bash[28005]: audit 2026-03-10T07:30:30.353907+0000 mon.a (mon.0) 1870 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59629-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59629-36"}]': finished
2026-03-10T07:30:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:30 vm00 bash[28005]: cluster 2026-03-10T07:30:30.358175+0000 mon.a (mon.0) 1871 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in
2026-03-10T07:30:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:30 vm00 bash[20701]: cluster 2026-03-10T07:30:28.608518+0000 mgr.y (mgr.24407) 223 : cluster [DBG] pgmap v276: 388 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 340 active+clean; 457 KiB data, 701 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T07:30:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:30 vm00 bash[20701]: audit 2026-03-10T07:30:29.350584+0000 mon.a (mon.0) 1868 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-27","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:30:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:30 vm00 bash[20701]: cluster 2026-03-10T07:30:29.372878+0000 mon.a (mon.0) 1869 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in
2026-03-10T07:30:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:30 vm00 bash[20701]: audit 2026-03-10T07:30:30.353907+0000 mon.a (mon.0) 1870 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm00-59629-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm00-59629-36"}]': finished
2026-03-10T07:30:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:30 vm00 bash[20701]: cluster 2026-03-10T07:30:30.358175+0000 mon.a (mon.0) 1871 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in
2026-03-10T07:30:31.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:30:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:30:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:30:31.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:31 vm03 bash[23382]: audit 2026-03-10T07:30:30.406450+0000 mon.b (mon.1) 234 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:30:31.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:31 vm03 bash[23382]: audit 2026-03-10T07:30:30.408172+0000 mon.a (mon.0) 1872 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27", "force_nonempty": "--force-nonempty" }]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:31.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:31 vm03 bash[23382]: cluster 2026-03-10T07:30:30.608859+0000 mgr.y (mgr.24407) 224 : cluster [DBG] pgmap v279: 332 pgs: 8 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 308 active+clean; 457 KiB data, 705 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T07:30:31.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:31 vm03 bash[23382]: cluster 2026-03-10T07:30:30.608859+0000 mgr.y (mgr.24407) 224 : cluster [DBG] pgmap v279: 332 pgs: 8 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 308 active+clean; 457 KiB data, 705 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T07:30:31.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:31 vm00 bash[28005]: audit 2026-03-10T07:30:30.406450+0000 mon.b (mon.1) 234 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:31.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:31 vm00 bash[28005]: audit 2026-03-10T07:30:30.406450+0000 mon.b (mon.1) 234 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:31.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:31 vm00 bash[28005]: audit 2026-03-10T07:30:30.408172+0000 mon.a (mon.0) 1872 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:31.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:31 vm00 bash[28005]: audit 2026-03-10T07:30:30.408172+0000 mon.a (mon.0) 1872 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:31.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:31 vm00 bash[28005]: cluster 2026-03-10T07:30:30.608859+0000 mgr.y (mgr.24407) 224 : cluster [DBG] pgmap v279: 332 pgs: 8 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 308 active+clean; 457 KiB data, 705 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T07:30:31.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:31 vm00 bash[28005]: cluster 2026-03-10T07:30:30.608859+0000 mgr.y (mgr.24407) 224 : cluster [DBG] pgmap v279: 332 pgs: 8 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 308 active+clean; 457 KiB data, 705 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T07:30:31.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:31 vm00 bash[20701]: audit 2026-03-10T07:30:30.406450+0000 mon.b (mon.1) 234 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:31.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:31 vm00 bash[20701]: audit 2026-03-10T07:30:30.406450+0000 mon.b (mon.1) 234 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:31.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:31 vm00 bash[20701]: audit 2026-03-10T07:30:30.408172+0000 mon.a (mon.0) 1872 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:31.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:31 vm00 bash[20701]: audit 2026-03-10T07:30:30.408172+0000 mon.a (mon.0) 1872 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:31.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:31 vm00 bash[20701]: cluster 2026-03-10T07:30:30.608859+0000 mgr.y (mgr.24407) 224 : cluster [DBG] pgmap v279: 332 pgs: 8 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 308 active+clean; 457 KiB data, 705 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T07:30:31.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:31 vm00 bash[20701]: cluster 2026-03-10T07:30:30.608859+0000 mgr.y (mgr.24407) 224 : cluster [DBG] pgmap v279: 332 pgs: 8 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 308 active+clean; 457 KiB data, 705 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T07:30:32.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:32 vm03 bash[23382]: audit 2026-03-10T07:30:31.411972+0000 mon.a (mon.0) 1873 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:32.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:32 vm03 bash[23382]: audit 2026-03-10T07:30:31.411972+0000 mon.a (mon.0) 1873 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:32.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:32 vm03 bash[23382]: audit 2026-03-10T07:30:31.414744+0000 mon.b (mon.1) 235 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:32.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:32 vm03 bash[23382]: audit 2026-03-10T07:30:31.414744+0000 mon.b (mon.1) 235 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:32.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:32 vm03 bash[23382]: cluster 2026-03-10T07:30:31.434188+0000 mon.a (mon.0) 1874 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-10T07:30:32.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:32 vm03 bash[23382]: cluster 2026-03-10T07:30:31.434188+0000 mon.a (mon.0) 1874 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-10T07:30:32.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:32 vm03 bash[23382]: audit 2026-03-10T07:30:31.438404+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:32.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:32 vm03 bash[23382]: audit 2026-03-10T07:30:31.438404+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:32.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:32 vm00 bash[28005]: audit 2026-03-10T07:30:31.411972+0000 mon.a (mon.0) 1873 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:32.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:32 vm00 bash[28005]: audit 2026-03-10T07:30:31.411972+0000 mon.a (mon.0) 1873 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:32.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:32 vm00 bash[28005]: audit 2026-03-10T07:30:31.414744+0000 mon.b (mon.1) 235 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:32.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:32 vm00 bash[28005]: audit 2026-03-10T07:30:31.414744+0000 mon.b (mon.1) 235 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:32.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:32 vm00 bash[28005]: cluster 2026-03-10T07:30:31.434188+0000 mon.a (mon.0) 1874 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-10T07:30:32.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:32 vm00 bash[28005]: cluster 2026-03-10T07:30:31.434188+0000 mon.a (mon.0) 1874 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-10T07:30:32.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:32 vm00 bash[28005]: audit 2026-03-10T07:30:31.438404+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:32.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:32 vm00 bash[28005]: audit 2026-03-10T07:30:31.438404+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:32.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:32 vm00 bash[20701]: audit 2026-03-10T07:30:31.411972+0000 mon.a (mon.0) 1873 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:32.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:32 vm00 bash[20701]: audit 2026-03-10T07:30:31.411972+0000 mon.a (mon.0) 1873 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:32.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:32 vm00 bash[20701]: audit 2026-03-10T07:30:31.414744+0000 mon.b (mon.1) 235 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:32.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:32 vm00 bash[20701]: audit 2026-03-10T07:30:31.414744+0000 mon.b (mon.1) 235 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:32.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:32 vm00 bash[20701]: cluster 2026-03-10T07:30:31.434188+0000 mon.a (mon.0) 1874 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-10T07:30:32.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:32 vm00 bash[20701]: cluster 2026-03-10T07:30:31.434188+0000 mon.a (mon.0) 1874 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-10T07:30:32.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:32 vm00 bash[20701]: audit 2026-03-10T07:30:31.438404+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:32.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:32 vm00 bash[20701]: audit 2026-03-10T07:30:31.438404+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:33.499 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:30:33 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:32.453528+0000 mon.a (mon.0) 1876 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-27"}]': finished 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:32.453528+0000 mon.a (mon.0) 1876 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-27"}]': finished 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:32.456968+0000 mon.b (mon.1) 236 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-27", "mode": "writeback"}]: dispatch 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:32.456968+0000 mon.b (mon.1) 236 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-27", "mode": "writeback"}]: dispatch 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: cluster 2026-03-10T07:30:32.468750+0000 mon.a (mon.0) 1877 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: cluster 2026-03-10T07:30:32.468750+0000 mon.a (mon.0) 1877 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:32.471298+0000 mon.a (mon.0) 1878 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-27", "mode": "writeback"}]: dispatch 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:32.471298+0000 mon.a (mon.0) 1878 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-27", "mode": "writeback"}]: dispatch 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:32.472560+0000 mon.a (mon.0) 1879 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59629-36"}]: dispatch 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:32.472560+0000 mon.a (mon.0) 1879 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59629-36"}]: dispatch 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:32.495909+0000 mon.c (mon.2) 219 : audit [INF] from='client.? 192.168.123.100:0/3266235947' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59637-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:32.495909+0000 mon.c (mon.2) 219 : audit [INF] from='client.? 
192.168.123.100:0/3266235947' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59637-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:32.496222+0000 mon.a (mon.0) 1880 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59637-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:32.496222+0000 mon.a (mon.0) 1880 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59637-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: cluster 2026-03-10T07:30:32.609202+0000 mgr.y (mgr.24407) 225 : cluster [DBG] pgmap v282: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 705 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: cluster 2026-03-10T07:30:32.609202+0000 mgr.y (mgr.24407) 225 : cluster [DBG] pgmap v282: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 705 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:33.069343+0000 mgr.y (mgr.24407) 226 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:33.069343+0000 mgr.y (mgr.24407) 226 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: cluster 2026-03-10T07:30:33.453868+0000 mon.a (mon.0) 1881 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: cluster 2026-03-10T07:30:33.453868+0000 mon.a (mon.0) 1881 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:33.457286+0000 mon.a (mon.0) 1882 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-27", "mode": "writeback"}]': finished 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:33.457286+0000 mon.a (mon.0) 1882 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-27", "mode": "writeback"}]': finished 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:33.457388+0000 mon.a (mon.0) 1883 : audit [INF] from='client.? 
192.168.123.100:0/200144758' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59629-36"}]': finished 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:33.457388+0000 mon.a (mon.0) 1883 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59629-36"}]': finished 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:33.457868+0000 mon.a (mon.0) 1884 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59637-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:33.457868+0000 mon.a (mon.0) 1884 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59637-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: cluster 2026-03-10T07:30:33.466040+0000 mon.a (mon.0) 1885 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-10T07:30:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: cluster 2026-03-10T07:30:33.466040+0000 mon.a (mon.0) 1885 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-10T07:30:33.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:33.473770+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59629-36"}]: dispatch 2026-03-10T07:30:33.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:33 vm03 bash[23382]: audit 2026-03-10T07:30:33.473770+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59629-36"}]: dispatch 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:32.453528+0000 mon.a (mon.0) 1876 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-27"}]': finished 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:32.453528+0000 mon.a (mon.0) 1876 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-27"}]': finished 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:32.456968+0000 mon.b (mon.1) 236 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-27", "mode": "writeback"}]: dispatch 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:32.456968+0000 mon.b (mon.1) 236 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-27", "mode": "writeback"}]: dispatch 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: cluster 2026-03-10T07:30:32.468750+0000 mon.a (mon.0) 1877 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: cluster 2026-03-10T07:30:32.468750+0000 mon.a (mon.0) 1877 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:32.471298+0000 mon.a (mon.0) 1878 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-27", "mode": "writeback"}]: dispatch 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:32.471298+0000 mon.a (mon.0) 1878 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-27", "mode": "writeback"}]: dispatch 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:32.472560+0000 mon.a (mon.0) 1879 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59629-36"}]: dispatch 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:32.472560+0000 mon.a (mon.0) 1879 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59629-36"}]: dispatch 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:32.495909+0000 mon.c (mon.2) 219 : audit [INF] from='client.? 192.168.123.100:0/3266235947' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59637-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:32.495909+0000 mon.c (mon.2) 219 : audit [INF] from='client.? 192.168.123.100:0/3266235947' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59637-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:32.496222+0000 mon.a (mon.0) 1880 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59637-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:32.496222+0000 mon.a (mon.0) 1880 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59637-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: cluster 2026-03-10T07:30:32.609202+0000 mgr.y (mgr.24407) 225 : cluster [DBG] pgmap v282: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 705 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: cluster 2026-03-10T07:30:32.609202+0000 mgr.y (mgr.24407) 225 : cluster [DBG] pgmap v282: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 705 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:33.069343+0000 mgr.y (mgr.24407) 226 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:33.069343+0000 mgr.y (mgr.24407) 226 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: cluster 2026-03-10T07:30:33.453868+0000 mon.a (mon.0) 1881 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: cluster 2026-03-10T07:30:33.453868+0000 mon.a (mon.0) 1881 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:33.457286+0000 mon.a (mon.0) 1882 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-27", "mode": "writeback"}]': finished 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:33.457286+0000 mon.a (mon.0) 1882 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-27", "mode": "writeback"}]': finished 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:33.457388+0000 mon.a (mon.0) 1883 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59629-36"}]': finished 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:33.457388+0000 mon.a (mon.0) 1883 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59629-36"}]': finished 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:33.457868+0000 mon.a (mon.0) 1884 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59637-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:33.457868+0000 mon.a (mon.0) 1884 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59637-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: cluster 2026-03-10T07:30:33.466040+0000 mon.a (mon.0) 1885 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: cluster 2026-03-10T07:30:33.466040+0000 mon.a (mon.0) 1885 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:33.473770+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59629-36"}]: dispatch 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:33 vm00 bash[28005]: audit 2026-03-10T07:30:33.473770+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59629-36"}]: dispatch 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:32.453528+0000 mon.a (mon.0) 1876 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-27"}]': finished 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:32.453528+0000 mon.a (mon.0) 1876 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-27"}]': finished 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:32.456968+0000 mon.b (mon.1) 236 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-27", "mode": "writeback"}]: dispatch 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:32.456968+0000 mon.b (mon.1) 236 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-27", "mode": "writeback"}]: dispatch 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: cluster 2026-03-10T07:30:32.468750+0000 mon.a (mon.0) 1877 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: cluster 2026-03-10T07:30:32.468750+0000 mon.a (mon.0) 1877 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:32.471298+0000 mon.a (mon.0) 1878 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-27", "mode": "writeback"}]: dispatch 2026-03-10T07:30:33.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:32.471298+0000 mon.a (mon.0) 1878 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-27", "mode": "writeback"}]: dispatch 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:32.472560+0000 mon.a (mon.0) 1879 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59629-36"}]: dispatch 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:32.472560+0000 mon.a (mon.0) 1879 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59629-36"}]: dispatch 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:32.495909+0000 mon.c (mon.2) 219 : audit [INF] from='client.? 192.168.123.100:0/3266235947' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59637-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:32.495909+0000 mon.c (mon.2) 219 : audit [INF] from='client.? 192.168.123.100:0/3266235947' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59637-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:32.496222+0000 mon.a (mon.0) 1880 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59637-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:32.496222+0000 mon.a (mon.0) 1880 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59637-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: cluster 2026-03-10T07:30:32.609202+0000 mgr.y (mgr.24407) 225 : cluster [DBG] pgmap v282: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 705 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: cluster 2026-03-10T07:30:32.609202+0000 mgr.y (mgr.24407) 225 : cluster [DBG] pgmap v282: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 705 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:33.069343+0000 mgr.y (mgr.24407) 226 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:33.069343+0000 mgr.y (mgr.24407) 226 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: cluster 2026-03-10T07:30:33.453868+0000 mon.a (mon.0) 1881 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: cluster 2026-03-10T07:30:33.453868+0000 mon.a (mon.0) 1881 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:33.457286+0000 mon.a (mon.0) 1882 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-27", "mode": "writeback"}]': finished 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:33.457286+0000 mon.a (mon.0) 1882 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-27", "mode": "writeback"}]': finished 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:33.457388+0000 mon.a (mon.0) 1883 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59629-36"}]': finished 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:33.457388+0000 mon.a (mon.0) 1883 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm00-59629-36"}]': finished 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:33.457868+0000 mon.a (mon.0) 1884 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59637-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:33.457868+0000 mon.a (mon.0) 1884 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm00-59637-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: cluster 2026-03-10T07:30:33.466040+0000 mon.a (mon.0) 1885 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: cluster 2026-03-10T07:30:33.466040+0000 mon.a (mon.0) 1885 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:33.473770+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59629-36"}]: dispatch 2026-03-10T07:30:33.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:33 vm00 bash[20701]: audit 2026-03-10T07:30:33.473770+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59629-36"}]: dispatch 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: audit 2026-03-10T07:30:34.461610+0000 mon.a (mon.0) 1887 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59629-36"}]': finished 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: audit 2026-03-10T07:30:34.461610+0000 mon.a (mon.0) 1887 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59629-36"}]': finished 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: cluster 2026-03-10T07:30:34.471349+0000 mon.a (mon.0) 1888 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: cluster 2026-03-10T07:30:34.471349+0000 mon.a (mon.0) 1888 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: audit 2026-03-10T07:30:34.514631+0000 mon.c (mon.2) 220 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: audit 2026-03-10T07:30:34.514631+0000 mon.c (mon.2) 220 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: audit 2026-03-10T07:30:34.515267+0000 mon.a (mon.0) 1889 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: audit 2026-03-10T07:30:34.515267+0000 mon.a (mon.0) 1889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: audit 2026-03-10T07:30:34.516411+0000 mon.c (mon.2) 221 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: audit 2026-03-10T07:30:34.516411+0000 mon.c (mon.2) 221 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: audit 2026-03-10T07:30:34.516695+0000 mon.a (mon.0) 1890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: audit 2026-03-10T07:30:34.516695+0000 mon.a (mon.0) 1890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: audit 2026-03-10T07:30:34.517632+0000 mon.c (mon.2) 222 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59629-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: audit 2026-03-10T07:30:34.517632+0000 mon.c (mon.2) 222 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59629-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: audit 2026-03-10T07:30:34.517931+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59629-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: audit 2026-03-10T07:30:34.517931+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59629-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: audit 2026-03-10T07:30:34.523278+0000 mon.b (mon.1) 237 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: audit 2026-03-10T07:30:34.523278+0000 mon.b (mon.1) 237 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: audit 2026-03-10T07:30:34.524943+0000 mon.a (mon.0) 1892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: audit 2026-03-10T07:30:34.524943+0000 mon.a (mon.0) 1892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: cluster 2026-03-10T07:30:34.609772+0000 mgr.y (mgr.24407) 227 : cluster [DBG] pgmap v285: 292 pgs: 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 706 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:35.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:35 vm03 bash[23382]: cluster 2026-03-10T07:30:34.609772+0000 mgr.y (mgr.24407) 227 : cluster [DBG] pgmap v285: 292 pgs: 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 706 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:35.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: audit 2026-03-10T07:30:34.461610+0000 mon.a (mon.0) 1887 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59629-36"}]': finished 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: audit 2026-03-10T07:30:34.461610+0000 mon.a (mon.0) 1887 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59629-36"}]': finished 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: cluster 2026-03-10T07:30:34.471349+0000 mon.a (mon.0) 1888 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: cluster 2026-03-10T07:30:34.471349+0000 mon.a (mon.0) 1888 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: audit 2026-03-10T07:30:34.514631+0000 mon.c (mon.2) 220 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: audit 2026-03-10T07:30:34.514631+0000 mon.c (mon.2) 220 : audit [INF] from='client.? 
192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: audit 2026-03-10T07:30:34.515267+0000 mon.a (mon.0) 1889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: audit 2026-03-10T07:30:34.515267+0000 mon.a (mon.0) 1889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: audit 2026-03-10T07:30:34.516411+0000 mon.c (mon.2) 221 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: audit 2026-03-10T07:30:34.516411+0000 mon.c (mon.2) 221 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: audit 2026-03-10T07:30:34.516695+0000 mon.a (mon.0) 1890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: audit 2026-03-10T07:30:34.516695+0000 mon.a (mon.0) 1890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: audit 2026-03-10T07:30:34.517632+0000 mon.c (mon.2) 222 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59629-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: audit 2026-03-10T07:30:34.517632+0000 mon.c (mon.2) 222 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59629-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: audit 2026-03-10T07:30:34.517931+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59629-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: audit 2026-03-10T07:30:34.517931+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59629-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: audit 2026-03-10T07:30:34.523278+0000 mon.b (mon.1) 237 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: audit 2026-03-10T07:30:34.523278+0000 mon.b (mon.1) 237 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: audit 2026-03-10T07:30:34.524943+0000 mon.a (mon.0) 1892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: audit 2026-03-10T07:30:34.524943+0000 mon.a (mon.0) 1892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: cluster 2026-03-10T07:30:34.609772+0000 mgr.y (mgr.24407) 227 : cluster [DBG] pgmap v285: 292 pgs: 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 706 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:35 vm00 bash[28005]: cluster 2026-03-10T07:30:34.609772+0000 mgr.y (mgr.24407) 227 : cluster [DBG] pgmap v285: 292 pgs: 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 706 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: audit 2026-03-10T07:30:34.461610+0000 mon.a (mon.0) 1887 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59629-36"}]': finished 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: audit 2026-03-10T07:30:34.461610+0000 mon.a (mon.0) 1887 : audit [INF] from='client.? 192.168.123.100:0/200144758' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm00-59629-36"}]': finished 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: cluster 2026-03-10T07:30:34.471349+0000 mon.a (mon.0) 1888 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: cluster 2026-03-10T07:30:34.471349+0000 mon.a (mon.0) 1888 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: audit 2026-03-10T07:30:34.514631+0000 mon.c (mon.2) 220 : audit [INF] from='client.? 
192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: audit 2026-03-10T07:30:34.514631+0000 mon.c (mon.2) 220 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: audit 2026-03-10T07:30:34.515267+0000 mon.a (mon.0) 1889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: audit 2026-03-10T07:30:34.515267+0000 mon.a (mon.0) 1889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: audit 2026-03-10T07:30:34.516411+0000 mon.c (mon.2) 221 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: audit 2026-03-10T07:30:34.516411+0000 mon.c (mon.2) 221 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: audit 2026-03-10T07:30:34.516695+0000 mon.a (mon.0) 1890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: audit 2026-03-10T07:30:34.516695+0000 mon.a (mon.0) 1890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: audit 2026-03-10T07:30:34.517632+0000 mon.c (mon.2) 222 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59629-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: audit 2026-03-10T07:30:34.517632+0000 mon.c (mon.2) 222 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59629-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: audit 2026-03-10T07:30:34.517931+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59629-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: audit 2026-03-10T07:30:34.517931+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59629-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: audit 2026-03-10T07:30:34.523278+0000 mon.b (mon.1) 237 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: audit 2026-03-10T07:30:34.523278+0000 mon.b (mon.1) 237 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: audit 2026-03-10T07:30:34.524943+0000 mon.a (mon.0) 1892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: audit 2026-03-10T07:30:34.524943+0000 mon.a (mon.0) 1892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: cluster 2026-03-10T07:30:34.609772+0000 mgr.y (mgr.24407) 227 : cluster [DBG] pgmap v285: 292 pgs: 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 706 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:35 vm00 bash[20701]: cluster 2026-03-10T07:30:34.609772+0000 mgr.y (mgr.24407) 227 : cluster [DBG] pgmap v285: 292 pgs: 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 276 active+clean; 457 KiB data, 706 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:36.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:36 vm03 bash[23382]: cluster 2026-03-10T07:30:35.461631+0000 mon.a (mon.0) 1893 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:36.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:36 vm03 bash[23382]: cluster 2026-03-10T07:30:35.461631+0000 mon.a (mon.0) 1893 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:36.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:36 vm03 bash[23382]: audit 2026-03-10T07:30:35.473330+0000 mon.a (mon.0) 1894 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59629-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:36.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:36 vm03 bash[23382]: audit 2026-03-10T07:30:35.473330+0000 mon.a (mon.0) 1894 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59629-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:36.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:36 vm03 bash[23382]: audit 2026-03-10T07:30:35.473440+0000 mon.a (mon.0) 1895 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:36.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:36 vm03 bash[23382]: audit 2026-03-10T07:30:35.473440+0000 mon.a (mon.0) 1895 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:36.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:36 vm03 bash[23382]: audit 2026-03-10T07:30:35.480146+0000 mon.b (mon.1) 238 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:36.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:36 vm03 bash[23382]: audit 2026-03-10T07:30:35.480146+0000 mon.b (mon.1) 238 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:36.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:36 vm03 bash[23382]: cluster 2026-03-10T07:30:35.493811+0000 mon.a (mon.0) 1896 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-10T07:30:36.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:36 vm03 bash[23382]: cluster 2026-03-10T07:30:35.493811+0000 mon.a (mon.0) 1896 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-10T07:30:36.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:36 vm03 bash[23382]: audit 2026-03-10T07:30:35.500196+0000 mon.b (mon.1) 239 : audit [INF] from='client.? 192.168.123.100:0/2517656642' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59637-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:36.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:36 vm03 bash[23382]: audit 2026-03-10T07:30:35.500196+0000 mon.b (mon.1) 239 : audit [INF] from='client.? 192.168.123.100:0/2517656642' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59637-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:36.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:36 vm03 bash[23382]: audit 2026-03-10T07:30:35.500711+0000 mon.a (mon.0) 1897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:36.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:36 vm03 bash[23382]: audit 2026-03-10T07:30:35.500711+0000 mon.a (mon.0) 1897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:36.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:36 vm03 bash[23382]: audit 2026-03-10T07:30:35.511264+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59637-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:36.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:36 vm03 bash[23382]: audit 2026-03-10T07:30:35.511264+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59637-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:36.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:36 vm03 bash[23382]: audit 2026-03-10T07:30:35.512579+0000 mon.c (mon.2) 223 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59629-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:36.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:36 vm03 bash[23382]: audit 2026-03-10T07:30:35.512579+0000 mon.c (mon.2) 223 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59629-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:36.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:36 vm03 bash[23382]: audit 2026-03-10T07:30:35.513145+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59629-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:36.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:36 vm03 bash[23382]: audit 2026-03-10T07:30:35.513145+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59629-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:36 vm00 bash[28005]: cluster 2026-03-10T07:30:35.461631+0000 mon.a (mon.0) 1893 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:36 vm00 bash[28005]: cluster 2026-03-10T07:30:35.461631+0000 mon.a (mon.0) 1893 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:36 vm00 bash[28005]: audit 2026-03-10T07:30:35.473330+0000 mon.a (mon.0) 1894 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59629-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:36 vm00 bash[28005]: audit 2026-03-10T07:30:35.473330+0000 mon.a (mon.0) 1894 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59629-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:36 vm00 bash[28005]: audit 2026-03-10T07:30:35.473440+0000 mon.a (mon.0) 1895 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:36 vm00 bash[28005]: audit 2026-03-10T07:30:35.473440+0000 mon.a (mon.0) 1895 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:36 vm00 bash[28005]: audit 2026-03-10T07:30:35.480146+0000 mon.b (mon.1) 238 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:36 vm00 bash[28005]: audit 2026-03-10T07:30:35.480146+0000 mon.b (mon.1) 238 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:36 vm00 bash[28005]: cluster 2026-03-10T07:30:35.493811+0000 mon.a (mon.0) 1896 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:36 vm00 bash[28005]: cluster 2026-03-10T07:30:35.493811+0000 mon.a (mon.0) 1896 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:36 vm00 bash[28005]: audit 2026-03-10T07:30:35.500196+0000 mon.b (mon.1) 239 : audit [INF] from='client.? 192.168.123.100:0/2517656642' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59637-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:36 vm00 bash[28005]: audit 2026-03-10T07:30:35.500196+0000 mon.b (mon.1) 239 : audit [INF] from='client.? 192.168.123.100:0/2517656642' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59637-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:36 vm00 bash[28005]: audit 2026-03-10T07:30:35.500711+0000 mon.a (mon.0) 1897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:36 vm00 bash[28005]: audit 2026-03-10T07:30:35.500711+0000 mon.a (mon.0) 1897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:36 vm00 bash[28005]: audit 2026-03-10T07:30:35.511264+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59637-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:36 vm00 bash[28005]: audit 2026-03-10T07:30:35.511264+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59637-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:36 vm00 bash[28005]: audit 2026-03-10T07:30:35.512579+0000 mon.c (mon.2) 223 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59629-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:36 vm00 bash[28005]: audit 2026-03-10T07:30:35.512579+0000 mon.c (mon.2) 223 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59629-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:36 vm00 bash[28005]: audit 2026-03-10T07:30:35.513145+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59629-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:36 vm00 bash[28005]: audit 2026-03-10T07:30:35.513145+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59629-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:36 vm00 bash[20701]: cluster 2026-03-10T07:30:35.461631+0000 mon.a (mon.0) 1893 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:36 vm00 bash[20701]: cluster 2026-03-10T07:30:35.461631+0000 mon.a (mon.0) 1893 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:36 vm00 bash[20701]: audit 2026-03-10T07:30:35.473330+0000 mon.a (mon.0) 1894 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59629-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:36 vm00 bash[20701]: audit 2026-03-10T07:30:35.473330+0000 mon.a (mon.0) 1894 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm00-59629-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:36 vm00 bash[20701]: audit 2026-03-10T07:30:35.473440+0000 mon.a (mon.0) 1895 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:36 vm00 bash[20701]: audit 2026-03-10T07:30:35.473440+0000 mon.a (mon.0) 1895 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:36 vm00 bash[20701]: audit 2026-03-10T07:30:35.480146+0000 mon.b (mon.1) 238 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:36 vm00 bash[20701]: audit 2026-03-10T07:30:35.480146+0000 mon.b (mon.1) 238 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:36 vm00 bash[20701]: cluster 2026-03-10T07:30:35.493811+0000 mon.a (mon.0) 1896 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:36 vm00 bash[20701]: cluster 2026-03-10T07:30:35.493811+0000 mon.a (mon.0) 1896 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:36 vm00 bash[20701]: audit 2026-03-10T07:30:35.500196+0000 mon.b (mon.1) 239 : audit [INF] from='client.? 192.168.123.100:0/2517656642' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59637-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:36 vm00 bash[20701]: audit 2026-03-10T07:30:35.500196+0000 mon.b (mon.1) 239 : audit [INF] from='client.? 192.168.123.100:0/2517656642' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59637-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:36 vm00 bash[20701]: audit 2026-03-10T07:30:35.500711+0000 mon.a (mon.0) 1897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:36 vm00 bash[20701]: audit 2026-03-10T07:30:35.500711+0000 mon.a (mon.0) 1897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27"}]: dispatch 2026-03-10T07:30:36.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:36 vm00 bash[20701]: audit 2026-03-10T07:30:35.511264+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59637-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:36.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:36 vm00 bash[20701]: audit 2026-03-10T07:30:35.511264+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59637-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:36.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:36 vm00 bash[20701]: audit 2026-03-10T07:30:35.512579+0000 mon.c (mon.2) 223 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59629-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:36.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:36 vm00 bash[20701]: audit 2026-03-10T07:30:35.512579+0000 mon.c (mon.2) 223 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59629-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:36.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:36 vm00 bash[20701]: audit 2026-03-10T07:30:35.513145+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59629-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:36.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:36 vm00 bash[20701]: audit 2026-03-10T07:30:35.513145+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59629-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:37.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:37 vm03 bash[23382]: cluster 2026-03-10T07:30:36.475023+0000 mon.a (mon.0) 1900 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:37.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:37 vm03 bash[23382]: cluster 2026-03-10T07:30:36.475023+0000 mon.a (mon.0) 1900 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:37.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:37 vm03 bash[23382]: audit 2026-03-10T07:30:36.485182+0000 mon.a (mon.0) 1901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27"}]': finished 2026-03-10T07:30:37.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:37 vm03 bash[23382]: audit 2026-03-10T07:30:36.485182+0000 mon.a (mon.0) 1901 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27"}]': finished 2026-03-10T07:30:37.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:37 vm03 bash[23382]: audit 2026-03-10T07:30:36.485231+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59637-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:37.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:37 vm03 bash[23382]: audit 2026-03-10T07:30:36.485231+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59637-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:37.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:37 vm03 bash[23382]: cluster 2026-03-10T07:30:36.511444+0000 mon.a (mon.0) 1903 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-10T07:30:37.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:37 vm03 bash[23382]: cluster 2026-03-10T07:30:36.511444+0000 mon.a (mon.0) 1903 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-10T07:30:37.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:37 vm03 bash[23382]: cluster 2026-03-10T07:30:36.610143+0000 mgr.y (mgr.24407) 228 : cluster [DBG] pgmap v288: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 23 active+clean+snaptrim_wait, 263 active+clean; 456 KiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 3 op/s 2026-03-10T07:30:37.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:37 vm03 bash[23382]: cluster 2026-03-10T07:30:36.610143+0000 mgr.y (mgr.24407) 228 : cluster [DBG] pgmap v288: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 23 active+clean+snaptrim_wait, 263 active+clean; 456 KiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 3 op/s 2026-03-10T07:30:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:37 vm00 bash[28005]: cluster 2026-03-10T07:30:36.475023+0000 mon.a (mon.0) 1900 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:37 vm00 bash[28005]: cluster 2026-03-10T07:30:36.475023+0000 mon.a (mon.0) 1900 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:37 vm00 bash[28005]: audit 2026-03-10T07:30:36.485182+0000 mon.a (mon.0) 1901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27"}]': finished 2026-03-10T07:30:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:37 vm00 bash[28005]: audit 2026-03-10T07:30:36.485182+0000 mon.a (mon.0) 1901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27"}]': finished 2026-03-10T07:30:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:37 vm00 bash[28005]: audit 2026-03-10T07:30:36.485231+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59637-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:37 vm00 bash[28005]: audit 2026-03-10T07:30:36.485231+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59637-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:37 vm00 bash[28005]: cluster 2026-03-10T07:30:36.511444+0000 mon.a (mon.0) 1903 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-10T07:30:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:37 vm00 bash[28005]: cluster 2026-03-10T07:30:36.511444+0000 mon.a (mon.0) 1903 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-10T07:30:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:37 vm00 bash[28005]: cluster 2026-03-10T07:30:36.610143+0000 mgr.y (mgr.24407) 228 : cluster [DBG] pgmap v288: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 23 active+clean+snaptrim_wait, 263 active+clean; 456 KiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 3 op/s 2026-03-10T07:30:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:37 vm00 bash[28005]: cluster 2026-03-10T07:30:36.610143+0000 mgr.y (mgr.24407) 228 : cluster [DBG] pgmap v288: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 23 active+clean+snaptrim_wait, 263 active+clean; 456 KiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 3 op/s 2026-03-10T07:30:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:37 vm00 bash[20701]: cluster 2026-03-10T07:30:36.475023+0000 mon.a (mon.0) 1900 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:37 vm00 bash[20701]: cluster 2026-03-10T07:30:36.475023+0000 mon.a (mon.0) 1900 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:37 vm00 bash[20701]: audit 2026-03-10T07:30:36.485182+0000 mon.a (mon.0) 1901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27"}]': finished 2026-03-10T07:30:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:37 vm00 bash[20701]: audit 2026-03-10T07:30:36.485182+0000 mon.a (mon.0) 1901 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-27"}]': finished 2026-03-10T07:30:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:37 vm00 bash[20701]: audit 2026-03-10T07:30:36.485231+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59637-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:37 vm00 bash[20701]: audit 2026-03-10T07:30:36.485231+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm00-59637-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:37 vm00 bash[20701]: cluster 2026-03-10T07:30:36.511444+0000 mon.a (mon.0) 1903 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-10T07:30:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:37 vm00 bash[20701]: cluster 2026-03-10T07:30:36.511444+0000 mon.a (mon.0) 1903 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-10T07:30:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:37 vm00 bash[20701]: cluster 2026-03-10T07:30:36.610143+0000 mgr.y (mgr.24407) 228 : cluster [DBG] pgmap v288: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 23 active+clean+snaptrim_wait, 263 active+clean; 456 KiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 3 op/s 2026-03-10T07:30:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:37 vm00 bash[20701]: cluster 2026-03-10T07:30:36.610143+0000 mgr.y (mgr.24407) 228 : cluster [DBG] pgmap v288: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 23 active+clean+snaptrim_wait, 263 active+clean; 456 KiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 3 op/s 2026-03-10T07:30:38.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:38 vm00 bash[28005]: audit 2026-03-10T07:30:37.494589+0000 mon.a (mon.0) 1904 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59629-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59629-37"}]': finished 2026-03-10T07:30:38.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:38 vm00 bash[28005]: audit 2026-03-10T07:30:37.494589+0000 mon.a (mon.0) 1904 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59629-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59629-37"}]': finished 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:38 vm00 bash[28005]: cluster 2026-03-10T07:30:37.518878+0000 mon.a (mon.0) 1905 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:38 vm00 bash[28005]: cluster 2026-03-10T07:30:37.518878+0000 mon.a (mon.0) 1905 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:38 vm00 bash[28005]: audit 2026-03-10T07:30:37.541059+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:38 vm00 bash[28005]: audit 2026-03-10T07:30:37.541059+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:38 vm00 bash[28005]: audit 2026-03-10T07:30:37.543870+0000 mon.a (mon.0) 1906 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:38 vm00 bash[28005]: audit 2026-03-10T07:30:37.543870+0000 mon.a (mon.0) 1906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:38 vm00 bash[28005]: audit 2026-03-10T07:30:37.544962+0000 mon.c (mon.2) 225 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:38 vm00 bash[28005]: audit 2026-03-10T07:30:37.544962+0000 mon.c (mon.2) 225 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:38 vm00 bash[28005]: audit 2026-03-10T07:30:37.545366+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:38 vm00 bash[28005]: audit 2026-03-10T07:30:37.545366+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:38 vm00 bash[28005]: audit 2026-03-10T07:30:37.546243+0000 mon.c (mon.2) 226 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59637-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:38 vm00 bash[28005]: audit 2026-03-10T07:30:37.546243+0000 mon.c (mon.2) 226 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59637-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:38 vm00 bash[28005]: audit 2026-03-10T07:30:37.546752+0000 mon.a (mon.0) 1908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59637-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:38 vm00 bash[28005]: audit 2026-03-10T07:30:37.546752+0000 mon.a (mon.0) 1908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59637-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:38 vm00 bash[20701]: audit 2026-03-10T07:30:37.494589+0000 mon.a (mon.0) 1904 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59629-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59629-37"}]': finished 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:38 vm00 bash[20701]: audit 2026-03-10T07:30:37.494589+0000 mon.a (mon.0) 1904 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59629-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59629-37"}]': finished 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:38 vm00 bash[20701]: cluster 2026-03-10T07:30:37.518878+0000 mon.a (mon.0) 1905 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:38 vm00 bash[20701]: cluster 2026-03-10T07:30:37.518878+0000 mon.a (mon.0) 1905 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:38 vm00 bash[20701]: audit 2026-03-10T07:30:37.541059+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:38 vm00 bash[20701]: audit 2026-03-10T07:30:37.541059+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:38 vm00 bash[20701]: audit 2026-03-10T07:30:37.543870+0000 mon.a (mon.0) 1906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:38 vm00 bash[20701]: audit 2026-03-10T07:30:37.543870+0000 mon.a (mon.0) 1906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:38 vm00 bash[20701]: audit 2026-03-10T07:30:37.544962+0000 mon.c (mon.2) 225 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:38 vm00 bash[20701]: audit 2026-03-10T07:30:37.544962+0000 mon.c (mon.2) 225 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:38 vm00 bash[20701]: audit 2026-03-10T07:30:37.545366+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:38 vm00 bash[20701]: audit 2026-03-10T07:30:37.545366+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:38 vm00 bash[20701]: audit 2026-03-10T07:30:37.546243+0000 mon.c (mon.2) 226 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59637-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:38 vm00 bash[20701]: audit 2026-03-10T07:30:37.546243+0000 mon.c (mon.2) 226 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59637-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:38 vm00 bash[20701]: audit 2026-03-10T07:30:37.546752+0000 mon.a (mon.0) 1908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59637-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:38.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:38 vm00 bash[20701]: audit 2026-03-10T07:30:37.546752+0000 mon.a (mon.0) 1908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59637-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:39.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:38 vm03 bash[23382]: audit 2026-03-10T07:30:37.494589+0000 mon.a (mon.0) 1904 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59629-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59629-37"}]': finished 2026-03-10T07:30:39.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:38 vm03 bash[23382]: audit 2026-03-10T07:30:37.494589+0000 mon.a (mon.0) 1904 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm00-59629-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm00-59629-37"}]': finished 2026-03-10T07:30:39.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:38 vm03 bash[23382]: cluster 2026-03-10T07:30:37.518878+0000 mon.a (mon.0) 1905 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-10T07:30:39.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:38 vm03 bash[23382]: cluster 2026-03-10T07:30:37.518878+0000 mon.a (mon.0) 1905 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-10T07:30:39.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:38 vm03 bash[23382]: audit 2026-03-10T07:30:37.541059+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:39.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:38 vm03 bash[23382]: audit 2026-03-10T07:30:37.541059+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 
2026-03-10T07:30:39.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:38 vm03 bash[23382]: audit 2026-03-10T07:30:37.543870+0000 mon.a (mon.0) 1906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch
2026-03-10T07:30:39.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:38 vm03 bash[23382]: audit 2026-03-10T07:30:37.544962+0000 mon.c (mon.2) 225 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-46"}]: dispatch
2026-03-10T07:30:39.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:38 vm03 bash[23382]: audit 2026-03-10T07:30:37.545366+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-46"}]: dispatch
2026-03-10T07:30:39.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:38 vm03 bash[23382]: audit 2026-03-10T07:30:37.546243+0000 mon.c (mon.2) 226 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59637-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:39.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:38 vm03 bash[23382]: audit 2026-03-10T07:30:37.546752+0000 mon.a (mon.0) 1908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59637-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:39.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:39 vm00 bash[28005]: audit 2026-03-10T07:30:38.498460+0000 mon.a (mon.0) 1909 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59637-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:30:39.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:39 vm00 bash[28005]: audit 2026-03-10T07:30:38.509222+0000 mon.c (mon.2) 227 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch
2026-03-10T07:30:39.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:39 vm00 bash[28005]: cluster 2026-03-10T07:30:38.515736+0000 mon.a (mon.0) 1910 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in
2026-03-10T07:30:39.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:39 vm00 bash[28005]: audit 2026-03-10T07:30:38.520436+0000 mon.b (mon.1) 240 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-29","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:39.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:39 vm00 bash[28005]: audit 2026-03-10T07:30:38.522578+0000 mon.a (mon.0) 1911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch
2026-03-10T07:30:39.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:39 vm00 bash[28005]: audit 2026-03-10T07:30:38.524145+0000 mon.a (mon.0) 1912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-29","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:30:39.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:39 vm00 bash[28005]: cluster 2026-03-10T07:30:38.610611+0000 mgr.y (mgr.24407) 229 : cluster [DBG] pgmap v291: 300 pgs: 40 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 456 KiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T07:30:39.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:39 vm00 bash[28005]: audit 2026-03-10T07:30:39.309047+0000 mon.a (mon.0) 1913 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:30:39.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:39 vm00 bash[28005]: audit 2026-03-10T07:30:39.312542+0000 mon.c (mon.2) 228 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:30:39.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:39 vm00 bash[28005]: audit 2026-03-10T07:30:39.502006+0000 mon.a (mon.0) 1914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-29","app": "rados","yes_i_really_mean_it": true}]': finished
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59637-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:39 vm00 bash[20701]: audit 2026-03-10T07:30:38.498460+0000 mon.a (mon.0) 1909 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59637-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:39 vm00 bash[20701]: audit 2026-03-10T07:30:38.509222+0000 mon.c (mon.2) 227 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:39 vm00 bash[20701]: audit 2026-03-10T07:30:38.509222+0000 mon.c (mon.2) 227 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:39 vm00 bash[20701]: cluster 2026-03-10T07:30:38.515736+0000 mon.a (mon.0) 1910 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-10T07:30:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:39 vm00 bash[20701]: cluster 2026-03-10T07:30:38.515736+0000 mon.a (mon.0) 1910 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-10T07:30:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:39 vm00 bash[20701]: audit 2026-03-10T07:30:38.520436+0000 mon.b (mon.1) 240 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:39 vm00 bash[20701]: audit 2026-03-10T07:30:38.520436+0000 mon.b (mon.1) 240 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:39 vm00 bash[20701]: audit 2026-03-10T07:30:38.522578+0000 mon.a (mon.0) 1911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:39 vm00 bash[20701]: audit 2026-03-10T07:30:38.522578+0000 mon.a (mon.0) 1911 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:39 vm00 bash[20701]: audit 2026-03-10T07:30:38.524145+0000 mon.a (mon.0) 1912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:39 vm00 bash[20701]: audit 2026-03-10T07:30:38.524145+0000 mon.a (mon.0) 1912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:39 vm00 bash[20701]: cluster 2026-03-10T07:30:38.610611+0000 mgr.y (mgr.24407) 229 : cluster [DBG] pgmap v291: 300 pgs: 40 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 456 KiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:30:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:39 vm00 bash[20701]: cluster 2026-03-10T07:30:38.610611+0000 mgr.y (mgr.24407) 229 : cluster [DBG] pgmap v291: 300 pgs: 40 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 456 KiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:30:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:39 vm00 bash[20701]: audit 2026-03-10T07:30:39.309047+0000 mon.a (mon.0) 1913 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:30:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:39 vm00 bash[20701]: audit 2026-03-10T07:30:39.309047+0000 mon.a (mon.0) 1913 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:30:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:39 vm00 bash[20701]: audit 2026-03-10T07:30:39.312542+0000 mon.c (mon.2) 228 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:30:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:39 vm00 bash[20701]: audit 2026-03-10T07:30:39.312542+0000 mon.c (mon.2) 228 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:30:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:39 vm00 bash[20701]: audit 2026-03-10T07:30:39.502006+0000 mon.a (mon.0) 1914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:39 vm00 bash[20701]: audit 2026-03-10T07:30:39.502006+0000 mon.a (mon.0) 1914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:40.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:39 vm03 bash[23382]: audit 2026-03-10T07:30:38.498460+0000 mon.a (mon.0) 1909 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59637-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:40.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:39 vm03 bash[23382]: audit 2026-03-10T07:30:38.498460+0000 mon.a (mon.0) 1909 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59637-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:40.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:39 vm03 bash[23382]: audit 2026-03-10T07:30:38.509222+0000 mon.c (mon.2) 227 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:40.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:39 vm03 bash[23382]: audit 2026-03-10T07:30:38.509222+0000 mon.c (mon.2) 227 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:40.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:39 vm03 bash[23382]: cluster 2026-03-10T07:30:38.515736+0000 mon.a (mon.0) 1910 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-10T07:30:40.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:39 vm03 bash[23382]: cluster 2026-03-10T07:30:38.515736+0000 mon.a (mon.0) 1910 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-10T07:30:40.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:39 vm03 bash[23382]: audit 2026-03-10T07:30:38.520436+0000 mon.b (mon.1) 240 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:40.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:39 vm03 bash[23382]: audit 2026-03-10T07:30:38.520436+0000 mon.b (mon.1) 240 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:40.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:39 vm03 bash[23382]: audit 2026-03-10T07:30:38.522578+0000 mon.a (mon.0) 1911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:40.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:39 vm03 bash[23382]: audit 2026-03-10T07:30:38.522578+0000 mon.a (mon.0) 1911 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch 2026-03-10T07:30:40.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:39 vm03 bash[23382]: audit 2026-03-10T07:30:38.524145+0000 mon.a (mon.0) 1912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:40.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:39 vm03 bash[23382]: audit 2026-03-10T07:30:38.524145+0000 mon.a (mon.0) 1912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:40.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:39 vm03 bash[23382]: cluster 2026-03-10T07:30:38.610611+0000 mgr.y (mgr.24407) 229 : cluster [DBG] pgmap v291: 300 pgs: 40 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 456 KiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:30:40.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:39 vm03 bash[23382]: cluster 2026-03-10T07:30:38.610611+0000 mgr.y (mgr.24407) 229 : cluster [DBG] pgmap v291: 300 pgs: 40 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 244 active+clean; 456 KiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:30:40.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:39 vm03 bash[23382]: audit 2026-03-10T07:30:39.309047+0000 mon.a (mon.0) 1913 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:30:40.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:39 vm03 bash[23382]: audit 2026-03-10T07:30:39.309047+0000 mon.a (mon.0) 1913 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:30:40.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:39 vm03 bash[23382]: audit 2026-03-10T07:30:39.312542+0000 mon.c (mon.2) 228 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:30:40.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:39 vm03 bash[23382]: audit 2026-03-10T07:30:39.312542+0000 mon.c (mon.2) 228 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:30:40.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:39 vm03 bash[23382]: audit 2026-03-10T07:30:39.502006+0000 mon.a (mon.0) 1914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:40.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:39 vm03 bash[23382]: audit 2026-03-10T07:30:39.502006+0000 mon.a (mon.0) 1914 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:40 vm00 bash[28005]: cluster 2026-03-10T07:30:39.547362+0000 mon.a (mon.0) 1915 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:40 vm00 bash[28005]: cluster 2026-03-10T07:30:39.547362+0000 mon.a (mon.0) 1915 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:40 vm00 bash[28005]: audit 2026-03-10T07:30:39.547899+0000 mon.c (mon.2) 229 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:40 vm00 bash[28005]: audit 2026-03-10T07:30:39.547899+0000 mon.c (mon.2) 229 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:40 vm00 bash[28005]: audit 2026-03-10T07:30:39.552934+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:40 vm00 bash[28005]: audit 2026-03-10T07:30:39.552934+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:40 vm00 bash[28005]: audit 2026-03-10T07:30:40.505931+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-46"}]': finished 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:40 vm00 bash[28005]: audit 2026-03-10T07:30:40.505931+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-46"}]': finished 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:40 vm00 bash[28005]: audit 2026-03-10T07:30:40.506016+0000 mon.a (mon.0) 1918 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]': finished 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:40 vm00 bash[28005]: audit 2026-03-10T07:30:40.506016+0000 mon.a (mon.0) 1918 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]': finished 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:40 vm00 bash[28005]: audit 2026-03-10T07:30:40.523237+0000 mon.c (mon.2) 230 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:40 vm00 bash[28005]: audit 2026-03-10T07:30:40.523237+0000 mon.c (mon.2) 230 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:40 vm00 bash[28005]: cluster 2026-03-10T07:30:40.526257+0000 mon.a (mon.0) 1919 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:40 vm00 bash[28005]: cluster 2026-03-10T07:30:40.526257+0000 mon.a (mon.0) 1919 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:40 vm00 bash[28005]: audit 2026-03-10T07:30:40.533387+0000 mon.a (mon.0) 1920 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:40 vm00 bash[28005]: audit 2026-03-10T07:30:40.533387+0000 mon.a (mon.0) 1920 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:40 vm00 bash[20701]: cluster 2026-03-10T07:30:39.547362+0000 mon.a (mon.0) 1915 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:40 vm00 bash[20701]: cluster 2026-03-10T07:30:39.547362+0000 mon.a (mon.0) 1915 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:40 vm00 bash[20701]: audit 2026-03-10T07:30:39.547899+0000 mon.c (mon.2) 229 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:40 vm00 bash[20701]: audit 2026-03-10T07:30:39.547899+0000 mon.c (mon.2) 229 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:40 vm00 bash[20701]: audit 2026-03-10T07:30:39.552934+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:40 vm00 bash[20701]: audit 2026-03-10T07:30:39.552934+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:40 vm00 bash[20701]: audit 2026-03-10T07:30:40.505931+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-46"}]': finished 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:40 vm00 bash[20701]: audit 2026-03-10T07:30:40.505931+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-46"}]': finished 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:40 vm00 bash[20701]: audit 2026-03-10T07:30:40.506016+0000 mon.a (mon.0) 1918 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]': finished 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:40 vm00 bash[20701]: audit 2026-03-10T07:30:40.506016+0000 mon.a (mon.0) 1918 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]': finished 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:40 vm00 bash[20701]: audit 2026-03-10T07:30:40.523237+0000 mon.c (mon.2) 230 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:40 vm00 bash[20701]: audit 2026-03-10T07:30:40.523237+0000 mon.c (mon.2) 230 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:40 vm00 bash[20701]: cluster 2026-03-10T07:30:40.526257+0000 mon.a (mon.0) 1919 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:40 vm00 bash[20701]: cluster 2026-03-10T07:30:40.526257+0000 mon.a (mon.0) 1919 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:40 vm00 bash[20701]: audit 2026-03-10T07:30:40.533387+0000 mon.a (mon.0) 1920 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:40.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:40 vm00 bash[20701]: audit 2026-03-10T07:30:40.533387+0000 mon.a (mon.0) 1920 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:41.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:40 vm03 bash[23382]: cluster 2026-03-10T07:30:39.547362+0000 mon.a (mon.0) 1915 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-10T07:30:41.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:40 vm03 bash[23382]: cluster 2026-03-10T07:30:39.547362+0000 mon.a (mon.0) 1915 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-10T07:30:41.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:40 vm03 bash[23382]: audit 2026-03-10T07:30:39.547899+0000 mon.c (mon.2) 229 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:41.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:40 vm03 bash[23382]: audit 2026-03-10T07:30:39.547899+0000 mon.c (mon.2) 229 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:41.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:40 vm03 bash[23382]: audit 2026-03-10T07:30:39.552934+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:41.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:40 vm03 bash[23382]: audit 2026-03-10T07:30:39.552934+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:41.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:40 vm03 bash[23382]: audit 2026-03-10T07:30:40.505931+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-46"}]': finished 2026-03-10T07:30:41.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:40 vm03 bash[23382]: audit 2026-03-10T07:30:40.505931+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-46"}]': finished 2026-03-10T07:30:41.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:40 vm03 bash[23382]: audit 2026-03-10T07:30:40.506016+0000 mon.a (mon.0) 1918 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]': finished 2026-03-10T07:30:41.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:40 vm03 bash[23382]: audit 2026-03-10T07:30:40.506016+0000 mon.a (mon.0) 1918 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm00-59629-37"}]': finished 2026-03-10T07:30:41.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:40 vm03 bash[23382]: audit 2026-03-10T07:30:40.523237+0000 mon.c (mon.2) 230 : audit [INF] from='client.? 
192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:41.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:40 vm03 bash[23382]: audit 2026-03-10T07:30:40.523237+0000 mon.c (mon.2) 230 : audit [INF] from='client.? 192.168.123.100:0/1979108439' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:41.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:40 vm03 bash[23382]: cluster 2026-03-10T07:30:40.526257+0000 mon.a (mon.0) 1919 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-10T07:30:41.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:40 vm03 bash[23382]: cluster 2026-03-10T07:30:40.526257+0000 mon.a (mon.0) 1919 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-10T07:30:41.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:40 vm03 bash[23382]: audit 2026-03-10T07:30:40.533387+0000 mon.a (mon.0) 1920 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:41.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:40 vm03 bash[23382]: audit 2026-03-10T07:30:40.533387+0000 mon.a (mon.0) 1920 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]: dispatch 2026-03-10T07:30:41.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:30:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:30:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: cluster 2026-03-10T07:30:40.610970+0000 mgr.y (mgr.24407) 230 : cluster [DBG] pgmap v294: 300 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 4.4 MiB data, 715 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: cluster 2026-03-10T07:30:40.610970+0000 mgr.y (mgr.24407) 230 : cluster [DBG] pgmap v294: 300 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 4.4 MiB data, 715 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: cluster 2026-03-10T07:30:40.965628+0000 mon.a (mon.0) 1921 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: cluster 2026-03-10T07:30:40.965628+0000 mon.a (mon.0) 1921 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: audit 2026-03-10T07:30:41.509630+0000 mon.a (mon.0) 1922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]': finished 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: audit 2026-03-10T07:30:41.509630+0000 mon.a (mon.0) 1922 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]': finished 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: cluster 2026-03-10T07:30:41.515732+0000 mon.a (mon.0) 1923 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: cluster 2026-03-10T07:30:41.515732+0000 mon.a (mon.0) 1923 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: audit 2026-03-10T07:30:41.534864+0000 mon.c (mon.2) 231 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: audit 2026-03-10T07:30:41.534864+0000 mon.c (mon.2) 231 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: audit 2026-03-10T07:30:41.540179+0000 mon.a (mon.0) 1924 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: audit 2026-03-10T07:30:41.540179+0000 mon.a (mon.0) 1924 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: audit 2026-03-10T07:30:41.541145+0000 mon.c (mon.2) 232 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: audit 2026-03-10T07:30:41.541145+0000 mon.c (mon.2) 232 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: audit 2026-03-10T07:30:41.541358+0000 mon.b (mon.1) 241 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: audit 2026-03-10T07:30:41.541358+0000 mon.b (mon.1) 241 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: audit 2026-03-10T07:30:41.541626+0000 mon.a (mon.0) 1925 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: audit 2026-03-10T07:30:41.541626+0000 mon.a (mon.0) 1925 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: audit 2026-03-10T07:30:41.543144+0000 mon.c (mon.2) 233 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59629-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: audit 2026-03-10T07:30:41.543144+0000 mon.c (mon.2) 233 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59629-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: audit 2026-03-10T07:30:41.546615+0000 mon.a (mon.0) 1926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59629-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: audit 2026-03-10T07:30:41.546615+0000 mon.a (mon.0) 1926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59629-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: audit 2026-03-10T07:30:41.546800+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:41 vm00 bash[28005]: audit 2026-03-10T07:30:41.546800+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: cluster 2026-03-10T07:30:40.610970+0000 mgr.y (mgr.24407) 230 : cluster [DBG] pgmap v294: 300 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 4.4 MiB data, 715 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: cluster 2026-03-10T07:30:40.610970+0000 mgr.y (mgr.24407) 230 : cluster [DBG] pgmap v294: 300 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 4.4 MiB data, 715 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: cluster 2026-03-10T07:30:40.965628+0000 mon.a (mon.0) 1921 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:41.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: cluster 2026-03-10T07:30:40.965628+0000 mon.a (mon.0) 1921 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:41.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: audit 2026-03-10T07:30:41.509630+0000 mon.a (mon.0) 1922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]': finished 2026-03-10T07:30:41.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: audit 2026-03-10T07:30:41.509630+0000 mon.a (mon.0) 1922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]': finished 2026-03-10T07:30:41.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: cluster 2026-03-10T07:30:41.515732+0000 mon.a (mon.0) 1923 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-10T07:30:41.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: cluster 2026-03-10T07:30:41.515732+0000 mon.a (mon.0) 1923 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-10T07:30:41.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: audit 2026-03-10T07:30:41.534864+0000 mon.c (mon.2) 231 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:41.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: audit 2026-03-10T07:30:41.534864+0000 mon.c (mon.2) 231 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:41.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: audit 2026-03-10T07:30:41.540179+0000 mon.a (mon.0) 1924 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:41.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: audit 2026-03-10T07:30:41.540179+0000 mon.a (mon.0) 1924 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:41.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: audit 2026-03-10T07:30:41.541145+0000 mon.c (mon.2) 232 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:41.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: audit 2026-03-10T07:30:41.541145+0000 mon.c (mon.2) 232 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:41.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: audit 2026-03-10T07:30:41.541358+0000 mon.b (mon.1) 241 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:41.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: audit 2026-03-10T07:30:41.541358+0000 mon.b (mon.1) 241 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:41.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: audit 2026-03-10T07:30:41.541626+0000 mon.a (mon.0) 1925 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:41.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: audit 2026-03-10T07:30:41.541626+0000 mon.a (mon.0) 1925 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:41.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: audit 2026-03-10T07:30:41.543144+0000 mon.c (mon.2) 233 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59629-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:41.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: audit 2026-03-10T07:30:41.543144+0000 mon.c (mon.2) 233 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59629-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:41.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: audit 2026-03-10T07:30:41.546615+0000 mon.a (mon.0) 1926 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59629-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:41.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: audit 2026-03-10T07:30:41.546615+0000 mon.a (mon.0) 1926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59629-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:41.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: audit 2026-03-10T07:30:41.546800+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:41.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:41 vm00 bash[20701]: audit 2026-03-10T07:30:41.546800+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:42.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: cluster 2026-03-10T07:30:40.610970+0000 mgr.y (mgr.24407) 230 : cluster [DBG] pgmap v294: 300 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 4.4 MiB data, 715 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:30:42.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: cluster 2026-03-10T07:30:40.610970+0000 mgr.y (mgr.24407) 230 : cluster [DBG] pgmap v294: 300 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 4.4 MiB data, 715 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:30:42.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: cluster 2026-03-10T07:30:40.965628+0000 mon.a (mon.0) 1921 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:42.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: cluster 2026-03-10T07:30:40.965628+0000 mon.a (mon.0) 1921 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:42.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: audit 2026-03-10T07:30:41.509630+0000 mon.a (mon.0) 1922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]': finished 2026-03-10T07:30:42.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: audit 2026-03-10T07:30:41.509630+0000 mon.a (mon.0) 1922 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm00-59629-37"}]': finished 2026-03-10T07:30:42.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: cluster 2026-03-10T07:30:41.515732+0000 mon.a (mon.0) 1923 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-10T07:30:42.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: cluster 2026-03-10T07:30:41.515732+0000 mon.a (mon.0) 1923 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-10T07:30:42.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: audit 2026-03-10T07:30:41.534864+0000 mon.c (mon.2) 231 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:42.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: audit 2026-03-10T07:30:41.534864+0000 mon.c (mon.2) 231 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:42.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: audit 2026-03-10T07:30:41.540179+0000 mon.a (mon.0) 1924 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:42.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: audit 2026-03-10T07:30:41.540179+0000 mon.a (mon.0) 1924 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:42.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: audit 2026-03-10T07:30:41.541145+0000 mon.c (mon.2) 232 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:42.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: audit 2026-03-10T07:30:41.541145+0000 mon.c (mon.2) 232 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:42.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: audit 2026-03-10T07:30:41.541358+0000 mon.b (mon.1) 241 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:42.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: audit 2026-03-10T07:30:41.541358+0000 mon.b (mon.1) 241 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:42.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: audit 2026-03-10T07:30:41.541626+0000 mon.a (mon.0) 1925 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:42.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: audit 2026-03-10T07:30:41.541626+0000 mon.a (mon.0) 1925 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:42.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: audit 2026-03-10T07:30:41.543144+0000 mon.c (mon.2) 233 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59629-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:42.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: audit 2026-03-10T07:30:41.543144+0000 mon.c (mon.2) 233 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59629-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:42.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: audit 2026-03-10T07:30:41.546615+0000 mon.a (mon.0) 1926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59629-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:42.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: audit 2026-03-10T07:30:41.546615+0000 mon.a (mon.0) 1926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59629-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:42.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: audit 2026-03-10T07:30:41.546800+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:42.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:41 vm03 bash[23382]: audit 2026-03-10T07:30:41.546800+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:43.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:30:43 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:43 vm00 bash[28005]: audit 2026-03-10T07:30:42.513052+0000 mon.a (mon.0) 1928 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59629-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:43 vm00 bash[28005]: audit 2026-03-10T07:30:42.513052+0000 mon.a (mon.0) 1928 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59629-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:43 vm00 bash[28005]: audit 2026-03-10T07:30:42.513208+0000 mon.a (mon.0) 1929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:43 vm00 bash[28005]: audit 2026-03-10T07:30:42.513208+0000 mon.a (mon.0) 1929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:43 vm00 bash[28005]: audit 2026-03-10T07:30:42.523915+0000 mon.b (mon.1) 242 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-29"}]: dispatch 2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:43 vm00 bash[28005]: audit 2026-03-10T07:30:42.523915+0000 mon.b (mon.1) 242 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-29"}]: dispatch 2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:43 vm00 bash[28005]: cluster 2026-03-10T07:30:42.524518+0000 mon.a (mon.0) 1930 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:43 vm00 bash[28005]: cluster 2026-03-10T07:30:42.524518+0000 mon.a (mon.0) 1930 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:43 vm00 bash[28005]: audit 2026-03-10T07:30:42.530420+0000 mon.a (mon.0) 1931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-29"}]: dispatch 2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:43 vm00 bash[28005]: audit 2026-03-10T07:30:42.530420+0000 mon.a (mon.0) 1931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-29"}]: dispatch 2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:43 vm00 bash[28005]: audit 2026-03-10T07:30:42.530706+0000 mon.c (mon.2) 234 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59629-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:43 vm00 bash[28005]: audit 2026-03-10T07:30:42.530706+0000 mon.c (mon.2) 234 : audit [INF] from='client.? 
192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59629-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59629-38"}]: dispatch
2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:43 vm00 bash[28005]: audit 2026-03-10T07:30:42.531165+0000 mon.c (mon.2) 235 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch
2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:43 vm00 bash[28005]: audit 2026-03-10T07:30:42.531620+0000 mon.a (mon.0) 1932 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59629-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59629-38"}]: dispatch
2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:43 vm00 bash[28005]: audit 2026-03-10T07:30:42.531809+0000 mon.a (mon.0) 1933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch
2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:43 vm00 bash[28005]: cluster 2026-03-10T07:30:42.611258+0000 mgr.y (mgr.24407) 231 : cluster [DBG] pgmap v297: 292 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 4.4 MiB data, 715 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s
2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:43 vm00 bash[28005]: audit 2026-03-10T07:30:43.077048+0000 mgr.y (mgr.24407) 232 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:43 vm00 bash[20701]: audit 2026-03-10T07:30:42.513052+0000 mon.a (mon.0) 1928 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59629-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:43 vm00 bash[20701]: audit 2026-03-10T07:30:42.513208+0000 mon.a (mon.0) 1929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:43 vm00 bash[20701]: audit 2026-03-10T07:30:42.523915+0000 mon.b (mon.1) 242 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-29"}]: dispatch
2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:43 vm00 bash[20701]: cluster 2026-03-10T07:30:42.524518+0000 mon.a (mon.0) 1930 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in
2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:43 vm00 bash[20701]: audit 2026-03-10T07:30:42.530420+0000 mon.a (mon.0) 1931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-29"}]: dispatch
2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:43 vm00 bash[20701]: audit 2026-03-10T07:30:42.530706+0000 mon.c (mon.2) 234 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59629-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59629-38"}]: dispatch
2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:43 vm00 bash[20701]: audit 2026-03-10T07:30:42.531165+0000 mon.c (mon.2) 235 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch
2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:43 vm00 bash[20701]: audit 2026-03-10T07:30:42.531620+0000 mon.a (mon.0) 1932 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59629-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59629-38"}]: dispatch
2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:43 vm00 bash[20701]: audit 2026-03-10T07:30:42.531809+0000 mon.a (mon.0) 1933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch
2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:43 vm00 bash[20701]: cluster 2026-03-10T07:30:42.611258+0000 mgr.y (mgr.24407) 231 : cluster [DBG] pgmap v297: 292 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 4.4 MiB data, 715 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s
2026-03-10T07:30:43.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:43 vm00 bash[20701]: audit 2026-03-10T07:30:43.077048+0000 mgr.y (mgr.24407) 232 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:30:44.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:43 vm03 bash[23382]: audit 2026-03-10T07:30:42.513052+0000 mon.a (mon.0) 1928 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm00-59629-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:30:44.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:43 vm03 bash[23382]: audit 2026-03-10T07:30:42.513208+0000 mon.a (mon.0) 1929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:30:44.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:43 vm03 bash[23382]: audit 2026-03-10T07:30:42.523915+0000 mon.b (mon.1) 242 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-29"}]: dispatch
2026-03-10T07:30:44.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:43 vm03 bash[23382]: cluster 2026-03-10T07:30:42.524518+0000 mon.a (mon.0) 1930 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in
2026-03-10T07:30:44.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:43 vm03 bash[23382]: audit 2026-03-10T07:30:42.530420+0000 mon.a (mon.0) 1931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-29"}]: dispatch
2026-03-10T07:30:44.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:43 vm03 bash[23382]: audit 2026-03-10T07:30:42.530706+0000 mon.c (mon.2) 234 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59629-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59629-38"}]: dispatch
2026-03-10T07:30:44.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:43 vm03 bash[23382]: audit 2026-03-10T07:30:42.531165+0000 mon.c (mon.2) 235 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch
2026-03-10T07:30:44.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:43 vm03 bash[23382]: audit 2026-03-10T07:30:42.531620+0000 mon.a (mon.0) 1932 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59629-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59629-38"}]: dispatch
2026-03-10T07:30:44.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:43 vm03 bash[23382]: audit 2026-03-10T07:30:42.531809+0000 mon.a (mon.0) 1933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-46"}]: dispatch
2026-03-10T07:30:44.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:43 vm03 bash[23382]: cluster 2026-03-10T07:30:42.611258+0000 mgr.y (mgr.24407) 231 : cluster [DBG] pgmap v297: 292 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 4.4 MiB data, 715 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s
2026-03-10T07:30:44.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:43 vm03 bash[23382]: audit 2026-03-10T07:30:43.077048+0000 mgr.y (mgr.24407) 232 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:30:44.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:44 vm00 bash[28005]: audit 2026-03-10T07:30:43.523871+0000 mon.a (mon.0) 1934 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-29"}]': finished
2026-03-10T07:30:44.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:44 vm00 bash[28005]: audit 2026-03-10T07:30:43.524061+0000 mon.a (mon.0) 1935 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-46"}]': finished
2026-03-10T07:30:44.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:44 vm00 bash[28005]: audit 2026-03-10T07:30:43.536708+0000 mon.b (mon.1) 243 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-29", "mode": "writeback"}]: dispatch
2026-03-10T07:30:44.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:44 vm00 bash[28005]: cluster 2026-03-10T07:30:43.545207+0000 mon.a (mon.0) 1936 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in
2026-03-10T07:30:44.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:44 vm00 bash[28005]: audit 2026-03-10T07:30:43.545834+0000 mon.a (mon.0) 1937 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-29", "mode": "writeback"}]: dispatch
2026-03-10T07:30:44.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:44 vm00 bash[28005]: audit 2026-03-10T07:30:43.568501+0000 mon.c (mon.2) 236 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-46"}]: dispatch
2026-03-10T07:30:44.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:44 vm00 bash[28005]: audit 2026-03-10T07:30:43.568806+0000 mon.a (mon.0) 1938 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-46"}]: dispatch
2026-03-10T07:30:44.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:44 vm00 bash[20701]: audit 2026-03-10T07:30:43.523871+0000 mon.a (mon.0) 1934 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-29"}]': finished
2026-03-10T07:30:44.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:44 vm00 bash[20701]: audit 2026-03-10T07:30:43.524061+0000 mon.a (mon.0) 1935 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-46"}]': finished
2026-03-10T07:30:44.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:44 vm00 bash[20701]: audit 2026-03-10T07:30:43.536708+0000 mon.b (mon.1) 243 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-29", "mode": "writeback"}]: dispatch
2026-03-10T07:30:44.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:44 vm00 bash[20701]: cluster 2026-03-10T07:30:43.545207+0000 mon.a (mon.0) 1936 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in
2026-03-10T07:30:44.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:44 vm00 bash[20701]: audit 2026-03-10T07:30:43.545834+0000 mon.a (mon.0) 1937 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-29", "mode": "writeback"}]: dispatch
2026-03-10T07:30:44.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:44 vm00 bash[20701]: audit 2026-03-10T07:30:43.568501+0000 mon.c (mon.2) 236 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-46"}]: dispatch
2026-03-10T07:30:44.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:44 vm00 bash[20701]: audit 2026-03-10T07:30:43.568806+0000 mon.a (mon.0) 1938 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-46"}]: dispatch
2026-03-10T07:30:45.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:44 vm03 bash[23382]: audit 2026-03-10T07:30:43.523871+0000 mon.a (mon.0) 1934 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-29"}]': finished
2026-03-10T07:30:45.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:44 vm03 bash[23382]: audit 2026-03-10T07:30:43.524061+0000 mon.a (mon.0) 1935 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-46"}]': finished
2026-03-10T07:30:45.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:44 vm03 bash[23382]: audit 2026-03-10T07:30:43.536708+0000 mon.b (mon.1) 243 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-29", "mode": "writeback"}]: dispatch
2026-03-10T07:30:45.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:44 vm03 bash[23382]: cluster 2026-03-10T07:30:43.545207+0000 mon.a (mon.0) 1936 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in
2026-03-10T07:30:45.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:44 vm03 bash[23382]: audit 2026-03-10T07:30:43.545834+0000 mon.a (mon.0) 1937 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-29", "mode": "writeback"}]: dispatch
2026-03-10T07:30:45.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:44 vm03 bash[23382]: audit 2026-03-10T07:30:43.568501+0000 mon.c (mon.2) 236 : audit [INF] from='client.? 192.168.123.100:0/872691184' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-46"}]: dispatch
2026-03-10T07:30:45.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:44 vm03 bash[23382]: audit 2026-03-10T07:30:43.568806+0000 mon.a (mon.0) 1938 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-46"}]: dispatch
2026-03-10T07:30:45.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:45 vm00 bash[28005]: cluster 2026-03-10T07:30:44.524302+0000 mon.a (mon.0) 1939 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:30:45.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:45 vm00 bash[28005]: audit 2026-03-10T07:30:44.528137+0000 mon.a (mon.0) 1940 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59629-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59629-38"}]': finished
2026-03-10T07:30:45.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:45 vm00 bash[28005]: audit 2026-03-10T07:30:44.528238+0000 mon.a (mon.0) 1941 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-29", "mode": "writeback"}]': finished
2026-03-10T07:30:45.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:45 vm00 bash[28005]: audit 2026-03-10T07:30:44.528270+0000 mon.a (mon.0) 1942 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-46"}]': finished
2026-03-10T07:30:45.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:45 vm00 bash[28005]: cluster 2026-03-10T07:30:44.538002+0000 mon.a (mon.0) 1943 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in
2026-03-10T07:30:45.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:45 vm00 bash[28005]: audit 2026-03-10T07:30:44.583656+0000 mon.a (mon.0) 1944 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-47"}]: dispatch
2026-03-10T07:30:45.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:45 vm00 bash[28005]: audit 2026-03-10T07:30:44.586109+0000 mon.a (mon.0) 1945 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-47"}]: dispatch
2026-03-10T07:30:45.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:45 vm00 bash[28005]: audit 2026-03-10T07:30:44.586787+0000 mon.a (mon.0) 1946 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59637-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:45.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:45 vm00 bash[28005]: cluster 2026-03-10T07:30:44.613247+0000 mgr.y (mgr.24407) 233 : cluster [DBG] pgmap v300: 300 pgs: 2 creating+peering, 6 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 4.4 MiB data, 715 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:30:45.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:45 vm00 bash[28005]: audit 2026-03-10T07:30:45.533943+0000 mon.a (mon.0) 1947 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59637-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:30:45.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:45 vm00 bash[28005]: cluster 2026-03-10T07:30:45.547735+0000 mon.a (mon.0) 1948 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in
2026-03-10T07:30:45.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:45 vm00 bash[28005]: audit 2026-03-10T07:30:45.557104+0000 mon.a (mon.0) 1949 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-47"}]: dispatch
2026-03-10T07:30:45.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:45 vm00 bash[20701]: cluster 2026-03-10T07:30:44.524302+0000 mon.a (mon.0) 1939 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:30:45.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:45 vm00 bash[20701]: audit 2026-03-10T07:30:44.528137+0000 mon.a (mon.0) 1940 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59629-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59629-38"}]': finished
2026-03-10T07:30:45.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:45 vm00 bash[20701]: audit 2026-03-10T07:30:44.528238+0000 mon.a (mon.0) 1941 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-29", "mode": "writeback"}]': finished
2026-03-10T07:30:45.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:45 vm00 bash[20701]: audit 2026-03-10T07:30:44.528270+0000 mon.a (mon.0) 1942 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-46"}]': finished
2026-03-10T07:30:45.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:45 vm00 bash[20701]: cluster 2026-03-10T07:30:44.538002+0000 mon.a (mon.0) 1943 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in
2026-03-10T07:30:45.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:45 vm00 bash[20701]: audit 2026-03-10T07:30:44.583656+0000 mon.a (mon.0) 1944 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-47"}]: dispatch
2026-03-10T07:30:45.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:45 vm00 bash[20701]: audit 2026-03-10T07:30:44.586109+0000 mon.a (mon.0) 1945 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-47"}]: dispatch
2026-03-10T07:30:45.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:45 vm00 bash[20701]: audit 2026-03-10T07:30:44.586787+0000 mon.a (mon.0) 1946 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59637-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:45.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:45 vm00 bash[20701]: cluster 2026-03-10T07:30:44.613247+0000 mgr.y (mgr.24407) 233 : cluster [DBG] pgmap v300: 300 pgs: 2 creating+peering, 6 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 4.4 MiB data, 715 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:30:45.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:45 vm00 bash[20701]: audit 2026-03-10T07:30:45.533943+0000 mon.a (mon.0) 1947 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59637-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:30:45.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:45 vm00 bash[20701]: cluster 2026-03-10T07:30:45.547735+0000 mon.a (mon.0) 1948 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in
2026-03-10T07:30:45.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:45 vm00 bash[20701]: audit 2026-03-10T07:30:45.557104+0000 mon.a (mon.0) 1949 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-47"}]: dispatch
2026-03-10T07:30:46.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:45 vm03 bash[23382]: cluster 2026-03-10T07:30:44.524302+0000 mon.a (mon.0) 1939 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:30:46.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:45 vm03 bash[23382]: audit 2026-03-10T07:30:44.528137+0000 mon.a (mon.0) 1940 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm00-59629-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm00-59629-38"}]': finished
2026-03-10T07:30:46.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:45 vm03 bash[23382]: audit 2026-03-10T07:30:44.528238+0000 mon.a (mon.0) 1941 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-29", "mode": "writeback"}]': finished
2026-03-10T07:30:46.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:45 vm03 bash[23382]: audit 2026-03-10T07:30:44.528270+0000 mon.a (mon.0) 1942 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-46"}]': finished
2026-03-10T07:30:46.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:45 vm03 bash[23382]: cluster 2026-03-10T07:30:44.538002+0000 mon.a (mon.0) 1943 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in
2026-03-10T07:30:46.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:45 vm03 bash[23382]: audit 2026-03-10T07:30:44.583656+0000 mon.a (mon.0) 1944 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-47"}]: dispatch
2026-03-10T07:30:46.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:45 vm03 bash[23382]: audit 2026-03-10T07:30:44.586109+0000 mon.a (mon.0) 1945 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-47"}]: dispatch
2026-03-10T07:30:46.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:45 vm03 bash[23382]: audit 2026-03-10T07:30:44.586787+0000 mon.a (mon.0) 1946 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59637-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:46.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:45 vm03 bash[23382]: cluster 2026-03-10T07:30:44.613247+0000 mgr.y (mgr.24407) 233 : cluster [DBG] pgmap v300: 300 pgs: 2 creating+peering, 6 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 4.4 MiB data, 715 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:30:46.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:45 vm03 bash[23382]: audit 2026-03-10T07:30:45.533943+0000 mon.a (mon.0) 1947 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm00-59637-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:30:46.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:45 vm03 bash[23382]: cluster 2026-03-10T07:30:45.547735+0000 mon.a (mon.0) 1948 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in
2026-03-10T07:30:46.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:45 vm03 bash[23382]: audit 2026-03-10T07:30:45.557104+0000 mon.a (mon.0) 1949 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-47"}]: dispatch
2026-03-10T07:30:46.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:46 vm00 bash[28005]: cluster 2026-03-10T07:30:45.966556+0000 mon.a (mon.0) 1950 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:30:46.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:46 vm00 bash[28005]: cluster 2026-03-10T07:30:46.566154+0000 mon.a (mon.0) 1951 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in
2026-03-10T07:30:46.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:46 vm00 bash[28005]: audit 2026-03-10T07:30:46.567295+0000 mon.c (mon.2) 237 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]: dispatch
2026-03-10T07:30:46.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:46 vm00 bash[28005]: audit 2026-03-10T07:30:46.568124+0000 mon.a (mon.0) 1952 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]: dispatch
2026-03-10T07:30:46.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:46 vm00 bash[20701]: cluster 2026-03-10T07:30:45.966556+0000 mon.a (mon.0) 1950 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:30:46.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:46 vm00 bash[20701]: cluster 2026-03-10T07:30:46.566154+0000 mon.a (mon.0) 1951 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in
2026-03-10T07:30:46.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:46 vm00 bash[20701]: audit 2026-03-10T07:30:46.567295+0000 mon.c (mon.2) 237 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]: dispatch
2026-03-10T07:30:46.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:46 vm00 bash[20701]: audit 2026-03-10T07:30:46.568124+0000 mon.a (mon.0) 1952 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]: dispatch
2026-03-10T07:30:47.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:46 vm03 bash[23382]: cluster 2026-03-10T07:30:45.966556+0000 mon.a (mon.0) 1950 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:30:47.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:46 vm03 bash[23382]: cluster 2026-03-10T07:30:46.566154+0000 mon.a (mon.0) 1951 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in
2026-03-10T07:30:47.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:46 vm03 bash[23382]: audit 2026-03-10T07:30:46.567295+0000 mon.c (mon.2) 237 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]: dispatch
2026-03-10T07:30:47.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:46 vm03 bash[23382]: audit 2026-03-10T07:30:46.568124+0000 mon.a (mon.0) 1952 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]: dispatch
2026-03-10T07:30:47.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:47 vm00 bash[28005]: cluster 2026-03-10T07:30:46.613662+0000 mgr.y (mgr.24407) 234 : cluster [DBG] pgmap v303: 292 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 12 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 255 B/s wr, 3 op/s
2026-03-10T07:30:47.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:47 vm00 bash[28005]: audit 2026-03-10T07:30:47.541728+0000 mon.a (mon.0) 1953 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-47"}]': finished
2026-03-10T07:30:47.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:47 vm00 bash[28005]: audit 2026-03-10T07:30:47.541816+0000 mon.a (mon.0) 1954 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]': finished
2026-03-10T07:30:47.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:47 vm00 bash[28005]: cluster 2026-03-10T07:30:47.544437+0000 mon.a (mon.0) 1955 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in
2026-03-10T07:30:47.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:47 vm00 bash[28005]: audit 2026-03-10T07:30:47.563301+0000 mon.c (mon.2) 238 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]: dispatch
2026-03-10T07:30:47.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:47 vm00 bash[28005]: audit 2026-03-10T07:30:47.571249+0000 mon.a (mon.0) 1956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]: dispatch
2026-03-10T07:30:47.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:47 vm00 bash[20701]: cluster 2026-03-10T07:30:46.613662+0000 mgr.y (mgr.24407) 234 : cluster [DBG] pgmap v303: 292 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 12 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 255 B/s wr, 3 op/s
2026-03-10T07:30:47.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:47 vm00 bash[20701]: audit 2026-03-10T07:30:47.541728+0000 mon.a (mon.0) 1953 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-47"}]': finished
2026-03-10T07:30:47.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:47 vm00 bash[20701]: audit 2026-03-10T07:30:47.541816+0000 mon.a (mon.0) 1954 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]': finished
2026-03-10T07:30:47.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:47 vm00 bash[20701]: cluster 2026-03-10T07:30:47.544437+0000 mon.a (mon.0) 1955 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in
2026-03-10T07:30:47.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:47 vm00 bash[20701]: audit 2026-03-10T07:30:47.563301+0000 mon.c (mon.2) 238 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]: dispatch
2026-03-10T07:30:47.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:47 vm00 bash[20701]: audit 2026-03-10T07:30:47.571249+0000 mon.a (mon.0) 1956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]: dispatch
2026-03-10T07:30:48.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:47 vm03 bash[23382]: cluster 2026-03-10T07:30:46.613662+0000 mgr.y (mgr.24407) 234 : cluster [DBG] pgmap v303: 292 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 12 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 255 B/s wr, 3 op/s
2026-03-10T07:30:48.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:47 vm03 bash[23382]: audit 2026-03-10T07:30:47.541728+0000 mon.a (mon.0) 1953 : audit [INF] from='client.? 
192.168.123.100:0/923107848' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm00-59637-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm00-59637-47"}]': finished 2026-03-10T07:30:48.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:47 vm03 bash[23382]: audit 2026-03-10T07:30:47.541816+0000 mon.a (mon.0) 1954 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]': finished 2026-03-10T07:30:48.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:47 vm03 bash[23382]: audit 2026-03-10T07:30:47.541816+0000 mon.a (mon.0) 1954 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm00-59629-38"}]': finished 2026-03-10T07:30:48.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:47 vm03 bash[23382]: cluster 2026-03-10T07:30:47.544437+0000 mon.a (mon.0) 1955 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-10T07:30:48.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:47 vm03 bash[23382]: cluster 2026-03-10T07:30:47.544437+0000 mon.a (mon.0) 1955 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-10T07:30:48.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:47 vm03 bash[23382]: audit 2026-03-10T07:30:47.563301+0000 mon.c (mon.2) 238 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:48.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:47 vm03 bash[23382]: audit 2026-03-10T07:30:47.563301+0000 mon.c (mon.2) 238 : audit [INF] from='client.? 192.168.123.100:0/2254741842' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:48.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:47 vm03 bash[23382]: audit 2026-03-10T07:30:47.571249+0000 mon.a (mon.0) 1956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:48.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:47 vm03 bash[23382]: audit 2026-03-10T07:30:47.571249+0000 mon.a (mon.0) 1956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]: dispatch 2026-03-10T07:30:48.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:47.621866+0000 mon.b (mon.1) 244 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:47.621866+0000 mon.b (mon.1) 244 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:47.631659+0000 mon.a (mon.0) 1957 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:47.631659+0000 mon.a (mon.0) 1957 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:48.545673+0000 mon.a (mon.0) 1958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]': finished 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:48.545673+0000 mon.a (mon.0) 1958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]': finished 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:48.545821+0000 mon.a (mon.0) 1959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:48.545821+0000 mon.a (mon.0) 1959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:48.550064+0000 mon.b (mon.1) 245 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:48.550064+0000 mon.b (mon.1) 245 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: cluster 2026-03-10T07:30:48.550117+0000 mon.a (mon.0) 1960 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: cluster 2026-03-10T07:30:48.550117+0000 mon.a (mon.0) 1960 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:48.552243+0000 mon.a (mon.0) 1961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:48.552243+0000 mon.a (mon.0) 1961 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:48.573920+0000 mon.b (mon.1) 246 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:48.573920+0000 mon.b (mon.1) 246 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:48.574798+0000 mon.b (mon.1) 247 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:48.574798+0000 mon.b (mon.1) 247 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:48.575463+0000 mon.b (mon.1) 248 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59629-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:48.575463+0000 mon.b (mon.1) 248 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59629-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:48.575689+0000 mon.a (mon.0) 1962 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:48.575689+0000 mon.a (mon.0) 1962 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:48.576440+0000 mon.a (mon.0) 1963 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:48.576440+0000 mon.a (mon.0) 1963 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:48.577109+0000 mon.a (mon.0) 1964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59629-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:48 vm00 bash[28005]: audit 2026-03-10T07:30:48.577109+0000 mon.a (mon.0) 1964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59629-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:47.621866+0000 mon.b (mon.1) 244 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:47.621866+0000 mon.b (mon.1) 244 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:47.631659+0000 mon.a (mon.0) 1957 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:47.631659+0000 mon.a (mon.0) 1957 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:48.545673+0000 mon.a (mon.0) 1958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]': finished 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:48.545673+0000 mon.a (mon.0) 1958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]': finished 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:48.545821+0000 mon.a (mon.0) 1959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:48.545821+0000 mon.a (mon.0) 1959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:48.550064+0000 mon.b (mon.1) 245 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:48.550064+0000 mon.b (mon.1) 245 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: cluster 2026-03-10T07:30:48.550117+0000 mon.a (mon.0) 1960 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: cluster 2026-03-10T07:30:48.550117+0000 mon.a (mon.0) 1960 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:48.552243+0000 mon.a (mon.0) 1961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:48.552243+0000 mon.a (mon.0) 1961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:48.573920+0000 mon.b (mon.1) 246 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:48.573920+0000 mon.b (mon.1) 246 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:48.574798+0000 mon.b (mon.1) 247 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:48.574798+0000 mon.b (mon.1) 247 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:48.575463+0000 mon.b (mon.1) 248 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59629-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:48.575463+0000 mon.b (mon.1) 248 : audit [INF] from='client.? 
192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59629-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:48.575689+0000 mon.a (mon.0) 1962 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:48.575689+0000 mon.a (mon.0) 1962 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:48.576440+0000 mon.a (mon.0) 1963 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:48.576440+0000 mon.a (mon.0) 1963 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:48.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:48.577109+0000 mon.a (mon.0) 1964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59629-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:48.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:48 vm00 bash[20701]: audit 2026-03-10T07:30:48.577109+0000 mon.a (mon.0) 1964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59629-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:47.621866+0000 mon.b (mon.1) 244 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:47.621866+0000 mon.b (mon.1) 244 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:47.631659+0000 mon.a (mon.0) 1957 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:47.631659+0000 mon.a (mon.0) 1957 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:48.545673+0000 mon.a (mon.0) 1958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]': finished 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:48.545673+0000 mon.a (mon.0) 1958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm00-59629-38"}]': finished 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:48.545821+0000 mon.a (mon.0) 1959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:48.545821+0000 mon.a (mon.0) 1959 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:48.550064+0000 mon.b (mon.1) 245 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29"}]: dispatch 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:48.550064+0000 mon.b (mon.1) 245 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29"}]: dispatch 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: cluster 2026-03-10T07:30:48.550117+0000 mon.a (mon.0) 1960 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: cluster 2026-03-10T07:30:48.550117+0000 mon.a (mon.0) 1960 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:48.552243+0000 mon.a (mon.0) 1961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29"}]: dispatch 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:48.552243+0000 mon.a (mon.0) 1961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29"}]: dispatch 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:48.573920+0000 mon.b (mon.1) 246 : audit [INF] from='client.? 
192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:48.573920+0000 mon.b (mon.1) 246 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:48.574798+0000 mon.b (mon.1) 247 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:48.574798+0000 mon.b (mon.1) 247 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:48.575463+0000 mon.b (mon.1) 248 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59629-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:48.575463+0000 mon.b (mon.1) 248 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59629-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:48.575689+0000 mon.a (mon.0) 1962 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:48.575689+0000 mon.a (mon.0) 1962 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:48.576440+0000 mon.a (mon.0) 1963 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:48.576440+0000 mon.a (mon.0) 1963 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:48.577109+0000 mon.a (mon.0) 1964 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59629-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:49.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:48 vm03 bash[23382]: audit 2026-03-10T07:30:48.577109+0000 mon.a (mon.0) 1964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59629-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:50.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:49 vm03 bash[23382]: cluster 2026-03-10T07:30:48.614121+0000 mgr.y (mgr.24407) 235 : cluster [DBG] pgmap v306: 300 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 12 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 255 B/s wr, 3 op/s 2026-03-10T07:30:50.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:49 vm03 bash[23382]: cluster 2026-03-10T07:30:48.614121+0000 mgr.y (mgr.24407) 235 : cluster [DBG] pgmap v306: 300 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 12 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 255 B/s wr, 3 op/s 2026-03-10T07:30:50.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:49 vm03 bash[23382]: cluster 2026-03-10T07:30:49.617777+0000 mon.a (mon.0) 1965 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:50.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:49 vm03 bash[23382]: cluster 2026-03-10T07:30:49.617777+0000 mon.a (mon.0) 1965 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:50.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:49 vm03 bash[23382]: audit 2026-03-10T07:30:49.624315+0000 mon.a (mon.0) 1966 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29"}]': finished 2026-03-10T07:30:50.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:49 vm03 bash[23382]: audit 2026-03-10T07:30:49.624315+0000 mon.a (mon.0) 1966 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29"}]': finished 2026-03-10T07:30:50.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:49 vm03 bash[23382]: audit 2026-03-10T07:30:49.624366+0000 mon.a (mon.0) 1967 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59629-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:50.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:49 vm03 bash[23382]: audit 2026-03-10T07:30:49.624366+0000 mon.a (mon.0) 1967 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59629-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:50.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:49 vm03 bash[23382]: cluster 2026-03-10T07:30:49.626996+0000 mon.a (mon.0) 1968 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-10T07:30:50.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:49 vm03 bash[23382]: cluster 2026-03-10T07:30:49.626996+0000 mon.a (mon.0) 1968 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-10T07:30:50.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:49 vm00 bash[28005]: cluster 2026-03-10T07:30:48.614121+0000 mgr.y (mgr.24407) 235 : cluster [DBG] pgmap v306: 300 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 12 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 255 B/s wr, 3 op/s 2026-03-10T07:30:50.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:49 vm00 bash[28005]: cluster 2026-03-10T07:30:48.614121+0000 mgr.y (mgr.24407) 235 : cluster [DBG] pgmap v306: 300 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 12 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 255 B/s wr, 3 op/s 2026-03-10T07:30:50.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:49 vm00 bash[28005]: cluster 2026-03-10T07:30:49.617777+0000 mon.a (mon.0) 1965 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:50.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:49 vm00 bash[28005]: cluster 2026-03-10T07:30:49.617777+0000 mon.a (mon.0) 1965 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:50.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:49 vm00 bash[28005]: audit 2026-03-10T07:30:49.624315+0000 mon.a (mon.0) 1966 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29"}]': finished 2026-03-10T07:30:50.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:49 vm00 bash[28005]: audit 2026-03-10T07:30:49.624315+0000 mon.a (mon.0) 1966 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29"}]': finished 2026-03-10T07:30:50.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:49 vm00 bash[28005]: audit 2026-03-10T07:30:49.624366+0000 mon.a (mon.0) 1967 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59629-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:50.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:49 vm00 bash[28005]: audit 2026-03-10T07:30:49.624366+0000 mon.a (mon.0) 1967 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59629-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:50.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:49 vm00 bash[28005]: cluster 2026-03-10T07:30:49.626996+0000 mon.a (mon.0) 1968 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-10T07:30:50.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:49 vm00 bash[28005]: cluster 2026-03-10T07:30:49.626996+0000 mon.a (mon.0) 1968 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-10T07:30:50.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:49 vm00 bash[20701]: cluster 2026-03-10T07:30:48.614121+0000 mgr.y (mgr.24407) 235 : cluster [DBG] pgmap v306: 300 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 12 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 255 B/s wr, 3 op/s 2026-03-10T07:30:50.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:49 vm00 bash[20701]: cluster 2026-03-10T07:30:48.614121+0000 mgr.y (mgr.24407) 235 : cluster [DBG] pgmap v306: 300 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 12 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 255 B/s wr, 3 op/s 2026-03-10T07:30:50.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:49 vm00 bash[20701]: cluster 2026-03-10T07:30:49.617777+0000 mon.a (mon.0) 1965 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:50.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:49 vm00 bash[20701]: cluster 2026-03-10T07:30:49.617777+0000 mon.a (mon.0) 1965 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:50.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:49 vm00 bash[20701]: audit 2026-03-10T07:30:49.624315+0000 mon.a (mon.0) 1966 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29"}]': finished 2026-03-10T07:30:50.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:49 vm00 bash[20701]: audit 2026-03-10T07:30:49.624315+0000 mon.a (mon.0) 1966 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-29"}]': finished 2026-03-10T07:30:50.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:49 vm00 bash[20701]: audit 2026-03-10T07:30:49.624366+0000 mon.a (mon.0) 1967 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59629-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:50.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:49 vm00 bash[20701]: audit 2026-03-10T07:30:49.624366+0000 mon.a (mon.0) 1967 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm00-59629-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:50.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:49 vm00 bash[20701]: cluster 2026-03-10T07:30:49.626996+0000 mon.a (mon.0) 1968 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-10T07:30:50.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:49 vm00 bash[20701]: cluster 2026-03-10T07:30:49.626996+0000 mon.a (mon.0) 1968 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-10T07:30:51.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:50 vm03 bash[23382]: audit 2026-03-10T07:30:49.626826+0000 mon.b (mon.1) 249 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59629-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:51.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:50 vm03 bash[23382]: audit 2026-03-10T07:30:49.626826+0000 mon.b (mon.1) 249 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59629-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:51.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:50 vm03 bash[23382]: audit 2026-03-10T07:30:49.633477+0000 mon.a (mon.0) 1969 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-47"}]: dispatch 2026-03-10T07:30:51.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:50 vm03 bash[23382]: audit 2026-03-10T07:30:49.633477+0000 mon.a (mon.0) 1969 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-47"}]: dispatch 2026-03-10T07:30:51.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:50 vm03 bash[23382]: audit 2026-03-10T07:30:49.633585+0000 mon.a (mon.0) 1970 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59629-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:51.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:50 vm03 bash[23382]: audit 2026-03-10T07:30:49.633585+0000 mon.a (mon.0) 1970 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59629-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:51.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:50 vm03 bash[23382]: audit 2026-03-10T07:30:50.629415+0000 mon.a (mon.0) 1971 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-47"}]': finished 2026-03-10T07:30:51.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:50 vm03 bash[23382]: audit 2026-03-10T07:30:50.629415+0000 mon.a (mon.0) 1971 : audit [INF] from='client.? 
192.168.123.100:0/923107848' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-47"}]': finished 2026-03-10T07:30:51.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:50 vm00 bash[28005]: audit 2026-03-10T07:30:49.626826+0000 mon.b (mon.1) 249 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59629-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:51.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:50 vm00 bash[28005]: audit 2026-03-10T07:30:49.626826+0000 mon.b (mon.1) 249 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59629-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:51.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:50 vm00 bash[28005]: audit 2026-03-10T07:30:49.633477+0000 mon.a (mon.0) 1969 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-47"}]: dispatch 2026-03-10T07:30:51.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:50 vm00 bash[28005]: audit 2026-03-10T07:30:49.633477+0000 mon.a (mon.0) 1969 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-47"}]: dispatch 2026-03-10T07:30:51.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:50 vm00 bash[28005]: audit 2026-03-10T07:30:49.633585+0000 mon.a (mon.0) 1970 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59629-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:51.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:50 vm00 bash[28005]: audit 2026-03-10T07:30:49.633585+0000 mon.a (mon.0) 1970 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59629-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:51.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:50 vm00 bash[28005]: audit 2026-03-10T07:30:50.629415+0000 mon.a (mon.0) 1971 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-47"}]': finished 2026-03-10T07:30:51.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:50 vm00 bash[28005]: audit 2026-03-10T07:30:50.629415+0000 mon.a (mon.0) 1971 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-47"}]': finished 2026-03-10T07:30:51.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:50 vm00 bash[20701]: audit 2026-03-10T07:30:49.626826+0000 mon.b (mon.1) 249 : audit [INF] from='client.? 
192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59629-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:51.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:50 vm00 bash[20701]: audit 2026-03-10T07:30:49.626826+0000 mon.b (mon.1) 249 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59629-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:51.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:50 vm00 bash[20701]: audit 2026-03-10T07:30:49.633477+0000 mon.a (mon.0) 1969 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-47"}]: dispatch 2026-03-10T07:30:51.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:50 vm00 bash[20701]: audit 2026-03-10T07:30:49.633477+0000 mon.a (mon.0) 1969 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-47"}]: dispatch 2026-03-10T07:30:51.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:50 vm00 bash[20701]: audit 2026-03-10T07:30:49.633585+0000 mon.a (mon.0) 1970 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59629-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:51.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:50 vm00 bash[20701]: audit 2026-03-10T07:30:49.633585+0000 mon.a (mon.0) 1970 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59629-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:51.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:50 vm00 bash[20701]: audit 2026-03-10T07:30:50.629415+0000 mon.a (mon.0) 1971 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-47"}]': finished 2026-03-10T07:30:51.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:50 vm00 bash[20701]: audit 2026-03-10T07:30:50.629415+0000 mon.a (mon.0) 1971 : audit [INF] from='client.? 
192.168.123.100:0/923107848' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm00-59637-47"}]': finished 2026-03-10T07:30:51.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:30:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:30:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: cluster 2026-03-10T07:30:50.614764+0000 mgr.y (mgr.24407) 236 : cluster [DBG] pgmap v308: 292 pgs: 4 active+clean+snaptrim, 17 active+clean+snaptrim_wait, 271 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 13 KiB/s rd, 1006 KiB/s wr, 24 op/s 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: cluster 2026-03-10T07:30:50.614764+0000 mgr.y (mgr.24407) 236 : cluster [DBG] pgmap v308: 292 pgs: 4 active+clean+snaptrim, 17 active+clean+snaptrim_wait, 271 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 13 KiB/s rd, 1006 KiB/s wr, 24 op/s 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: cluster 2026-03-10T07:30:50.655670+0000 mon.a (mon.0) 1972 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: cluster 2026-03-10T07:30:50.655670+0000 mon.a (mon.0) 1972 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:50.658162+0000 mon.a (mon.0) 1973 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-47"}]: dispatch 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:50.658162+0000 mon.a (mon.0) 1973 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-47"}]: dispatch 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: cluster 2026-03-10T07:30:50.967308+0000 mon.a (mon.0) 1974 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: cluster 2026-03-10T07:30:50.967308+0000 mon.a (mon.0) 1974 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:50.972880+0000 mon.a (mon.0) 1975 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59629-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59629-39"}]': finished 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:50.972880+0000 mon.a (mon.0) 1975 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59629-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59629-39"}]': finished 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:50.973014+0000 mon.a (mon.0) 1976 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-47"}]': finished 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:50.973014+0000 mon.a (mon.0) 1976 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-47"}]': finished 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:51.016164+0000 mon.b (mon.1) 250 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:51.016164+0000 mon.b (mon.1) 250 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: cluster 2026-03-10T07:30:51.016525+0000 mon.a (mon.0) 1977 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: cluster 2026-03-10T07:30:51.016525+0000 mon.a (mon.0) 1977 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:51.023460+0000 mon.a (mon.0) 1978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:51.023460+0000 mon.a (mon.0) 1978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:51.039644+0000 mon.c (mon.2) 239 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:51.039644+0000 mon.c (mon.2) 239 : audit [INF] from='client.? 
192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:51.040457+0000 mon.a (mon.0) 1979 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:51.040457+0000 mon.a (mon.0) 1979 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:51.040940+0000 mon.c (mon.2) 240 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:51.040940+0000 mon.c (mon.2) 240 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:51.041318+0000 mon.a (mon.0) 1980 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:51.041318+0000 mon.a (mon.0) 1980 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:51.042026+0000 mon.c (mon.2) 241 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59637-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:52.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:51.042026+0000 mon.c (mon.2) 241 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59637-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:52.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:51.042405+0000 mon.a (mon.0) 1981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59637-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:52.015 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:51 vm03 bash[23382]: audit 2026-03-10T07:30:51.042405+0000 mon.a (mon.0) 1981 : audit [INF] from='client.? 
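[annotation] The lone mgr line in this window is Prometheus (scraping from 192.168.123.103, presumably the prometheus.a host) getting an HTTP 503 from mgr.y's /metrics endpoint; the mgr prometheus module typically answers 503 when it cannot produce metrics at that moment, e.g. while the active mgr is still warming up. A quick manual probe, assuming the module's default port 9283 and that mgr.y runs on the vm00 target (both assumptions for illustration):
    # hostname and port are assumed, not taken from this log
    curl -fsS http://vm00.local:9283/metrics | head
    ceph mgr services    # prints the active mgr's service URLs, including prometheus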
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59637-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: cluster 2026-03-10T07:30:50.614764+0000 mgr.y (mgr.24407) 236 : cluster [DBG] pgmap v308: 292 pgs: 4 active+clean+snaptrim, 17 active+clean+snaptrim_wait, 271 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 13 KiB/s rd, 1006 KiB/s wr, 24 op/s 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: cluster 2026-03-10T07:30:50.614764+0000 mgr.y (mgr.24407) 236 : cluster [DBG] pgmap v308: 292 pgs: 4 active+clean+snaptrim, 17 active+clean+snaptrim_wait, 271 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 13 KiB/s rd, 1006 KiB/s wr, 24 op/s 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: cluster 2026-03-10T07:30:50.655670+0000 mon.a (mon.0) 1972 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: cluster 2026-03-10T07:30:50.655670+0000 mon.a (mon.0) 1972 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:50.658162+0000 mon.a (mon.0) 1973 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-47"}]: dispatch 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:50.658162+0000 mon.a (mon.0) 1973 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-47"}]: dispatch 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: cluster 2026-03-10T07:30:50.967308+0000 mon.a (mon.0) 1974 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: cluster 2026-03-10T07:30:50.967308+0000 mon.a (mon.0) 1974 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:50.972880+0000 mon.a (mon.0) 1975 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59629-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59629-39"}]': finished 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:50.972880+0000 mon.a (mon.0) 1975 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59629-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59629-39"}]': finished 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:50.973014+0000 mon.a (mon.0) 1976 : audit [INF] from='client.? 
192.168.123.100:0/923107848' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-47"}]': finished 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:50.973014+0000 mon.a (mon.0) 1976 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-47"}]': finished 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:51.016164+0000 mon.b (mon.1) 250 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:51.016164+0000 mon.b (mon.1) 250 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: cluster 2026-03-10T07:30:51.016525+0000 mon.a (mon.0) 1977 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: cluster 2026-03-10T07:30:51.016525+0000 mon.a (mon.0) 1977 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:51.023460+0000 mon.a (mon.0) 1978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:51.023460+0000 mon.a (mon.0) 1978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:51.039644+0000 mon.c (mon.2) 239 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:51.039644+0000 mon.c (mon.2) 239 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:51.040457+0000 mon.a (mon.0) 1979 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:51.040457+0000 mon.a (mon.0) 1979 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:51.040940+0000 mon.c (mon.2) 240 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:51.040940+0000 mon.c (mon.2) 240 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:51.041318+0000 mon.a (mon.0) 1980 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:51.041318+0000 mon.a (mon.0) 1980 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:51.042026+0000 mon.c (mon.2) 241 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59637-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:51.042026+0000 mon.c (mon.2) 241 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59637-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:51.042405+0000 mon.a (mon.0) 1981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59637-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:51 vm00 bash[28005]: audit 2026-03-10T07:30:51.042405+0000 mon.a (mon.0) 1981 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59637-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:52.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: cluster 2026-03-10T07:30:50.614764+0000 mgr.y (mgr.24407) 236 : cluster [DBG] pgmap v308: 292 pgs: 4 active+clean+snaptrim, 17 active+clean+snaptrim_wait, 271 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 13 KiB/s rd, 1006 KiB/s wr, 24 op/s 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: cluster 2026-03-10T07:30:50.614764+0000 mgr.y (mgr.24407) 236 : cluster [DBG] pgmap v308: 292 pgs: 4 active+clean+snaptrim, 17 active+clean+snaptrim_wait, 271 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 13 KiB/s rd, 1006 KiB/s wr, 24 op/s 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: cluster 2026-03-10T07:30:50.655670+0000 mon.a (mon.0) 1972 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: cluster 2026-03-10T07:30:50.655670+0000 mon.a (mon.0) 1972 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:50.658162+0000 mon.a (mon.0) 1973 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-47"}]: dispatch 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:50.658162+0000 mon.a (mon.0) 1973 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-47"}]: dispatch 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: cluster 2026-03-10T07:30:50.967308+0000 mon.a (mon.0) 1974 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: cluster 2026-03-10T07:30:50.967308+0000 mon.a (mon.0) 1974 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:50.972880+0000 mon.a (mon.0) 1975 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59629-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59629-39"}]': finished 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:50.972880+0000 mon.a (mon.0) 1975 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm00-59629-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm00-59629-39"}]': finished 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:50.973014+0000 mon.a (mon.0) 1976 : audit [INF] from='client.? 
192.168.123.100:0/923107848' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-47"}]': finished 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:50.973014+0000 mon.a (mon.0) 1976 : audit [INF] from='client.? 192.168.123.100:0/923107848' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm00-59637-47"}]': finished 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:51.016164+0000 mon.b (mon.1) 250 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:51.016164+0000 mon.b (mon.1) 250 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: cluster 2026-03-10T07:30:51.016525+0000 mon.a (mon.0) 1977 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: cluster 2026-03-10T07:30:51.016525+0000 mon.a (mon.0) 1977 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:51.023460+0000 mon.a (mon.0) 1978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:51.023460+0000 mon.a (mon.0) 1978 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:51.039644+0000 mon.c (mon.2) 239 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:51.039644+0000 mon.c (mon.2) 239 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:51.040457+0000 mon.a (mon.0) 1979 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:51.040457+0000 mon.a (mon.0) 1979 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:51.040940+0000 mon.c (mon.2) 240 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:51.040940+0000 mon.c (mon.2) 240 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:51.041318+0000 mon.a (mon.0) 1980 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:51.041318+0000 mon.a (mon.0) 1980 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:51.042026+0000 mon.c (mon.2) 241 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59637-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:51.042026+0000 mon.c (mon.2) 241 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59637-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:51.042405+0000 mon.a (mon.0) 1981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59637-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:51 vm00 bash[20701]: audit 2026-03-10T07:30:51.042405+0000 mon.a (mon.0) 1981 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59637-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:53 vm00 bash[28005]: audit 2026-03-10T07:30:51.975458+0000 mon.a (mon.0) 1982 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:53 vm00 bash[28005]: audit 2026-03-10T07:30:51.975458+0000 mon.a (mon.0) 1982 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:53 vm00 bash[28005]: audit 2026-03-10T07:30:51.975802+0000 mon.a (mon.0) 1983 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59637-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:53 vm00 bash[28005]: audit 2026-03-10T07:30:51.975802+0000 mon.a (mon.0) 1983 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59637-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:53 vm00 bash[28005]: audit 2026-03-10T07:30:51.990230+0000 mon.b (mon.1) 251 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:53 vm00 bash[28005]: audit 2026-03-10T07:30:51.990230+0000 mon.b (mon.1) 251 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:53 vm00 bash[28005]: cluster 2026-03-10T07:30:51.993573+0000 mon.a (mon.0) 1984 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:53 vm00 bash[28005]: cluster 2026-03-10T07:30:51.993573+0000 mon.a (mon.0) 1984 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:53 vm00 bash[28005]: audit 2026-03-10T07:30:52.010496+0000 mon.c (mon.2) 242 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59637-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:53 vm00 bash[28005]: audit 2026-03-10T07:30:52.010496+0000 mon.c (mon.2) 242 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59637-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:53 vm00 bash[28005]: audit 2026-03-10T07:30:52.016954+0000 mon.a (mon.0) 1985 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:53 vm00 bash[28005]: audit 2026-03-10T07:30:52.016954+0000 mon.a (mon.0) 1985 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:53 vm00 bash[28005]: audit 2026-03-10T07:30:52.017067+0000 mon.a (mon.0) 1986 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59637-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:53 vm00 bash[28005]: audit 2026-03-10T07:30:52.017067+0000 mon.a (mon.0) 1986 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59637-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:53 vm00 bash[20701]: audit 2026-03-10T07:30:51.975458+0000 mon.a (mon.0) 1982 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:53 vm00 bash[20701]: audit 2026-03-10T07:30:51.975458+0000 mon.a (mon.0) 1982 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:53 vm00 bash[20701]: audit 2026-03-10T07:30:51.975802+0000 mon.a (mon.0) 1983 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59637-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:53 vm00 bash[20701]: audit 2026-03-10T07:30:51.975802+0000 mon.a (mon.0) 1983 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59637-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:53 vm00 bash[20701]: audit 2026-03-10T07:30:51.990230+0000 mon.b (mon.1) 251 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:53 vm00 bash[20701]: audit 2026-03-10T07:30:51.990230+0000 mon.b (mon.1) 251 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:53 vm00 bash[20701]: cluster 2026-03-10T07:30:51.993573+0000 mon.a (mon.0) 1984 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:53 vm00 bash[20701]: cluster 2026-03-10T07:30:51.993573+0000 mon.a (mon.0) 1984 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:53 vm00 bash[20701]: audit 2026-03-10T07:30:52.010496+0000 mon.c (mon.2) 242 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59637-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:53 vm00 bash[20701]: audit 2026-03-10T07:30:52.010496+0000 mon.c (mon.2) 242 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59637-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:53 vm00 bash[20701]: audit 2026-03-10T07:30:52.016954+0000 mon.a (mon.0) 1985 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:53 vm00 bash[20701]: audit 2026-03-10T07:30:52.016954+0000 mon.a (mon.0) 1985 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:53.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:53 vm00 bash[20701]: audit 2026-03-10T07:30:52.017067+0000 mon.a (mon.0) 1986 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59637-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:53.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:53 vm00 bash[20701]: audit 2026-03-10T07:30:52.017067+0000 mon.a (mon.0) 1986 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59637-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:53.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:30:53 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:30:53.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:53 vm03 bash[23382]: audit 2026-03-10T07:30:51.975458+0000 mon.a (mon.0) 1982 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:53.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:53 vm03 bash[23382]: audit 2026-03-10T07:30:51.975458+0000 mon.a (mon.0) 1982 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:30:53.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:53 vm03 bash[23382]: audit 2026-03-10T07:30:51.975802+0000 mon.a (mon.0) 1983 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59637-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:53.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:53 vm03 bash[23382]: audit 2026-03-10T07:30:51.975802+0000 mon.a (mon.0) 1983 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm00-59637-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:53.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:53 vm03 bash[23382]: audit 2026-03-10T07:30:51.990230+0000 mon.b (mon.1) 251 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:53.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:53 vm03 bash[23382]: audit 2026-03-10T07:30:51.990230+0000 mon.b (mon.1) 251 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:53.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:53 vm03 bash[23382]: cluster 2026-03-10T07:30:51.993573+0000 mon.a (mon.0) 1984 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-10T07:30:53.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:53 vm03 bash[23382]: cluster 2026-03-10T07:30:51.993573+0000 mon.a (mon.0) 1984 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-10T07:30:53.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:53 vm03 bash[23382]: audit 2026-03-10T07:30:52.010496+0000 mon.c (mon.2) 242 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59637-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:53.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:53 vm03 bash[23382]: audit 2026-03-10T07:30:52.010496+0000 mon.c (mon.2) 242 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59637-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:53.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:53 vm03 bash[23382]: audit 2026-03-10T07:30:52.016954+0000 mon.a (mon.0) 1985 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:53.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:53 vm03 bash[23382]: audit 2026-03-10T07:30:52.016954+0000 mon.a (mon.0) 1985 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:30:53.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:53 vm03 bash[23382]: audit 2026-03-10T07:30:52.017067+0000 mon.a (mon.0) 1986 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59637-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:53.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:53 vm03 bash[23382]: audit 2026-03-10T07:30:52.017067+0000 mon.a (mon.0) 1986 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59637-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: cluster 2026-03-10T07:30:52.615125+0000 mgr.y (mgr.24407) 237 : cluster [DBG] pgmap v312: 300 pgs: 40 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: cluster 2026-03-10T07:30:52.615125+0000 mgr.y (mgr.24407) 237 : cluster [DBG] pgmap v312: 300 pgs: 40 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:53.040624+0000 mon.a (mon.0) 1987 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:53.040624+0000 mon.a (mon.0) 1987 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:53.043358+0000 mon.b (mon.1) 252 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-31"}]: dispatch 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:53.043358+0000 mon.b (mon.1) 252 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-31"}]: dispatch 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:53.053831+0000 mon.b (mon.1) 253 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:53.053831+0000 mon.b (mon.1) 253 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: cluster 2026-03-10T07:30:53.054549+0000 mon.a (mon.0) 1988 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: cluster 2026-03-10T07:30:53.054549+0000 mon.a (mon.0) 1988 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:53.058952+0000 mon.a (mon.0) 1989 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-31"}]: dispatch 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:53.058952+0000 mon.a (mon.0) 1989 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-31"}]: dispatch 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:53.059022+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:53.059022+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:53.080084+0000 mgr.y (mgr.24407) 238 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:53.080084+0000 mgr.y (mgr.24407) 238 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:54.045973+0000 mon.a (mon.0) 1991 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59637-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59637-48"}]': finished 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:54.045973+0000 mon.a (mon.0) 1991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59637-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59637-48"}]': finished 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:54.046369+0000 mon.a (mon.0) 1992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-31"}]': finished 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:54.046369+0000 mon.a (mon.0) 1992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-31"}]': finished 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:54.046513+0000 mon.a (mon.0) 1993 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]': finished 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:54.046513+0000 mon.a (mon.0) 1993 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]': finished 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:54.048684+0000 mon.b (mon.1) 254 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-31", "mode": "writeback"}]: dispatch 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:54.048684+0000 mon.b (mon.1) 254 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-31", "mode": "writeback"}]: dispatch 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:54.048898+0000 mon.b (mon.1) 255 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:54.048898+0000 mon.b (mon.1) 255 : audit [INF] from='client.? 
192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: cluster 2026-03-10T07:30:54.057943+0000 mon.a (mon.0) 1994 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: cluster 2026-03-10T07:30:54.057943+0000 mon.a (mon.0) 1994 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:54.061690+0000 mon.a (mon.0) 1995 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-31", "mode": "writeback"}]: dispatch 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:54.061690+0000 mon.a (mon.0) 1995 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-31", "mode": "writeback"}]: dispatch 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:54.061785+0000 mon.a (mon.0) 1996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:54 vm00 bash[28005]: audit 2026-03-10T07:30:54.061785+0000 mon.a (mon.0) 1996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: cluster 2026-03-10T07:30:52.615125+0000 mgr.y (mgr.24407) 237 : cluster [DBG] pgmap v312: 300 pgs: 40 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: cluster 2026-03-10T07:30:52.615125+0000 mgr.y (mgr.24407) 237 : cluster [DBG] pgmap v312: 300 pgs: 40 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:53.040624+0000 mon.a (mon.0) 1987 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:53.040624+0000 mon.a (mon.0) 1987 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:53.043358+0000 mon.b (mon.1) 252 : audit [INF] from='client.? 
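[annotation] Audit entries 251/1985/1987 (osd tier add with --force-nonempty), 252/1989/1992 (osd tier set-overlay) and 254/1995 (osd tier cache-mode ... writeback) are the cache-tier setup the tier tests drive: pool test-rados-api-vm00-59782-31 becomes a writeback cache in front of base pool test-rados-api-vm00-59782-6. The equivalent CLI sequence, in the order the audit trail shows, with placeholder pool names:
    # --force-nonempty mirrors the force_nonempty flag in the audit entries
    ceph osd tier add base-pool cache-pool --force-nonempty
    ceph osd tier set-overlay base-pool cache-pool
    ceph osd tier cache-mode cache-pool writeback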
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-31"}]: dispatch 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:53.043358+0000 mon.b (mon.1) 252 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-31"}]: dispatch 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:53.053831+0000 mon.b (mon.1) 253 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:53.053831+0000 mon.b (mon.1) 253 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: cluster 2026-03-10T07:30:53.054549+0000 mon.a (mon.0) 1988 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: cluster 2026-03-10T07:30:53.054549+0000 mon.a (mon.0) 1988 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:53.058952+0000 mon.a (mon.0) 1989 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-31"}]: dispatch 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:53.058952+0000 mon.a (mon.0) 1989 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-31"}]: dispatch 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:53.059022+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:53.059022+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:53.080084+0000 mgr.y (mgr.24407) 238 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:53.080084+0000 mgr.y (mgr.24407) 238 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:54.045973+0000 mon.a (mon.0) 1991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59637-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59637-48"}]': finished 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:54.045973+0000 mon.a (mon.0) 1991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59637-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59637-48"}]': finished 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:54.046369+0000 mon.a (mon.0) 1992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-31"}]': finished 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:54.046369+0000 mon.a (mon.0) 1992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-31"}]': finished 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:54.046513+0000 mon.a (mon.0) 1993 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]': finished 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:54.046513+0000 mon.a (mon.0) 1993 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]': finished 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:54.048684+0000 mon.b (mon.1) 254 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-31", "mode": "writeback"}]: dispatch 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:54.048684+0000 mon.b (mon.1) 254 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-31", "mode": "writeback"}]: dispatch 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:54.048898+0000 mon.b (mon.1) 255 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:54.048898+0000 mon.b (mon.1) 255 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: cluster 2026-03-10T07:30:54.057943+0000 mon.a (mon.0) 1994 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: cluster 2026-03-10T07:30:54.057943+0000 mon.a (mon.0) 1994 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:54.061690+0000 mon.a (mon.0) 1995 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-31", "mode": "writeback"}]: dispatch 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:54.061690+0000 mon.a (mon.0) 1995 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-31", "mode": "writeback"}]: dispatch 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:54.061785+0000 mon.a (mon.0) 1996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:54 vm00 bash[20701]: audit 2026-03-10T07:30:54.061785+0000 mon.a (mon.0) 1996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: cluster 2026-03-10T07:30:52.615125+0000 mgr.y (mgr.24407) 237 : cluster [DBG] pgmap v312: 300 pgs: 40 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: cluster 2026-03-10T07:30:52.615125+0000 mgr.y (mgr.24407) 237 : cluster [DBG] pgmap v312: 300 pgs: 40 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:53.040624+0000 mon.a (mon.0) 1987 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:53.040624+0000 mon.a (mon.0) 1987 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:53.043358+0000 mon.b (mon.1) 252 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-31"}]: dispatch 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:53.043358+0000 mon.b (mon.1) 252 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-31"}]: dispatch 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:53.053831+0000 mon.b (mon.1) 253 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:53.053831+0000 mon.b (mon.1) 253 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: cluster 2026-03-10T07:30:53.054549+0000 mon.a (mon.0) 1988 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: cluster 2026-03-10T07:30:53.054549+0000 mon.a (mon.0) 1988 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:53.058952+0000 mon.a (mon.0) 1989 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-31"}]: dispatch 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:53.058952+0000 mon.a (mon.0) 1989 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-31"}]: dispatch 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:53.059022+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:53.059022+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:53.080084+0000 mgr.y (mgr.24407) 238 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:53.080084+0000 mgr.y (mgr.24407) 238 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:54.045973+0000 mon.a (mon.0) 1991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59637-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59637-48"}]': finished 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:54.045973+0000 mon.a (mon.0) 1991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm00-59637-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm00-59637-48"}]': finished 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:54.046369+0000 mon.a (mon.0) 1992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-31"}]': finished 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:54.046369+0000 mon.a (mon.0) 1992 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-31"}]': finished 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:54.046513+0000 mon.a (mon.0) 1993 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]': finished 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:54.046513+0000 mon.a (mon.0) 1993 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm00-59629-39"}]': finished 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:54.048684+0000 mon.b (mon.1) 254 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-31", "mode": "writeback"}]: dispatch 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:54.048684+0000 mon.b (mon.1) 254 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-31", "mode": "writeback"}]: dispatch 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:54.048898+0000 mon.b (mon.1) 255 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:54.048898+0000 mon.b (mon.1) 255 : audit [INF] from='client.? 192.168.123.100:0/1290364237' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: cluster 2026-03-10T07:30:54.057943+0000 mon.a (mon.0) 1994 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: cluster 2026-03-10T07:30:54.057943+0000 mon.a (mon.0) 1994 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:54.061690+0000 mon.a (mon.0) 1995 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-31", "mode": "writeback"}]: dispatch 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:54.061690+0000 mon.a (mon.0) 1995 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-31", "mode": "writeback"}]: dispatch 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:54.061785+0000 mon.a (mon.0) 1996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:54 vm03 bash[23382]: audit 2026-03-10T07:30:54.061785+0000 mon.a (mon.0) 1996 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]: dispatch 2026-03-10T07:30:55.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:55 vm00 bash[28005]: audit 2026-03-10T07:30:54.319770+0000 mon.c (mon.2) 243 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:30:55.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:55 vm00 bash[28005]: audit 2026-03-10T07:30:54.319770+0000 mon.c (mon.2) 243 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:30:55.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:55 vm00 bash[28005]: cluster 2026-03-10T07:30:55.046284+0000 mon.a (mon.0) 1997 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:30:55.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:55 vm00 bash[28005]: cluster 2026-03-10T07:30:55.046284+0000 mon.a (mon.0) 1997 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:30:55.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:55 vm00 bash[20701]: audit 2026-03-10T07:30:54.319770+0000 mon.c (mon.2) 243 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:30:55.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:55 vm00 bash[20701]: audit 2026-03-10T07:30:54.319770+0000 mon.c (mon.2) 243 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:30:55.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:55 vm00 bash[20701]: cluster 2026-03-10T07:30:55.046284+0000 mon.a (mon.0) 1997 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:30:55.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:55 vm00 bash[20701]: cluster 2026-03-10T07:30:55.046284+0000 mon.a (mon.0) 1997 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:30:55.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:55 vm03 bash[23382]: audit 2026-03-10T07:30:54.319770+0000 mon.c (mon.2) 243 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:30:55.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:55 vm03 bash[23382]: audit 2026-03-10T07:30:54.319770+0000 mon.c (mon.2) 243 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:30:55.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:55 vm03 bash[23382]: cluster 2026-03-10T07:30:55.046284+0000 mon.a (mon.0) 1997 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:30:55.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:55 vm03 bash[23382]: cluster 2026-03-10T07:30:55.046284+0000 mon.a (mon.0) 1997 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:30:56.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: cluster 2026-03-10T07:30:54.615541+0000 mgr.y (mgr.24407) 239 : cluster [DBG] pgmap 
v315: 300 pgs: 3 creating+peering, 27 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 258 active+clean; 8.4 MiB data, 728 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:56.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: cluster 2026-03-10T07:30:54.615541+0000 mgr.y (mgr.24407) 239 : cluster [DBG] pgmap v315: 300 pgs: 3 creating+peering, 27 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 258 active+clean; 8.4 MiB data, 728 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:56.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: audit 2026-03-10T07:30:55.049920+0000 mon.a (mon.0) 1998 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-31", "mode": "writeback"}]': finished 2026-03-10T07:30:56.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: audit 2026-03-10T07:30:55.049920+0000 mon.a (mon.0) 1998 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-31", "mode": "writeback"}]': finished 2026-03-10T07:30:56.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: audit 2026-03-10T07:30:55.050066+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]': finished 2026-03-10T07:30:56.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: audit 2026-03-10T07:30:55.050066+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]': finished 2026-03-10T07:30:56.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: cluster 2026-03-10T07:30:55.054210+0000 mon.a (mon.0) 2000 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-10T07:30:56.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: cluster 2026-03-10T07:30:55.054210+0000 mon.a (mon.0) 2000 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-10T07:30:56.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: audit 2026-03-10T07:30:55.071811+0000 mon.c (mon.2) 244 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: audit 2026-03-10T07:30:55.071811+0000 mon.c (mon.2) 244 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: audit 2026-03-10T07:30:55.079753+0000 mon.a (mon.0) 2001 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: audit 2026-03-10T07:30:55.079753+0000 mon.a (mon.0) 2001 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: audit 2026-03-10T07:30:55.082403+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: audit 2026-03-10T07:30:55.082403+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: audit 2026-03-10T07:30:55.098131+0000 mon.a (mon.0) 2002 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: audit 2026-03-10T07:30:55.098131+0000 mon.a (mon.0) 2002 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: audit 2026-03-10T07:30:55.099311+0000 mon.c (mon.2) 246 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59629-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: audit 2026-03-10T07:30:55.099311+0000 mon.c (mon.2) 246 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59629-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: audit 2026-03-10T07:30:55.114469+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59629-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: audit 2026-03-10T07:30:55.114469+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59629-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: audit 2026-03-10T07:30:55.137424+0000 mon.b (mon.1) 256 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: audit 2026-03-10T07:30:55.137424+0000 mon.b (mon.1) 256 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: audit 2026-03-10T07:30:55.139180+0000 mon.a (mon.0) 2004 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: audit 2026-03-10T07:30:55.139180+0000 mon.a (mon.0) 2004 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: cluster 2026-03-10T07:30:55.970528+0000 mon.a (mon.0) 2005 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:56 vm00 bash[28005]: cluster 2026-03-10T07:30:55.970528+0000 mon.a (mon.0) 2005 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: cluster 2026-03-10T07:30:54.615541+0000 mgr.y (mgr.24407) 239 : cluster [DBG] pgmap v315: 300 pgs: 3 creating+peering, 27 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 258 active+clean; 8.4 MiB data, 728 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: cluster 2026-03-10T07:30:54.615541+0000 mgr.y (mgr.24407) 239 : cluster [DBG] pgmap v315: 300 pgs: 3 creating+peering, 27 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 258 active+clean; 8.4 MiB data, 728 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: audit 2026-03-10T07:30:55.049920+0000 mon.a (mon.0) 1998 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-31", "mode": "writeback"}]': finished 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: audit 2026-03-10T07:30:55.049920+0000 mon.a (mon.0) 1998 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-31", "mode": "writeback"}]': finished 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: audit 2026-03-10T07:30:55.050066+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]': finished 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: audit 2026-03-10T07:30:55.050066+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]': finished 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: cluster 2026-03-10T07:30:55.054210+0000 mon.a (mon.0) 2000 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: cluster 2026-03-10T07:30:55.054210+0000 mon.a (mon.0) 2000 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: audit 2026-03-10T07:30:55.071811+0000 mon.c (mon.2) 244 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: audit 2026-03-10T07:30:55.071811+0000 mon.c (mon.2) 244 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: audit 2026-03-10T07:30:55.079753+0000 mon.a (mon.0) 2001 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: audit 2026-03-10T07:30:55.079753+0000 mon.a (mon.0) 2001 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: audit 2026-03-10T07:30:55.082403+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: audit 2026-03-10T07:30:55.082403+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: audit 2026-03-10T07:30:55.098131+0000 mon.a (mon.0) 2002 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: audit 2026-03-10T07:30:55.098131+0000 mon.a (mon.0) 2002 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: audit 2026-03-10T07:30:55.099311+0000 mon.c (mon.2) 246 : audit [INF] from='client.? 
192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59629-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: audit 2026-03-10T07:30:55.099311+0000 mon.c (mon.2) 246 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59629-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: audit 2026-03-10T07:30:55.114469+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59629-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: audit 2026-03-10T07:30:55.114469+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59629-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: audit 2026-03-10T07:30:55.137424+0000 mon.b (mon.1) 256 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: audit 2026-03-10T07:30:55.137424+0000 mon.b (mon.1) 256 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: audit 2026-03-10T07:30:55.139180+0000 mon.a (mon.0) 2004 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: audit 2026-03-10T07:30:55.139180+0000 mon.a (mon.0) 2004 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: cluster 2026-03-10T07:30:55.970528+0000 mon.a (mon.0) 2005 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:56 vm00 bash[20701]: cluster 2026-03-10T07:30:55.970528+0000 mon.a (mon.0) 2005 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:56.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: cluster 2026-03-10T07:30:54.615541+0000 mgr.y (mgr.24407) 239 : cluster [DBG] pgmap v315: 300 pgs: 3 creating+peering, 27 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 258 active+clean; 8.4 MiB data, 728 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:56.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: cluster 2026-03-10T07:30:54.615541+0000 mgr.y (mgr.24407) 239 : cluster [DBG] pgmap v315: 300 pgs: 3 creating+peering, 27 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 258 active+clean; 8.4 MiB data, 728 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:30:56.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: audit 2026-03-10T07:30:55.049920+0000 mon.a (mon.0) 1998 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-31", "mode": "writeback"}]': finished 2026-03-10T07:30:56.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: audit 2026-03-10T07:30:55.049920+0000 mon.a (mon.0) 1998 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-31", "mode": "writeback"}]': finished 2026-03-10T07:30:56.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: audit 2026-03-10T07:30:55.050066+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]': finished 2026-03-10T07:30:56.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: audit 2026-03-10T07:30:55.050066+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm00-59629-39"}]': finished 2026-03-10T07:30:56.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: cluster 2026-03-10T07:30:55.054210+0000 mon.a (mon.0) 2000 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-10T07:30:56.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: cluster 2026-03-10T07:30:55.054210+0000 mon.a (mon.0) 2000 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-10T07:30:56.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: audit 2026-03-10T07:30:55.071811+0000 mon.c (mon.2) 244 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: audit 2026-03-10T07:30:55.071811+0000 mon.c (mon.2) 244 : audit [INF] from='client.? 
192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: audit 2026-03-10T07:30:55.079753+0000 mon.a (mon.0) 2001 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: audit 2026-03-10T07:30:55.079753+0000 mon.a (mon.0) 2001 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: audit 2026-03-10T07:30:55.082403+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: audit 2026-03-10T07:30:55.082403+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: audit 2026-03-10T07:30:55.098131+0000 mon.a (mon.0) 2002 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: audit 2026-03-10T07:30:55.098131+0000 mon.a (mon.0) 2002 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:56.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: audit 2026-03-10T07:30:55.099311+0000 mon.c (mon.2) 246 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59629-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:56.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: audit 2026-03-10T07:30:55.099311+0000 mon.c (mon.2) 246 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59629-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:56.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: audit 2026-03-10T07:30:55.114469+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59629-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:56.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: audit 2026-03-10T07:30:55.114469+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59629-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:30:56.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: audit 2026-03-10T07:30:55.137424+0000 mon.b (mon.1) 256 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:56.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: audit 2026-03-10T07:30:55.137424+0000 mon.b (mon.1) 256 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:56.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: audit 2026-03-10T07:30:55.139180+0000 mon.a (mon.0) 2004 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:56.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: audit 2026-03-10T07:30:55.139180+0000 mon.a (mon.0) 2004 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:30:56.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: cluster 2026-03-10T07:30:55.970528+0000 mon.a (mon.0) 2005 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:56.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:56 vm03 bash[23382]: cluster 2026-03-10T07:30:55.970528+0000 mon.a (mon.0) 2005 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:30:57.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:57 vm00 bash[28005]: audit 2026-03-10T07:30:56.117028+0000 mon.a (mon.0) 2006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59629-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:57.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:57 vm00 bash[28005]: audit 2026-03-10T07:30:56.117028+0000 mon.a (mon.0) 2006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59629-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:57.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:57 vm00 bash[28005]: audit 2026-03-10T07:30:56.117127+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:57.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:57 vm00 bash[28005]: audit 2026-03-10T07:30:56.117127+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:57.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:57 vm00 bash[28005]: audit 2026-03-10T07:30:56.119483+0000 mon.b (mon.1) 257 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31"}]: dispatch 2026-03-10T07:30:57.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:57 vm00 bash[28005]: audit 2026-03-10T07:30:56.119483+0000 mon.b (mon.1) 257 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31"}]: dispatch 2026-03-10T07:30:57.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:57 vm00 bash[28005]: audit 2026-03-10T07:30:56.121262+0000 mon.c (mon.2) 247 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59629-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:57.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:57 vm00 bash[28005]: audit 2026-03-10T07:30:56.121262+0000 mon.c (mon.2) 247 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59629-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:57.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:57 vm00 bash[28005]: cluster 2026-03-10T07:30:56.122397+0000 mon.a (mon.0) 2008 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:57 vm00 bash[28005]: cluster 2026-03-10T07:30:56.122397+0000 mon.a (mon.0) 2008 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:57 vm00 bash[28005]: audit 2026-03-10T07:30:56.124851+0000 mon.c (mon.2) 248 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:57 vm00 bash[28005]: audit 2026-03-10T07:30:56.124851+0000 mon.c (mon.2) 248 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:57 vm00 bash[28005]: audit 2026-03-10T07:30:56.124951+0000 mon.a (mon.0) 2009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31"}]: dispatch 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:57 vm00 bash[28005]: audit 2026-03-10T07:30:56.124951+0000 mon.a (mon.0) 2009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31"}]: dispatch 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:57 vm00 bash[28005]: audit 2026-03-10T07:30:56.125106+0000 mon.a (mon.0) 2010 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59629-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:57 vm00 bash[28005]: audit 2026-03-10T07:30:56.125106+0000 mon.a (mon.0) 2010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59629-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:57 vm00 bash[28005]: audit 2026-03-10T07:30:56.128831+0000 mon.a (mon.0) 2011 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:57 vm00 bash[28005]: audit 2026-03-10T07:30:56.128831+0000 mon.a (mon.0) 2011 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:57 vm00 bash[28005]: cluster 2026-03-10T07:30:57.117531+0000 mon.a (mon.0) 2012 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:57 vm00 bash[28005]: cluster 2026-03-10T07:30:57.117531+0000 mon.a (mon.0) 2012 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:57 vm00 bash[20701]: audit 2026-03-10T07:30:56.117028+0000 mon.a (mon.0) 2006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59629-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:57 vm00 bash[20701]: audit 2026-03-10T07:30:56.117028+0000 mon.a (mon.0) 2006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59629-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:57 vm00 bash[20701]: audit 2026-03-10T07:30:56.117127+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:57 vm00 bash[20701]: audit 2026-03-10T07:30:56.117127+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:57 vm00 bash[20701]: audit 2026-03-10T07:30:56.119483+0000 mon.b (mon.1) 257 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31"}]: dispatch 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:57 vm00 bash[20701]: audit 2026-03-10T07:30:56.119483+0000 mon.b (mon.1) 257 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31"}]: dispatch 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:57 vm00 bash[20701]: audit 2026-03-10T07:30:56.121262+0000 mon.c (mon.2) 247 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59629-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:57 vm00 bash[20701]: audit 2026-03-10T07:30:56.121262+0000 mon.c (mon.2) 247 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59629-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:57 vm00 bash[20701]: cluster 2026-03-10T07:30:56.122397+0000 mon.a (mon.0) 2008 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:57 vm00 bash[20701]: cluster 2026-03-10T07:30:56.122397+0000 mon.a (mon.0) 2008 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:57 vm00 bash[20701]: audit 2026-03-10T07:30:56.124851+0000 mon.c (mon.2) 248 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:57 vm00 bash[20701]: audit 2026-03-10T07:30:56.124851+0000 mon.c (mon.2) 248 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:57 vm00 bash[20701]: audit 2026-03-10T07:30:56.124951+0000 mon.a (mon.0) 2009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31"}]: dispatch 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:57 vm00 bash[20701]: audit 2026-03-10T07:30:56.124951+0000 mon.a (mon.0) 2009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31"}]: dispatch 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:57 vm00 bash[20701]: audit 2026-03-10T07:30:56.125106+0000 mon.a (mon.0) 2010 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59629-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:57 vm00 bash[20701]: audit 2026-03-10T07:30:56.125106+0000 mon.a (mon.0) 2010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59629-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:57 vm00 bash[20701]: audit 2026-03-10T07:30:56.128831+0000 mon.a (mon.0) 2011 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:57 vm00 bash[20701]: audit 2026-03-10T07:30:56.128831+0000 mon.a (mon.0) 2011 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:57 vm00 bash[20701]: cluster 2026-03-10T07:30:57.117531+0000 mon.a (mon.0) 2012 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:57 vm00 bash[20701]: cluster 2026-03-10T07:30:57.117531+0000 mon.a (mon.0) 2012 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:30:57.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:57 vm03 bash[23382]: audit 2026-03-10T07:30:56.117028+0000 mon.a (mon.0) 2006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59629-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:57.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:57 vm03 bash[23382]: audit 2026-03-10T07:30:56.117028+0000 mon.a (mon.0) 2006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm00-59629-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:30:57.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:57 vm03 bash[23382]: audit 2026-03-10T07:30:56.117127+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:57.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:57 vm03 bash[23382]: audit 2026-03-10T07:30:56.117127+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:30:57.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:57 vm03 bash[23382]: audit 2026-03-10T07:30:56.119483+0000 mon.b (mon.1) 257 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31"}]: dispatch
2026-03-10T07:30:57.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:57 vm03 bash[23382]: audit 2026-03-10T07:30:56.121262+0000 mon.c (mon.2) 247 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59629-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59629-40"}]: dispatch
2026-03-10T07:30:57.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:57 vm03 bash[23382]: cluster 2026-03-10T07:30:56.122397+0000 mon.a (mon.0) 2008 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in
2026-03-10T07:30:57.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:57 vm03 bash[23382]: audit 2026-03-10T07:30:56.124851+0000 mon.c (mon.2) 248 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch
2026-03-10T07:30:57.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:57 vm03 bash[23382]: audit 2026-03-10T07:30:56.124951+0000 mon.a (mon.0) 2009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31"}]: dispatch
2026-03-10T07:30:57.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:57 vm03 bash[23382]: audit 2026-03-10T07:30:56.125106+0000 mon.a (mon.0) 2010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59629-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59629-40"}]: dispatch
2026-03-10T07:30:57.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:57 vm03 bash[23382]: audit 2026-03-10T07:30:56.128831+0000 mon.a (mon.0) 2011 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]: dispatch
2026-03-10T07:30:57.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:57 vm03 bash[23382]: cluster 2026-03-10T07:30:57.117531+0000 mon.a (mon.0) 2012 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:30:58.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:58 vm03 bash[23382]: cluster 2026-03-10T07:30:56.615830+0000 mgr.y (mgr.24407) 240 : cluster [DBG] pgmap v318: 292 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:30:58.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:58 vm03 bash[23382]: audit 2026-03-10T07:30:57.154363+0000 mon.a (mon.0) 2013 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31"}]': finished
2026-03-10T07:30:58.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:58 vm03 bash[23382]: audit 2026-03-10T07:30:57.154430+0000 mon.a (mon.0) 2014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]': finished
2026-03-10T07:30:58.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:58 vm03 bash[23382]: audit 2026-03-10T07:30:57.160900+0000 mon.c (mon.2) 249 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59637-48"}]: dispatch
2026-03-10T07:30:58.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:58 vm03 bash[23382]: cluster 2026-03-10T07:30:57.173055+0000 mon.a (mon.0) 2015 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in
2026-03-10T07:30:58.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:58 vm03 bash[23382]: audit 2026-03-10T07:30:57.242894+0000 mon.a (mon.0) 2016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59637-48"}]: dispatch
2026-03-10T07:30:58.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:58 vm03 bash[23382]: audit 2026-03-10T07:30:58.157899+0000 mon.a (mon.0) 2017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59629-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59629-40"}]': finished
2026-03-10T07:30:58.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:58 vm03 bash[23382]: audit 2026-03-10T07:30:58.158005+0000 mon.a (mon.0) 2018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59637-48"}]': finished
2026-03-10T07:30:58.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:58 vm03 bash[23382]: cluster 2026-03-10T07:30:58.166279+0000 mon.a (mon.0) 2019 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in
2026-03-10T07:30:58.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:58 vm03 bash[23382]: audit 2026-03-10T07:30:58.181742+0000 mon.b (mon.1) 258 : audit [INF] from='client.? 192.168.123.100:0/3864831144' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59637-49"}]: dispatch
2026-03-10T07:30:58.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:58 vm03 bash[23382]: audit 2026-03-10T07:30:58.207529+0000 mon.b (mon.1) 259 : audit [INF] from='client.? 192.168.123.100:0/3864831144' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]: dispatch
2026-03-10T07:30:58.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:58 vm03 bash[23382]: audit 2026-03-10T07:30:58.207921+0000 mon.a (mon.0) 2020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59637-49"}]: dispatch
2026-03-10T07:30:58.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:58 vm03 bash[23382]: audit 2026-03-10T07:30:58.208552+0000 mon.b (mon.1) 260 : audit [INF] from='client.? 192.168.123.100:0/3864831144' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59637-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:58.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:58 vm03 bash[23382]: audit 2026-03-10T07:30:58.209247+0000 mon.a (mon.0) 2021 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]: dispatch
2026-03-10T07:30:58.515 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:30:58 vm03 bash[23382]: audit 2026-03-10T07:30:58.210131+0000 mon.a (mon.0) 2022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59637-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:58.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:58 vm00 bash[28005]: cluster 2026-03-10T07:30:56.615830+0000 mgr.y (mgr.24407) 240 : cluster [DBG] pgmap v318: 292 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:30:58.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:58 vm00 bash[28005]: audit 2026-03-10T07:30:57.154363+0000 mon.a (mon.0) 2013 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31"}]': finished
2026-03-10T07:30:58.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:58 vm00 bash[28005]: audit 2026-03-10T07:30:57.154430+0000 mon.a (mon.0) 2014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]': finished
2026-03-10T07:30:58.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:58 vm00 bash[28005]: audit 2026-03-10T07:30:57.160900+0000 mon.c (mon.2) 249 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59637-48"}]: dispatch
2026-03-10T07:30:58.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:58 vm00 bash[28005]: cluster 2026-03-10T07:30:57.173055+0000 mon.a (mon.0) 2015 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in
2026-03-10T07:30:58.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:58 vm00 bash[28005]: audit 2026-03-10T07:30:57.242894+0000 mon.a (mon.0) 2016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59637-48"}]: dispatch
2026-03-10T07:30:58.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:58 vm00 bash[28005]: audit 2026-03-10T07:30:58.157899+0000 mon.a (mon.0) 2017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59629-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59629-40"}]': finished
2026-03-10T07:30:58.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:58 vm00 bash[28005]: audit 2026-03-10T07:30:58.158005+0000 mon.a (mon.0) 2018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59637-48"}]': finished
2026-03-10T07:30:58.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:58 vm00 bash[28005]: cluster 2026-03-10T07:30:58.166279+0000 mon.a (mon.0) 2019 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in
2026-03-10T07:30:58.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:58 vm00 bash[28005]: audit 2026-03-10T07:30:58.181742+0000 mon.b (mon.1) 258 : audit [INF] from='client.? 192.168.123.100:0/3864831144' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59637-49"}]: dispatch
2026-03-10T07:30:58.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:58 vm00 bash[28005]: audit 2026-03-10T07:30:58.207529+0000 mon.b (mon.1) 259 : audit [INF] from='client.? 192.168.123.100:0/3864831144' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]: dispatch
2026-03-10T07:30:58.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:58 vm00 bash[28005]: audit 2026-03-10T07:30:58.207921+0000 mon.a (mon.0) 2020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59637-49"}]: dispatch
2026-03-10T07:30:58.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:58 vm00 bash[28005]: audit 2026-03-10T07:30:58.208552+0000 mon.b (mon.1) 260 : audit [INF] from='client.? 192.168.123.100:0/3864831144' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59637-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:58.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:58 vm00 bash[28005]: audit 2026-03-10T07:30:58.209247+0000 mon.a (mon.0) 2021 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]: dispatch
2026-03-10T07:30:58.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:30:58 vm00 bash[28005]: audit 2026-03-10T07:30:58.210131+0000 mon.a (mon.0) 2022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59637-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:58.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:58 vm00 bash[20701]: cluster 2026-03-10T07:30:56.615830+0000 mgr.y (mgr.24407) 240 : cluster [DBG] pgmap v318: 292 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:30:58.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:58 vm00 bash[20701]: audit 2026-03-10T07:30:57.154363+0000 mon.a (mon.0) 2013 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-31"}]': finished
2026-03-10T07:30:58.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:58 vm00 bash[20701]: audit 2026-03-10T07:30:57.154430+0000 mon.a (mon.0) 2014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm00-59637-48"}]': finished
2026-03-10T07:30:58.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:58 vm00 bash[20701]: audit 2026-03-10T07:30:57.160900+0000 mon.c (mon.2) 249 : audit [INF] from='client.? 192.168.123.100:0/3776221583' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59637-48"}]: dispatch
2026-03-10T07:30:58.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:58 vm00 bash[20701]: cluster 2026-03-10T07:30:57.173055+0000 mon.a (mon.0) 2015 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in
2026-03-10T07:30:58.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:58 vm00 bash[20701]: audit 2026-03-10T07:30:57.242894+0000 mon.a (mon.0) 2016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59637-48"}]: dispatch
2026-03-10T07:30:58.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:58 vm00 bash[20701]: audit 2026-03-10T07:30:58.157899+0000 mon.a (mon.0) 2017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm00-59629-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm00-59629-40"}]': finished
2026-03-10T07:30:58.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:58 vm00 bash[20701]: audit 2026-03-10T07:30:58.158005+0000 mon.a (mon.0) 2018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm00-59637-48"}]': finished
2026-03-10T07:30:58.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:58 vm00 bash[20701]: cluster 2026-03-10T07:30:58.166279+0000 mon.a (mon.0) 2019 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in
2026-03-10T07:30:58.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:58 vm00 bash[20701]: audit 2026-03-10T07:30:58.181742+0000 mon.b (mon.1) 258 : audit [INF] from='client.? 192.168.123.100:0/3864831144' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59637-49"}]: dispatch
2026-03-10T07:30:58.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:58 vm00 bash[20701]: audit 2026-03-10T07:30:58.207529+0000 mon.b (mon.1) 259 : audit [INF] from='client.? 192.168.123.100:0/3864831144' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]: dispatch
2026-03-10T07:30:58.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:58 vm00 bash[20701]: audit 2026-03-10T07:30:58.207921+0000 mon.a (mon.0) 2020 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59637-49"}]: dispatch
2026-03-10T07:30:58.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:58 vm00 bash[20701]: audit 2026-03-10T07:30:58.208552+0000 mon.b (mon.1) 260 : audit [INF] from='client.? 192.168.123.100:0/3864831144' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59637-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:30:58.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:58 vm00 bash[20701]: audit 2026-03-10T07:30:58.209247+0000 mon.a (mon.0) 2021 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]: dispatch
2026-03-10T07:30:58.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:30:58 vm00 bash[20701]: audit 2026-03-10T07:30:58.210131+0000 mon.a (mon.0) 2022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59637-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:00.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:00 vm03 bash[23382]: cluster 2026-03-10T07:30:58.616286+0000 mgr.y (mgr.24407) 241 : cluster [DBG] pgmap v321: 268 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:31:00.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:00 vm03 bash[23382]: audit 2026-03-10T07:30:59.161257+0000 mon.a (mon.0) 2023 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59637-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:31:00.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:00 vm03 bash[23382]: audit 2026-03-10T07:30:59.164811+0000 mon.b (mon.1) 261 : audit [INF] from='client.? 192.168.123.100:0/3864831144' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59637-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59637-49"}]: dispatch
2026-03-10T07:31:00.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:00 vm03 bash[23382]: audit 2026-03-10T07:30:59.179114+0000 mon.b (mon.1) 262 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:00.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:00 vm03 bash[23382]: cluster 2026-03-10T07:30:59.179206+0000 mon.a (mon.0) 2024 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in
2026-03-10T07:31:00.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:00 vm03 bash[23382]: audit 2026-03-10T07:30:59.183178+0000 mon.a (mon.0) 2025 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59637-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59637-49"}]: dispatch
2026-03-10T07:31:00.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:00 vm03 bash[23382]: audit 2026-03-10T07:30:59.183588+0000 mon.a (mon.0) 2026 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:00.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:00 vm00 bash[28005]: cluster 2026-03-10T07:30:58.616286+0000 mgr.y (mgr.24407) 241 : cluster [DBG] pgmap v321: 268 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:31:00.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:00 vm00 bash[28005]: audit 2026-03-10T07:30:59.161257+0000 mon.a (mon.0) 2023 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59637-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:31:00.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:00 vm00 bash[28005]: audit 2026-03-10T07:30:59.164811+0000 mon.b (mon.1) 261 : audit [INF] from='client.? 192.168.123.100:0/3864831144' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59637-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59637-49"}]: dispatch
2026-03-10T07:31:00.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:00 vm00 bash[28005]: audit 2026-03-10T07:30:59.179114+0000 mon.b (mon.1) 262 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:00.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:00 vm00 bash[28005]: cluster 2026-03-10T07:30:59.179206+0000 mon.a (mon.0) 2024 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in
2026-03-10T07:31:00.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:00 vm00 bash[28005]: audit 2026-03-10T07:30:59.183178+0000 mon.a (mon.0) 2025 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59637-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59637-49"}]: dispatch
2026-03-10T07:31:00.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:00 vm00 bash[28005]: audit 2026-03-10T07:30:59.183588+0000 mon.a (mon.0) 2026 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:00 vm00 bash[20701]: cluster 2026-03-10T07:30:58.616286+0000 mgr.y (mgr.24407) 241 : cluster [DBG] pgmap v321: 268 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:31:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:00 vm00 bash[20701]: audit 2026-03-10T07:30:59.161257+0000 mon.a (mon.0) 2023 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm00-59637-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:31:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:00 vm00 bash[20701]: audit 2026-03-10T07:30:59.164811+0000 mon.b (mon.1) 261 : audit [INF] from='client.? 192.168.123.100:0/3864831144' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59637-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59637-49"}]: dispatch
2026-03-10T07:31:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:00 vm00 bash[20701]: audit 2026-03-10T07:30:59.179114+0000 mon.b (mon.1) 262 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:00 vm00 bash[20701]: cluster 2026-03-10T07:30:59.179206+0000 mon.a (mon.0) 2024 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in
2026-03-10T07:31:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:00 vm00 bash[20701]: audit 2026-03-10T07:30:59.183178+0000 mon.a (mon.0) 2025 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59637-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59637-49"}]: dispatch
2026-03-10T07:31:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:00 vm00 bash[20701]: audit 2026-03-10T07:30:59.183588+0000 mon.a (mon.0) 2026 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:01.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:31:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:31:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:31:01.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:01 vm00 bash[20701]: audit 2026-03-10T07:31:00.166554+0000 mon.a (mon.0) 2027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-33","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:31:01.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:01 vm00 bash[20701]: cluster 2026-03-10T07:31:00.176746+0000 mon.a (mon.0) 2028 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in
2026-03-10T07:31:01.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:01 vm00 bash[20701]: audit 2026-03-10T07:31:00.200074+0000 mon.b (mon.1) 263 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:01.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:01 vm00 bash[20701]: audit 2026-03-10T07:31:00.201236+0000 mon.c (mon.2) 250 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59629-40"}]: dispatch
2026-03-10T07:31:01.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:01 vm00 bash[20701]: audit 2026-03-10T07:31:00.217000+0000 mon.a (mon.0) 2029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:01.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:01 vm00 bash[20701]: audit 2026-03-10T07:31:00.217225+0000 mon.a (mon.0) 2030 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59629-40"}]: dispatch
2026-03-10T07:31:01.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:01 vm00 bash[20701]: audit 2026-03-10T07:31:01.170318+0000 mon.a (mon.0) 2031 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59637-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59637-49"}]': finished
2026-03-10T07:31:01.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:01 vm00 bash[20701]: audit 2026-03-10T07:31:01.170363+0000 mon.a (mon.0) 2032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:31:01.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:01 vm00 bash[20701]: audit 2026-03-10T07:31:01.170387+0000 mon.a (mon.0) 2033 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59629-40"}]': finished
2026-03-10T07:31:01.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:01 vm00 bash[20701]: audit 2026-03-10T07:31:01.174302+0000 mon.b (mon.1) 264 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-33"}]: dispatch
2026-03-10T07:31:01.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:01 vm00 bash[20701]: audit 2026-03-10T07:31:01.185543+0000 mon.c (mon.2) 251 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]: dispatch
2026-03-10T07:31:01.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:01 vm00 bash[20701]: cluster 2026-03-10T07:31:01.186825+0000 mon.a (mon.0) 2034 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in
2026-03-10T07:31:01.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:01 vm00 bash[20701]: audit 2026-03-10T07:31:01.189692+0000 mon.a (mon.0) 2035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-33"}]: dispatch
2026-03-10T07:31:01.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:01 vm00 bash[20701]: audit 2026-03-10T07:31:01.189787+0000 mon.a (mon.0) 2036 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]: dispatch
2026-03-10T07:31:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:01 vm00 bash[28005]: audit 2026-03-10T07:31:00.166554+0000 mon.a (mon.0) 2027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-33","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:31:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:01 vm00 bash[28005]: cluster 2026-03-10T07:31:00.176746+0000 mon.a (mon.0) 2028 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in
2026-03-10T07:31:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:01 vm00 bash[28005]: audit 2026-03-10T07:31:00.200074+0000 mon.b (mon.1) 263 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:01 vm00 bash[28005]: audit 2026-03-10T07:31:00.201236+0000 mon.c (mon.2) 250 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59629-40"}]: dispatch
2026-03-10T07:31:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:01 vm00 bash[28005]: audit 2026-03-10T07:31:00.217000+0000 mon.a (mon.0) 2029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:01 vm00 bash[28005]: audit 2026-03-10T07:31:00.217225+0000 mon.a (mon.0) 2030 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59629-40"}]: dispatch
2026-03-10T07:31:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:01 vm00 bash[28005]: audit 2026-03-10T07:31:01.170318+0000 mon.a (mon.0) 2031 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59637-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59637-49"}]': finished
2026-03-10T07:31:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:01 vm00 bash[28005]: audit 2026-03-10T07:31:01.170363+0000 mon.a (mon.0) 2032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:31:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:01 vm00 bash[28005]: audit 2026-03-10T07:31:01.170387+0000 mon.a (mon.0) 2033 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59629-40"}]': finished
2026-03-10T07:31:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:01 vm00 bash[28005]: audit 2026-03-10T07:31:01.174302+0000 mon.b (mon.1) 264 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-33"}]: dispatch
2026-03-10T07:31:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:01 vm00 bash[28005]: audit 2026-03-10T07:31:01.185543+0000 mon.c (mon.2) 251 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]: dispatch
2026-03-10T07:31:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:01 vm00 bash[28005]: cluster 2026-03-10T07:31:01.186825+0000 mon.a (mon.0) 2034 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in
2026-03-10T07:31:01.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:01 vm00 bash[28005]: audit 2026-03-10T07:31:01.189692+0000 mon.a (mon.0) 2035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-33"}]: dispatch
2026-03-10T07:31:01.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:01 vm00 bash[28005]: audit 2026-03-10T07:31:01.189787+0000 mon.a (mon.0) 2036 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]: dispatch
2026-03-10T07:31:01.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:01 vm03 bash[23382]: audit 2026-03-10T07:31:00.166554+0000 mon.a (mon.0) 2027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-33","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:31:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:01 vm03 bash[23382]: cluster 2026-03-10T07:31:00.176746+0000 mon.a (mon.0) 2028 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in
2026-03-10T07:31:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:01 vm03 bash[23382]: audit 2026-03-10T07:31:00.200074+0000 mon.b (mon.1) 263 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:01 vm03 bash[23382]: audit 2026-03-10T07:31:00.201236+0000 mon.c (mon.2) 250 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59629-40"}]: dispatch
2026-03-10T07:31:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:01 vm03 bash[23382]: audit 2026-03-10T07:31:00.217000+0000 mon.a (mon.0) 2029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:01 vm03 bash[23382]: audit 2026-03-10T07:31:00.217225+0000 mon.a (mon.0) 2030 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59629-40"}]: dispatch
2026-03-10T07:31:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:01 vm03 bash[23382]: audit 2026-03-10T07:31:01.170318+0000 mon.a (mon.0) 2031 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm00-59637-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm00-59637-49"}]': finished
2026-03-10T07:31:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:01 vm03 bash[23382]: audit 2026-03-10T07:31:01.170363+0000 mon.a (mon.0) 2032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:31:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:01 vm03 bash[23382]: audit 2026-03-10T07:31:01.170387+0000 mon.a (mon.0) 2033 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm00-59629-40"}]': finished
2026-03-10T07:31:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:01 vm03 bash[23382]: audit 2026-03-10T07:31:01.174302+0000 mon.b (mon.1) 264 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-33"}]: dispatch 2026-03-10T07:31:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:01 vm03 bash[23382]: audit 2026-03-10T07:31:01.174302+0000 mon.b (mon.1) 264 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-33"}]: dispatch 2026-03-10T07:31:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:01 vm03 bash[23382]: audit 2026-03-10T07:31:01.185543+0000 mon.c (mon.2) 251 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:31:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:01 vm03 bash[23382]: audit 2026-03-10T07:31:01.185543+0000 mon.c (mon.2) 251 : audit [INF] from='client.? 192.168.123.100:0/1330070304' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:31:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:01 vm03 bash[23382]: cluster 2026-03-10T07:31:01.186825+0000 mon.a (mon.0) 2034 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-10T07:31:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:01 vm03 bash[23382]: cluster 2026-03-10T07:31:01.186825+0000 mon.a (mon.0) 2034 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-10T07:31:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:01 vm03 bash[23382]: audit 2026-03-10T07:31:01.189692+0000 mon.a (mon.0) 2035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-33"}]: dispatch 2026-03-10T07:31:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:01 vm03 bash[23382]: audit 2026-03-10T07:31:01.189692+0000 mon.a (mon.0) 2035 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-33"}]: dispatch 2026-03-10T07:31:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:01 vm03 bash[23382]: audit 2026-03-10T07:31:01.189787+0000 mon.a (mon.0) 2036 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:31:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:01 vm03 bash[23382]: audit 2026-03-10T07:31:01.189787+0000 mon.a (mon.0) 2036 : audit [INF] from='client.? 
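
Each mon command in this stream is audited twice by the cluster: once as "dispatch" when a monitor receives it, and again as "finished" once the change is committed, so a command that reaches "dispatch" but never "finished" is a useful place to start when a run stalls. A minimal extraction sketch, assuming the captured log has been saved locally as teuthology.log (a file name chosen here purely for illustration):

    # count each distinct command that reached 'finished', most frequent first;
    # the cmd='...' value contains only double quotes, so [^']* is safe
    grep -o "cmd='[^']*': finished" teuthology.log | sort | uniq -c | sort -rn
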
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]: dispatch 2026-03-10T07:31:02.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:02 vm03 bash[23382]: cluster 2026-03-10T07:31:00.616755+0000 mgr.y (mgr.24407) 242 : cluster [DBG] pgmap v324: 292 pgs: 32 creating+peering, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:31:02.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:02 vm03 bash[23382]: cluster 2026-03-10T07:31:00.616755+0000 mgr.y (mgr.24407) 242 : cluster [DBG] pgmap v324: 292 pgs: 32 creating+peering, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:31:02.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:02 vm03 bash[23382]: cluster 2026-03-10T07:31:01.201540+0000 mon.a (mon.0) 2037 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:02.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:02 vm03 bash[23382]: cluster 2026-03-10T07:31:01.201540+0000 mon.a (mon.0) 2037 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:02.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:02 vm03 bash[23382]: audit 2026-03-10T07:31:02.184917+0000 mon.a (mon.0) 2038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-33"}]': finished 2026-03-10T07:31:02.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:02 vm03 bash[23382]: audit 2026-03-10T07:31:02.184917+0000 mon.a (mon.0) 2038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-33"}]': finished 2026-03-10T07:31:02.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:02 vm03 bash[23382]: audit 2026-03-10T07:31:02.185197+0000 mon.a (mon.0) 2039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]': finished 2026-03-10T07:31:02.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:02 vm03 bash[23382]: audit 2026-03-10T07:31:02.185197+0000 mon.a (mon.0) 2039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]': finished 2026-03-10T07:31:02.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:02 vm03 bash[23382]: audit 2026-03-10T07:31:02.193390+0000 mon.b (mon.1) 265 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-33", "mode": "writeback"}]: dispatch 2026-03-10T07:31:02.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:02 vm03 bash[23382]: audit 2026-03-10T07:31:02.193390+0000 mon.b (mon.1) 265 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-33", "mode": "writeback"}]: dispatch 2026-03-10T07:31:02.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:02 vm03 bash[23382]: cluster 2026-03-10T07:31:02.197045+0000 mon.a (mon.0) 2040 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-10T07:31:02.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:02 vm03 bash[23382]: cluster 2026-03-10T07:31:02.197045+0000 mon.a (mon.0) 2040 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-10T07:31:02.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:02 vm03 bash[23382]: audit 2026-03-10T07:31:02.204353+0000 mon.a (mon.0) 2041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-33", "mode": "writeback"}]: dispatch 2026-03-10T07:31:02.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:02 vm03 bash[23382]: audit 2026-03-10T07:31:02.204353+0000 mon.a (mon.0) 2041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-33", "mode": "writeback"}]: dispatch 2026-03-10T07:31:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:02 vm00 bash[28005]: cluster 2026-03-10T07:31:00.616755+0000 mgr.y (mgr.24407) 242 : cluster [DBG] pgmap v324: 292 pgs: 32 creating+peering, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:31:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:02 vm00 bash[28005]: cluster 2026-03-10T07:31:00.616755+0000 mgr.y (mgr.24407) 242 : cluster [DBG] pgmap v324: 292 pgs: 32 creating+peering, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:31:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:02 vm00 bash[28005]: cluster 2026-03-10T07:31:01.201540+0000 mon.a (mon.0) 2037 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:02 vm00 bash[28005]: cluster 2026-03-10T07:31:01.201540+0000 mon.a (mon.0) 2037 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:02 vm00 bash[28005]: audit 2026-03-10T07:31:02.184917+0000 mon.a (mon.0) 2038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-33"}]': finished 2026-03-10T07:31:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:02 vm00 bash[28005]: audit 2026-03-10T07:31:02.184917+0000 mon.a (mon.0) 2038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-33"}]': finished 2026-03-10T07:31:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:02 vm00 bash[28005]: audit 2026-03-10T07:31:02.185197+0000 mon.a (mon.0) 2039 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]': finished 2026-03-10T07:31:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:02 vm00 bash[28005]: audit 2026-03-10T07:31:02.185197+0000 mon.a (mon.0) 2039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]': finished 2026-03-10T07:31:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:02 vm00 bash[28005]: audit 2026-03-10T07:31:02.193390+0000 mon.b (mon.1) 265 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-33", "mode": "writeback"}]: dispatch 2026-03-10T07:31:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:02 vm00 bash[28005]: audit 2026-03-10T07:31:02.193390+0000 mon.b (mon.1) 265 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-33", "mode": "writeback"}]: dispatch 2026-03-10T07:31:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:02 vm00 bash[28005]: cluster 2026-03-10T07:31:02.197045+0000 mon.a (mon.0) 2040 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-10T07:31:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:02 vm00 bash[28005]: cluster 2026-03-10T07:31:02.197045+0000 mon.a (mon.0) 2040 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-10T07:31:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:02 vm00 bash[28005]: audit 2026-03-10T07:31:02.204353+0000 mon.a (mon.0) 2041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-33", "mode": "writeback"}]: dispatch 2026-03-10T07:31:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:02 vm00 bash[28005]: audit 2026-03-10T07:31:02.204353+0000 mon.a (mon.0) 2041 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-33", "mode": "writeback"}]: dispatch 2026-03-10T07:31:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:02 vm00 bash[20701]: cluster 2026-03-10T07:31:00.616755+0000 mgr.y (mgr.24407) 242 : cluster [DBG] pgmap v324: 292 pgs: 32 creating+peering, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:31:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:02 vm00 bash[20701]: cluster 2026-03-10T07:31:00.616755+0000 mgr.y (mgr.24407) 242 : cluster [DBG] pgmap v324: 292 pgs: 32 creating+peering, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:31:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:02 vm00 bash[20701]: cluster 2026-03-10T07:31:01.201540+0000 mon.a (mon.0) 2037 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:02 vm00 bash[20701]: cluster 2026-03-10T07:31:01.201540+0000 mon.a (mon.0) 2037 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:02 vm00 bash[20701]: audit 2026-03-10T07:31:02.184917+0000 mon.a (mon.0) 2038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-33"}]': finished 2026-03-10T07:31:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:02 vm00 bash[20701]: audit 2026-03-10T07:31:02.184917+0000 mon.a (mon.0) 2038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-33"}]': finished 2026-03-10T07:31:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:02 vm00 bash[20701]: audit 2026-03-10T07:31:02.185197+0000 mon.a (mon.0) 2039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]': finished 2026-03-10T07:31:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:02 vm00 bash[20701]: audit 2026-03-10T07:31:02.185197+0000 mon.a (mon.0) 2039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm00-59629-40"}]': finished 2026-03-10T07:31:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:02 vm00 bash[20701]: audit 2026-03-10T07:31:02.193390+0000 mon.b (mon.1) 265 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-33", "mode": "writeback"}]: dispatch 2026-03-10T07:31:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:02 vm00 bash[20701]: audit 2026-03-10T07:31:02.193390+0000 mon.b (mon.1) 265 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-33", "mode": "writeback"}]: dispatch 2026-03-10T07:31:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:02 vm00 bash[20701]: cluster 2026-03-10T07:31:02.197045+0000 mon.a (mon.0) 2040 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-10T07:31:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:02 vm00 bash[20701]: cluster 2026-03-10T07:31:02.197045+0000 mon.a (mon.0) 2040 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-10T07:31:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:02 vm00 bash[20701]: audit 2026-03-10T07:31:02.204353+0000 mon.a (mon.0) 2041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-33", "mode": "writeback"}]: dispatch 2026-03-10T07:31:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:02 vm00 bash[20701]: audit 2026-03-10T07:31:02.204353+0000 mon.a (mon.0) 2041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-33", "mode": "writeback"}]: dispatch 2026-03-10T07:31:03.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:31:03 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:31:03.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:03 vm03 bash[23382]: audit 2026-03-10T07:31:02.214834+0000 mon.b (mon.1) 266 : audit [INF] from='client.? 192.168.123.100:0/3252745372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59629-41"}]: dispatch 2026-03-10T07:31:03.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:03 vm03 bash[23382]: audit 2026-03-10T07:31:02.214834+0000 mon.b (mon.1) 266 : audit [INF] from='client.? 192.168.123.100:0/3252745372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59629-41"}]: dispatch 2026-03-10T07:31:03.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:03 vm03 bash[23382]: audit 2026-03-10T07:31:02.236659+0000 mon.b (mon.1) 267 : audit [INF] from='client.? 192.168.123.100:0/3252745372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59629-41"}]: dispatch 2026-03-10T07:31:03.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:03 vm03 bash[23382]: audit 2026-03-10T07:31:02.236659+0000 mon.b (mon.1) 267 : audit [INF] from='client.? 192.168.123.100:0/3252745372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59629-41"}]: dispatch 2026-03-10T07:31:03.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:03 vm03 bash[23382]: audit 2026-03-10T07:31:02.236863+0000 mon.a (mon.0) 2042 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59629-41"}]: dispatch 2026-03-10T07:31:03.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:03 vm03 bash[23382]: audit 2026-03-10T07:31:02.236863+0000 mon.a (mon.0) 2042 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59629-41"}]: dispatch 2026-03-10T07:31:03.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:03 vm03 bash[23382]: audit 2026-03-10T07:31:02.237447+0000 mon.b (mon.1) 268 : audit [INF] from='client.? 
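
The audit records above trace a cache-tier setup against the base pool test-rados-api-vm00-59782-6, driven by the rados API tests: tier add, then set-overlay, then cache-mode writeback. A rough sketch of the same sequence issued by hand (pool names copied from the records; --force-nonempty is required because the base pool already holds test objects):

    ceph osd tier add test-rados-api-vm00-59782-6 test-rados-api-vm00-59782-33 --force-nonempty
    ceph osd tier set-overlay test-rados-api-vm00-59782-6 test-rados-api-vm00-59782-33
    ceph osd tier cache-mode test-rados-api-vm00-59782-33 writeback
    # teardown, as audited further down in this log:
    ceph osd tier remove-overlay test-rados-api-vm00-59782-6
    ceph osd tier remove test-rados-api-vm00-59782-6 test-rados-api-vm00-59782-33

The CACHE_POOL_NO_HIT_SET warning that follows is consistent with this: the test enables writeback mode without configuring a hit_set on the cache pool, which is exactly what that health check reports.
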
2026-03-10T07:31:03.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:03 vm03 bash[23382]: audit 2026-03-10T07:31:02.238267+0000 mon.a (mon.0) 2043 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59629-41"}]: dispatch
2026-03-10T07:31:03.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:03 vm03 bash[23382]: audit 2026-03-10T07:31:02.239089+0000 mon.a (mon.0) 2044 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm00-59629-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:03.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:03 vm03 bash[23382]: cluster 2026-03-10T07:31:03.185127+0000 mon.a (mon.0) 2045 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:31:03.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:03 vm03 bash[23382]: audit 2026-03-10T07:31:03.188392+0000 mon.a (mon.0) 2046 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-33", "mode": "writeback"}]': finished
2026-03-10T07:31:03.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:03 vm03 bash[23382]: audit 2026-03-10T07:31:03.188462+0000 mon.a (mon.0) 2047 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm00-59629-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:31:03.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:03 vm03 bash[23382]: audit 2026-03-10T07:31:03.192245+0000 mon.b (mon.1) 269 : audit [INF] from='client.? 192.168.123.100:0/3252745372' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59629-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59629-41"}]: dispatch
2026-03-10T07:31:03.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:03 vm03 bash[23382]: audit 2026-03-10T07:31:03.201078+0000 mon.b (mon.1) 270 : audit [INF] from='client.? 192.168.123.100:0/3864831144' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59637-49"}]: dispatch
2026-03-10T07:31:03.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:03 vm03 bash[23382]: cluster 2026-03-10T07:31:03.201802+0000 mon.a (mon.0) 2048 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in
2026-03-10T07:31:03.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:03 vm03 bash[23382]: audit 2026-03-10T07:31:03.202693+0000 mon.a (mon.0) 2049 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59629-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59629-41"}]: dispatch
2026-03-10T07:31:03.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:03 vm03 bash[23382]: audit 2026-03-10T07:31:03.203042+0000 mon.a (mon.0) 2050 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59637-49"}]: dispatch
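
The ExecuteClass records above show the per-test erasure-code fixture cycle: remove any leftover profile and crush rule, define a k=2/m=1 profile with an osd failure domain, then create an 8-PG erasure pool from that profile. Sketched as direct CLI calls (names copied from the audit entries; the cleanup mirrors the rm commands audited for the earlier RoundTripPP test):

    ceph osd erasure-code-profile set testprofile-ExecuteClass_vm00-59629-41 k=2 m=1 crush-failure-domain=osd
    ceph osd pool create ExecuteClass_vm00-59629-41 8 8 erasure testprofile-ExecuteClass_vm00-59629-41
    # teardown once the test is done (creating an EC pool also creates a crush rule of the same name):
    ceph osd crush rule rm ExecuteClass_vm00-59629-41
    ceph osd erasure-code-profile rm testprofile-ExecuteClass_vm00-59629-41
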
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59637-49"}]: dispatch 2026-03-10T07:31:04.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: cluster 2026-03-10T07:31:02.617234+0000 mgr.y (mgr.24407) 243 : cluster [DBG] pgmap v327: 300 pgs: 8 unknown, 32 creating+peering, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:31:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: cluster 2026-03-10T07:31:02.617234+0000 mgr.y (mgr.24407) 243 : cluster [DBG] pgmap v327: 300 pgs: 8 unknown, 32 creating+peering, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:31:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: audit 2026-03-10T07:31:03.088069+0000 mgr.y (mgr.24407) 244 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: audit 2026-03-10T07:31:03.088069+0000 mgr.y (mgr.24407) 244 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: audit 2026-03-10T07:31:03.293225+0000 mon.b (mon.1) 271 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: audit 2026-03-10T07:31:03.293225+0000 mon.b (mon.1) 271 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: audit 2026-03-10T07:31:03.295057+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: audit 2026-03-10T07:31:03.295057+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: audit 2026-03-10T07:31:04.191829+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59637-49"}]': finished 2026-03-10T07:31:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: audit 2026-03-10T07:31:04.191829+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59637-49"}]': finished 2026-03-10T07:31:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: audit 2026-03-10T07:31:04.191886+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: audit 2026-03-10T07:31:04.191886+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: audit 2026-03-10T07:31:04.194347+0000 mon.b (mon.1) 272 : audit [INF] from='client.? 192.168.123.100:0/3864831144' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]: dispatch 2026-03-10T07:31:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: audit 2026-03-10T07:31:04.194347+0000 mon.b (mon.1) 272 : audit [INF] from='client.? 192.168.123.100:0/3864831144' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]: dispatch 2026-03-10T07:31:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: audit 2026-03-10T07:31:04.194533+0000 mon.b (mon.1) 273 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33"}]: dispatch 2026-03-10T07:31:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: audit 2026-03-10T07:31:04.194533+0000 mon.b (mon.1) 273 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33"}]: dispatch 2026-03-10T07:31:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: cluster 2026-03-10T07:31:04.204610+0000 mon.a (mon.0) 2054 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-10T07:31:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: cluster 2026-03-10T07:31:04.204610+0000 mon.a (mon.0) 2054 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-10T07:31:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: audit 2026-03-10T07:31:04.205066+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]: dispatch 2026-03-10T07:31:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: audit 2026-03-10T07:31:04.205066+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]: dispatch 2026-03-10T07:31:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: audit 2026-03-10T07:31:04.205388+0000 mon.a (mon.0) 2056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33"}]: dispatch 2026-03-10T07:31:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:04 vm03 bash[23382]: audit 2026-03-10T07:31:04.205388+0000 mon.a (mon.0) 2056 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33"}]: dispatch 2026-03-10T07:31:04.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: cluster 2026-03-10T07:31:02.617234+0000 mgr.y (mgr.24407) 243 : cluster [DBG] pgmap v327: 300 pgs: 8 unknown, 32 creating+peering, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:31:04.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: cluster 2026-03-10T07:31:02.617234+0000 mgr.y (mgr.24407) 243 : cluster [DBG] pgmap v327: 300 pgs: 8 unknown, 32 creating+peering, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:31:04.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: audit 2026-03-10T07:31:03.088069+0000 mgr.y (mgr.24407) 244 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:04.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: audit 2026-03-10T07:31:03.088069+0000 mgr.y (mgr.24407) 244 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:04.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: audit 2026-03-10T07:31:03.293225+0000 mon.b (mon.1) 271 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: audit 2026-03-10T07:31:03.293225+0000 mon.b (mon.1) 271 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: audit 2026-03-10T07:31:03.295057+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: audit 2026-03-10T07:31:03.295057+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: audit 2026-03-10T07:31:04.191829+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59637-49"}]': finished 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: audit 2026-03-10T07:31:04.191829+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59637-49"}]': finished 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: audit 2026-03-10T07:31:04.191886+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: audit 2026-03-10T07:31:04.191886+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: audit 2026-03-10T07:31:04.194347+0000 mon.b (mon.1) 272 : audit [INF] from='client.? 192.168.123.100:0/3864831144' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: audit 2026-03-10T07:31:04.194347+0000 mon.b (mon.1) 272 : audit [INF] from='client.? 192.168.123.100:0/3864831144' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: audit 2026-03-10T07:31:04.194533+0000 mon.b (mon.1) 273 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: audit 2026-03-10T07:31:04.194533+0000 mon.b (mon.1) 273 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: cluster 2026-03-10T07:31:04.204610+0000 mon.a (mon.0) 2054 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: cluster 2026-03-10T07:31:04.204610+0000 mon.a (mon.0) 2054 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: audit 2026-03-10T07:31:04.205066+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: audit 2026-03-10T07:31:04.205066+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: audit 2026-03-10T07:31:04.205388+0000 mon.a (mon.0) 2056 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:04 vm00 bash[28005]: audit 2026-03-10T07:31:04.205388+0000 mon.a (mon.0) 2056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: cluster 2026-03-10T07:31:02.617234+0000 mgr.y (mgr.24407) 243 : cluster [DBG] pgmap v327: 300 pgs: 8 unknown, 32 creating+peering, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: cluster 2026-03-10T07:31:02.617234+0000 mgr.y (mgr.24407) 243 : cluster [DBG] pgmap v327: 300 pgs: 8 unknown, 32 creating+peering, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: audit 2026-03-10T07:31:03.088069+0000 mgr.y (mgr.24407) 244 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: audit 2026-03-10T07:31:03.088069+0000 mgr.y (mgr.24407) 244 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: audit 2026-03-10T07:31:03.293225+0000 mon.b (mon.1) 271 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: audit 2026-03-10T07:31:03.293225+0000 mon.b (mon.1) 271 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: audit 2026-03-10T07:31:03.295057+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: audit 2026-03-10T07:31:03.295057+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: audit 2026-03-10T07:31:04.191829+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59637-49"}]': finished 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: audit 2026-03-10T07:31:04.191829+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm00-59637-49"}]': finished 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: audit 2026-03-10T07:31:04.191886+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: audit 2026-03-10T07:31:04.191886+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: audit 2026-03-10T07:31:04.194347+0000 mon.b (mon.1) 272 : audit [INF] from='client.? 192.168.123.100:0/3864831144' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: audit 2026-03-10T07:31:04.194347+0000 mon.b (mon.1) 272 : audit [INF] from='client.? 192.168.123.100:0/3864831144' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: audit 2026-03-10T07:31:04.194533+0000 mon.b (mon.1) 273 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: audit 2026-03-10T07:31:04.194533+0000 mon.b (mon.1) 273 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: cluster 2026-03-10T07:31:04.204610+0000 mon.a (mon.0) 2054 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: cluster 2026-03-10T07:31:04.204610+0000 mon.a (mon.0) 2054 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: audit 2026-03-10T07:31:04.205066+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: audit 2026-03-10T07:31:04.205066+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: audit 2026-03-10T07:31:04.205388+0000 mon.a (mon.0) 2056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33"}]: dispatch 2026-03-10T07:31:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:04 vm00 bash[20701]: audit 2026-03-10T07:31:04.205388+0000 mon.a (mon.0) 2056 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33"}]: dispatch 2026-03-10T07:31:05.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:04.271336+0000 mon.c (mon.2) 252 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:31:05.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:04.271336+0000 mon.c (mon.2) 252 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:31:05.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:04.597413+0000 mon.a (mon.0) 2057 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:31:05.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:04.597413+0000 mon.a (mon.0) 2057 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:04.610185+0000 mon.a (mon.0) 2058 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:04.610185+0000 mon.a (mon.0) 2058 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:04.977231+0000 mon.c (mon.2) 253 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:04.977231+0000 mon.c (mon.2) 253 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:04.978354+0000 mon.c (mon.2) 254 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:04.978354+0000 mon.c (mon.2) 254 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:04.986146+0000 mon.a (mon.0) 2059 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:31:05.514 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:04.986146+0000 mon.a (mon.0) 2059 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: cluster 2026-03-10T07:31:05.192241+0000 mon.a (mon.0) 2060 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: cluster 2026-03-10T07:31:05.192241+0000 mon.a (mon.0) 2060 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:05.195794+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59629-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59629-41"}]': finished 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:05.195794+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59629-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59629-41"}]': finished 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:05.195922+0000 mon.a (mon.0) 2062 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]': finished 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:05.195922+0000 mon.a (mon.0) 2062 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]': finished 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:05.196013+0000 mon.a (mon.0) 2063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33"}]': finished 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:05.196013+0000 mon.a (mon.0) 2063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33"}]': finished 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: cluster 2026-03-10T07:31:05.199169+0000 mon.a (mon.0) 2064 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: cluster 2026-03-10T07:31:05.199169+0000 mon.a (mon.0) 2064 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:05.227706+0000 mon.a (mon.0) 2065 : audit [INF] from='client.? 
192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:05.227706+0000 mon.a (mon.0) 2065 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:05.228743+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:05.228743+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:05.228957+0000 mon.a (mon.0) 2067 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59637-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:05 vm03 bash[23382]: audit 2026-03-10T07:31:05.228957+0000 mon.a (mon.0) 2067 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59637-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:04.271336+0000 mon.c (mon.2) 252 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:31:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:04.271336+0000 mon.c (mon.2) 252 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:31:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:04.597413+0000 mon.a (mon.0) 2057 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:31:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:04.597413+0000 mon.a (mon.0) 2057 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:31:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:04.610185+0000 mon.a (mon.0) 2058 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:31:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:04.610185+0000 mon.a (mon.0) 2058 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:31:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:04.977231+0000 mon.c (mon.2) 253 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:31:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:04.977231+0000 mon.c (mon.2) 253 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:31:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:04.978354+0000 mon.c (mon.2) 254 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:31:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:04.978354+0000 mon.c (mon.2) 254 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:31:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:04.986146+0000 mon.a (mon.0) 2059 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:31:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:04.986146+0000 mon.a (mon.0) 2059 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:31:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: cluster 2026-03-10T07:31:05.192241+0000 mon.a (mon.0) 2060 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:31:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: cluster 2026-03-10T07:31:05.192241+0000 mon.a (mon.0) 2060 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:05.195794+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59629-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59629-41"}]': finished 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:05.195794+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59629-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59629-41"}]': finished 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:05.195922+0000 mon.a (mon.0) 2062 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]': finished 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:05.195922+0000 mon.a (mon.0) 2062 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]': finished 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:05.196013+0000 mon.a (mon.0) 2063 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33"}]': finished 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:05.196013+0000 mon.a (mon.0) 2063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33"}]': finished 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: cluster 2026-03-10T07:31:05.199169+0000 mon.a (mon.0) 2064 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: cluster 2026-03-10T07:31:05.199169+0000 mon.a (mon.0) 2064 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:05.227706+0000 mon.a (mon.0) 2065 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:05.227706+0000 mon.a (mon.0) 2065 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:05.228743+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:05.228743+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:05.228957+0000 mon.a (mon.0) 2067 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59637-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:05 vm00 bash[28005]: audit 2026-03-10T07:31:05.228957+0000 mon.a (mon.0) 2067 : audit [INF] from='client.? 
192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59637-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:04.271336+0000 mon.c (mon.2) 252 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:04.271336+0000 mon.c (mon.2) 252 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:04.597413+0000 mon.a (mon.0) 2057 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:04.597413+0000 mon.a (mon.0) 2057 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:04.610185+0000 mon.a (mon.0) 2058 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:04.610185+0000 mon.a (mon.0) 2058 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:04.977231+0000 mon.c (mon.2) 253 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:04.977231+0000 mon.c (mon.2) 253 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:04.978354+0000 mon.c (mon.2) 254 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:04.978354+0000 mon.c (mon.2) 254 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:04.986146+0000 mon.a (mon.0) 2059 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:04.986146+0000 mon.a (mon.0) 2059 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: cluster 2026-03-10T07:31:05.192241+0000 mon.a (mon.0) 2060 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: cluster 2026-03-10T07:31:05.192241+0000 mon.a 
(mon.0) 2060 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:05.195794+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59629-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59629-41"}]': finished 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:05.195794+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm00-59629-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm00-59629-41"}]': finished 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:05.195922+0000 mon.a (mon.0) 2062 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]': finished 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:05.195922+0000 mon.a (mon.0) 2062 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm00-59637-49"}]': finished 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:05.196013+0000 mon.a (mon.0) 2063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33"}]': finished 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:05.196013+0000 mon.a (mon.0) 2063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-33"}]': finished 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: cluster 2026-03-10T07:31:05.199169+0000 mon.a (mon.0) 2064 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: cluster 2026-03-10T07:31:05.199169+0000 mon.a (mon.0) 2064 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:05.227706+0000 mon.a (mon.0) 2065 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:05.227706+0000 mon.a (mon.0) 2065 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:05.228743+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? 
192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:05.228743+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:05.228957+0000 mon.a (mon.0) 2067 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59637-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:05 vm00 bash[20701]: audit 2026-03-10T07:31:05.228957+0000 mon.a (mon.0) 2067 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59637-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:06.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:06 vm03 bash[23382]: cluster 2026-03-10T07:31:04.617807+0000 mgr.y (mgr.24407) 245 : cluster [DBG] pgmap v330: 292 pgs: 28 creating+peering, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 252 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:31:06.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:06 vm03 bash[23382]: cluster 2026-03-10T07:31:04.617807+0000 mgr.y (mgr.24407) 245 : cluster [DBG] pgmap v330: 292 pgs: 28 creating+peering, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 252 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:31:06.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:06 vm03 bash[23382]: audit 2026-03-10T07:31:06.199297+0000 mon.a (mon.0) 2068 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59637-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:31:06.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:06 vm03 bash[23382]: audit 2026-03-10T07:31:06.199297+0000 mon.a (mon.0) 2068 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59637-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:31:06.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:06 vm03 bash[23382]: cluster 2026-03-10T07:31:06.215607+0000 mon.a (mon.0) 2069 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-10T07:31:06.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:06 vm03 bash[23382]: cluster 2026-03-10T07:31:06.215607+0000 mon.a (mon.0) 2069 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-10T07:31:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:06 vm03 bash[23382]: audit 2026-03-10T07:31:06.217745+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? 
192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59637-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:06 vm03 bash[23382]: audit 2026-03-10T07:31:06.217745+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59637-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:06 vm00 bash[28005]: cluster 2026-03-10T07:31:04.617807+0000 mgr.y (mgr.24407) 245 : cluster [DBG] pgmap v330: 292 pgs: 28 creating+peering, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 252 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:31:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:06 vm00 bash[28005]: cluster 2026-03-10T07:31:04.617807+0000 mgr.y (mgr.24407) 245 : cluster [DBG] pgmap v330: 292 pgs: 28 creating+peering, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 252 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:31:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:06 vm00 bash[28005]: audit 2026-03-10T07:31:06.199297+0000 mon.a (mon.0) 2068 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59637-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:31:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:06 vm00 bash[28005]: audit 2026-03-10T07:31:06.199297+0000 mon.a (mon.0) 2068 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59637-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:31:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:06 vm00 bash[28005]: cluster 2026-03-10T07:31:06.215607+0000 mon.a (mon.0) 2069 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-10T07:31:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:06 vm00 bash[28005]: cluster 2026-03-10T07:31:06.215607+0000 mon.a (mon.0) 2069 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-10T07:31:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:06 vm00 bash[28005]: audit 2026-03-10T07:31:06.217745+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59637-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:06 vm00 bash[28005]: audit 2026-03-10T07:31:06.217745+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? 
192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59637-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:06 vm00 bash[20701]: cluster 2026-03-10T07:31:04.617807+0000 mgr.y (mgr.24407) 245 : cluster [DBG] pgmap v330: 292 pgs: 28 creating+peering, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 252 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:31:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:06 vm00 bash[20701]: cluster 2026-03-10T07:31:04.617807+0000 mgr.y (mgr.24407) 245 : cluster [DBG] pgmap v330: 292 pgs: 28 creating+peering, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 252 active+clean; 8.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:31:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:06 vm00 bash[20701]: audit 2026-03-10T07:31:06.199297+0000 mon.a (mon.0) 2068 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59637-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:31:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:06 vm00 bash[20701]: audit 2026-03-10T07:31:06.199297+0000 mon.a (mon.0) 2068 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm00-59637-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:31:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:06 vm00 bash[20701]: cluster 2026-03-10T07:31:06.215607+0000 mon.a (mon.0) 2069 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-10T07:31:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:06 vm00 bash[20701]: cluster 2026-03-10T07:31:06.215607+0000 mon.a (mon.0) 2069 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-10T07:31:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:06 vm00 bash[20701]: audit 2026-03-10T07:31:06.217745+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59637-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:06 vm00 bash[20701]: audit 2026-03-10T07:31:06.217745+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? 
192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59637-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:08 vm03 bash[23382]: cluster 2026-03-10T07:31:06.618263+0000 mgr.y (mgr.24407) 246 : cluster [DBG] pgmap v333: 268 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:31:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:08 vm03 bash[23382]: cluster 2026-03-10T07:31:06.618263+0000 mgr.y (mgr.24407) 246 : cluster [DBG] pgmap v333: 268 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:31:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:08 vm03 bash[23382]: cluster 2026-03-10T07:31:07.217909+0000 mon.a (mon.0) 2071 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-10T07:31:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:08 vm03 bash[23382]: cluster 2026-03-10T07:31:07.217909+0000 mon.a (mon.0) 2071 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-10T07:31:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:08 vm03 bash[23382]: audit 2026-03-10T07:31:07.223591+0000 mon.b (mon.1) 274 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:08 vm03 bash[23382]: audit 2026-03-10T07:31:07.223591+0000 mon.b (mon.1) 274 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:08 vm03 bash[23382]: audit 2026-03-10T07:31:07.223684+0000 mon.b (mon.1) 275 : audit [INF] from='client.? 192.168.123.100:0/3252745372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59629-41"}]: dispatch 2026-03-10T07:31:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:08 vm03 bash[23382]: audit 2026-03-10T07:31:07.223684+0000 mon.b (mon.1) 275 : audit [INF] from='client.? 192.168.123.100:0/3252745372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59629-41"}]: dispatch 2026-03-10T07:31:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:08 vm03 bash[23382]: audit 2026-03-10T07:31:07.231400+0000 mon.a (mon.0) 2072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:08 vm03 bash[23382]: audit 2026-03-10T07:31:07.231400+0000 mon.a (mon.0) 2072 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:08 vm03 bash[23382]: audit 2026-03-10T07:31:07.231467+0000 mon.a (mon.0) 2073 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59629-41"}]: dispatch 2026-03-10T07:31:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:08 vm03 bash[23382]: audit 2026-03-10T07:31:07.231467+0000 mon.a (mon.0) 2073 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59629-41"}]: dispatch 2026-03-10T07:31:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:08 vm03 bash[23382]: cluster 2026-03-10T07:31:07.244267+0000 mon.a (mon.0) 2074 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:08 vm03 bash[23382]: cluster 2026-03-10T07:31:07.244267+0000 mon.a (mon.0) 2074 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:08.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:08 vm00 bash[28005]: cluster 2026-03-10T07:31:06.618263+0000 mgr.y (mgr.24407) 246 : cluster [DBG] pgmap v333: 268 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:31:08.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:08 vm00 bash[28005]: cluster 2026-03-10T07:31:06.618263+0000 mgr.y (mgr.24407) 246 : cluster [DBG] pgmap v333: 268 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:31:08.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:08 vm00 bash[28005]: cluster 2026-03-10T07:31:07.217909+0000 mon.a (mon.0) 2071 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-10T07:31:08.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:08 vm00 bash[28005]: cluster 2026-03-10T07:31:07.217909+0000 mon.a (mon.0) 2071 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-10T07:31:08.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:08 vm00 bash[28005]: audit 2026-03-10T07:31:07.223591+0000 mon.b (mon.1) 274 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:08.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:08 vm00 bash[28005]: audit 2026-03-10T07:31:07.223591+0000 mon.b (mon.1) 274 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:08.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:08 vm00 bash[28005]: audit 2026-03-10T07:31:07.223684+0000 mon.b (mon.1) 275 : audit [INF] from='client.? 
192.168.123.100:0/3252745372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59629-41"}]: dispatch
2026-03-10T07:31:08.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:08 vm00 bash[28005]: audit 2026-03-10T07:31:07.231400+0000 mon.a (mon.0) 2072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-35","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:08.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:08 vm00 bash[28005]: audit 2026-03-10T07:31:07.231467+0000 mon.a (mon.0) 2073 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59629-41"}]: dispatch
2026-03-10T07:31:08.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:08 vm00 bash[28005]: cluster 2026-03-10T07:31:07.244267+0000 mon.a (mon.0) 2074 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:31:08.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:08 vm00 bash[20701]: cluster 2026-03-10T07:31:06.618263+0000 mgr.y (mgr.24407) 246 : cluster [DBG] pgmap v333: 268 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:31:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:08 vm00 bash[20701]: cluster 2026-03-10T07:31:07.217909+0000 mon.a (mon.0) 2071 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in
2026-03-10T07:31:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:08 vm00 bash[20701]: audit 2026-03-10T07:31:07.223591+0000 mon.b (mon.1) 274 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-35","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:08 vm00 bash[20701]: audit 2026-03-10T07:31:07.223684+0000 mon.b (mon.1) 275 : audit [INF] from='client.? 192.168.123.100:0/3252745372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59629-41"}]: dispatch
2026-03-10T07:31:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:08 vm00 bash[20701]: audit 2026-03-10T07:31:07.231400+0000 mon.a (mon.0) 2072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-35","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:08 vm00 bash[20701]: audit 2026-03-10T07:31:07.231467+0000 mon.a (mon.0) 2073 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59629-41"}]: dispatch
2026-03-10T07:31:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:08 vm00 bash[20701]: cluster 2026-03-10T07:31:07.244267+0000 mon.a (mon.0) 2074 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:31:09.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:09 vm03 bash[23382]: audit 2026-03-10T07:31:08.207106+0000 mon.a (mon.0) 2075 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59637-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59637-50"}]': finished
2026-03-10T07:31:09.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:09 vm03 bash[23382]: audit 2026-03-10T07:31:08.207171+0000 mon.a (mon.0) 2076 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-35","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:31:09.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:09 vm03 bash[23382]: audit 2026-03-10T07:31:08.207201+0000 mon.a (mon.0) 2077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59629-41"}]': finished
2026-03-10T07:31:09.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:09 vm03 bash[23382]: audit 2026-03-10T07:31:08.209754+0000 mon.b (mon.1) 276 : audit [INF] from='client.? 192.168.123.100:0/3252745372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59629-41"}]: dispatch
2026-03-10T07:31:09.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:09 vm03 bash[23382]: audit 2026-03-10T07:31:08.210264+0000 mon.b (mon.1) 277 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-35", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:09.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:09 vm03 bash[23382]: cluster 2026-03-10T07:31:08.210798+0000 mon.a (mon.0) 2078 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in
2026-03-10T07:31:09.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:09 vm03 bash[23382]: audit 2026-03-10T07:31:08.217431+0000 mon.a (mon.0) 2079 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59629-41"}]: dispatch
2026-03-10T07:31:09.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:09 vm03 bash[23382]: audit 2026-03-10T07:31:08.217594+0000 mon.a (mon.0) 2080 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-35", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:09.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:09 vm03 bash[23382]: audit 2026-03-10T07:31:09.210799+0000 mon.a (mon.0) 2081 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59629-41"}]': finished
2026-03-10T07:31:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:09 vm00 bash[28005]: audit 2026-03-10T07:31:08.207106+0000 mon.a (mon.0) 2075 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59637-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59637-50"}]': finished
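Note on reading the audit entries above: each client command is recorded as a 'dispatch' when a monitor accepts it (in this excerpt the copy logged by the peon mon.b carries the client address, while the leader mon.a logs the forwarded copy with the address blanked) and later as a 'finished' record once the leader has committed the resulting map change. A single command can therefore be traced end to end by grepping for its pool or profile name, for example (a sketch; the log file name is hypothetical):

  # count dispatch vs finished records for one test profile
  grep 'testprofile-ExecuteClass_vm00-59629-41' teuthology.log \
    | grep -o ': dispatch\|: finished' | sort | uniq -c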
2026-03-10T07:31:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:09 vm00 bash[28005]: audit 2026-03-10T07:31:08.207171+0000 mon.a (mon.0) 2076 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-35","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:31:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:09 vm00 bash[28005]: audit 2026-03-10T07:31:08.207201+0000 mon.a (mon.0) 2077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59629-41"}]': finished
2026-03-10T07:31:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:09 vm00 bash[28005]: audit 2026-03-10T07:31:08.209754+0000 mon.b (mon.1) 276 : audit [INF] from='client.? 192.168.123.100:0/3252745372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59629-41"}]: dispatch
2026-03-10T07:31:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:09 vm00 bash[28005]: audit 2026-03-10T07:31:08.210264+0000 mon.b (mon.1) 277 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-35", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:09.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:09 vm00 bash[28005]: cluster 2026-03-10T07:31:08.210798+0000 mon.a (mon.0) 2078 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in
2026-03-10T07:31:09.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:09 vm00 bash[28005]: audit 2026-03-10T07:31:08.217431+0000 mon.a (mon.0) 2079 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59629-41"}]: dispatch
2026-03-10T07:31:09.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:09 vm00 bash[28005]: audit 2026-03-10T07:31:08.217594+0000 mon.a (mon.0) 2080 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-35", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:09.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:09 vm00 bash[28005]: audit 2026-03-10T07:31:09.210799+0000 mon.a (mon.0) 2081 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59629-41"}]': finished
2026-03-10T07:31:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:09 vm00 bash[20701]: audit 2026-03-10T07:31:08.207106+0000 mon.a (mon.0) 2075 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm00-59637-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm00-59637-50"}]': finished
2026-03-10T07:31:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:09 vm00 bash[20701]: audit 2026-03-10T07:31:08.207171+0000 mon.a (mon.0) 2076 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-35","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:31:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:09 vm00 bash[20701]: audit 2026-03-10T07:31:08.207201+0000 mon.a (mon.0) 2077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm00-59629-41"}]': finished
2026-03-10T07:31:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:09 vm00 bash[20701]: audit 2026-03-10T07:31:08.209754+0000 mon.b (mon.1) 276 : audit [INF] from='client.? 192.168.123.100:0/3252745372' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59629-41"}]: dispatch
2026-03-10T07:31:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:09 vm00 bash[20701]: audit 2026-03-10T07:31:08.210264+0000 mon.b (mon.1) 277 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-35", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:09 vm00 bash[20701]: cluster 2026-03-10T07:31:08.210798+0000 mon.a (mon.0) 2078 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in
2026-03-10T07:31:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:09 vm00 bash[20701]: audit 2026-03-10T07:31:08.217431+0000 mon.a (mon.0) 2079 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59629-41"}]: dispatch
2026-03-10T07:31:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:09 vm00 bash[20701]: audit 2026-03-10T07:31:08.217594+0000 mon.a (mon.0) 2080 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-35", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:09 vm00 bash[20701]: audit 2026-03-10T07:31:09.210799+0000 mon.a (mon.0) 2081 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm00-59629-41"}]': finished
2026-03-10T07:31:10.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:10 vm00 bash[28005]: cluster 2026-03-10T07:31:08.618671+0000 mgr.y (mgr.24407) 247 : cluster [DBG] pgmap v336: 300 pgs: 40 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
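Note: the audit entries above record one iteration of the erasure-coded pool lifecycle that the rados_api_tests workunit cycles through: create a pool against a per-test profile, then tear down the previous test's CRUSH rule and profile. In CLI form the same mon commands would look roughly like this (a sketch; the tests issue them through librados, and the RoundTripPP2 profile was defined before this excerpt):

  ceph osd pool create RoundTripPP2_vm00-59637-50 8 8 erasure testprofile-RoundTripPP2_vm00-59637-50
  ceph osd crush rule rm ExecuteClass_vm00-59629-41
  ceph osd erasure-code-profile rm testprofile-ExecuteClass_vm00-59629-41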
2026-03-10T07:31:10.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:10 vm00 bash[28005]: audit 2026-03-10T07:31:09.210915+0000 mon.a (mon.0) 2082 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-35", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:31:10.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:10 vm00 bash[28005]: audit 2026-03-10T07:31:09.213018+0000 mon.b (mon.1) 278 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]: dispatch
2026-03-10T07:31:10.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:10 vm00 bash[28005]: cluster 2026-03-10T07:31:09.220737+0000 mon.a (mon.0) 2083 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in
2026-03-10T07:31:10.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:10 vm00 bash[28005]: audit 2026-03-10T07:31:09.228987+0000 mon.b (mon.1) 279 : audit [INF] from='client.? 192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]: dispatch
2026-03-10T07:31:10.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:10 vm00 bash[28005]: audit 2026-03-10T07:31:09.229783+0000 mon.a (mon.0) 2084 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]: dispatch
2026-03-10T07:31:10.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:10 vm00 bash[28005]: audit 2026-03-10T07:31:09.232238+0000 mon.b (mon.1) 280 : audit [INF] from='client.? 192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59629-42"}]: dispatch
2026-03-10T07:31:10.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:10 vm00 bash[28005]: audit 2026-03-10T07:31:09.233219+0000 mon.a (mon.0) 2085 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]: dispatch
2026-03-10T07:31:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:10 vm00 bash[28005]: audit 2026-03-10T07:31:09.240296+0000 mon.b (mon.1) 281 : audit [INF] from='client.? 192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59629-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:10 vm00 bash[28005]: audit 2026-03-10T07:31:09.240765+0000 mon.a (mon.0) 2086 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59629-42"}]: dispatch
2026-03-10T07:31:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:10 vm00 bash[28005]: audit 2026-03-10T07:31:09.263979+0000 mon.a (mon.0) 2087 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59629-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:10 vm00 bash[28005]: audit 2026-03-10T07:31:09.327518+0000 mon.c (mon.2) 255 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:31:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:10 vm00 bash[20701]: cluster 2026-03-10T07:31:08.618671+0000 mgr.y (mgr.24407) 247 : cluster [DBG] pgmap v336: 300 pgs: 40 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:31:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:10 vm00 bash[20701]: audit 2026-03-10T07:31:09.210915+0000 mon.a (mon.0) 2082 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-35", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:31:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:10 vm00 bash[20701]: audit 2026-03-10T07:31:09.213018+0000 mon.b (mon.1) 278 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]: dispatch
2026-03-10T07:31:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:10 vm00 bash[20701]: cluster 2026-03-10T07:31:09.220737+0000 mon.a (mon.0) 2083 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in
2026-03-10T07:31:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:10 vm00 bash[20701]: audit 2026-03-10T07:31:09.228987+0000 mon.b (mon.1) 279 : audit [INF] from='client.? 192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]: dispatch
2026-03-10T07:31:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:10 vm00 bash[20701]: audit 2026-03-10T07:31:09.229783+0000 mon.a (mon.0) 2084 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]: dispatch
2026-03-10T07:31:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:10 vm00 bash[20701]: audit 2026-03-10T07:31:09.232238+0000 mon.b (mon.1) 280 : audit [INF] from='client.? 192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59629-42"}]: dispatch
2026-03-10T07:31:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:10 vm00 bash[20701]: audit 2026-03-10T07:31:09.233219+0000 mon.a (mon.0) 2085 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]: dispatch
2026-03-10T07:31:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:10 vm00 bash[20701]: audit 2026-03-10T07:31:09.240296+0000 mon.b (mon.1) 281 : audit [INF] from='client.? 192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59629-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:10 vm00 bash[20701]: audit 2026-03-10T07:31:09.240765+0000 mon.a (mon.0) 2086 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59629-42"}]: dispatch
2026-03-10T07:31:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:10 vm00 bash[20701]: audit 2026-03-10T07:31:09.263979+0000 mon.a (mon.0) 2087 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59629-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:10 vm00 bash[20701]: audit 2026-03-10T07:31:09.327518+0000 mon.c (mon.2) 255 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:31:10.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:10 vm03 bash[23382]: cluster 2026-03-10T07:31:08.618671+0000 mgr.y (mgr.24407) 247 : cluster [DBG] pgmap v336: 300 pgs: 40 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 248 active+clean; 8.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
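Note: the "osd erasure-code-profile set" entries above define a k=2, m=1 profile with the failure domain lowered to the OSD, which is what lets a 3-shard EC pool map onto this two-host, 8-OSD cluster. The CLI equivalent would be roughly (a sketch; the test drives this through librados):

  # 2 data shards + 1 coding shard; place shards on distinct OSDs rather than hosts
  ceph osd erasure-code-profile set testprofile-MultiWrite_vm00-59629-42 k=2 m=1 crush-failure-domain=osd
  ceph osd erasure-code-profile get testprofile-MultiWrite_vm00-59629-42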
2026-03-10T07:31:10.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:10 vm03 bash[23382]: audit 2026-03-10T07:31:09.210915+0000 mon.a (mon.0) 2082 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-35", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:31:10.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:10 vm03 bash[23382]: audit 2026-03-10T07:31:09.213018+0000 mon.b (mon.1) 278 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]: dispatch
2026-03-10T07:31:10.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:10 vm03 bash[23382]: cluster 2026-03-10T07:31:09.220737+0000 mon.a (mon.0) 2083 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in
2026-03-10T07:31:10.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:10 vm03 bash[23382]: audit 2026-03-10T07:31:09.228987+0000 mon.b (mon.1) 279 : audit [INF] from='client.? 192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]: dispatch
2026-03-10T07:31:10.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:10 vm03 bash[23382]: audit 2026-03-10T07:31:09.229783+0000 mon.a (mon.0) 2084 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]: dispatch
2026-03-10T07:31:10.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:10 vm03 bash[23382]: audit 2026-03-10T07:31:09.232238+0000 mon.b (mon.1) 280 : audit [INF] from='client.? 192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59629-42"}]: dispatch
2026-03-10T07:31:10.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:10 vm03 bash[23382]: audit 2026-03-10T07:31:09.233219+0000 mon.a (mon.0) 2085 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]: dispatch
2026-03-10T07:31:10.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:10 vm03 bash[23382]: audit 2026-03-10T07:31:09.240296+0000 mon.b (mon.1) 281 : audit [INF] from='client.? 192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59629-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:10.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:10 vm03 bash[23382]: audit 2026-03-10T07:31:09.240765+0000 mon.a (mon.0) 2086 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59629-42"}]: dispatch
2026-03-10T07:31:10.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:10 vm03 bash[23382]: audit 2026-03-10T07:31:09.263979+0000 mon.a (mon.0) 2087 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59629-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:10.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:10 vm03 bash[23382]: audit 2026-03-10T07:31:09.327518+0000 mon.c (mon.2) 255 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:31:11.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:11 vm00 bash[28005]: audit 2026-03-10T07:31:10.283962+0000 mon.a (mon.0) 2088 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]': finished
2026-03-10T07:31:11.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:11 vm00 bash[28005]: audit 2026-03-10T07:31:10.284019+0000 mon.a (mon.0) 2089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59629-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:31:11.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:11 vm00 bash[28005]: cluster 2026-03-10T07:31:10.289613+0000 mon.a (mon.0) 2090 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in
2026-03-10T07:31:11.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:11 vm00 bash[28005]: audit 2026-03-10T07:31:10.291298+0000 mon.b (mon.1) 282 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-35", "mode": "writeback"}]: dispatch
2026-03-10T07:31:11.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:11 vm00 bash[28005]: audit 2026-03-10T07:31:10.291382+0000 mon.b (mon.1) 283 : audit [INF] from='client.? 192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59629-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59629-42"}]: dispatch
2026-03-10T07:31:11.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:11 vm00 bash[28005]: audit 2026-03-10T07:31:10.294438+0000 mon.a (mon.0) 2091 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59637-50"}]: dispatch
2026-03-10T07:31:11.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:11 vm00 bash[28005]: audit 2026-03-10T07:31:10.324589+0000 mon.a (mon.0) 2092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-35", "mode": "writeback"}]: dispatch
2026-03-10T07:31:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:11 vm00 bash[28005]: audit 2026-03-10T07:31:10.324864+0000 mon.a (mon.0) 2093 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59629-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59629-42"}]: dispatch
2026-03-10T07:31:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:11 vm00 bash[28005]: cluster 2026-03-10T07:31:11.284110+0000 mon.a (mon.0) 2094 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:31:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:11 vm00 bash[28005]: audit 2026-03-10T07:31:11.287354+0000 mon.a (mon.0) 2095 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59637-50"}]': finished
2026-03-10T07:31:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:11 vm00 bash[28005]: audit 2026-03-10T07:31:11.287445+0000 mon.a (mon.0) 2096 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-35", "mode": "writeback"}]': finished
2026-03-10T07:31:11.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:31:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:31:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:31:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:11 vm00 bash[20701]: audit 2026-03-10T07:31:10.283962+0000 mon.a (mon.0) 2088 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]': finished
2026-03-10T07:31:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:11 vm00 bash[20701]: audit 2026-03-10T07:31:10.284019+0000 mon.a (mon.0) 2089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59629-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:31:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:11 vm00 bash[20701]: cluster 2026-03-10T07:31:10.289613+0000 mon.a (mon.0) 2090 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in
2026-03-10T07:31:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:11 vm00 bash[20701]: audit 2026-03-10T07:31:10.291298+0000 mon.b (mon.1) 282 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-35", "mode": "writeback"}]: dispatch
2026-03-10T07:31:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:11 vm00 bash[20701]: audit 2026-03-10T07:31:10.291382+0000 mon.b (mon.1) 283 : audit [INF] from='client.? 192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59629-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59629-42"}]: dispatch
2026-03-10T07:31:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:11 vm00 bash[20701]: audit 2026-03-10T07:31:10.294438+0000 mon.a (mon.0) 2091 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59637-50"}]: dispatch
2026-03-10T07:31:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:11 vm00 bash[20701]: audit 2026-03-10T07:31:10.324589+0000 mon.a (mon.0) 2092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-35", "mode": "writeback"}]: dispatch
2026-03-10T07:31:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:11 vm00 bash[20701]: audit 2026-03-10T07:31:10.324864+0000 mon.a (mon.0) 2093 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59629-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59629-42"}]: dispatch
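Note: the tier add / set-overlay / cache-mode sequence above is the cache-tier setup the test exercises: test-rados-api-vm00-59782-35 is attached as a writeback cache in front of test-rados-api-vm00-59782-6, and the CACHE_POOL_NO_HIT_SET warning fires because no hit-set tracking is configured on the cache pool. As CLI this is roughly (a sketch; the hit-set settings shown are one way to clear the warning, not something this test does):

  ceph osd tier add test-rados-api-vm00-59782-6 test-rados-api-vm00-59782-35 --force-nonempty
  ceph osd tier set-overlay test-rados-api-vm00-59782-6 test-rados-api-vm00-59782-35
  ceph osd tier cache-mode test-rados-api-vm00-59782-35 writeback
  # hit-set tracking would silence CACHE_POOL_NO_HIT_SET:
  ceph osd pool set test-rados-api-vm00-59782-35 hit_set_type bloom
  ceph osd pool set test-rados-api-vm00-59782-35 hit_set_count 8
  ceph osd pool set test-rados-api-vm00-59782-35 hit_set_period 60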
2026-03-10T07:31:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:11 vm00 bash[20701]: cluster 2026-03-10T07:31:11.284110+0000 mon.a (mon.0) 2094 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:31:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:11 vm00 bash[20701]: audit 2026-03-10T07:31:11.287354+0000 mon.a (mon.0) 2095 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59637-50"}]': finished
2026-03-10T07:31:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:11 vm00 bash[20701]: audit 2026-03-10T07:31:11.287445+0000 mon.a (mon.0) 2096 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-35", "mode": "writeback"}]': finished
2026-03-10T07:31:11.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:11 vm03 bash[23382]: audit 2026-03-10T07:31:10.283962+0000 mon.a (mon.0) 2088 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]': finished
2026-03-10T07:31:11.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:11 vm03 bash[23382]: audit 2026-03-10T07:31:10.284019+0000 mon.a (mon.0) 2089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm00-59629-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:31:11.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:11 vm03 bash[23382]: cluster 2026-03-10T07:31:10.289613+0000 mon.a (mon.0) 2090 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in
2026-03-10T07:31:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:11 vm03 bash[23382]: audit 2026-03-10T07:31:10.291298+0000 mon.b (mon.1) 282 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-35", "mode": "writeback"}]: dispatch
2026-03-10T07:31:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:11 vm03 bash[23382]: audit 2026-03-10T07:31:10.291382+0000 mon.b (mon.1) 283 : audit [INF] from='client.? 192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59629-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59629-42"}]: dispatch
2026-03-10T07:31:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:11 vm03 bash[23382]: audit 2026-03-10T07:31:10.294438+0000 mon.a (mon.0) 2091 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59637-50"}]: dispatch
2026-03-10T07:31:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:11 vm03 bash[23382]: audit 2026-03-10T07:31:10.324589+0000 mon.a (mon.0) 2092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-35", "mode": "writeback"}]: dispatch
2026-03-10T07:31:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:11 vm03 bash[23382]: audit 2026-03-10T07:31:10.324864+0000 mon.a (mon.0) 2093 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59629-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59629-42"}]: dispatch
2026-03-10T07:31:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:11 vm03 bash[23382]: cluster 2026-03-10T07:31:11.284110+0000 mon.a (mon.0) 2094 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:31:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:11 vm03 bash[23382]: audit 2026-03-10T07:31:11.287354+0000 mon.a (mon.0) 2095 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm00-59637-50"}]': finished
2026-03-10T07:31:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:11 vm03 bash[23382]: audit 2026-03-10T07:31:11.287445+0000 mon.a (mon.0) 2096 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-35", "mode": "writeback"}]': finished
2026-03-10T07:31:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:11 vm03 bash[23382]: audit 2026-03-10T07:31:11.287445+0000 mon.a (mon.0) 2096 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-35", "mode": "writeback"}]': finished 2026-03-10T07:31:12.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:12 vm00 bash[28005]: cluster 2026-03-10T07:31:10.618941+0000 mgr.y (mgr.24407) 248 : cluster [DBG] pgmap v339: 292 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 8.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:12.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:12 vm00 bash[28005]: cluster 2026-03-10T07:31:10.618941+0000 mgr.y (mgr.24407) 248 : cluster [DBG] pgmap v339: 292 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 8.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:12.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:12 vm00 bash[28005]: cluster 2026-03-10T07:31:11.302175+0000 mon.a (mon.0) 2097 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-10T07:31:12.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:12 vm00 bash[28005]: cluster 2026-03-10T07:31:11.302175+0000 mon.a (mon.0) 2097 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-10T07:31:12.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:12 vm00 bash[28005]: audit 2026-03-10T07:31:11.304227+0000 mon.a (mon.0) 2098 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:12.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:12 vm00 bash[28005]: audit 2026-03-10T07:31:11.304227+0000 mon.a (mon.0) 2098 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:12.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:12 vm00 bash[28005]: audit 2026-03-10T07:31:12.290999+0000 mon.a (mon.0) 2099 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59629-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59629-42"}]': finished 2026-03-10T07:31:12.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:12 vm00 bash[28005]: audit 2026-03-10T07:31:12.290999+0000 mon.a (mon.0) 2099 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59629-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59629-42"}]': finished 2026-03-10T07:31:12.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:12 vm00 bash[28005]: audit 2026-03-10T07:31:12.291103+0000 mon.a (mon.0) 2100 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59637-50"}]': finished 2026-03-10T07:31:12.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:12 vm00 bash[28005]: audit 2026-03-10T07:31:12.291103+0000 mon.a (mon.0) 2100 : audit [INF] from='client.? 
192.168.123.100:0/113791467' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59637-50"}]': finished 2026-03-10T07:31:12.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:12 vm00 bash[28005]: cluster 2026-03-10T07:31:12.294867+0000 mon.a (mon.0) 2101 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-10T07:31:12.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:12 vm00 bash[28005]: cluster 2026-03-10T07:31:12.294867+0000 mon.a (mon.0) 2101 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-10T07:31:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:12 vm00 bash[20701]: cluster 2026-03-10T07:31:10.618941+0000 mgr.y (mgr.24407) 248 : cluster [DBG] pgmap v339: 292 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 8.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:12 vm00 bash[20701]: cluster 2026-03-10T07:31:10.618941+0000 mgr.y (mgr.24407) 248 : cluster [DBG] pgmap v339: 292 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 8.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:12 vm00 bash[20701]: cluster 2026-03-10T07:31:11.302175+0000 mon.a (mon.0) 2097 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-10T07:31:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:12 vm00 bash[20701]: cluster 2026-03-10T07:31:11.302175+0000 mon.a (mon.0) 2097 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-10T07:31:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:12 vm00 bash[20701]: audit 2026-03-10T07:31:11.304227+0000 mon.a (mon.0) 2098 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:12 vm00 bash[20701]: audit 2026-03-10T07:31:11.304227+0000 mon.a (mon.0) 2098 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:12 vm00 bash[20701]: audit 2026-03-10T07:31:12.290999+0000 mon.a (mon.0) 2099 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59629-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59629-42"}]': finished 2026-03-10T07:31:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:12 vm00 bash[20701]: audit 2026-03-10T07:31:12.290999+0000 mon.a (mon.0) 2099 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59629-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59629-42"}]': finished 2026-03-10T07:31:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:12 vm00 bash[20701]: audit 2026-03-10T07:31:12.291103+0000 mon.a (mon.0) 2100 : audit [INF] from='client.? 
192.168.123.100:0/113791467' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59637-50"}]': finished 2026-03-10T07:31:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:12 vm00 bash[20701]: audit 2026-03-10T07:31:12.291103+0000 mon.a (mon.0) 2100 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59637-50"}]': finished 2026-03-10T07:31:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:12 vm00 bash[20701]: cluster 2026-03-10T07:31:12.294867+0000 mon.a (mon.0) 2101 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-10T07:31:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:12 vm00 bash[20701]: cluster 2026-03-10T07:31:12.294867+0000 mon.a (mon.0) 2101 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-10T07:31:12.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:12 vm03 bash[23382]: cluster 2026-03-10T07:31:10.618941+0000 mgr.y (mgr.24407) 248 : cluster [DBG] pgmap v339: 292 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 8.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:12.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:12 vm03 bash[23382]: cluster 2026-03-10T07:31:10.618941+0000 mgr.y (mgr.24407) 248 : cluster [DBG] pgmap v339: 292 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 8.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:12.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:12 vm03 bash[23382]: cluster 2026-03-10T07:31:11.302175+0000 mon.a (mon.0) 2097 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-10T07:31:12.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:12 vm03 bash[23382]: cluster 2026-03-10T07:31:11.302175+0000 mon.a (mon.0) 2097 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-10T07:31:12.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:12 vm03 bash[23382]: audit 2026-03-10T07:31:11.304227+0000 mon.a (mon.0) 2098 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:12.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:12 vm03 bash[23382]: audit 2026-03-10T07:31:11.304227+0000 mon.a (mon.0) 2098 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59637-50"}]: dispatch 2026-03-10T07:31:12.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:12 vm03 bash[23382]: audit 2026-03-10T07:31:12.290999+0000 mon.a (mon.0) 2099 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59629-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59629-42"}]': finished 2026-03-10T07:31:12.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:12 vm03 bash[23382]: audit 2026-03-10T07:31:12.290999+0000 mon.a (mon.0) 2099 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm00-59629-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm00-59629-42"}]': finished 2026-03-10T07:31:12.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:12 vm03 bash[23382]: audit 2026-03-10T07:31:12.291103+0000 mon.a (mon.0) 2100 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59637-50"}]': finished 2026-03-10T07:31:12.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:12 vm03 bash[23382]: audit 2026-03-10T07:31:12.291103+0000 mon.a (mon.0) 2100 : audit [INF] from='client.? 192.168.123.100:0/113791467' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm00-59637-50"}]': finished 2026-03-10T07:31:12.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:12 vm03 bash[23382]: cluster 2026-03-10T07:31:12.294867+0000 mon.a (mon.0) 2101 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-10T07:31:12.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:12 vm03 bash[23382]: cluster 2026-03-10T07:31:12.294867+0000 mon.a (mon.0) 2101 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-10T07:31:13.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:31:13 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:31:14.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:14 vm00 bash[28005]: cluster 2026-03-10T07:31:12.619267+0000 mgr.y (mgr.24407) 249 : cluster [DBG] pgmap v342: 300 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 8.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:14.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:14 vm00 bash[28005]: cluster 2026-03-10T07:31:12.619267+0000 mgr.y (mgr.24407) 249 : cluster [DBG] pgmap v342: 300 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 8.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:14.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:14 vm00 bash[28005]: audit 2026-03-10T07:31:13.098729+0000 mgr.y (mgr.24407) 250 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:14.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:14 vm00 bash[28005]: audit 2026-03-10T07:31:13.098729+0000 mgr.y (mgr.24407) 250 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:14.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:14 vm00 bash[28005]: cluster 2026-03-10T07:31:13.308289+0000 mon.a (mon.0) 2102 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-10T07:31:14.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:14 vm00 bash[28005]: cluster 2026-03-10T07:31:13.308289+0000 mon.a (mon.0) 2102 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-10T07:31:14.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:14 vm00 bash[28005]: audit 2026-03-10T07:31:13.315916+0000 mon.a (mon.0) 2103 : audit [INF] from='client.? 
192.168.123.100:0/3942367414' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59637-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:14.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:14 vm00 bash[28005]: audit 2026-03-10T07:31:13.315916+0000 mon.a (mon.0) 2103 : audit [INF] from='client.? 192.168.123.100:0/3942367414' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59637-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:14.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:14 vm00 bash[28005]: cluster 2026-03-10T07:31:13.329740+0000 mon.a (mon.0) 2104 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:14.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:14 vm00 bash[28005]: cluster 2026-03-10T07:31:13.329740+0000 mon.a (mon.0) 2104 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:14.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:14 vm00 bash[28005]: audit 2026-03-10T07:31:13.356455+0000 mon.b (mon.1) 284 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:14.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:14 vm00 bash[28005]: audit 2026-03-10T07:31:13.356455+0000 mon.b (mon.1) 284 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:14.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:14 vm00 bash[28005]: audit 2026-03-10T07:31:13.358212+0000 mon.a (mon.0) 2105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:14.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:14 vm00 bash[28005]: audit 2026-03-10T07:31:13.358212+0000 mon.a (mon.0) 2105 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:14 vm00 bash[20701]: cluster 2026-03-10T07:31:12.619267+0000 mgr.y (mgr.24407) 249 : cluster [DBG] pgmap v342: 300 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 8.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:14 vm00 bash[20701]: cluster 2026-03-10T07:31:12.619267+0000 mgr.y (mgr.24407) 249 : cluster [DBG] pgmap v342: 300 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 8.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:14 vm00 bash[20701]: audit 2026-03-10T07:31:13.098729+0000 mgr.y (mgr.24407) 250 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:14 vm00 bash[20701]: audit 2026-03-10T07:31:13.098729+0000 mgr.y (mgr.24407) 250 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:14 vm00 bash[20701]: cluster 2026-03-10T07:31:13.308289+0000 mon.a (mon.0) 2102 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-10T07:31:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:14 vm00 bash[20701]: cluster 2026-03-10T07:31:13.308289+0000 mon.a (mon.0) 2102 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-10T07:31:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:14 vm00 bash[20701]: audit 2026-03-10T07:31:13.315916+0000 mon.a (mon.0) 2103 : audit [INF] from='client.? 192.168.123.100:0/3942367414' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59637-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:14 vm00 bash[20701]: audit 2026-03-10T07:31:13.315916+0000 mon.a (mon.0) 2103 : audit [INF] from='client.? 192.168.123.100:0/3942367414' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59637-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:14 vm00 bash[20701]: cluster 2026-03-10T07:31:13.329740+0000 mon.a (mon.0) 2104 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:14 vm00 bash[20701]: cluster 2026-03-10T07:31:13.329740+0000 mon.a (mon.0) 2104 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:14 vm00 bash[20701]: audit 2026-03-10T07:31:13.356455+0000 mon.b (mon.1) 284 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:14 vm00 bash[20701]: audit 2026-03-10T07:31:13.356455+0000 mon.b (mon.1) 284 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:14 vm00 bash[20701]: audit 2026-03-10T07:31:13.358212+0000 mon.a (mon.0) 2105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:14 vm00 bash[20701]: audit 2026-03-10T07:31:13.358212+0000 mon.a (mon.0) 2105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:14.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:14 vm03 bash[23382]: cluster 2026-03-10T07:31:12.619267+0000 mgr.y (mgr.24407) 249 : cluster [DBG] pgmap v342: 300 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 8.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:14 vm03 bash[23382]: cluster 2026-03-10T07:31:12.619267+0000 mgr.y (mgr.24407) 249 : cluster [DBG] pgmap v342: 300 pgs: 8 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 280 active+clean; 8.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:14 vm03 bash[23382]: audit 2026-03-10T07:31:13.098729+0000 mgr.y (mgr.24407) 250 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:14 vm03 bash[23382]: audit 2026-03-10T07:31:13.098729+0000 mgr.y (mgr.24407) 250 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:14 vm03 bash[23382]: cluster 2026-03-10T07:31:13.308289+0000 mon.a (mon.0) 2102 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-10T07:31:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:14 vm03 bash[23382]: cluster 2026-03-10T07:31:13.308289+0000 mon.a (mon.0) 2102 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-10T07:31:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:14 vm03 bash[23382]: audit 2026-03-10T07:31:13.315916+0000 mon.a (mon.0) 2103 : audit [INF] from='client.? 192.168.123.100:0/3942367414' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59637-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:14 vm03 bash[23382]: audit 2026-03-10T07:31:13.315916+0000 mon.a (mon.0) 2103 : audit [INF] from='client.? 
192.168.123.100:0/3942367414' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59637-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:14 vm03 bash[23382]: cluster 2026-03-10T07:31:13.329740+0000 mon.a (mon.0) 2104 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:14 vm03 bash[23382]: cluster 2026-03-10T07:31:13.329740+0000 mon.a (mon.0) 2104 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:14 vm03 bash[23382]: audit 2026-03-10T07:31:13.356455+0000 mon.b (mon.1) 284 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:14 vm03 bash[23382]: audit 2026-03-10T07:31:13.356455+0000 mon.b (mon.1) 284 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:14 vm03 bash[23382]: audit 2026-03-10T07:31:13.358212+0000 mon.a (mon.0) 2105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:14 vm03 bash[23382]: audit 2026-03-10T07:31:13.358212+0000 mon.a (mon.0) 2105 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:15.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:15 vm00 bash[28005]: audit 2026-03-10T07:31:14.298374+0000 mon.a (mon.0) 2106 : audit [INF] from='client.? 192.168.123.100:0/3942367414' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59637-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:31:15.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:15 vm00 bash[28005]: audit 2026-03-10T07:31:14.298374+0000 mon.a (mon.0) 2106 : audit [INF] from='client.? 192.168.123.100:0/3942367414' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59637-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:31:15.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:15 vm00 bash[28005]: audit 2026-03-10T07:31:14.298458+0000 mon.a (mon.0) 2107 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:15.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:15 vm00 bash[28005]: audit 2026-03-10T07:31:14.298458+0000 mon.a (mon.0) 2107 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:15.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:15 vm00 bash[28005]: audit 2026-03-10T07:31:14.303658+0000 mon.b (mon.1) 285 : audit [INF] from='client.? 
192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]: dispatch 2026-03-10T07:31:15.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:15 vm00 bash[28005]: audit 2026-03-10T07:31:14.303658+0000 mon.b (mon.1) 285 : audit [INF] from='client.? 192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]: dispatch 2026-03-10T07:31:15.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:15 vm00 bash[28005]: cluster 2026-03-10T07:31:14.303963+0000 mon.a (mon.0) 2108 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-10T07:31:15.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:15 vm00 bash[28005]: cluster 2026-03-10T07:31:14.303963+0000 mon.a (mon.0) 2108 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-10T07:31:15.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:15 vm00 bash[28005]: audit 2026-03-10T07:31:14.308212+0000 mon.a (mon.0) 2109 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]: dispatch 2026-03-10T07:31:15.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:15 vm00 bash[28005]: audit 2026-03-10T07:31:14.308212+0000 mon.a (mon.0) 2109 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]: dispatch 2026-03-10T07:31:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:15 vm00 bash[28005]: audit 2026-03-10T07:31:14.344596+0000 mon.b (mon.1) 286 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]: dispatch 2026-03-10T07:31:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:15 vm00 bash[28005]: audit 2026-03-10T07:31:14.344596+0000 mon.b (mon.1) 286 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]: dispatch 2026-03-10T07:31:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:15 vm00 bash[28005]: audit 2026-03-10T07:31:14.346314+0000 mon.a (mon.0) 2110 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]: dispatch 2026-03-10T07:31:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:15 vm00 bash[28005]: audit 2026-03-10T07:31:14.346314+0000 mon.a (mon.0) 2110 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]: dispatch 2026-03-10T07:31:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:15 vm00 bash[20701]: audit 2026-03-10T07:31:14.298374+0000 mon.a (mon.0) 2106 : audit [INF] from='client.? 192.168.123.100:0/3942367414' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59637-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:31:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:15 vm00 bash[20701]: audit 2026-03-10T07:31:14.298374+0000 mon.a (mon.0) 2106 : audit [INF] from='client.? 
192.168.123.100:0/3942367414' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59637-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:31:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:15 vm00 bash[20701]: audit 2026-03-10T07:31:14.298458+0000 mon.a (mon.0) 2107 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:15 vm00 bash[20701]: audit 2026-03-10T07:31:14.298458+0000 mon.a (mon.0) 2107 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:15 vm00 bash[20701]: audit 2026-03-10T07:31:14.303658+0000 mon.b (mon.1) 285 : audit [INF] from='client.? 192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]: dispatch 2026-03-10T07:31:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:15 vm00 bash[20701]: audit 2026-03-10T07:31:14.303658+0000 mon.b (mon.1) 285 : audit [INF] from='client.? 192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]: dispatch 2026-03-10T07:31:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:15 vm00 bash[20701]: cluster 2026-03-10T07:31:14.303963+0000 mon.a (mon.0) 2108 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-10T07:31:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:15 vm00 bash[20701]: cluster 2026-03-10T07:31:14.303963+0000 mon.a (mon.0) 2108 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-10T07:31:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:15 vm00 bash[20701]: audit 2026-03-10T07:31:14.308212+0000 mon.a (mon.0) 2109 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]: dispatch 2026-03-10T07:31:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:15 vm00 bash[20701]: audit 2026-03-10T07:31:14.308212+0000 mon.a (mon.0) 2109 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]: dispatch 2026-03-10T07:31:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:15 vm00 bash[20701]: audit 2026-03-10T07:31:14.344596+0000 mon.b (mon.1) 286 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]: dispatch 2026-03-10T07:31:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:15 vm00 bash[20701]: audit 2026-03-10T07:31:14.344596+0000 mon.b (mon.1) 286 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]: dispatch 2026-03-10T07:31:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:15 vm00 bash[20701]: audit 2026-03-10T07:31:14.346314+0000 mon.a (mon.0) 2110 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]: dispatch 2026-03-10T07:31:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:15 vm00 bash[20701]: audit 2026-03-10T07:31:14.346314+0000 mon.a (mon.0) 2110 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]: dispatch 2026-03-10T07:31:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:15 vm03 bash[23382]: audit 2026-03-10T07:31:14.298374+0000 mon.a (mon.0) 2106 : audit [INF] from='client.? 192.168.123.100:0/3942367414' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59637-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:31:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:15 vm03 bash[23382]: audit 2026-03-10T07:31:14.298374+0000 mon.a (mon.0) 2106 : audit [INF] from='client.? 192.168.123.100:0/3942367414' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm00-59637-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:31:15.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:15 vm03 bash[23382]: audit 2026-03-10T07:31:14.298458+0000 mon.a (mon.0) 2107 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:15.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:15 vm03 bash[23382]: audit 2026-03-10T07:31:14.298458+0000 mon.a (mon.0) 2107 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:15.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:15 vm03 bash[23382]: audit 2026-03-10T07:31:14.303658+0000 mon.b (mon.1) 285 : audit [INF] from='client.? 192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]: dispatch 2026-03-10T07:31:15.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:15 vm03 bash[23382]: audit 2026-03-10T07:31:14.303658+0000 mon.b (mon.1) 285 : audit [INF] from='client.? 192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]: dispatch 2026-03-10T07:31:15.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:15 vm03 bash[23382]: cluster 2026-03-10T07:31:14.303963+0000 mon.a (mon.0) 2108 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-10T07:31:15.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:15 vm03 bash[23382]: cluster 2026-03-10T07:31:14.303963+0000 mon.a (mon.0) 2108 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-10T07:31:15.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:15 vm03 bash[23382]: audit 2026-03-10T07:31:14.308212+0000 mon.a (mon.0) 2109 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]: dispatch 2026-03-10T07:31:15.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:15 vm03 bash[23382]: audit 2026-03-10T07:31:14.308212+0000 mon.a (mon.0) 2109 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]: dispatch 2026-03-10T07:31:15.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:15 vm03 bash[23382]: audit 2026-03-10T07:31:14.344596+0000 mon.b (mon.1) 286 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]: dispatch 2026-03-10T07:31:15.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:15 vm03 bash[23382]: audit 2026-03-10T07:31:14.344596+0000 mon.b (mon.1) 286 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]: dispatch 2026-03-10T07:31:15.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:15 vm03 bash[23382]: audit 2026-03-10T07:31:14.346314+0000 mon.a (mon.0) 2110 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]: dispatch 2026-03-10T07:31:15.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:15 vm03 bash[23382]: audit 2026-03-10T07:31:14.346314+0000 mon.a (mon.0) 2110 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]: dispatch 2026-03-10T07:31:16.365 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.SimpleWrite (7479 ms) 2026-03-10T07:31:16.365 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.WaitForComplete 2026-03-10T07:31:16.365 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.WaitForComplete (7177 ms) 2026-03-10T07:31:16.365 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTrip 2026-03-10T07:31:16.365 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTrip (7050 ms) 2026-03-10T07:31:16.365 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTrip2 2026-03-10T07:31:16.365 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTrip2 (7083 ms) 2026-03-10T07:31:16.365 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTripAppend 2026-03-10T07:31:16.365 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTripAppend (7270 ms) 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.IsComplete 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.IsComplete (7115 ms) 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.IsSafe 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.IsSafe (7031 ms) 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.ReturnValue 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.ReturnValue (7115 ms) 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.Flush 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.Flush (6874 ms) 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ 
RUN ] LibRadosAioEC.FlushAsync 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.FlushAsync (7301 ms) 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTripWriteFull 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTripWriteFull (7019 ms) 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.SimpleStat 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.SimpleStat (7047 ms) 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.SimpleStatNS 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.SimpleStatNS (6494 ms) 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.StatRemove 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.StatRemove (7142 ms) 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.ExecuteClass 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.ExecuteClass (7020 ms) 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ RUN ] LibRadosAioEC.MultiWrite 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ OK ] LibRadosAioEC.MultiWrite (7152 ms) 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [----------] 16 tests from LibRadosAioEC (113369 ms total) 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [----------] Global test environment tear-down 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [==========] 42 tests from 2 test suites ran. (192392 ms total) 2026-03-10T07:31:16.366 INFO:tasks.workunit.client.0.vm00.stdout: api_aio: [ PASSED ] 42 tests. 2026-03-10T07:31:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:16 vm00 bash[28005]: cluster 2026-03-10T07:31:14.619834+0000 mgr.y (mgr.24407) 251 : cluster [DBG] pgmap v345: 324 pgs: 1 creating+activating, 24 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 287 active+clean; 8.4 MiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:31:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:16 vm00 bash[28005]: cluster 2026-03-10T07:31:14.619834+0000 mgr.y (mgr.24407) 251 : cluster [DBG] pgmap v345: 324 pgs: 1 creating+activating, 24 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 287 active+clean; 8.4 MiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:31:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:16 vm00 bash[28005]: audit 2026-03-10T07:31:15.356806+0000 mon.a (mon.0) 2111 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]': finished 2026-03-10T07:31:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:16 vm00 bash[28005]: audit 2026-03-10T07:31:15.356806+0000 mon.a (mon.0) 2111 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]': finished 2026-03-10T07:31:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:16 vm00 bash[28005]: audit 2026-03-10T07:31:15.356865+0000 mon.a (mon.0) 2112 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]': finished 2026-03-10T07:31:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:16 vm00 bash[28005]: audit 2026-03-10T07:31:15.356865+0000 mon.a (mon.0) 2112 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]': finished 2026-03-10T07:31:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:16 vm00 bash[28005]: audit 2026-03-10T07:31:15.358993+0000 mon.b (mon.1) 287 : audit [INF] from='client.? 192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59629-42"}]: dispatch 2026-03-10T07:31:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:16 vm00 bash[28005]: audit 2026-03-10T07:31:15.358993+0000 mon.b (mon.1) 287 : audit [INF] from='client.? 192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59629-42"}]: dispatch 2026-03-10T07:31:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:16 vm00 bash[28005]: cluster 2026-03-10T07:31:15.359521+0000 mon.a (mon.0) 2113 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-10T07:31:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:16 vm00 bash[28005]: cluster 2026-03-10T07:31:15.359521+0000 mon.a (mon.0) 2113 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-10T07:31:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:16 vm00 bash[28005]: audit 2026-03-10T07:31:15.363793+0000 mon.a (mon.0) 2114 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59629-42"}]: dispatch 2026-03-10T07:31:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:16 vm00 bash[28005]: audit 2026-03-10T07:31:15.363793+0000 mon.a (mon.0) 2114 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59629-42"}]: dispatch 2026-03-10T07:31:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:16 vm00 bash[28005]: audit 2026-03-10T07:31:15.382232+0000 mon.a (mon.0) 2115 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59637-52"}]: dispatch 2026-03-10T07:31:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:16 vm00 bash[28005]: audit 2026-03-10T07:31:15.382232+0000 mon.a (mon.0) 2115 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59637-52"}]: dispatch 2026-03-10T07:31:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:16 vm00 bash[28005]: audit 2026-03-10T07:31:15.383582+0000 mon.a (mon.0) 2116 : audit [INF] from='client.? 
192.168.123.100:0/3807037197' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59637-52"}]: dispatch
2026-03-10T07:31:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:16 vm00 bash[28005]: audit 2026-03-10T07:31:15.383784+0000 mon.a (mon.0) 2117 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59637-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:16 vm00 bash[28005]: audit 2026-03-10T07:31:16.360637+0000 mon.a (mon.0) 2118 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59629-42"}]': finished
2026-03-10T07:31:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:16 vm00 bash[28005]: audit 2026-03-10T07:31:16.360703+0000 mon.a (mon.0) 2119 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59637-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:31:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:16 vm00 bash[28005]: cluster 2026-03-10T07:31:16.363909+0000 mon.a (mon.0) 2120 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in
2026-03-10T07:31:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:16 vm00 bash[28005]: audit 2026-03-10T07:31:16.373689+0000 mon.a (mon.0) 2121 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59637-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59637-52"}]: dispatch
2026-03-10T07:31:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:16 vm00 bash[20701]: cluster 2026-03-10T07:31:14.619834+0000 mgr.y (mgr.24407) 251 : cluster [DBG] pgmap v345: 324 pgs: 1 creating+activating, 24 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 287 active+clean; 8.4 MiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1023 B/s wr, 2 op/s
2026-03-10T07:31:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:16 vm00 bash[20701]: audit 2026-03-10T07:31:15.356806+0000 mon.a (mon.0) 2111 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]': finished
2026-03-10T07:31:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:16 vm00 bash[20701]: audit 2026-03-10T07:31:15.356865+0000 mon.a (mon.0) 2112 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]': finished
2026-03-10T07:31:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:16 vm00 bash[20701]: audit 2026-03-10T07:31:15.358993+0000 mon.b (mon.1) 287 : audit [INF] from='client.? 192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59629-42"}]: dispatch
2026-03-10T07:31:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:16 vm00 bash[20701]: cluster 2026-03-10T07:31:15.359521+0000 mon.a (mon.0) 2113 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in
2026-03-10T07:31:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:16 vm00 bash[20701]: audit 2026-03-10T07:31:15.363793+0000 mon.a (mon.0) 2114 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59629-42"}]: dispatch
2026-03-10T07:31:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:16 vm00 bash[20701]: audit 2026-03-10T07:31:15.382232+0000 mon.a (mon.0) 2115 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59637-52"}]: dispatch
2026-03-10T07:31:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:16 vm00 bash[20701]: audit 2026-03-10T07:31:15.383582+0000 mon.a (mon.0) 2116 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59637-52"}]: dispatch
2026-03-10T07:31:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:16 vm00 bash[20701]: audit 2026-03-10T07:31:15.383784+0000 mon.a (mon.0) 2117 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59637-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:16 vm00 bash[20701]: audit 2026-03-10T07:31:16.360637+0000 mon.a (mon.0) 2118 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59629-42"}]': finished
2026-03-10T07:31:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:16 vm00 bash[20701]: audit 2026-03-10T07:31:16.360703+0000 mon.a (mon.0) 2119 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59637-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:31:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:16 vm00 bash[20701]: cluster 2026-03-10T07:31:16.363909+0000 mon.a (mon.0) 2120 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in
2026-03-10T07:31:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:16 vm00 bash[20701]: audit 2026-03-10T07:31:16.373689+0000 mon.a (mon.0) 2121 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59637-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59637-52"}]: dispatch
2026-03-10T07:31:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:16 vm03 bash[23382]: cluster 2026-03-10T07:31:14.619834+0000 mgr.y (mgr.24407) 251 : cluster [DBG] pgmap v345: 324 pgs: 1 creating+activating, 24 unknown, 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 287 active+clean; 8.4 MiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1023 B/s wr, 2 op/s
2026-03-10T07:31:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:16 vm03 bash[23382]: audit 2026-03-10T07:31:15.356806+0000 mon.a (mon.0) 2111 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm00-59629-42"}]': finished
2026-03-10T07:31:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:16 vm03 bash[23382]: audit 2026-03-10T07:31:15.356865+0000 mon.a (mon.0) 2112 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-35"}]': finished
2026-03-10T07:31:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:16 vm03 bash[23382]: audit 2026-03-10T07:31:15.358993+0000 mon.b (mon.1) 287 : audit [INF] from='client.? 192.168.123.100:0/235675196' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59629-42"}]: dispatch
2026-03-10T07:31:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:16 vm03 bash[23382]: cluster 2026-03-10T07:31:15.359521+0000 mon.a (mon.0) 2113 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in
2026-03-10T07:31:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:16 vm03 bash[23382]: audit 2026-03-10T07:31:15.363793+0000 mon.a (mon.0) 2114 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59629-42"}]: dispatch
2026-03-10T07:31:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:16 vm03 bash[23382]: audit 2026-03-10T07:31:15.382232+0000 mon.a (mon.0) 2115 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59637-52"}]: dispatch
2026-03-10T07:31:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:16 vm03 bash[23382]: audit 2026-03-10T07:31:15.383582+0000 mon.a (mon.0) 2116 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59637-52"}]: dispatch
2026-03-10T07:31:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:16 vm03 bash[23382]: audit 2026-03-10T07:31:15.383784+0000 mon.a (mon.0) 2117 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59637-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:16 vm03 bash[23382]: audit 2026-03-10T07:31:16.360637+0000 mon.a (mon.0) 2118 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm00-59629-42"}]': finished
2026-03-10T07:31:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:16 vm03 bash[23382]: audit 2026-03-10T07:31:16.360703+0000 mon.a (mon.0) 2119 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm00-59637-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:31:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:16 vm03 bash[23382]: cluster 2026-03-10T07:31:16.363909+0000 mon.a (mon.0) 2120 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in
2026-03-10T07:31:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:16 vm03 bash[23382]: audit 2026-03-10T07:31:16.373689+0000 mon.a (mon.0) 2121 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59637-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59637-52"}]: dispatch
2026-03-10T07:31:17.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:17 vm03 bash[23382]: audit 2026-03-10T07:31:16.418040+0000 mon.b (mon.1) 288 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:31:17.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:17 vm03 bash[23382]: audit 2026-03-10T07:31:16.419899+0000 mon.a (mon.0) 2122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:31:17.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:17 vm03 bash[23382]: cluster 2026-03-10T07:31:16.620140+0000 mgr.y (mgr.24407) 252 : cluster [DBG] pgmap v348: 292 pgs: 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 284 active+clean; 8.4 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s
2026-03-10T07:31:17.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:17 vm03 bash[23382]: audit 2026-03-10T07:31:17.365144+0000 mon.a (mon.0) 2123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:31:17.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:17 vm00 bash[28005]: audit 2026-03-10T07:31:16.418040+0000 mon.b (mon.1) 288 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:31:17.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:17 vm00 bash[28005]: audit 2026-03-10T07:31:16.419899+0000 mon.a (mon.0) 2122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:31:17.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:17 vm00 bash[28005]: cluster 2026-03-10T07:31:16.620140+0000 mgr.y (mgr.24407) 252 : cluster [DBG] pgmap v348: 292 pgs: 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 284 active+clean; 8.4 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s
2026-03-10T07:31:17.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:17 vm00 bash[28005]: audit 2026-03-10T07:31:17.365144+0000 mon.a (mon.0) 2123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:31:17.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:17 vm00 bash[20701]: audit 2026-03-10T07:31:16.418040+0000 mon.b (mon.1) 288 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:31:17.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:17 vm00 bash[20701]: audit 2026-03-10T07:31:16.419899+0000 mon.a (mon.0) 2122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:31:17.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:17 vm00 bash[20701]: cluster 2026-03-10T07:31:16.620140+0000 mgr.y (mgr.24407) 252 : cluster [DBG] pgmap v348: 292 pgs: 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 284 active+clean; 8.4 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s
2026-03-10T07:31:17.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:17 vm00 bash[20701]: audit 2026-03-10T07:31:17.365144+0000 mon.a (mon.0) 2123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:31:18.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:18 vm03 bash[23382]: audit 2026-03-10T07:31:17.367442+0000 mon.b (mon.1) 289 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-35"}]: dispatch
2026-03-10T07:31:18.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:18 vm03 bash[23382]: cluster 2026-03-10T07:31:17.384231+0000 mon.a (mon.0) 2124 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in
2026-03-10T07:31:18.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:18 vm03 bash[23382]: audit 2026-03-10T07:31:17.385597+0000 mon.a (mon.0) 2125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-35"}]: dispatch
2026-03-10T07:31:18.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:18 vm03 bash[23382]: cluster 2026-03-10T07:31:18.365328+0000 mon.a (mon.0) 2126 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:31:18.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:18 vm03 bash[23382]: audit 2026-03-10T07:31:18.369182+0000 mon.a (mon.0) 2127 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59637-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59637-52"}]': finished
2026-03-10T07:31:18.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:18 vm03 bash[23382]: audit 2026-03-10T07:31:18.369225+0000 mon.a (mon.0) 2128 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-35"}]': finished
2026-03-10T07:31:18.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:18 vm03 bash[23382]: cluster 2026-03-10T07:31:18.378536+0000 mon.a (mon.0) 2129 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in
2026-03-10T07:31:18.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:18 vm00 bash[28005]: audit 2026-03-10T07:31:17.367442+0000 mon.b (mon.1) 289 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-35"}]: dispatch
2026-03-10T07:31:18.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:18 vm00 bash[28005]: cluster 2026-03-10T07:31:17.384231+0000 mon.a (mon.0) 2124 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in
2026-03-10T07:31:18.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:18 vm00 bash[28005]: audit 2026-03-10T07:31:17.385597+0000 mon.a (mon.0) 2125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-35"}]: dispatch
2026-03-10T07:31:18.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:18 vm00 bash[28005]: cluster 2026-03-10T07:31:18.365328+0000 mon.a (mon.0) 2126 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:31:18.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:18 vm00 bash[28005]: audit 2026-03-10T07:31:18.369182+0000 mon.a (mon.0) 2127 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59637-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59637-52"}]': finished
2026-03-10T07:31:18.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:18 vm00 bash[28005]: audit 2026-03-10T07:31:18.369225+0000 mon.a (mon.0) 2128 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-35"}]': finished
2026-03-10T07:31:18.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:18 vm00 bash[28005]: cluster 2026-03-10T07:31:18.378536+0000 mon.a (mon.0) 2129 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in
2026-03-10T07:31:18.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:18 vm00 bash[20701]: audit 2026-03-10T07:31:17.367442+0000 mon.b (mon.1) 289 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-35"}]: dispatch
2026-03-10T07:31:18.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:18 vm00 bash[20701]: cluster 2026-03-10T07:31:17.384231+0000 mon.a (mon.0) 2124 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in
2026-03-10T07:31:18.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:18 vm00 bash[20701]: audit 2026-03-10T07:31:17.385597+0000 mon.a (mon.0) 2125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-35"}]: dispatch
2026-03-10T07:31:18.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:18 vm00 bash[20701]: cluster 2026-03-10T07:31:18.365328+0000 mon.a (mon.0) 2126 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:31:18.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:18 vm00 bash[20701]: audit 2026-03-10T07:31:18.369182+0000 mon.a (mon.0) 2127 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm00-59637-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm00-59637-52"}]': finished
2026-03-10T07:31:18.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:18 vm00 bash[20701]: audit 2026-03-10T07:31:18.369225+0000 mon.a (mon.0) 2128 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-35"}]': finished
2026-03-10T07:31:18.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:18 vm00 bash[20701]: cluster 2026-03-10T07:31:18.378536+0000 mon.a (mon.0) 2129 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in
2026-03-10T07:31:19.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:19 vm03 bash[23382]: cluster 2026-03-10T07:31:18.620485+0000 mgr.y (mgr.24407) 253 : cluster [DBG] pgmap v351: 300 pgs: 8 unknown, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 284 active+clean; 8.4 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T07:31:19.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:19 vm03 bash[23382]: cluster 2026-03-10T07:31:19.386352+0000 mon.a (mon.0) 2130 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in
2026-03-10T07:31:19.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:19 vm00 bash[28005]: cluster 2026-03-10T07:31:18.620485+0000 mgr.y (mgr.24407) 253 : cluster [DBG] pgmap v351: 300 pgs: 8 unknown, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 284 active+clean; 8.4 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T07:31:19.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:19 vm00 bash[28005]: cluster 2026-03-10T07:31:19.386352+0000 mon.a (mon.0) 2130 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in
2026-03-10T07:31:19.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:19 vm00 bash[20701]: cluster 2026-03-10T07:31:18.620485+0000 mgr.y (mgr.24407) 253 : cluster [DBG] pgmap v351: 300 pgs: 8 unknown, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 284 active+clean; 8.4 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T07:31:19.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:19 vm00 bash[20701]: cluster 2026-03-10T07:31:19.386352+0000 mon.a (mon.0) 2130 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in
2026-03-10T07:31:20.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:20 vm03 bash[23382]: cluster 2026-03-10T07:31:19.409645+0000 mon.a (mon.0) 2131 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:31:20.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:20 vm00 bash[28005]: cluster 2026-03-10T07:31:19.409645+0000 mon.a (mon.0) 2131 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:31:20.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:20 vm00 bash[20701]: cluster 2026-03-10T07:31:19.409645+0000 mon.a (mon.0) 2131 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:31:21.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:31:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:31:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:31:22.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:22 vm00 bash[20701]: cluster 2026-03-10T07:31:20.425098+0000 mon.a (mon.0) 2132 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in
2026-03-10T07:31:22.444 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:22 vm00 bash[20701]: audit 2026-03-10T07:31:20.428560+0000 mon.a (mon.0) 2133 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59637-52"}]: dispatch
2026-03-10T07:31:22.444 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:22 vm00 bash[20701]: audit 2026-03-10T07:31:20.447799+0000 mon.b (mon.1) 290 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-37","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:22.444 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:22 vm00 bash[20701]: audit 2026-03-10T07:31:20.456053+0000 mon.a (mon.0) 2134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-37","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:22.444 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:22 vm00 bash[20701]: cluster 2026-03-10T07:31:20.620773+0000 mgr.y (mgr.24407) 254 : cluster [DBG] pgmap v354: 292 pgs: 32 unknown, 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 250 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s
2026-03-10T07:31:22.444 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:22 vm00 bash[20701]: audit 2026-03-10T07:31:20.977294+0000 mon.a (mon.0) 2135 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59637-52"}]': finished
2026-03-10T07:31:22.444 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:22 vm00 bash[20701]: audit 2026-03-10T07:31:20.977333+0000 mon.a (mon.0) 2136 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-37","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:31:22.444 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:22 vm00 bash[20701]: cluster 2026-03-10T07:31:20.986064+0000 mon.a (mon.0) 2137 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in
2026-03-10T07:31:22.444 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:22 vm00 bash[20701]: audit 2026-03-10T07:31:20.986794+0000 mon.a (mon.0) 2138 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59637-52"}]: dispatch
2026-03-10T07:31:22.444 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:22 vm00 bash[20701]: audit 2026-03-10T07:31:21.015709+0000 mon.b (mon.1) 291 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:22.444 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:22 vm00 bash[20701]: audit 2026-03-10T07:31:21.017539+0000 mon.a (mon.0) 2139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:22.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:22 vm03 bash[23382]: cluster 2026-03-10T07:31:20.425098+0000 mon.a (mon.0) 2132 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in
2026-03-10T07:31:22.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:22 vm03 bash[23382]: audit 2026-03-10T07:31:20.428560+0000 mon.a (mon.0) 2133 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59637-52"}]: dispatch
2026-03-10T07:31:22.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:22 vm03 bash[23382]: audit 2026-03-10T07:31:20.447799+0000 mon.b (mon.1) 290 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-37","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:22.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:22 vm03 bash[23382]: audit 2026-03-10T07:31:20.456053+0000 mon.a (mon.0) 2134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-37","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:22.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:22 vm03 bash[23382]: cluster 2026-03-10T07:31:20.620773+0000 mgr.y (mgr.24407) 254 : cluster [DBG] pgmap v354: 292 pgs: 32 unknown, 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 250 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s
2026-03-10T07:31:22.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:22 vm03 bash[23382]: audit 2026-03-10T07:31:20.977294+0000 mon.a (mon.0) 2135 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59637-52"}]': finished
2026-03-10T07:31:22.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:22 vm03 bash[23382]: audit 2026-03-10T07:31:20.977333+0000 mon.a (mon.0) 2136 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-37","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:31:22.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:22 vm03 bash[23382]: cluster 2026-03-10T07:31:20.986064+0000 mon.a (mon.0) 2137 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in
2026-03-10T07:31:22.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:22 vm03 bash[23382]: audit 2026-03-10T07:31:20.986794+0000 mon.a (mon.0) 2138 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59637-52"}]: dispatch
2026-03-10T07:31:22.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:22 vm03 bash[23382]: audit 2026-03-10T07:31:21.015709+0000 mon.b (mon.1) 291 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:22.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:22 vm03 bash[23382]: audit 2026-03-10T07:31:21.017539+0000 mon.a (mon.0) 2139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:22.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:22 vm00 bash[28005]: cluster 2026-03-10T07:31:20.425098+0000 mon.a (mon.0) 2132 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in
2026-03-10T07:31:22.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:22 vm00 bash[28005]: audit 2026-03-10T07:31:20.428560+0000 mon.a (mon.0) 2133 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59637-52"}]: dispatch
2026-03-10T07:31:22.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:22 vm00 bash[28005]: audit 2026-03-10T07:31:20.447799+0000 mon.b (mon.1) 290 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-37","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:22.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:22 vm00 bash[28005]: audit 2026-03-10T07:31:20.456053+0000 mon.a (mon.0) 2134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-37","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:22.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:22 vm00 bash[28005]: cluster 2026-03-10T07:31:20.620773+0000 mgr.y (mgr.24407) 254 : cluster [DBG] pgmap v354: 292 pgs: 32 unknown, 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 250 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s
2026-03-10T07:31:22.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:22 vm00 bash[28005]: audit 2026-03-10T07:31:20.977294+0000 mon.a (mon.0) 2135 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm00-59637-52"}]': finished
2026-03-10T07:31:22.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:22 vm00 bash[28005]: audit 2026-03-10T07:31:20.977333+0000 mon.a (mon.0) 2136 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-37","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:31:22.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:22 vm00 bash[28005]: cluster 2026-03-10T07:31:20.986064+0000 mon.a (mon.0) 2137 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in
2026-03-10T07:31:22.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:22 vm00 bash[28005]: audit 2026-03-10T07:31:20.986794+0000 mon.a (mon.0) 2138 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59637-52"}]: dispatch
2026-03-10T07:31:22.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:22 vm00 bash[28005]: audit 2026-03-10T07:31:21.015709+0000 mon.b (mon.1) 291 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:22.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:22 vm00 bash[28005]: audit 2026-03-10T07:31:21.017539+0000 mon.a (mon.0) 2139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:23.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:31:23 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:31:24.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:23 vm03 bash[23382]: audit 2026-03-10T07:31:22.555873+0000 mon.a (mon.0) 2140 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59637-52"}]': finished
2026-03-10T07:31:24.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:23 vm03 bash[23382]: audit 2026-03-10T07:31:22.555919+0000 mon.a (mon.0) 2141 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:31:24.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:23 vm03 bash[23382]: audit 2026-03-10T07:31:22.563856+0000 mon.b (mon.1) 292 : audit [INF] from='client.? 
2026-03-10T07:31:24.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:23 vm03 bash[23382]: cluster 2026-03-10T07:31:22.568545+0000 mon.a (mon.0) 2142 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in
2026-03-10T07:31:24.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:23 vm03 bash[23382]: audit 2026-03-10T07:31:22.570117+0000 mon.a (mon.0) 2143 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-37"}]: dispatch
2026-03-10T07:31:24.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:23 vm03 bash[23382]: audit 2026-03-10T07:31:22.585631+0000 mon.c (mon.2) 256 : audit [INF] from='client.? 192.168.123.100:0/2965785978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:24.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:23 vm03 bash[23382]: audit 2026-03-10T07:31:22.587088+0000 mon.a (mon.0) 2144 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:24.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:23 vm03 bash[23382]: audit 2026-03-10T07:31:22.587688+0000 mon.c (mon.2) 257 : audit [INF] from='client.? 192.168.123.100:0/2965785978' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:24.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:23 vm03 bash[23382]: audit 2026-03-10T07:31:22.588884+0000 mon.a (mon.0) 2145 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:24.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:23 vm03 bash[23382]: audit 2026-03-10T07:31:22.589475+0000 mon.c (mon.2) 258 : audit [INF] from='client.? 192.168.123.100:0/2965785978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59637-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:24.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:23 vm03 bash[23382]: audit 2026-03-10T07:31:22.590762+0000 mon.a (mon.0) 2146 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59637-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:24.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:23 vm03 bash[23382]: cluster 2026-03-10T07:31:22.621161+0000 mgr.y (mgr.24407) 255 : cluster [DBG] pgmap v357: 292 pgs: 32 unknown, 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 250 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s
2026-03-10T07:31:24.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:23 vm03 bash[23382]: audit 2026-03-10T07:31:23.100231+0000 mgr.y (mgr.24407) 256 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:31:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:23 vm00 bash[28005]: audit 2026-03-10T07:31:22.555873+0000 mon.a (mon.0) 2140 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59637-52"}]': finished
2026-03-10T07:31:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:23 vm00 bash[28005]: audit 2026-03-10T07:31:22.555919+0000 mon.a (mon.0) 2141 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:31:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:23 vm00 bash[28005]: audit 2026-03-10T07:31:22.563856+0000 mon.b (mon.1) 292 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-37"}]: dispatch
2026-03-10T07:31:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:23 vm00 bash[28005]: cluster 2026-03-10T07:31:22.568545+0000 mon.a (mon.0) 2142 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in
2026-03-10T07:31:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:23 vm00 bash[28005]: audit 2026-03-10T07:31:22.570117+0000 mon.a (mon.0) 2143 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-37"}]: dispatch
2026-03-10T07:31:24.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:23 vm00 bash[28005]: audit 2026-03-10T07:31:22.585631+0000 mon.c (mon.2) 256 : audit [INF] from='client.? 192.168.123.100:0/2965785978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:24.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:23 vm00 bash[28005]: audit 2026-03-10T07:31:22.587088+0000 mon.a (mon.0) 2144 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:24.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:23 vm00 bash[28005]: audit 2026-03-10T07:31:22.587688+0000 mon.c (mon.2) 257 : audit [INF] from='client.? 192.168.123.100:0/2965785978' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:24.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:23 vm00 bash[28005]: audit 2026-03-10T07:31:22.588884+0000 mon.a (mon.0) 2145 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:24.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:23 vm00 bash[28005]: audit 2026-03-10T07:31:22.589475+0000 mon.c (mon.2) 258 : audit [INF] from='client.? 192.168.123.100:0/2965785978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59637-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:24.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:23 vm00 bash[28005]: audit 2026-03-10T07:31:22.590762+0000 mon.a (mon.0) 2146 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59637-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:24.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:23 vm00 bash[28005]: cluster 2026-03-10T07:31:22.621161+0000 mgr.y (mgr.24407) 255 : cluster [DBG] pgmap v357: 292 pgs: 32 unknown, 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 250 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s
2026-03-10T07:31:24.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:23 vm00 bash[28005]: audit 2026-03-10T07:31:23.100231+0000 mgr.y (mgr.24407) 256 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:31:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:23 vm00 bash[20701]: audit 2026-03-10T07:31:22.555873+0000 mon.a (mon.0) 2140 : audit [INF] from='client.? 192.168.123.100:0/3807037197' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm00-59637-52"}]': finished
2026-03-10T07:31:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:23 vm00 bash[20701]: audit 2026-03-10T07:31:22.555919+0000 mon.a (mon.0) 2141 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:31:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:23 vm00 bash[20701]: audit 2026-03-10T07:31:22.563856+0000 mon.b (mon.1) 292 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-37"}]: dispatch
2026-03-10T07:31:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:23 vm00 bash[20701]: cluster 2026-03-10T07:31:22.568545+0000 mon.a (mon.0) 2142 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in
2026-03-10T07:31:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:23 vm00 bash[20701]: audit 2026-03-10T07:31:22.570117+0000 mon.a (mon.0) 2143 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-37"}]: dispatch
2026-03-10T07:31:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:23 vm00 bash[20701]: audit 2026-03-10T07:31:22.585631+0000 mon.c (mon.2) 256 : audit [INF] from='client.? 192.168.123.100:0/2965785978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:23 vm00 bash[20701]: audit 2026-03-10T07:31:22.587088+0000 mon.a (mon.0) 2144 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:23 vm00 bash[20701]: audit 2026-03-10T07:31:22.587688+0000 mon.c (mon.2) 257 : audit [INF] from='client.? 192.168.123.100:0/2965785978' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:23 vm00 bash[20701]: audit 2026-03-10T07:31:22.588884+0000 mon.a (mon.0) 2145 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:23 vm00 bash[20701]: audit 2026-03-10T07:31:22.589475+0000 mon.c (mon.2) 258 : audit [INF] from='client.? 192.168.123.100:0/2965785978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59637-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:23 vm00 bash[20701]: audit 2026-03-10T07:31:22.590762+0000 mon.a (mon.0) 2146 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59637-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:23 vm00 bash[20701]: cluster 2026-03-10T07:31:22.621161+0000 mgr.y (mgr.24407) 255 : cluster [DBG] pgmap v357: 292 pgs: 32 unknown, 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 250 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s
2026-03-10T07:31:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:23 vm00 bash[20701]: audit 2026-03-10T07:31:23.100231+0000 mgr.y (mgr.24407) 256 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:31:25.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:24 vm03 bash[23382]: audit 2026-03-10T07:31:23.628026+0000 mon.a (mon.0) 2147 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-37"}]': finished
2026-03-10T07:31:25.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:24 vm03 bash[23382]: audit 2026-03-10T07:31:23.628106+0000 mon.a (mon.0) 2148 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59637-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:31:25.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:24 vm03 bash[23382]: audit 2026-03-10T07:31:23.630768+0000 mon.b (mon.1) 293 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-37", "mode": "writeback"}]: dispatch
2026-03-10T07:31:25.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:24 vm03 bash[23382]: cluster 2026-03-10T07:31:23.632927+0000 mon.a (mon.0) 2149 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in
2026-03-10T07:31:25.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:24 vm03 bash[23382]: audit 2026-03-10T07:31:23.635583+0000 mon.c (mon.2) 259 : audit [INF] from='client.? 192.168.123.100:0/2965785978' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59637-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:25.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:24 vm03 bash[23382]: audit 2026-03-10T07:31:23.637177+0000 mon.a (mon.0) 2150 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-37", "mode": "writeback"}]: dispatch
2026-03-10T07:31:25.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:24 vm03 bash[23382]: audit 2026-03-10T07:31:23.637422+0000 mon.a (mon.0) 2151 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59637-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:25.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:24 vm03 bash[23382]: audit 2026-03-10T07:31:24.334226+0000 mon.c (mon.2) 260 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:31:25.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:24 vm03 bash[23382]: cluster 2026-03-10T07:31:24.628409+0000 mon.a (mon.0) 2152 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:31:25.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:24 vm03 bash[23382]: audit 2026-03-10T07:31:24.631956+0000 mon.a (mon.0) 2153 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-37", "mode": "writeback"}]': finished
2026-03-10T07:31:25.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:24 vm03 bash[23382]: cluster 2026-03-10T07:31:24.644132+0000 mon.a (mon.0) 2154 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in
2026-03-10T07:31:25.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:24 vm00 bash[28005]: audit 2026-03-10T07:31:23.628026+0000 mon.a (mon.0) 2147 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-37"}]': finished
2026-03-10T07:31:25.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:24 vm00 bash[28005]: audit 2026-03-10T07:31:23.628106+0000 mon.a (mon.0) 2148 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59637-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:31:25.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:24 vm00 bash[28005]: audit 2026-03-10T07:31:23.630768+0000 mon.b (mon.1) 293 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-37", "mode": "writeback"}]: dispatch
2026-03-10T07:31:25.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:24 vm00 bash[28005]: cluster 2026-03-10T07:31:23.632927+0000 mon.a (mon.0) 2149 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in
2026-03-10T07:31:25.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:24 vm00 bash[28005]: audit 2026-03-10T07:31:23.635583+0000 mon.c (mon.2) 259 : audit [INF] from='client.? 192.168.123.100:0/2965785978' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59637-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:25.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:24 vm00 bash[28005]: audit 2026-03-10T07:31:23.637177+0000 mon.a (mon.0) 2150 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-37", "mode": "writeback"}]: dispatch
2026-03-10T07:31:25.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:24 vm00 bash[28005]: audit 2026-03-10T07:31:23.637422+0000 mon.a (mon.0) 2151 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59637-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:25.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:24 vm00 bash[28005]: audit 2026-03-10T07:31:24.334226+0000 mon.c (mon.2) 260 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:31:25.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:24 vm00 bash[28005]: cluster 2026-03-10T07:31:24.628409+0000 mon.a (mon.0) 2152 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:31:25.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:24 vm00 bash[28005]: audit 2026-03-10T07:31:24.631956+0000 mon.a (mon.0) 2153 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-37", "mode": "writeback"}]': finished
2026-03-10T07:31:25.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:24 vm00 bash[28005]: cluster 2026-03-10T07:31:24.644132+0000 mon.a (mon.0) 2154 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in
2026-03-10T07:31:25.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:24 vm00 bash[20701]: audit 2026-03-10T07:31:23.628026+0000 mon.a (mon.0) 2147 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-37"}]': finished
2026-03-10T07:31:25.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:24 vm00 bash[20701]: audit 2026-03-10T07:31:23.628106+0000 mon.a (mon.0) 2148 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm00-59637-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:31:25.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:24 vm00 bash[20701]: audit 2026-03-10T07:31:23.630768+0000 mon.b (mon.1) 293 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-37", "mode": "writeback"}]: dispatch
2026-03-10T07:31:25.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:24 vm00 bash[20701]: cluster 2026-03-10T07:31:23.632927+0000 mon.a (mon.0) 2149 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in
2026-03-10T07:31:25.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:24 vm00 bash[20701]: audit 2026-03-10T07:31:23.635583+0000 mon.c (mon.2) 259 : audit [INF] from='client.? 192.168.123.100:0/2965785978' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59637-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:25.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:24 vm00 bash[20701]: audit 2026-03-10T07:31:23.637177+0000 mon.a (mon.0) 2150 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-37", "mode": "writeback"}]: dispatch
2026-03-10T07:31:25.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:24 vm00 bash[20701]: audit 2026-03-10T07:31:23.637422+0000 mon.a (mon.0) 2151 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59637-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:25.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:24 vm00 bash[20701]: audit 2026-03-10T07:31:24.334226+0000 mon.c (mon.2) 260 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:31:25.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:24 vm00 bash[20701]: cluster 2026-03-10T07:31:24.628409+0000 mon.a (mon.0) 2152 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:31:25.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:24 vm00 bash[20701]: audit 2026-03-10T07:31:24.631956+0000 mon.a (mon.0) 2153 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-37", "mode": "writeback"}]': finished
2026-03-10T07:31:25.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:24 vm00 bash[20701]: cluster 2026-03-10T07:31:24.644132+0000 mon.a (mon.0) 2154 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in
2026-03-10T07:31:26.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:25 vm03 bash[23382]: cluster 2026-03-10T07:31:24.621620+0000 mgr.y (mgr.24407) 257 : cluster [DBG] pgmap v359: 292 pgs: 25 unknown, 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 257 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:31:26.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:25 vm03 bash[23382]: audit 2026-03-10T07:31:24.787841+0000 mon.b (mon.1) 294 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:31:26.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:25 vm03 bash[23382]: audit 2026-03-10T07:31:24.789727+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:31:26.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:25 vm03 bash[23382]: audit 2026-03-10T07:31:25.638085+0000 mon.a (mon.0) 2156 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59637-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59637-53"}]': finished
2026-03-10T07:31:26.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:25 vm03 bash[23382]: audit 2026-03-10T07:31:25.638205+0000 mon.a (mon.0) 2157 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:31:26.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:25 vm03 bash[23382]: audit 2026-03-10T07:31:25.643757+0000 mon.b (mon.1) 295 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37"}]: dispatch
2026-03-10T07:31:26.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:25 vm03 bash[23382]: cluster 2026-03-10T07:31:25.650260+0000 mon.a (mon.0) 2158 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in
2026-03-10T07:31:26.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:25 vm03 bash[23382]: audit 2026-03-10T07:31:25.652211+0000 mon.a (mon.0) 2159 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37"}]: dispatch
2026-03-10T07:31:26.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:25 vm00 bash[20701]: cluster 2026-03-10T07:31:24.621620+0000 mgr.y (mgr.24407) 257 : cluster [DBG] pgmap v359: 292 pgs: 25 unknown, 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 257 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:31:26.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:25 vm00 bash[20701]: audit 2026-03-10T07:31:24.787841+0000 mon.b (mon.1) 294 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:26.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:25 vm00 bash[20701]: audit 2026-03-10T07:31:24.789727+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:25 vm00 bash[20701]: audit 2026-03-10T07:31:24.789727+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:25 vm00 bash[20701]: audit 2026-03-10T07:31:25.638085+0000 mon.a (mon.0) 2156 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59637-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59637-53"}]': finished 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:25 vm00 bash[20701]: audit 2026-03-10T07:31:25.638085+0000 mon.a (mon.0) 2156 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59637-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59637-53"}]': finished 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:25 vm00 bash[20701]: audit 2026-03-10T07:31:25.638205+0000 mon.a (mon.0) 2157 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:25 vm00 bash[20701]: audit 2026-03-10T07:31:25.638205+0000 mon.a (mon.0) 2157 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:25 vm00 bash[20701]: audit 2026-03-10T07:31:25.643757+0000 mon.b (mon.1) 295 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37"}]: dispatch 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:25 vm00 bash[20701]: audit 2026-03-10T07:31:25.643757+0000 mon.b (mon.1) 295 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37"}]: dispatch 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:25 vm00 bash[20701]: cluster 2026-03-10T07:31:25.650260+0000 mon.a (mon.0) 2158 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:25 vm00 bash[20701]: cluster 2026-03-10T07:31:25.650260+0000 mon.a (mon.0) 2158 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:25 vm00 bash[20701]: audit 2026-03-10T07:31:25.652211+0000 mon.a (mon.0) 2159 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37"}]: dispatch 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:25 vm00 bash[20701]: audit 2026-03-10T07:31:25.652211+0000 mon.a (mon.0) 2159 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37"}]: dispatch 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:25 vm00 bash[28005]: cluster 2026-03-10T07:31:24.621620+0000 mgr.y (mgr.24407) 257 : cluster [DBG] pgmap v359: 292 pgs: 25 unknown, 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 257 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:25 vm00 bash[28005]: cluster 2026-03-10T07:31:24.621620+0000 mgr.y (mgr.24407) 257 : cluster [DBG] pgmap v359: 292 pgs: 25 unknown, 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 257 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:25 vm00 bash[28005]: audit 2026-03-10T07:31:24.787841+0000 mon.b (mon.1) 294 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:25 vm00 bash[28005]: audit 2026-03-10T07:31:24.787841+0000 mon.b (mon.1) 294 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:25 vm00 bash[28005]: audit 2026-03-10T07:31:24.789727+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:25 vm00 bash[28005]: audit 2026-03-10T07:31:24.789727+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:25 vm00 bash[28005]: audit 2026-03-10T07:31:25.638085+0000 mon.a (mon.0) 2156 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59637-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59637-53"}]': finished 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:25 vm00 bash[28005]: audit 2026-03-10T07:31:25.638085+0000 mon.a (mon.0) 2156 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm00-59637-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm00-59637-53"}]': finished 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:25 vm00 bash[28005]: audit 2026-03-10T07:31:25.638205+0000 mon.a (mon.0) 2157 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:25 vm00 bash[28005]: audit 2026-03-10T07:31:25.638205+0000 mon.a (mon.0) 2157 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:25 vm00 bash[28005]: audit 2026-03-10T07:31:25.643757+0000 mon.b (mon.1) 295 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37"}]: dispatch 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:25 vm00 bash[28005]: audit 2026-03-10T07:31:25.643757+0000 mon.b (mon.1) 295 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37"}]: dispatch 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:25 vm00 bash[28005]: cluster 2026-03-10T07:31:25.650260+0000 mon.a (mon.0) 2158 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:25 vm00 bash[28005]: cluster 2026-03-10T07:31:25.650260+0000 mon.a (mon.0) 2158 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:25 vm00 bash[28005]: audit 2026-03-10T07:31:25.652211+0000 mon.a (mon.0) 2159 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37"}]: dispatch 2026-03-10T07:31:26.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:25 vm00 bash[28005]: audit 2026-03-10T07:31:25.652211+0000 mon.a (mon.0) 2159 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37"}]: dispatch 2026-03-10T07:31:27.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:26 vm00 bash[28005]: cluster 2026-03-10T07:31:25.975388+0000 mon.a (mon.0) 2160 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:27.192 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:26 vm00 bash[28005]: cluster 2026-03-10T07:31:25.975388+0000 mon.a (mon.0) 2160 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:27.192 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:26 vm00 bash[28005]: cluster 2026-03-10T07:31:26.638207+0000 mon.a (mon.0) 2161 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:31:27.192 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:26 vm00 bash[28005]: cluster 2026-03-10T07:31:26.638207+0000 mon.a (mon.0) 2161 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:31:27.192 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:26 vm00 bash[20701]: cluster 2026-03-10T07:31:25.975388+0000 mon.a (mon.0) 2160 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:27.192 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:26 vm00 bash[20701]: cluster 2026-03-10T07:31:25.975388+0000 mon.a (mon.0) 2160 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:27.192 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:26 vm00 bash[20701]: cluster 2026-03-10T07:31:26.638207+0000 mon.a (mon.0) 2161 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:31:27.192 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:26 vm00 bash[20701]: cluster 2026-03-10T07:31:26.638207+0000 mon.a (mon.0) 2161 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:31:27.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:26 vm03 bash[23382]: cluster 2026-03-10T07:31:25.975388+0000 mon.a (mon.0) 2160 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:27.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:26 vm03 bash[23382]: cluster 2026-03-10T07:31:25.975388+0000 mon.a (mon.0) 2160 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:27.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:26 vm03 bash[23382]: cluster 2026-03-10T07:31:26.638207+0000 mon.a (mon.0) 2161 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:31:27.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:26 vm03 bash[23382]: cluster 2026-03-10T07:31:26.638207+0000 mon.a (mon.0) 2161 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:31:28.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:27 vm00 bash[28005]: cluster 2026-03-10T07:31:26.622002+0000 mgr.y (mgr.24407) 258 : cluster [DBG] pgmap v362: 300 pgs: 8 unknown, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 284 active+clean; 8.4 MiB 
2026-03-10T07:31:28.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:27 vm00 bash[28005]: audit 2026-03-10T07:31:26.779950+0000 mon.a (mon.0) 2162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37"}]': finished
2026-03-10T07:31:28.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:27 vm00 bash[28005]: cluster 2026-03-10T07:31:26.796616+0000 mon.a (mon.0) 2163 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in
2026-03-10T07:31:28.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:27 vm00 bash[20701]: cluster 2026-03-10T07:31:26.622002+0000 mgr.y (mgr.24407) 258 : cluster [DBG] pgmap v362: 300 pgs: 8 unknown, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 284 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:31:28.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:27 vm00 bash[20701]: audit 2026-03-10T07:31:26.779950+0000 mon.a (mon.0) 2162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37"}]': finished
2026-03-10T07:31:28.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:27 vm00 bash[20701]: cluster 2026-03-10T07:31:26.796616+0000 mon.a (mon.0) 2163 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in
2026-03-10T07:31:28.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:27 vm03 bash[23382]: cluster 2026-03-10T07:31:26.622002+0000 mgr.y (mgr.24407) 258 : cluster [DBG] pgmap v362: 300 pgs: 8 unknown, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 284 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:31:28.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:27 vm03 bash[23382]: audit 2026-03-10T07:31:26.779950+0000 mon.a (mon.0) 2162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-37"}]': finished
2026-03-10T07:31:28.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:27 vm03 bash[23382]: cluster 2026-03-10T07:31:26.796616+0000 mon.a (mon.0) 2163 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in
2026-03-10T07:31:29.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:28 vm03 bash[23382]: audit 2026-03-10T07:31:27.863371+0000 mon.c (mon.2) 261 : audit [INF] from='client.? 192.168.123.100:0/2965785978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:29.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:28 vm03 bash[23382]: cluster 2026-03-10T07:31:27.864514+0000 mon.a (mon.0) 2164 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in
2026-03-10T07:31:29.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:28 vm03 bash[23382]: audit 2026-03-10T07:31:27.866086+0000 mon.a (mon.0) 2165 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:29.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:28 vm00 bash[28005]: audit 2026-03-10T07:31:27.863371+0000 mon.c (mon.2) 261 : audit [INF] from='client.? 192.168.123.100:0/2965785978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:29.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:28 vm00 bash[28005]: cluster 2026-03-10T07:31:27.864514+0000 mon.a (mon.0) 2164 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in
2026-03-10T07:31:29.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:28 vm00 bash[28005]: audit 2026-03-10T07:31:27.866086+0000 mon.a (mon.0) 2165 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:29.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:28 vm00 bash[20701]: audit 2026-03-10T07:31:27.863371+0000 mon.c (mon.2) 261 : audit [INF] from='client.? 192.168.123.100:0/2965785978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:29.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:28 vm00 bash[20701]: cluster 2026-03-10T07:31:27.864514+0000 mon.a (mon.0) 2164 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in
2026-03-10T07:31:29.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:28 vm00 bash[20701]: audit 2026-03-10T07:31:27.866086+0000 mon.a (mon.0) 2165 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:30.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:29 vm03 bash[23382]: cluster 2026-03-10T07:31:28.622307+0000 mgr.y (mgr.24407) 259 : cluster [DBG] pgmap v365: 260 pgs: 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 252 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:31:30.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:29 vm03 bash[23382]: audit 2026-03-10T07:31:28.906521+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59637-53"}]': finished
2026-03-10T07:31:30.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:29 vm03 bash[23382]: cluster 2026-03-10T07:31:28.914483+0000 mon.a (mon.0) 2167 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in
2026-03-10T07:31:30.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:29 vm03 bash[23382]: audit 2026-03-10T07:31:28.927042+0000 mon.b (mon.1) 296 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-39","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:30.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:29 vm03 bash[23382]: audit 2026-03-10T07:31:28.928465+0000 mon.c (mon.2) 262 : audit [INF] from='client.? 192.168.123.100:0/2965785978' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:30.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:29 vm03 bash[23382]: audit 2026-03-10T07:31:28.935942+0000 mon.a (mon.0) 2168 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59637-53"}]: dispatch
2026-03-10T07:31:30.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:29 vm03 bash[23382]: audit 2026-03-10T07:31:28.936225+0000 mon.a (mon.0) 2169 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-39","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:30.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:29 vm03 bash[23382]: audit 2026-03-10T07:31:29.911040+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59637-53"}]': finished
2026-03-10T07:31:30.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:29 vm03 bash[23382]: audit 2026-03-10T07:31:29.911198+0000 mon.a (mon.0) 2171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-39","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:31:30.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:29 vm03 bash[23382]: audit 2026-03-10T07:31:29.927276+0000 mon.b (mon.1) 297 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:30.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:29 vm03 bash[23382]: cluster 2026-03-10T07:31:29.930856+0000 mon.a (mon.0) 2172 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in
2026-03-10T07:31:30.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:29 vm03 bash[23382]: audit 2026-03-10T07:31:29.942525+0000 mon.a (mon.0) 2173 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:30.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: cluster 2026-03-10T07:31:28.622307+0000 mgr.y (mgr.24407) 259 : cluster [DBG] pgmap v365: 260 pgs: 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 252 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:31:30.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: audit 2026-03-10T07:31:28.906521+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59637-53"}]': finished
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59637-53"}]': finished 2026-03-10T07:31:30.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: cluster 2026-03-10T07:31:28.914483+0000 mon.a (mon.0) 2167 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-10T07:31:30.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: cluster 2026-03-10T07:31:28.914483+0000 mon.a (mon.0) 2167 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-10T07:31:30.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: audit 2026-03-10T07:31:28.927042+0000 mon.b (mon.1) 296 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:30.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: audit 2026-03-10T07:31:28.927042+0000 mon.b (mon.1) 296 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:30.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: audit 2026-03-10T07:31:28.928465+0000 mon.c (mon.2) 262 : audit [INF] from='client.? 192.168.123.100:0/2965785978' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59637-53"}]: dispatch 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: audit 2026-03-10T07:31:28.928465+0000 mon.c (mon.2) 262 : audit [INF] from='client.? 192.168.123.100:0/2965785978' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59637-53"}]: dispatch 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: audit 2026-03-10T07:31:28.935942+0000 mon.a (mon.0) 2168 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59637-53"}]: dispatch 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: audit 2026-03-10T07:31:28.935942+0000 mon.a (mon.0) 2168 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59637-53"}]: dispatch 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: audit 2026-03-10T07:31:28.936225+0000 mon.a (mon.0) 2169 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: audit 2026-03-10T07:31:28.936225+0000 mon.a (mon.0) 2169 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: audit 2026-03-10T07:31:29.911040+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59637-53"}]': finished 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: audit 2026-03-10T07:31:29.911040+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59637-53"}]': finished 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: audit 2026-03-10T07:31:29.911198+0000 mon.a (mon.0) 2171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: audit 2026-03-10T07:31:29.911198+0000 mon.a (mon.0) 2171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: audit 2026-03-10T07:31:29.927276+0000 mon.b (mon.1) 297 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: audit 2026-03-10T07:31:29.927276+0000 mon.b (mon.1) 297 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: cluster 2026-03-10T07:31:29.930856+0000 mon.a (mon.0) 2172 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: cluster 2026-03-10T07:31:29.930856+0000 mon.a (mon.0) 2172 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: audit 2026-03-10T07:31:29.942525+0000 mon.a (mon.0) 2173 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:29 vm00 bash[20701]: audit 2026-03-10T07:31:29.942525+0000 mon.a (mon.0) 2173 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: cluster 2026-03-10T07:31:28.622307+0000 mgr.y (mgr.24407) 259 : cluster [DBG] pgmap v365: 260 pgs: 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 252 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: cluster 2026-03-10T07:31:28.622307+0000 mgr.y (mgr.24407) 259 : cluster [DBG] pgmap v365: 260 pgs: 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 252 active+clean; 8.4 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: audit 2026-03-10T07:31:28.906521+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59637-53"}]': finished 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: audit 2026-03-10T07:31:28.906521+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm00-59637-53"}]': finished 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: cluster 2026-03-10T07:31:28.914483+0000 mon.a (mon.0) 2167 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: cluster 2026-03-10T07:31:28.914483+0000 mon.a (mon.0) 2167 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: audit 2026-03-10T07:31:28.927042+0000 mon.b (mon.1) 296 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: audit 2026-03-10T07:31:28.927042+0000 mon.b (mon.1) 296 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: audit 2026-03-10T07:31:28.928465+0000 mon.c (mon.2) 262 : audit [INF] from='client.? 192.168.123.100:0/2965785978' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59637-53"}]: dispatch 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: audit 2026-03-10T07:31:28.928465+0000 mon.c (mon.2) 262 : audit [INF] from='client.? 192.168.123.100:0/2965785978' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59637-53"}]: dispatch 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: audit 2026-03-10T07:31:28.935942+0000 mon.a (mon.0) 2168 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59637-53"}]: dispatch 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: audit 2026-03-10T07:31:28.935942+0000 mon.a (mon.0) 2168 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59637-53"}]: dispatch 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: audit 2026-03-10T07:31:28.936225+0000 mon.a (mon.0) 2169 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: audit 2026-03-10T07:31:28.936225+0000 mon.a (mon.0) 2169 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: audit 2026-03-10T07:31:29.911040+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59637-53"}]': finished 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: audit 2026-03-10T07:31:29.911040+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm00-59637-53"}]': finished 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: audit 2026-03-10T07:31:29.911198+0000 mon.a (mon.0) 2171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: audit 2026-03-10T07:31:29.911198+0000 mon.a (mon.0) 2171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: audit 2026-03-10T07:31:29.927276+0000 mon.b (mon.1) 297 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: audit 2026-03-10T07:31:29.927276+0000 mon.b (mon.1) 297 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: cluster 2026-03-10T07:31:29.930856+0000 mon.a (mon.0) 2172 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: cluster 2026-03-10T07:31:29.930856+0000 mon.a (mon.0) 2172 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: audit 2026-03-10T07:31:29.942525+0000 mon.a (mon.0) 2173 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:29 vm00 bash[28005]: audit 2026-03-10T07:31:29.942525+0000 mon.a (mon.0) 2173 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:31.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:31:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:31:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:31:31.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:31 vm00 bash[28005]: audit 2026-03-10T07:31:29.948380+0000 mon.c (mon.2) 263 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:31 vm00 bash[28005]: audit 2026-03-10T07:31:29.948380+0000 mon.c (mon.2) 263 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:31 vm00 bash[28005]: audit 2026-03-10T07:31:29.978356+0000 mon.a (mon.0) 2174 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:31 vm00 bash[28005]: audit 2026-03-10T07:31:29.978356+0000 mon.a (mon.0) 2174 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:31 vm00 bash[28005]: audit 2026-03-10T07:31:29.982466+0000 mon.c (mon.2) 264 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:31 vm00 bash[28005]: audit 2026-03-10T07:31:29.982466+0000 mon.c (mon.2) 264 : audit [INF] from='client.? 
192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:31 vm00 bash[28005]: audit 2026-03-10T07:31:29.982913+0000 mon.a (mon.0) 2175 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:31 vm00 bash[28005]: audit 2026-03-10T07:31:29.982913+0000 mon.a (mon.0) 2175 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:31 vm00 bash[28005]: audit 2026-03-10T07:31:29.983759+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59637-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:31.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:31 vm00 bash[28005]: audit 2026-03-10T07:31:29.983759+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59637-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:31.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:31 vm00 bash[28005]: audit 2026-03-10T07:31:29.984153+0000 mon.a (mon.0) 2176 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59637-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:31.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:31 vm00 bash[28005]: audit 2026-03-10T07:31:29.984153+0000 mon.a (mon.0) 2176 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59637-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:31.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:31 vm00 bash[20701]: audit 2026-03-10T07:31:29.948380+0000 mon.c (mon.2) 263 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:31 vm00 bash[20701]: audit 2026-03-10T07:31:29.948380+0000 mon.c (mon.2) 263 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:31 vm00 bash[20701]: audit 2026-03-10T07:31:29.978356+0000 mon.a (mon.0) 2174 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:31 vm00 bash[20701]: audit 2026-03-10T07:31:29.978356+0000 mon.a (mon.0) 2174 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:31 vm00 bash[20701]: audit 2026-03-10T07:31:29.982466+0000 mon.c (mon.2) 264 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:31 vm00 bash[20701]: audit 2026-03-10T07:31:29.982466+0000 mon.c (mon.2) 264 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:31 vm00 bash[20701]: audit 2026-03-10T07:31:29.982913+0000 mon.a (mon.0) 2175 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:31 vm00 bash[20701]: audit 2026-03-10T07:31:29.982913+0000 mon.a (mon.0) 2175 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:31 vm00 bash[20701]: audit 2026-03-10T07:31:29.983759+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59637-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:31.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:31 vm00 bash[20701]: audit 2026-03-10T07:31:29.983759+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59637-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:31.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:31 vm00 bash[20701]: audit 2026-03-10T07:31:29.984153+0000 mon.a (mon.0) 2176 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59637-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:31.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:31 vm00 bash[20701]: audit 2026-03-10T07:31:29.984153+0000 mon.a (mon.0) 2176 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59637-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:31.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:31 vm03 bash[23382]: audit 2026-03-10T07:31:29.948380+0000 mon.c (mon.2) 263 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:31 vm03 bash[23382]: audit 2026-03-10T07:31:29.948380+0000 mon.c (mon.2) 263 : audit [INF] from='client.? 
192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:31 vm03 bash[23382]: audit 2026-03-10T07:31:29.978356+0000 mon.a (mon.0) 2174 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:31 vm03 bash[23382]: audit 2026-03-10T07:31:29.978356+0000 mon.a (mon.0) 2174 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:31 vm03 bash[23382]: audit 2026-03-10T07:31:29.982466+0000 mon.c (mon.2) 264 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:31 vm03 bash[23382]: audit 2026-03-10T07:31:29.982466+0000 mon.c (mon.2) 264 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:31 vm03 bash[23382]: audit 2026-03-10T07:31:29.982913+0000 mon.a (mon.0) 2175 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:31 vm03 bash[23382]: audit 2026-03-10T07:31:29.982913+0000 mon.a (mon.0) 2175 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:31.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:31 vm03 bash[23382]: audit 2026-03-10T07:31:29.983759+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59637-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:31.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:31 vm03 bash[23382]: audit 2026-03-10T07:31:29.983759+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59637-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:31.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:31 vm03 bash[23382]: audit 2026-03-10T07:31:29.984153+0000 mon.a (mon.0) 2176 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59637-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:31.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:31 vm03 bash[23382]: audit 2026-03-10T07:31:29.984153+0000 mon.a (mon.0) 2176 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59637-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:32.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:32 vm03 bash[23382]: cluster 2026-03-10T07:31:30.622615+0000 mgr.y (mgr.24407) 260 : cluster [DBG] pgmap v368: 292 pgs: 32 creating+peering, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 252 active+clean; 4.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T07:31:32.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:32 vm03 bash[23382]: cluster 2026-03-10T07:31:30.622615+0000 mgr.y (mgr.24407) 260 : cluster [DBG] pgmap v368: 292 pgs: 32 creating+peering, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 252 active+clean; 4.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T07:31:32.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:32 vm03 bash[23382]: audit 2026-03-10T07:31:31.113582+0000 mon.a (mon.0) 2177 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:31:32.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:32 vm03 bash[23382]: audit 2026-03-10T07:31:31.113582+0000 mon.a (mon.0) 2177 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:31:32.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:32 vm03 bash[23382]: audit 2026-03-10T07:31:31.113693+0000 mon.a (mon.0) 2178 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59637-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:31:32.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:32 vm03 bash[23382]: audit 2026-03-10T07:31:31.113693+0000 mon.a (mon.0) 2178 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59637-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:31:32.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:32 vm03 bash[23382]: audit 2026-03-10T07:31:31.116486+0000 mon.b (mon.1) 298 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-39"}]: dispatch 2026-03-10T07:31:32.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:32 vm03 bash[23382]: audit 2026-03-10T07:31:31.116486+0000 mon.b (mon.1) 298 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-39"}]: dispatch 2026-03-10T07:31:32.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:32 vm03 bash[23382]: audit 2026-03-10T07:31:31.121488+0000 mon.c (mon.2) 266 : audit [INF] from='client.? 
192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59637-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:32.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:32 vm03 bash[23382]: audit 2026-03-10T07:31:31.121488+0000 mon.c (mon.2) 266 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59637-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:32.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:32 vm03 bash[23382]: cluster 2026-03-10T07:31:31.126282+0000 mon.a (mon.0) 2179 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-10T07:31:32.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:32 vm03 bash[23382]: cluster 2026-03-10T07:31:31.126282+0000 mon.a (mon.0) 2179 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-10T07:31:32.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:32 vm03 bash[23382]: audit 2026-03-10T07:31:31.128526+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59637-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:32.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:32 vm03 bash[23382]: audit 2026-03-10T07:31:31.128526+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59637-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:32.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:32 vm03 bash[23382]: audit 2026-03-10T07:31:31.128747+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-39"}]: dispatch 2026-03-10T07:31:32.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:32 vm03 bash[23382]: audit 2026-03-10T07:31:31.128747+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-39"}]: dispatch 2026-03-10T07:31:32.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:32 vm03 bash[23382]: audit 2026-03-10T07:31:32.117029+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-39"}]': finished 2026-03-10T07:31:32.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:32 vm03 bash[23382]: audit 2026-03-10T07:31:32.117029+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-39"}]': finished 2026-03-10T07:31:32.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:32 vm03 bash[23382]: audit 2026-03-10T07:31:32.120306+0000 mon.b (mon.1) 299 : audit [INF] from='client.? 
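With the profile back in place, the pool create (audit entries 266/2180 above) builds an erasure pool with pg_num=pgp_num=8 on that k=2/m=1 profile, so each object is striped into two data chunks plus one coding chunk and survives the loss of any single OSD. A hedged CLI equivalent of that mon command:

    ceph osd pool create IsCompletePP_vm00-59637-54 8 8 erasure \
        testprofile-IsCompletePP_vm00-59637-54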
2026-03-10T07:31:32.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:32 vm03 bash[23382]: cluster 2026-03-10T07:31:32.126932+0000 mon.a (mon.0) 2183 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in
2026-03-10T07:31:32.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:32 vm03 bash[23382]: audit 2026-03-10T07:31:32.127627+0000 mon.a (mon.0) 2184 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-39", "mode": "writeback"}]: dispatch
2026-03-10T07:31:32.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:32 vm00 bash[20701]: cluster 2026-03-10T07:31:30.622615+0000 mgr.y (mgr.24407) 260 : cluster [DBG] pgmap v368: 292 pgs: 32 creating+peering, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 252 active+clean; 4.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s
2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:32 vm00 bash[20701]: audit 2026-03-10T07:31:31.113582+0000 mon.a (mon.0) 2177 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:32 vm00 bash[20701]: audit 2026-03-10T07:31:31.113693+0000 mon.a (mon.0) 2178 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59637-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:32 vm00 bash[20701]: audit 2026-03-10T07:31:31.116486+0000 mon.b (mon.1) 298 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-39"}]: dispatch
2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:32 vm00 bash[20701]: audit 2026-03-10T07:31:31.121488+0000 mon.c (mon.2) 266 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59637-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59637-54"}]: dispatch
2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:32 vm00 bash[20701]: cluster 2026-03-10T07:31:31.126282+0000 mon.a (mon.0) 2179 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in
2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:32 vm00 bash[20701]: audit 2026-03-10T07:31:31.128526+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59637-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59637-54"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59637-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: cluster 2026-03-10T07:31:30.622615+0000 mgr.y (mgr.24407) 260 : cluster [DBG] pgmap v368: 292 pgs: 32 creating+peering, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 252 active+clean; 4.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: cluster 2026-03-10T07:31:30.622615+0000 mgr.y (mgr.24407) 260 : cluster [DBG] pgmap v368: 292 pgs: 32 creating+peering, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 252 active+clean; 4.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: audit 2026-03-10T07:31:31.113582+0000 mon.a (mon.0) 2177 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: audit 2026-03-10T07:31:31.113582+0000 mon.a (mon.0) 2177 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: audit 2026-03-10T07:31:31.113693+0000 mon.a (mon.0) 2178 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59637-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: audit 2026-03-10T07:31:31.113693+0000 mon.a (mon.0) 2178 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm00-59637-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: audit 2026-03-10T07:31:31.116486+0000 mon.b (mon.1) 298 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-39"}]: dispatch 2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: audit 2026-03-10T07:31:31.116486+0000 mon.b (mon.1) 298 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-39"}]: dispatch 2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: audit 2026-03-10T07:31:31.121488+0000 mon.c (mon.2) 266 : audit [INF] from='client.? 
192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59637-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: audit 2026-03-10T07:31:31.121488+0000 mon.c (mon.2) 266 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59637-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: cluster 2026-03-10T07:31:31.126282+0000 mon.a (mon.0) 2179 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: cluster 2026-03-10T07:31:31.126282+0000 mon.a (mon.0) 2179 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: audit 2026-03-10T07:31:31.128526+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59637-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: audit 2026-03-10T07:31:31.128526+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59637-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: audit 2026-03-10T07:31:31.128747+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-39"}]: dispatch 2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: audit 2026-03-10T07:31:31.128747+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-39"}]: dispatch 2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: audit 2026-03-10T07:31:32.117029+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-39"}]': finished 2026-03-10T07:31:32.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: audit 2026-03-10T07:31:32.117029+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-39"}]': finished 2026-03-10T07:31:32.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: audit 2026-03-10T07:31:32.120306+0000 mon.b (mon.1) 299 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-39", "mode": "writeback"}]: dispatch 2026-03-10T07:31:32.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: audit 2026-03-10T07:31:32.120306+0000 mon.b (mon.1) 299 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-39", "mode": "writeback"}]: dispatch 2026-03-10T07:31:32.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: cluster 2026-03-10T07:31:32.126932+0000 mon.a (mon.0) 2183 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-10T07:31:32.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: cluster 2026-03-10T07:31:32.126932+0000 mon.a (mon.0) 2183 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-10T07:31:32.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: audit 2026-03-10T07:31:32.127627+0000 mon.a (mon.0) 2184 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-39", "mode": "writeback"}]: dispatch 2026-03-10T07:31:32.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:32 vm00 bash[28005]: audit 2026-03-10T07:31:32.127627+0000 mon.a (mon.0) 2184 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-39", "mode": "writeback"}]: dispatch 2026-03-10T07:31:32.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:32 vm00 bash[20701]: audit 2026-03-10T07:31:31.128526+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59637-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:32.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:32 vm00 bash[20701]: audit 2026-03-10T07:31:31.128747+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-39"}]: dispatch 2026-03-10T07:31:32.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:32 vm00 bash[20701]: audit 2026-03-10T07:31:31.128747+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-39"}]: dispatch 2026-03-10T07:31:32.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:32 vm00 bash[20701]: audit 2026-03-10T07:31:32.117029+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-39"}]': finished 2026-03-10T07:31:32.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:32 vm00 bash[20701]: audit 2026-03-10T07:31:32.117029+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-39"}]': finished 2026-03-10T07:31:32.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:32 vm00 bash[20701]: audit 2026-03-10T07:31:32.120306+0000 mon.b (mon.1) 299 : audit [INF] from='client.? 
2026-03-10T07:31:32.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:32 vm00 bash[20701]: cluster 2026-03-10T07:31:32.126932+0000 mon.a (mon.0) 2183 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in
2026-03-10T07:31:32.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:32 vm00 bash[20701]: audit 2026-03-10T07:31:32.127627+0000 mon.a (mon.0) 2184 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-39", "mode": "writeback"}]: dispatch
2026-03-10T07:31:33.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:31:33 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:31:33.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:33 vm03 bash[23382]: cluster 2026-03-10T07:31:33.117336+0000 mon.a (mon.0) 2185 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:31:33.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:33 vm03 bash[23382]: audit 2026-03-10T07:31:33.128682+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59637-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59637-54"}]': finished
2026-03-10T07:31:33.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:33 vm03 bash[23382]: audit 2026-03-10T07:31:33.128939+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-39", "mode": "writeback"}]': finished
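The CACHE_POOL_NO_HIT_SET warning is the expected consequence of flipping the tier to writeback without configuring hit sets, so the mons cannot track object temperature for promotion and eviction decisions. Outside a throwaway test the warning would normally be addressed by giving the cache pool a bloom-filter hit set; a sketch, with values that are illustrative rather than taken from this run:

    ceph osd pool set test-rados-api-vm00-59782-39 hit_set_type bloom
    ceph osd pool set test-rados-api-vm00-59782-39 hit_set_count 8
    ceph osd pool set test-rados-api-vm00-59782-39 hit_set_period 60

Here the test simply tears the tier down a couple of seconds later, which clears the check.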
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-39", "mode": "writeback"}]': finished 2026-03-10T07:31:33.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:33 vm03 bash[23382]: cluster 2026-03-10T07:31:33.141456+0000 mon.a (mon.0) 2188 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-10T07:31:33.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:33 vm03 bash[23382]: cluster 2026-03-10T07:31:33.141456+0000 mon.a (mon.0) 2188 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-10T07:31:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:33 vm00 bash[28005]: cluster 2026-03-10T07:31:33.117336+0000 mon.a (mon.0) 2185 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:31:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:33 vm00 bash[28005]: cluster 2026-03-10T07:31:33.117336+0000 mon.a (mon.0) 2185 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:31:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:33 vm00 bash[28005]: audit 2026-03-10T07:31:33.128682+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59637-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59637-54"}]': finished 2026-03-10T07:31:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:33 vm00 bash[28005]: audit 2026-03-10T07:31:33.128682+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59637-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59637-54"}]': finished 2026-03-10T07:31:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:33 vm00 bash[28005]: audit 2026-03-10T07:31:33.128939+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-39", "mode": "writeback"}]': finished 2026-03-10T07:31:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:33 vm00 bash[28005]: audit 2026-03-10T07:31:33.128939+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-39", "mode": "writeback"}]': finished 2026-03-10T07:31:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:33 vm00 bash[28005]: cluster 2026-03-10T07:31:33.141456+0000 mon.a (mon.0) 2188 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-10T07:31:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:33 vm00 bash[28005]: cluster 2026-03-10T07:31:33.141456+0000 mon.a (mon.0) 2188 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-10T07:31:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:33 vm00 bash[20701]: cluster 2026-03-10T07:31:33.117336+0000 mon.a (mon.0) 2185 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:31:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:33 vm00 bash[20701]: cluster 2026-03-10T07:31:33.117336+0000 mon.a (mon.0) 2185 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:31:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:33 vm00 bash[20701]: audit 2026-03-10T07:31:33.128682+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59637-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59637-54"}]': finished 2026-03-10T07:31:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:33 vm00 bash[20701]: audit 2026-03-10T07:31:33.128682+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm00-59637-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm00-59637-54"}]': finished 2026-03-10T07:31:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:33 vm00 bash[20701]: audit 2026-03-10T07:31:33.128939+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-39", "mode": "writeback"}]': finished 2026-03-10T07:31:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:33 vm00 bash[20701]: audit 2026-03-10T07:31:33.128939+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-39", "mode": "writeback"}]': finished 2026-03-10T07:31:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:33 vm00 bash[20701]: cluster 2026-03-10T07:31:33.141456+0000 mon.a (mon.0) 2188 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-10T07:31:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:33 vm00 bash[20701]: cluster 2026-03-10T07:31:33.141456+0000 mon.a (mon.0) 2188 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-10T07:31:34.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:34 vm03 bash[23382]: cluster 2026-03-10T07:31:32.623026+0000 mgr.y (mgr.24407) 261 : cluster [DBG] pgmap v371: 292 pgs: 32 creating+peering, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 252 active+clean; 4.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T07:31:34.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:34 vm03 bash[23382]: cluster 2026-03-10T07:31:32.623026+0000 mgr.y (mgr.24407) 261 : cluster [DBG] pgmap v371: 292 pgs: 32 creating+peering, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 252 active+clean; 4.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T07:31:34.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:34 vm03 bash[23382]: audit 2026-03-10T07:31:33.108360+0000 mgr.y (mgr.24407) 262 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:34.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:34 vm03 bash[23382]: audit 2026-03-10T07:31:33.108360+0000 mgr.y (mgr.24407) 262 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:34.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:34 vm03 bash[23382]: audit 2026-03-10T07:31:33.259982+0000 mon.b (mon.1) 300 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:34.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:34 vm03 bash[23382]: audit 2026-03-10T07:31:33.259982+0000 mon.b (mon.1) 300 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:34.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:34 vm03 bash[23382]: audit 2026-03-10T07:31:33.262022+0000 mon.a (mon.0) 2189 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:34.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:34 vm03 bash[23382]: audit 2026-03-10T07:31:33.262022+0000 mon.a (mon.0) 2189 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:34.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:34 vm03 bash[23382]: audit 2026-03-10T07:31:34.151048+0000 mon.a (mon.0) 2190 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:34.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:34 vm03 bash[23382]: audit 2026-03-10T07:31:34.151048+0000 mon.a (mon.0) 2190 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:34.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:34 vm03 bash[23382]: cluster 2026-03-10T07:31:34.153575+0000 mon.a (mon.0) 2191 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-10T07:31:34.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:34 vm03 bash[23382]: cluster 2026-03-10T07:31:34.153575+0000 mon.a (mon.0) 2191 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-10T07:31:34.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:34 vm00 bash[28005]: cluster 2026-03-10T07:31:32.623026+0000 mgr.y (mgr.24407) 261 : cluster [DBG] pgmap v371: 292 pgs: 32 creating+peering, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 252 active+clean; 4.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T07:31:34.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:34 vm00 bash[28005]: cluster 2026-03-10T07:31:32.623026+0000 mgr.y (mgr.24407) 261 : cluster [DBG] pgmap v371: 292 pgs: 32 creating+peering, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 252 active+clean; 4.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T07:31:34.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:34 vm00 bash[28005]: audit 2026-03-10T07:31:33.108360+0000 mgr.y (mgr.24407) 262 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:34.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:34 vm00 bash[28005]: audit 2026-03-10T07:31:33.108360+0000 mgr.y (mgr.24407) 262 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:34.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:34 vm00 bash[28005]: audit 2026-03-10T07:31:33.259982+0000 mon.b (mon.1) 300 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:34.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:34 vm00 bash[28005]: audit 2026-03-10T07:31:33.259982+0000 mon.b (mon.1) 300 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:34.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:34 vm00 bash[28005]: audit 2026-03-10T07:31:33.262022+0000 mon.a (mon.0) 2189 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:34.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:34 vm00 bash[28005]: audit 2026-03-10T07:31:33.262022+0000 mon.a (mon.0) 2189 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:34.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:34 vm00 bash[28005]: audit 2026-03-10T07:31:34.151048+0000 mon.a (mon.0) 2190 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:34.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:34 vm00 bash[28005]: audit 2026-03-10T07:31:34.151048+0000 mon.a (mon.0) 2190 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:34.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:34 vm00 bash[28005]: cluster 2026-03-10T07:31:34.153575+0000 mon.a (mon.0) 2191 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-10T07:31:34.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:34 vm00 bash[28005]: cluster 2026-03-10T07:31:34.153575+0000 mon.a (mon.0) 2191 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-10T07:31:34.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:34 vm00 bash[20701]: cluster 2026-03-10T07:31:32.623026+0000 mgr.y (mgr.24407) 261 : cluster [DBG] pgmap v371: 292 pgs: 32 creating+peering, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 252 active+clean; 4.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T07:31:34.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:34 vm00 bash[20701]: cluster 2026-03-10T07:31:32.623026+0000 mgr.y (mgr.24407) 261 : cluster [DBG] pgmap v371: 292 pgs: 32 creating+peering, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 252 active+clean; 4.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T07:31:34.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:34 vm00 bash[20701]: audit 2026-03-10T07:31:33.108360+0000 mgr.y (mgr.24407) 262 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:34.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:34 vm00 bash[20701]: audit 2026-03-10T07:31:33.108360+0000 mgr.y (mgr.24407) 262 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:34.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:34 vm00 bash[20701]: audit 2026-03-10T07:31:33.259982+0000 mon.b (mon.1) 300 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:34.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:34 vm00 bash[20701]: audit 2026-03-10T07:31:33.259982+0000 mon.b (mon.1) 300 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:34.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:34 vm00 bash[20701]: audit 2026-03-10T07:31:33.262022+0000 mon.a (mon.0) 2189 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:34.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:34 vm00 bash[20701]: audit 2026-03-10T07:31:33.262022+0000 mon.a (mon.0) 2189 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:34.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:34 vm00 bash[20701]: audit 2026-03-10T07:31:34.151048+0000 mon.a (mon.0) 2190 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:34.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:34 vm00 bash[20701]: audit 2026-03-10T07:31:34.151048+0000 mon.a (mon.0) 2190 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:31:34.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:34 vm00 bash[20701]: cluster 2026-03-10T07:31:34.153575+0000 mon.a (mon.0) 2191 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-10T07:31:34.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:34 vm00 bash[20701]: cluster 2026-03-10T07:31:34.153575+0000 mon.a (mon.0) 2191 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-10T07:31:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:35 vm03 bash[23382]: audit 2026-03-10T07:31:34.157607+0000 mon.b (mon.1) 301 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39"}]: dispatch 2026-03-10T07:31:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:35 vm03 bash[23382]: audit 2026-03-10T07:31:34.157607+0000 mon.b (mon.1) 301 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39"}]: dispatch 2026-03-10T07:31:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:35 vm03 bash[23382]: audit 2026-03-10T07:31:34.159840+0000 mon.a (mon.0) 2192 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39"}]: dispatch 2026-03-10T07:31:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:35 vm03 bash[23382]: audit 2026-03-10T07:31:34.159840+0000 mon.a (mon.0) 2192 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39"}]: dispatch 2026-03-10T07:31:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:35 vm03 bash[23382]: cluster 2026-03-10T07:31:35.151241+0000 mon.a (mon.0) 2193 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:31:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:35 vm03 bash[23382]: cluster 2026-03-10T07:31:35.151241+0000 mon.a (mon.0) 2193 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:31:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:35 vm00 bash[20701]: audit 2026-03-10T07:31:34.157607+0000 mon.b (mon.1) 301 : audit [INF] from='client.? 
2026-03-10T07:31:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:35 vm00 bash[20701]: audit 2026-03-10T07:31:34.159840+0000 mon.a (mon.0) 2192 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39"}]: dispatch
2026-03-10T07:31:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:35 vm00 bash[20701]: cluster 2026-03-10T07:31:35.151241+0000 mon.a (mon.0) 2193 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:31:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:35 vm00 bash[28005]: audit 2026-03-10T07:31:34.157607+0000 mon.b (mon.1) 301 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39"}]: dispatch
2026-03-10T07:31:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:35 vm00 bash[28005]: audit 2026-03-10T07:31:34.159840+0000 mon.a (mon.0) 2192 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39"}]: dispatch
2026-03-10T07:31:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:35 vm00 bash[28005]: cluster 2026-03-10T07:31:35.151241+0000 mon.a (mon.0) 2193 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:31:36.122 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [==========] Running 77 tests from 4 test suites.
2026-03-10T07:31:36.122 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [----------] Global test environment set-up.
2026-03-10T07:31:36.122 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [----------] 3 tests from LibRadosTierPP
2026-03-10T07:31:36.122 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: seed 59782
2026-03-10T07:31:36.122 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTierPP.Dirty
2026-03-10T07:31:36.122 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTierPP.Dirty (189 ms)
2026-03-10T07:31:36.122 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTierPP.FlushWriteRaces
2026-03-10T07:31:36.122 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTierPP.FlushWriteRaces (11043 ms)
2026-03-10T07:31:36.122 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTierPP.HitSetNone
2026-03-10T07:31:36.122 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTierPP.HitSetNone (200 ms)
2026-03-10T07:31:36.122 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [----------] 3 tests from LibRadosTierPP (11432 ms total)
2026-03-10T07:31:36.122 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp:
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [----------] 48 tests from LibRadosTwoPoolsPP
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Overlay
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Overlay (7427 ms)
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Promote
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Promote (7710 ms)
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteSnap
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteSnap (10062 ms)
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteSnapScrub
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: my_snaps [3]
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: my_snaps [4,3]
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: my_snaps [5,4,3]
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: my_snaps [6,5,4,3]
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: promoting some heads
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: promoting from clones for snap 6
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: promoting from clones for snap 5
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: promoting from clones for snap 4
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: promoting from clones for snap 3
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: waiting for scrubs...
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: done waiting
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteSnapScrub (48328 ms)
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteSnapTrimRace
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteSnapTrimRace (10272 ms)
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Whiteout
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Whiteout (8043 ms)
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.WhiteoutDeleteCreate
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.WhiteoutDeleteCreate (8281 ms)
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Evict
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Evict (8147 ms)
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.EvictSnap
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.EvictSnap (10085 ms)
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.EvictSnap2
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.EvictSnap2 (8888 ms)
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ListSnap
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ListSnap (10329 ms)
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.EvictSnapRollbackReadRace
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.EvictSnapRollbackReadRace (13137 ms)
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TryFlush
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TryFlush (7512 ms)
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Flush
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Flush (8055 ms)
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.FlushSnap
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.FlushSnap (13163 ms)
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.FlushTryFlushRaces
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.FlushTryFlushRaces (8479 ms)
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TryFlushReadRace
2026-03-10T07:31:36.123 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TryFlushReadRace (8264 ms)
2026-03-10T07:31:36.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:36 vm03 bash[23382]: cluster 2026-03-10T07:31:34.623440+0000 mgr.y (mgr.24407) 263 : cluster [DBG] pgmap v374: 300 pgs: 4 unknown, 26 creating+peering, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 262 active+clean; 4.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:31:36.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:36 vm03 bash[23382]: audit 2026-03-10T07:31:35.154978+0000 mon.a (mon.0) 2194 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39"}]': finished
2026-03-10T07:31:36.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:36 vm03 bash[23382]: cluster 2026-03-10T07:31:35.159019+0000 mon.a (mon.0) 2195 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in
2026-03-10T07:31:36.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:36 vm03 bash[23382]: cluster 2026-03-10T07:31:35.169079+0000 mon.a (mon.0) 2196 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:31:36.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:36 vm03 bash[23382]: audit 2026-03-10T07:31:35.170565+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch
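The api_tier_pp block above is standard googletest output from the workunit client; the 59782 printed as the seed also appears in the temporary pool names (test-rados-api-vm00-59782-*). Assuming the ceph-test package that ships this binary is installed on the target, a single case could plausibly be rerun in isolation with the usual gtest filter, e.g.:

    ceph_test_rados_api_tier_pp --gtest_filter='LibRadosTwoPoolsPP.FlushTryFlushRaces'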
192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:36 vm03 bash[23382]: audit 2026-03-10T07:31:35.171558+0000 mon.a (mon.0) 2197 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:36 vm03 bash[23382]: audit 2026-03-10T07:31:35.171558+0000 mon.a (mon.0) 2197 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:36 vm03 bash[23382]: audit 2026-03-10T07:31:36.113179+0000 mon.a (mon.0) 2198 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]': finished 2026-03-10T07:31:36.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:36 vm03 bash[23382]: audit 2026-03-10T07:31:36.113179+0000 mon.a (mon.0) 2198 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]': finished 2026-03-10T07:31:36.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:36 vm03 bash[23382]: audit 2026-03-10T07:31:36.126257+0000 mon.c (mon.2) 268 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:36 vm03 bash[23382]: audit 2026-03-10T07:31:36.126257+0000 mon.c (mon.2) 268 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:36 vm03 bash[23382]: cluster 2026-03-10T07:31:36.128721+0000 mon.a (mon.0) 2199 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-10T07:31:36.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:36 vm03 bash[23382]: cluster 2026-03-10T07:31:36.128721+0000 mon.a (mon.0) 2199 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-10T07:31:36.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:36 vm03 bash[23382]: audit 2026-03-10T07:31:36.129543+0000 mon.a (mon.0) 2200 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:36 vm03 bash[23382]: audit 2026-03-10T07:31:36.129543+0000 mon.a (mon.0) 2200 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:36 vm00 bash[20701]: cluster 2026-03-10T07:31:34.623440+0000 mgr.y (mgr.24407) 263 : cluster [DBG] pgmap v374: 300 pgs: 4 unknown, 26 creating+peering, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 262 active+clean; 4.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s 2026-03-10T07:31:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:36 vm00 bash[20701]: cluster 2026-03-10T07:31:34.623440+0000 mgr.y (mgr.24407) 263 : cluster [DBG] pgmap v374: 300 pgs: 4 unknown, 26 creating+peering, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 262 active+clean; 4.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s 2026-03-10T07:31:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:36 vm00 bash[20701]: audit 2026-03-10T07:31:35.154978+0000 mon.a (mon.0) 2194 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39"}]': finished 2026-03-10T07:31:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:36 vm00 bash[20701]: audit 2026-03-10T07:31:35.154978+0000 mon.a (mon.0) 2194 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39"}]': finished 2026-03-10T07:31:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:36 vm00 bash[20701]: cluster 2026-03-10T07:31:35.159019+0000 mon.a (mon.0) 2195 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-10T07:31:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:36 vm00 bash[20701]: cluster 2026-03-10T07:31:35.159019+0000 mon.a (mon.0) 2195 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-10T07:31:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:36 vm00 bash[20701]: cluster 2026-03-10T07:31:35.169079+0000 mon.a (mon.0) 2196 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:36 vm00 bash[20701]: cluster 2026-03-10T07:31:35.169079+0000 mon.a (mon.0) 2196 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:36 vm00 bash[20701]: audit 2026-03-10T07:31:35.170565+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:36 vm00 bash[20701]: audit 2026-03-10T07:31:35.170565+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:36 vm00 bash[20701]: audit 2026-03-10T07:31:35.171558+0000 mon.a (mon.0) 2197 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:36 vm00 bash[20701]: audit 2026-03-10T07:31:35.171558+0000 mon.a (mon.0) 2197 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:36 vm00 bash[20701]: audit 2026-03-10T07:31:36.113179+0000 mon.a (mon.0) 2198 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]': finished 2026-03-10T07:31:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:36 vm00 bash[20701]: audit 2026-03-10T07:31:36.113179+0000 mon.a (mon.0) 2198 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]': finished 2026-03-10T07:31:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:36 vm00 bash[20701]: audit 2026-03-10T07:31:36.126257+0000 mon.c (mon.2) 268 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:36 vm00 bash[20701]: audit 2026-03-10T07:31:36.126257+0000 mon.c (mon.2) 268 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:36 vm00 bash[20701]: cluster 2026-03-10T07:31:36.128721+0000 mon.a (mon.0) 2199 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:36 vm00 bash[20701]: cluster 2026-03-10T07:31:36.128721+0000 mon.a (mon.0) 2199 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:36 vm00 bash[20701]: audit 2026-03-10T07:31:36.129543+0000 mon.a (mon.0) 2200 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:36 vm00 bash[20701]: audit 2026-03-10T07:31:36.129543+0000 mon.a (mon.0) 2200 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:36 vm00 bash[28005]: cluster 2026-03-10T07:31:34.623440+0000 mgr.y (mgr.24407) 263 : cluster [DBG] pgmap v374: 300 pgs: 4 unknown, 26 creating+peering, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 262 active+clean; 4.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:36 vm00 bash[28005]: cluster 2026-03-10T07:31:34.623440+0000 mgr.y (mgr.24407) 263 : cluster [DBG] pgmap v374: 300 pgs: 4 unknown, 26 creating+peering, 3 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 262 active+clean; 4.4 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:36 vm00 bash[28005]: audit 2026-03-10T07:31:35.154978+0000 mon.a (mon.0) 2194 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39"}]': finished 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:36 vm00 bash[28005]: audit 2026-03-10T07:31:35.154978+0000 mon.a (mon.0) 2194 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-39"}]': finished 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:36 vm00 bash[28005]: cluster 2026-03-10T07:31:35.159019+0000 mon.a (mon.0) 2195 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:36 vm00 bash[28005]: cluster 2026-03-10T07:31:35.159019+0000 mon.a (mon.0) 2195 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:36 vm00 bash[28005]: cluster 2026-03-10T07:31:35.169079+0000 mon.a (mon.0) 2196 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:36 vm00 bash[28005]: cluster 2026-03-10T07:31:35.169079+0000 mon.a (mon.0) 2196 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:36 vm00 bash[28005]: audit 2026-03-10T07:31:35.170565+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:36 vm00 bash[28005]: audit 2026-03-10T07:31:35.170565+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:36 vm00 bash[28005]: audit 2026-03-10T07:31:35.171558+0000 mon.a (mon.0) 2197 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:36 vm00 bash[28005]: audit 2026-03-10T07:31:35.171558+0000 mon.a (mon.0) 2197 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:36 vm00 bash[28005]: audit 2026-03-10T07:31:36.113179+0000 mon.a (mon.0) 2198 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]': finished 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:36 vm00 bash[28005]: audit 2026-03-10T07:31:36.113179+0000 mon.a (mon.0) 2198 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm00-59637-54"}]': finished 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:36 vm00 bash[28005]: audit 2026-03-10T07:31:36.126257+0000 mon.c (mon.2) 268 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:36 vm00 bash[28005]: audit 2026-03-10T07:31:36.126257+0000 mon.c (mon.2) 268 : audit [INF] from='client.? 192.168.123.100:0/984306118' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:36 vm00 bash[28005]: cluster 2026-03-10T07:31:36.128721+0000 mon.a (mon.0) 2199 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:36 vm00 bash[28005]: cluster 2026-03-10T07:31:36.128721+0000 mon.a (mon.0) 2199 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:36 vm00 bash[28005]: audit 2026-03-10T07:31:36.129543+0000 mon.a (mon.0) 2200 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:36.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:36 vm00 bash[28005]: audit 2026-03-10T07:31:36.129543+0000 mon.a (mon.0) 2200 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]: dispatch 2026-03-10T07:31:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:38 vm03 bash[23382]: cluster 2026-03-10T07:31:36.623785+0000 mgr.y (mgr.24407) 264 : cluster [DBG] pgmap v377: 260 pgs: 1 active+clean+snaptrim, 259 active+clean; 8.3 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-10T07:31:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:38 vm03 bash[23382]: cluster 2026-03-10T07:31:36.623785+0000 mgr.y (mgr.24407) 264 : cluster [DBG] pgmap v377: 260 pgs: 1 active+clean+snaptrim, 259 active+clean; 8.3 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-10T07:31:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:38 vm03 bash[23382]: audit 2026-03-10T07:31:37.117093+0000 mon.a (mon.0) 2201 : audit [INF] from='client.? 
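The audit entries above are the mon-side record of the test tearing down its cache tier and the IsCompletePP test's erasure-code artifacts. Translated from the audit JSON back to the ceph CLI, the same cleanup would look roughly like this (pool, profile, and rule names are the generated ones from this run):

    ceph osd tier remove test-rados-api-vm00-59782-6 test-rados-api-vm00-59782-39
    ceph osd erasure-code-profile rm testprofile-IsCompletePP_vm00-59637-54
    ceph osd crush rule rm IsCompletePP_vm00-59637-54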
2026-03-10T07:31:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:38 vm03 bash[23382]: cluster 2026-03-10T07:31:36.623785+0000 mgr.y (mgr.24407) 264 : cluster [DBG] pgmap v377: 260 pgs: 1 active+clean+snaptrim, 259 active+clean; 8.3 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s
2026-03-10T07:31:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:38 vm03 bash[23382]: audit 2026-03-10T07:31:37.117093+0000 mon.a (mon.0) 2201 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]': finished
2026-03-10T07:31:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:38 vm03 bash[23382]: cluster 2026-03-10T07:31:37.127774+0000 mon.a (mon.0) 2202 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in
2026-03-10T07:31:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:38 vm03 bash[23382]: audit 2026-03-10T07:31:37.130880+0000 mon.b (mon.1) 302 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-41","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:38.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:38 vm03 bash[23382]: audit 2026-03-10T07:31:37.134055+0000 mon.a (mon.0) 2203 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-41","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:38.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:38 vm03 bash[23382]: audit 2026-03-10T07:31:37.150867+0000 mon.a (mon.0) 2204 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59637-55"}]: dispatch
2026-03-10T07:31:38.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:38 vm03 bash[23382]: audit 2026-03-10T07:31:37.152303+0000 mon.a (mon.0) 2205 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59637-55"}]: dispatch
2026-03-10T07:31:38.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:38 vm03 bash[23382]: audit 2026-03-10T07:31:37.152532+0000 mon.a (mon.0) 2206 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59637-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:38 vm00 bash[28005]: cluster 2026-03-10T07:31:36.623785+0000 mgr.y (mgr.24407) 264 : cluster [DBG] pgmap v377: 260 pgs: 1 active+clean+snaptrim, 259 active+clean; 8.3 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s
2026-03-10T07:31:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:38 vm00 bash[28005]: audit 2026-03-10T07:31:37.117093+0000 mon.a (mon.0) 2201 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]': finished
2026-03-10T07:31:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:38 vm00 bash[28005]: cluster 2026-03-10T07:31:37.127774+0000 mon.a (mon.0) 2202 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in
2026-03-10T07:31:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:38 vm00 bash[28005]: audit 2026-03-10T07:31:37.130880+0000 mon.b (mon.1) 302 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-41","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:38 vm00 bash[28005]: audit 2026-03-10T07:31:37.134055+0000 mon.a (mon.0) 2203 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-41","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:38 vm00 bash[28005]: audit 2026-03-10T07:31:37.150867+0000 mon.a (mon.0) 2204 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59637-55"}]: dispatch
2026-03-10T07:31:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:38 vm00 bash[28005]: audit 2026-03-10T07:31:37.152303+0000 mon.a (mon.0) 2205 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59637-55"}]: dispatch
2026-03-10T07:31:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:38 vm00 bash[28005]: audit 2026-03-10T07:31:37.152532+0000 mon.a (mon.0) 2206 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59637-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:38.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:38 vm00 bash[20701]: cluster 2026-03-10T07:31:36.623785+0000 mgr.y (mgr.24407) 264 : cluster [DBG] pgmap v377: 260 pgs: 1 active+clean+snaptrim, 259 active+clean; 8.3 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s
2026-03-10T07:31:38.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:38 vm00 bash[20701]: audit 2026-03-10T07:31:37.117093+0000 mon.a (mon.0) 2201 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm00-59637-54"}]': finished
2026-03-10T07:31:38.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:38 vm00 bash[20701]: cluster 2026-03-10T07:31:37.127774+0000 mon.a (mon.0) 2202 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in
2026-03-10T07:31:38.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:38 vm00 bash[20701]: audit 2026-03-10T07:31:37.130880+0000 mon.b (mon.1) 302 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-41","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:38.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:38 vm00 bash[20701]: audit 2026-03-10T07:31:37.134055+0000 mon.a (mon.0) 2203 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-41","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:38.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:38 vm00 bash[20701]: audit 2026-03-10T07:31:37.150867+0000 mon.a (mon.0) 2204 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59637-55"}]: dispatch
2026-03-10T07:31:38.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:38 vm00 bash[20701]: audit 2026-03-10T07:31:37.152303+0000 mon.a (mon.0) 2205 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59637-55"}]: dispatch
2026-03-10T07:31:38.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:38 vm00 bash[20701]: audit 2026-03-10T07:31:37.152532+0000 mon.a (mon.0) 2206 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59637-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
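The POOL_APP_NOT_ENABLED health warnings in this stretch clear as the test tags each freshly created pool with an application, which is why this suite's log-ignorelist tolerates them. The audit JSON for entry 302/2203 corresponds to this CLI call (the flag is required here because the command is being forced on a pool that will serve as a cache tier):

    ceph osd pool application enable test-rados-api-vm00-59782-41 rados --yes-i-really-mean-it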
2026-03-10T07:31:39.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:39 vm03 bash[23382]: audit 2026-03-10T07:31:38.120658+0000 mon.a (mon.0) 2207 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-41","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:31:39.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:39 vm03 bash[23382]: audit 2026-03-10T07:31:38.120743+0000 mon.a (mon.0) 2208 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59637-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:31:39.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:39 vm03 bash[23382]: cluster 2026-03-10T07:31:38.124350+0000 mon.a (mon.0) 2209 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in
2026-03-10T07:31:39.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:39 vm03 bash[23382]: audit 2026-03-10T07:31:38.132803+0000 mon.b (mon.1) 303 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:39.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:39 vm03 bash[23382]: audit 2026-03-10T07:31:38.139389+0000 mon.a (mon.0) 2210 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59637-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59637-55"}]: dispatch
2026-03-10T07:31:39.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:39 vm03 bash[23382]: audit 2026-03-10T07:31:38.139625+0000 mon.a (mon.0) 2211 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:39.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:39 vm03 bash[23382]: audit 2026-03-10T07:31:39.124463+0000 mon.a (mon.0) 2212 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:31:39.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:39 vm03 bash[23382]: audit 2026-03-10T07:31:39.127595+0000 mon.b (mon.1) 304 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T07:31:39.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:39 vm03 bash[23382]: cluster 2026-03-10T07:31:39.139281+0000 mon.a (mon.0) 2213 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in
2026-03-10T07:31:39.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:39 vm03 bash[23382]: audit 2026-03-10T07:31:39.139722+0000 mon.a (mon.0) 2214 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_count","val": "2"}]: dispatch
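In parallel with the tiering work, the IsSafePP case rebuilds its erasure-code profile and creates an 8-PG erasure pool from it. Translated from the audit JSON in entries 2206/2210 to the CLI (names are the generated ones from this run):

    ceph osd erasure-code-profile set testprofile-IsSafePP_vm00-59637-55 k=2 m=1 crush-failure-domain=osd
    ceph osd pool create IsSafePP_vm00-59637-55 8 8 erasure testprofile-IsSafePP_vm00-59637-55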
2026-03-10T07:31:39.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:39 vm00 bash[20701]: audit 2026-03-10T07:31:38.120658+0000 mon.a (mon.0) 2207 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-41","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:31:39.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:39 vm00 bash[20701]: audit 2026-03-10T07:31:38.120743+0000 mon.a (mon.0) 2208 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59637-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:31:39.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:39 vm00 bash[20701]: cluster 2026-03-10T07:31:38.124350+0000 mon.a (mon.0) 2209 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in
2026-03-10T07:31:39.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:39 vm00 bash[20701]: audit 2026-03-10T07:31:38.132803+0000 mon.b (mon.1) 303 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:39.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:39 vm00 bash[20701]: audit 2026-03-10T07:31:38.139389+0000 mon.a (mon.0) 2210 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59637-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59637-55"}]: dispatch
2026-03-10T07:31:39.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:39 vm00 bash[20701]: audit 2026-03-10T07:31:38.139625+0000 mon.a (mon.0) 2211 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:39.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:39 vm00 bash[20701]: audit 2026-03-10T07:31:39.124463+0000 mon.a (mon.0) 2212 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:31:39.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:39 vm00 bash[28005]: audit 2026-03-10T07:31:38.120658+0000 mon.a (mon.0) 2207 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-41","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:31:39.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:39 vm00 bash[28005]: audit 2026-03-10T07:31:38.120743+0000 mon.a (mon.0) 2208 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm00-59637-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:31:39.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:39 vm00 bash[28005]: cluster 2026-03-10T07:31:38.124350+0000 mon.a (mon.0) 2209 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in
2026-03-10T07:31:39.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:39 vm00 bash[28005]: audit 2026-03-10T07:31:38.132803+0000 mon.b (mon.1) 303 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:39.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:39 vm00 bash[28005]: audit 2026-03-10T07:31:38.139389+0000 mon.a (mon.0) 2210 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59637-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59637-55"}]: dispatch
2026-03-10T07:31:39.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:39 vm00 bash[28005]: audit 2026-03-10T07:31:38.139625+0000 mon.a (mon.0) 2211 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:39.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:39 vm00 bash[28005]: audit 2026-03-10T07:31:39.124463+0000 mon.a (mon.0) 2212 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:31:39.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:39 vm00 bash[28005]: audit 2026-03-10T07:31:39.127595+0000 mon.b (mon.1) 304 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T07:31:39.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:39 vm00 bash[28005]: cluster 2026-03-10T07:31:39.139281+0000 mon.a (mon.0) 2213 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in
2026-03-10T07:31:39.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:39 vm00 bash[28005]: audit 2026-03-10T07:31:39.139722+0000 mon.a (mon.0) 2214 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T07:31:39.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:39 vm00 bash[20701]: audit 2026-03-10T07:31:39.127595+0000 mon.b (mon.1) 304 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T07:31:39.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:39 vm00 bash[20701]: cluster 2026-03-10T07:31:39.139281+0000 mon.a (mon.0) 2213 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in
2026-03-10T07:31:39.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:39 vm00 bash[20701]: audit 2026-03-10T07:31:39.139722+0000 mon.a (mon.0) 2214 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_count","val": "2"}]: dispatch
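The tier add and pool set entries above attach pool test-rados-api-vm00-59782-41 as a cache tier over base pool test-rados-api-vm00-59782-6 and configure its HitSet tracking (the --force-nonempty flag is needed because the prospective cache pool already contains objects). The equivalent CLI sequence, read off the audit JSON; a real cache-tier setup would typically also set hit_set_type, which this excerpt does not show:

    ceph osd tier add test-rados-api-vm00-59782-6 test-rados-api-vm00-59782-41 --force-nonempty
    ceph osd pool set test-rados-api-vm00-59782-41 hit_set_count 2
    ceph osd pool set test-rados-api-vm00-59782-41 hit_set_period 600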
2026-03-10T07:31:40.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:40 vm03 bash[23382]: cluster 2026-03-10T07:31:38.624167+0000 mgr.y (mgr.24407) 265 : cluster [DBG] pgmap v380: 292 pgs: 32 unknown, 1 active+clean+snaptrim, 259 active+clean; 8.3 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:31:40.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:40 vm03 bash[23382]: audit 2026-03-10T07:31:39.341483+0000 mon.c (mon.2) 269 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:31:40.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:40 vm03 bash[23382]: audit 2026-03-10T07:31:40.127570+0000 mon.a (mon.0) 2215 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59637-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59637-55"}]': finished
2026-03-10T07:31:40.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:40 vm03 bash[23382]: audit 2026-03-10T07:31:40.127670+0000 mon.a (mon.0) 2216 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_count","val": "2"}]': finished
2026-03-10T07:31:40.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:40 vm03 bash[23382]: audit 2026-03-10T07:31:40.135876+0000 mon.b (mon.1) 305 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T07:31:40.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:40 vm03 bash[23382]: cluster 2026-03-10T07:31:40.137846+0000 mon.a (mon.0) 2217 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in
2026-03-10T07:31:40.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:40 vm03 bash[23382]: audit 2026-03-10T07:31:40.139985+0000 mon.a (mon.0) 2218 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T07:31:40.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:40 vm00 bash[28005]: cluster 2026-03-10T07:31:38.624167+0000 mgr.y (mgr.24407) 265 : cluster [DBG] pgmap v380: 292 pgs: 32 unknown, 1 active+clean+snaptrim, 259 active+clean; 8.3 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:31:40.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:40 vm00 bash[28005]: audit 2026-03-10T07:31:39.341483+0000 mon.c (mon.2) 269 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:31:40.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:40 vm00 bash[28005]: audit 2026-03-10T07:31:40.127570+0000 mon.a (mon.0) 2215 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59637-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59637-55"}]': finished
2026-03-10T07:31:40.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:40 vm00 bash[28005]: audit 2026-03-10T07:31:40.127670+0000 mon.a (mon.0) 2216 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_count","val": "2"}]': finished
2026-03-10T07:31:40.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:40 vm00 bash[28005]: audit 2026-03-10T07:31:40.135876+0000 mon.b (mon.1) 305 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T07:31:40.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:40 vm00 bash[28005]: cluster 2026-03-10T07:31:40.137846+0000 mon.a (mon.0) 2217 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in
2026-03-10T07:31:40.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:40 vm00 bash[28005]: audit 2026-03-10T07:31:40.139985+0000 mon.a (mon.0) 2218 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T07:31:40.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:40 vm00 bash[20701]: cluster 2026-03-10T07:31:38.624167+0000 mgr.y (mgr.24407) 265 : cluster [DBG] pgmap v380: 292 pgs: 32 unknown, 1 active+clean+snaptrim, 259 active+clean; 8.3 MiB data, 733 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:31:40.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:40 vm00 bash[20701]: audit 2026-03-10T07:31:39.341483+0000 mon.c (mon.2) 269 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:31:40.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:40 vm00 bash[20701]: audit 2026-03-10T07:31:40.127570+0000 mon.a (mon.0) 2215 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm00-59637-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm00-59637-55"}]': finished
2026-03-10T07:31:40.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:40 vm00 bash[20701]: audit 2026-03-10T07:31:40.127670+0000 mon.a (mon.0) 2216 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_count","val": "2"}]': finished
2026-03-10T07:31:40.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:40 vm00 bash[20701]: audit 2026-03-10T07:31:40.135876+0000 mon.b (mon.1) 305 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T07:31:40.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:40 vm00 bash[20701]: cluster 2026-03-10T07:31:40.137846+0000 mon.a (mon.0) 2217 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in
2026-03-10T07:31:40.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:40 vm00 bash[20701]: audit 2026-03-10T07:31:40.139985+0000 mon.a (mon.0) 2218 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T07:31:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:41 vm00 bash[28005]: cluster 2026-03-10T07:31:41.111698+0000 mon.a (mon.0) 2219 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:31:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:41 vm00 bash[28005]: audit 2026-03-10T07:31:41.148903+0000 mon.a (mon.0) 2220 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_period","val": "600"}]': finished
2026-03-10T07:31:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:41 vm00 bash[28005]: audit 2026-03-10T07:31:41.148903+0000 mon.a (mon.0) 2220 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:31:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:41 vm00 bash[28005]: audit 2026-03-10T07:31:41.153880+0000 mon.b (mon.1) 306 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:31:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:41 vm00 bash[28005]: audit 2026-03-10T07:31:41.153880+0000 mon.b (mon.1) 306 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:31:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:41 vm00 bash[28005]: cluster 2026-03-10T07:31:41.168337+0000 mon.a (mon.0) 2221 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-10T07:31:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:41 vm00 bash[28005]: cluster 2026-03-10T07:31:41.168337+0000 mon.a (mon.0) 2221 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-10T07:31:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:41 vm00 bash[28005]: audit 2026-03-10T07:31:41.171896+0000 mon.a (mon.0) 2222 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:31:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:41 vm00 bash[28005]: audit 2026-03-10T07:31:41.171896+0000 mon.a (mon.0) 2222 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:31:41.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:31:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:31:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:31:41.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:41 vm00 bash[20701]: cluster 2026-03-10T07:31:41.111698+0000 mon.a (mon.0) 2219 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:41.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:41 vm00 bash[20701]: cluster 2026-03-10T07:31:41.111698+0000 mon.a (mon.0) 2219 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:41.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:41 vm00 bash[20701]: audit 2026-03-10T07:31:41.148903+0000 mon.a (mon.0) 2220 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:31:41.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:41 vm00 bash[20701]: audit 2026-03-10T07:31:41.148903+0000 mon.a (mon.0) 2220 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:31:41.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:41 vm00 bash[20701]: audit 2026-03-10T07:31:41.153880+0000 mon.b (mon.1) 306 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:31:41.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:41 vm00 bash[20701]: audit 2026-03-10T07:31:41.153880+0000 mon.b (mon.1) 306 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:31:41.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:41 vm00 bash[20701]: cluster 2026-03-10T07:31:41.168337+0000 mon.a (mon.0) 2221 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-10T07:31:41.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:41 vm00 bash[20701]: cluster 2026-03-10T07:31:41.168337+0000 mon.a (mon.0) 2221 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-10T07:31:41.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:41 vm00 bash[20701]: audit 2026-03-10T07:31:41.171896+0000 mon.a (mon.0) 2222 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:31:41.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:41 vm00 bash[20701]: audit 2026-03-10T07:31:41.171896+0000 mon.a (mon.0) 2222 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:31:41.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:41 vm03 bash[23382]: cluster 2026-03-10T07:31:41.111698+0000 mon.a (mon.0) 2219 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:41.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:41 vm03 bash[23382]: cluster 2026-03-10T07:31:41.111698+0000 mon.a (mon.0) 2219 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:41.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:41 vm03 bash[23382]: audit 2026-03-10T07:31:41.148903+0000 mon.a (mon.0) 2220 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:31:41.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:41 vm03 bash[23382]: audit 2026-03-10T07:31:41.148903+0000 mon.a (mon.0) 2220 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:31:41.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:41 vm03 bash[23382]: audit 2026-03-10T07:31:41.153880+0000 mon.b (mon.1) 306 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:31:41.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:41 vm03 bash[23382]: audit 2026-03-10T07:31:41.153880+0000 mon.b (mon.1) 306 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:31:41.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:41 vm03 bash[23382]: cluster 2026-03-10T07:31:41.168337+0000 mon.a (mon.0) 2221 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-10T07:31:41.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:41 vm03 bash[23382]: cluster 2026-03-10T07:31:41.168337+0000 mon.a (mon.0) 2221 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-10T07:31:41.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:41 vm03 bash[23382]: audit 2026-03-10T07:31:41.171896+0000 mon.a (mon.0) 2222 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:31:41.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:41 vm03 bash[23382]: audit 2026-03-10T07:31:41.171896+0000 mon.a (mon.0) 2222 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:31:42.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:42 vm00 bash[28005]: cluster 2026-03-10T07:31:40.624513+0000 mgr.y (mgr.24407) 266 : cluster [DBG] pgmap v383: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 717 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:42.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:42 vm00 bash[28005]: cluster 2026-03-10T07:31:40.624513+0000 mgr.y (mgr.24407) 266 : cluster [DBG] pgmap v383: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 717 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:42.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:42 vm00 bash[28005]: audit 2026-03-10T07:31:42.153243+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T07:31:42.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:42 vm00 bash[28005]: audit 2026-03-10T07:31:42.153243+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T07:31:42.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:42 vm00 bash[28005]: cluster 2026-03-10T07:31:42.167288+0000 mon.a (mon.0) 2224 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-10T07:31:42.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:42 vm00 bash[28005]: cluster 2026-03-10T07:31:42.167288+0000 mon.a (mon.0) 2224 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-10T07:31:42.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:42 vm00 bash[28005]: audit 2026-03-10T07:31:42.172645+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59637-55"}]: dispatch 2026-03-10T07:31:42.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:42 vm00 bash[28005]: audit 2026-03-10T07:31:42.172645+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? 
192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59637-55"}]: dispatch 2026-03-10T07:31:42.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:42 vm00 bash[20701]: cluster 2026-03-10T07:31:40.624513+0000 mgr.y (mgr.24407) 266 : cluster [DBG] pgmap v383: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 717 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:42.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:42 vm00 bash[20701]: cluster 2026-03-10T07:31:40.624513+0000 mgr.y (mgr.24407) 266 : cluster [DBG] pgmap v383: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 717 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:42.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:42 vm00 bash[20701]: audit 2026-03-10T07:31:42.153243+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T07:31:42.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:42 vm00 bash[20701]: audit 2026-03-10T07:31:42.153243+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T07:31:42.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:42 vm00 bash[20701]: cluster 2026-03-10T07:31:42.167288+0000 mon.a (mon.0) 2224 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-10T07:31:42.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:42 vm00 bash[20701]: cluster 2026-03-10T07:31:42.167288+0000 mon.a (mon.0) 2224 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-10T07:31:42.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:42 vm00 bash[20701]: audit 2026-03-10T07:31:42.172645+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59637-55"}]: dispatch 2026-03-10T07:31:42.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:42 vm00 bash[20701]: audit 2026-03-10T07:31:42.172645+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59637-55"}]: dispatch 2026-03-10T07:31:42.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:42 vm03 bash[23382]: cluster 2026-03-10T07:31:40.624513+0000 mgr.y (mgr.24407) 266 : cluster [DBG] pgmap v383: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 717 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:42.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:42 vm03 bash[23382]: cluster 2026-03-10T07:31:40.624513+0000 mgr.y (mgr.24407) 266 : cluster [DBG] pgmap v383: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 717 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:42.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:42 vm03 bash[23382]: audit 2026-03-10T07:31:42.153243+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T07:31:42.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:42 vm03 bash[23382]: audit 2026-03-10T07:31:42.153243+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T07:31:42.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:42 vm03 bash[23382]: cluster 2026-03-10T07:31:42.167288+0000 mon.a (mon.0) 2224 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-10T07:31:42.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:42 vm03 bash[23382]: cluster 2026-03-10T07:31:42.167288+0000 mon.a (mon.0) 2224 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-10T07:31:42.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:42 vm03 bash[23382]: audit 2026-03-10T07:31:42.172645+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59637-55"}]: dispatch 2026-03-10T07:31:42.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:42 vm03 bash[23382]: audit 2026-03-10T07:31:42.172645+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59637-55"}]: dispatch 2026-03-10T07:31:43.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:31:43 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:31:44.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:44 vm03 bash[23382]: cluster 2026-03-10T07:31:42.624822+0000 mgr.y (mgr.24407) 267 : cluster [DBG] pgmap v386: 292 pgs: 292 active+clean; 8.3 MiB data, 717 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:44.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:44 vm03 bash[23382]: cluster 2026-03-10T07:31:42.624822+0000 mgr.y (mgr.24407) 267 : cluster [DBG] pgmap v386: 292 pgs: 292 active+clean; 8.3 MiB data, 717 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:44.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:44 vm03 bash[23382]: audit 2026-03-10T07:31:43.119177+0000 mgr.y (mgr.24407) 268 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:44.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:44 vm03 bash[23382]: audit 2026-03-10T07:31:43.119177+0000 mgr.y (mgr.24407) 268 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:44 vm03 bash[23382]: audit 2026-03-10T07:31:43.156998+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59637-55"}]': finished 2026-03-10T07:31:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:44 vm03 bash[23382]: audit 2026-03-10T07:31:43.156998+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? 
192.168.123.100:0/1375060118' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59637-55"}]': finished 2026-03-10T07:31:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:44 vm03 bash[23382]: cluster 2026-03-10T07:31:43.163600+0000 mon.a (mon.0) 2227 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-10T07:31:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:44 vm03 bash[23382]: cluster 2026-03-10T07:31:43.163600+0000 mon.a (mon.0) 2227 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-10T07:31:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:44 vm03 bash[23382]: audit 2026-03-10T07:31:43.164941+0000 mon.a (mon.0) 2228 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59637-55"}]: dispatch 2026-03-10T07:31:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:44 vm03 bash[23382]: audit 2026-03-10T07:31:43.164941+0000 mon.a (mon.0) 2228 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59637-55"}]: dispatch 2026-03-10T07:31:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:44 vm03 bash[23382]: audit 2026-03-10T07:31:43.194423+0000 mon.b (mon.1) 307 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:44 vm03 bash[23382]: audit 2026-03-10T07:31:43.194423+0000 mon.b (mon.1) 307 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:44 vm03 bash[23382]: audit 2026-03-10T07:31:43.196164+0000 mon.b (mon.1) 308 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41"}]: dispatch 2026-03-10T07:31:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:44 vm03 bash[23382]: audit 2026-03-10T07:31:43.196164+0000 mon.b (mon.1) 308 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41"}]: dispatch 2026-03-10T07:31:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:44 vm03 bash[23382]: audit 2026-03-10T07:31:43.196386+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:44 vm03 bash[23382]: audit 2026-03-10T07:31:43.196386+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:44 vm03 bash[23382]: audit 2026-03-10T07:31:43.197968+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41"}]: dispatch 2026-03-10T07:31:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:44 vm03 bash[23382]: audit 2026-03-10T07:31:43.197968+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41"}]: dispatch 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:44 vm00 bash[28005]: cluster 2026-03-10T07:31:42.624822+0000 mgr.y (mgr.24407) 267 : cluster [DBG] pgmap v386: 292 pgs: 292 active+clean; 8.3 MiB data, 717 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:44 vm00 bash[28005]: cluster 2026-03-10T07:31:42.624822+0000 mgr.y (mgr.24407) 267 : cluster [DBG] pgmap v386: 292 pgs: 292 active+clean; 8.3 MiB data, 717 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:44 vm00 bash[28005]: audit 2026-03-10T07:31:43.119177+0000 mgr.y (mgr.24407) 268 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:44 vm00 bash[28005]: audit 2026-03-10T07:31:43.119177+0000 mgr.y (mgr.24407) 268 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:44 vm00 bash[28005]: audit 2026-03-10T07:31:43.156998+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59637-55"}]': finished 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:44 vm00 bash[28005]: audit 2026-03-10T07:31:43.156998+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59637-55"}]': finished 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:44 vm00 bash[28005]: cluster 2026-03-10T07:31:43.163600+0000 mon.a (mon.0) 2227 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:44 vm00 bash[28005]: cluster 2026-03-10T07:31:43.163600+0000 mon.a (mon.0) 2227 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:44 vm00 bash[28005]: audit 2026-03-10T07:31:43.164941+0000 mon.a (mon.0) 2228 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59637-55"}]: dispatch 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:44 vm00 bash[28005]: audit 2026-03-10T07:31:43.164941+0000 mon.a (mon.0) 2228 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59637-55"}]: dispatch 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:44 vm00 bash[28005]: audit 2026-03-10T07:31:43.194423+0000 mon.b (mon.1) 307 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:44 vm00 bash[28005]: audit 2026-03-10T07:31:43.194423+0000 mon.b (mon.1) 307 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:44 vm00 bash[28005]: audit 2026-03-10T07:31:43.196164+0000 mon.b (mon.1) 308 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41"}]: dispatch 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:44 vm00 bash[28005]: audit 2026-03-10T07:31:43.196164+0000 mon.b (mon.1) 308 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41"}]: dispatch 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:44 vm00 bash[28005]: audit 2026-03-10T07:31:43.196386+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:44 vm00 bash[28005]: audit 2026-03-10T07:31:43.196386+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:44 vm00 bash[28005]: audit 2026-03-10T07:31:43.197968+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41"}]: dispatch 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:44 vm00 bash[28005]: audit 2026-03-10T07:31:43.197968+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41"}]: dispatch 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:44 vm00 bash[20701]: cluster 2026-03-10T07:31:42.624822+0000 mgr.y (mgr.24407) 267 : cluster [DBG] pgmap v386: 292 pgs: 292 active+clean; 8.3 MiB data, 717 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:44 vm00 bash[20701]: cluster 2026-03-10T07:31:42.624822+0000 mgr.y (mgr.24407) 267 : cluster [DBG] pgmap v386: 292 pgs: 292 active+clean; 8.3 MiB data, 717 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:44 vm00 bash[20701]: audit 2026-03-10T07:31:43.119177+0000 mgr.y (mgr.24407) 268 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:44 vm00 bash[20701]: audit 2026-03-10T07:31:43.119177+0000 mgr.y (mgr.24407) 268 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:44 vm00 bash[20701]: audit 2026-03-10T07:31:43.156998+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59637-55"}]': finished 2026-03-10T07:31:44.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:44 vm00 bash[20701]: audit 2026-03-10T07:31:43.156998+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm00-59637-55"}]': finished 2026-03-10T07:31:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:44 vm00 bash[20701]: cluster 2026-03-10T07:31:43.163600+0000 mon.a (mon.0) 2227 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-10T07:31:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:44 vm00 bash[20701]: cluster 2026-03-10T07:31:43.163600+0000 mon.a (mon.0) 2227 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-10T07:31:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:44 vm00 bash[20701]: audit 2026-03-10T07:31:43.164941+0000 mon.a (mon.0) 2228 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59637-55"}]: dispatch 2026-03-10T07:31:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:44 vm00 bash[20701]: audit 2026-03-10T07:31:43.164941+0000 mon.a (mon.0) 2228 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59637-55"}]: dispatch 2026-03-10T07:31:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:44 vm00 bash[20701]: audit 2026-03-10T07:31:43.194423+0000 mon.b (mon.1) 307 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:44 vm00 bash[20701]: audit 2026-03-10T07:31:43.194423+0000 mon.b (mon.1) 307 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:44 vm00 bash[20701]: audit 2026-03-10T07:31:43.196164+0000 mon.b (mon.1) 308 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41"}]: dispatch 2026-03-10T07:31:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:44 vm00 bash[20701]: audit 2026-03-10T07:31:43.196164+0000 mon.b (mon.1) 308 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41"}]: dispatch 2026-03-10T07:31:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:44 vm00 bash[20701]: audit 2026-03-10T07:31:43.196386+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:44 vm00 bash[20701]: audit 2026-03-10T07:31:43.196386+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:44 vm00 bash[20701]: audit 2026-03-10T07:31:43.197968+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41"}]: dispatch 2026-03-10T07:31:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:44 vm00 bash[20701]: audit 2026-03-10T07:31:43.197968+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41"}]: dispatch 2026-03-10T07:31:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:45 vm03 bash[23382]: audit 2026-03-10T07:31:44.159955+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59637-55"}]': finished 2026-03-10T07:31:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:45 vm03 bash[23382]: audit 2026-03-10T07:31:44.159955+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59637-55"}]': finished 2026-03-10T07:31:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:45 vm03 bash[23382]: audit 2026-03-10T07:31:44.160084+0000 mon.a (mon.0) 2232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41"}]': finished 2026-03-10T07:31:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:45 vm03 bash[23382]: audit 2026-03-10T07:31:44.160084+0000 mon.a (mon.0) 2232 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41"}]': finished 2026-03-10T07:31:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:45 vm03 bash[23382]: cluster 2026-03-10T07:31:44.163020+0000 mon.a (mon.0) 2233 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-10T07:31:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:45 vm03 bash[23382]: cluster 2026-03-10T07:31:44.163020+0000 mon.a (mon.0) 2233 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-10T07:31:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:45 vm03 bash[23382]: audit 2026-03-10T07:31:44.175406+0000 mon.b (mon.1) 309 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:45 vm03 bash[23382]: audit 2026-03-10T07:31:44.175406+0000 mon.b (mon.1) 309 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:45 vm03 bash[23382]: audit 2026-03-10T07:31:44.205563+0000 mon.b (mon.1) 310 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:45 vm03 bash[23382]: audit 2026-03-10T07:31:44.205563+0000 mon.b (mon.1) 310 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:45 vm03 bash[23382]: audit 2026-03-10T07:31:44.206494+0000 mon.a (mon.0) 2234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:45 vm03 bash[23382]: audit 2026-03-10T07:31:44.206494+0000 mon.a (mon.0) 2234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:45 vm03 bash[23382]: audit 2026-03-10T07:31:44.206558+0000 mon.b (mon.1) 311 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59637-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:45 vm03 bash[23382]: audit 2026-03-10T07:31:44.206558+0000 mon.b (mon.1) 311 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59637-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:45 vm03 bash[23382]: audit 2026-03-10T07:31:44.207541+0000 mon.a (mon.0) 2235 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:45 vm03 bash[23382]: audit 2026-03-10T07:31:44.207541+0000 mon.a (mon.0) 2235 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:45 vm03 bash[23382]: audit 2026-03-10T07:31:44.208353+0000 mon.a (mon.0) 2236 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59637-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:45 vm03 bash[23382]: audit 2026-03-10T07:31:44.208353+0000 mon.a (mon.0) 2236 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59637-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:45 vm00 bash[28005]: audit 2026-03-10T07:31:44.159955+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59637-55"}]': finished 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:45 vm00 bash[28005]: audit 2026-03-10T07:31:44.159955+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59637-55"}]': finished 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:45 vm00 bash[28005]: audit 2026-03-10T07:31:44.160084+0000 mon.a (mon.0) 2232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41"}]': finished 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:45 vm00 bash[28005]: audit 2026-03-10T07:31:44.160084+0000 mon.a (mon.0) 2232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41"}]': finished 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:45 vm00 bash[28005]: cluster 2026-03-10T07:31:44.163020+0000 mon.a (mon.0) 2233 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:45 vm00 bash[28005]: cluster 2026-03-10T07:31:44.163020+0000 mon.a (mon.0) 2233 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:45 vm00 bash[28005]: audit 2026-03-10T07:31:44.175406+0000 mon.b (mon.1) 309 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:45 vm00 bash[28005]: audit 2026-03-10T07:31:44.175406+0000 mon.b (mon.1) 309 : audit [INF] from='client.? 
192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:45 vm00 bash[28005]: audit 2026-03-10T07:31:44.205563+0000 mon.b (mon.1) 310 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:45 vm00 bash[28005]: audit 2026-03-10T07:31:44.205563+0000 mon.b (mon.1) 310 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:45 vm00 bash[28005]: audit 2026-03-10T07:31:44.206494+0000 mon.a (mon.0) 2234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:45 vm00 bash[28005]: audit 2026-03-10T07:31:44.206494+0000 mon.a (mon.0) 2234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:45 vm00 bash[28005]: audit 2026-03-10T07:31:44.206558+0000 mon.b (mon.1) 311 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59637-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:45 vm00 bash[28005]: audit 2026-03-10T07:31:44.206558+0000 mon.b (mon.1) 311 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59637-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:45 vm00 bash[28005]: audit 2026-03-10T07:31:44.207541+0000 mon.a (mon.0) 2235 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:45 vm00 bash[28005]: audit 2026-03-10T07:31:44.207541+0000 mon.a (mon.0) 2235 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:45 vm00 bash[28005]: audit 2026-03-10T07:31:44.208353+0000 mon.a (mon.0) 2236 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59637-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:45 vm00 bash[28005]: audit 2026-03-10T07:31:44.208353+0000 mon.a (mon.0) 2236 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59637-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:45 vm00 bash[20701]: audit 2026-03-10T07:31:44.159955+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59637-55"}]': finished 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:45 vm00 bash[20701]: audit 2026-03-10T07:31:44.159955+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? 192.168.123.100:0/1375060118' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm00-59637-55"}]': finished 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:45 vm00 bash[20701]: audit 2026-03-10T07:31:44.160084+0000 mon.a (mon.0) 2232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41"}]': finished 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:45 vm00 bash[20701]: audit 2026-03-10T07:31:44.160084+0000 mon.a (mon.0) 2232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-41"}]': finished 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:45 vm00 bash[20701]: cluster 2026-03-10T07:31:44.163020+0000 mon.a (mon.0) 2233 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:45 vm00 bash[20701]: cluster 2026-03-10T07:31:44.163020+0000 mon.a (mon.0) 2233 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:45 vm00 bash[20701]: audit 2026-03-10T07:31:44.175406+0000 mon.b (mon.1) 309 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:45 vm00 bash[20701]: audit 2026-03-10T07:31:44.175406+0000 mon.b (mon.1) 309 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:45 vm00 bash[20701]: audit 2026-03-10T07:31:44.205563+0000 mon.b (mon.1) 310 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:45 vm00 bash[20701]: audit 2026-03-10T07:31:44.205563+0000 mon.b (mon.1) 310 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:45 vm00 bash[20701]: audit 2026-03-10T07:31:44.206494+0000 mon.a (mon.0) 2234 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:45 vm00 bash[20701]: audit 2026-03-10T07:31:44.206494+0000 mon.a (mon.0) 2234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:45 vm00 bash[20701]: audit 2026-03-10T07:31:44.206558+0000 mon.b (mon.1) 311 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59637-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:45.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:45 vm00 bash[20701]: audit 2026-03-10T07:31:44.206558+0000 mon.b (mon.1) 311 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59637-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:45.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:45 vm00 bash[20701]: audit 2026-03-10T07:31:44.207541+0000 mon.a (mon.0) 2235 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:45 vm00 bash[20701]: audit 2026-03-10T07:31:44.207541+0000 mon.a (mon.0) 2235 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:45.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:45 vm00 bash[20701]: audit 2026-03-10T07:31:44.208353+0000 mon.a (mon.0) 2236 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59637-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:45.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:45 vm00 bash[20701]: audit 2026-03-10T07:31:44.208353+0000 mon.a (mon.0) 2236 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59637-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:46 vm03 bash[23382]: cluster 2026-03-10T07:31:44.625401+0000 mgr.y (mgr.24407) 269 : cluster [DBG] pgmap v389: 292 pgs: 292 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:31:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:46 vm03 bash[23382]: cluster 2026-03-10T07:31:44.625401+0000 mgr.y (mgr.24407) 269 : cluster [DBG] pgmap v389: 292 pgs: 292 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:31:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:46 vm03 bash[23382]: audit 2026-03-10T07:31:45.186972+0000 mon.a (mon.0) 2237 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59637-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:31:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:46 vm03 bash[23382]: audit 2026-03-10T07:31:45.186972+0000 mon.a (mon.0) 2237 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59637-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:31:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:46 vm03 bash[23382]: audit 2026-03-10T07:31:45.197698+0000 mon.b (mon.1) 312 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59637-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:46 vm03 bash[23382]: audit 2026-03-10T07:31:45.197698+0000 mon.b (mon.1) 312 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59637-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:46 vm03 bash[23382]: cluster 2026-03-10T07:31:45.219380+0000 mon.a (mon.0) 2238 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-10T07:31:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:46 vm03 bash[23382]: cluster 2026-03-10T07:31:45.219380+0000 mon.a (mon.0) 2238 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-10T07:31:46.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:46 vm03 bash[23382]: audit 2026-03-10T07:31:45.220295+0000 mon.a (mon.0) 2239 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59637-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:46.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:46 vm03 bash[23382]: audit 2026-03-10T07:31:45.220295+0000 mon.a (mon.0) 2239 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59637-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:46.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:46 vm03 bash[23382]: audit 2026-03-10T07:31:46.142281+0000 mon.b (mon.1) 313 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:46.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:46 vm03 bash[23382]: audit 2026-03-10T07:31:46.142281+0000 mon.b (mon.1) 313 : audit [INF] from='client.? 
2026-03-10T07:31:46.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:46 vm03 bash[23382]: cluster 2026-03-10T07:31:46.142641+0000 mon.a (mon.0) 2240 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in
2026-03-10T07:31:46.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:46 vm03 bash[23382]: audit 2026-03-10T07:31:46.144211+0000 mon.a (mon.0) 2241 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-43","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:46 vm00 bash[28005]: cluster 2026-03-10T07:31:44.625401+0000 mgr.y (mgr.24407) 269 : cluster [DBG] pgmap v389: 292 pgs: 292 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:46 vm00 bash[28005]: audit 2026-03-10T07:31:45.186972+0000 mon.a (mon.0) 2237 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59637-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:46 vm00 bash[28005]: audit 2026-03-10T07:31:45.197698+0000 mon.b (mon.1) 312 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59637-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch
2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:46 vm00 bash[28005]: cluster 2026-03-10T07:31:45.219380+0000 mon.a (mon.0) 2238 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in
2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:46 vm00 bash[28005]: audit 2026-03-10T07:31:45.220295+0000 mon.a (mon.0) 2239 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59637-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch
2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:46 vm00 bash[28005]: audit 2026-03-10T07:31:46.142281+0000 mon.b (mon.1) 313 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-43","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:46 vm00 bash[28005]: cluster 2026-03-10T07:31:46.142641+0000 mon.a (mon.0) 2240 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in
2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:46 vm00 bash[28005]: audit 2026-03-10T07:31:46.144211+0000 mon.a (mon.0) 2241 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-43","app": "rados","yes_i_really_mean_it": true}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:46 vm00 bash[20701]: cluster 2026-03-10T07:31:44.625401+0000 mgr.y (mgr.24407) 269 : cluster [DBG] pgmap v389: 292 pgs: 292 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:46 vm00 bash[20701]: cluster 2026-03-10T07:31:44.625401+0000 mgr.y (mgr.24407) 269 : cluster [DBG] pgmap v389: 292 pgs: 292 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:46 vm00 bash[20701]: audit 2026-03-10T07:31:45.186972+0000 mon.a (mon.0) 2237 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59637-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:46 vm00 bash[20701]: audit 2026-03-10T07:31:45.186972+0000 mon.a (mon.0) 2237 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm00-59637-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:46 vm00 bash[20701]: audit 2026-03-10T07:31:45.197698+0000 mon.b (mon.1) 312 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59637-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:46 vm00 bash[20701]: audit 2026-03-10T07:31:45.197698+0000 mon.b (mon.1) 312 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59637-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:46 vm00 bash[20701]: cluster 2026-03-10T07:31:45.219380+0000 mon.a (mon.0) 2238 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:46 vm00 bash[20701]: cluster 2026-03-10T07:31:45.219380+0000 mon.a (mon.0) 2238 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:46 vm00 bash[20701]: audit 2026-03-10T07:31:45.220295+0000 mon.a (mon.0) 2239 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59637-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:46 vm00 bash[20701]: audit 2026-03-10T07:31:45.220295+0000 mon.a (mon.0) 2239 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59637-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:46 vm00 bash[20701]: audit 2026-03-10T07:31:46.142281+0000 mon.b (mon.1) 313 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:46 vm00 bash[20701]: audit 2026-03-10T07:31:46.142281+0000 mon.b (mon.1) 313 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:46 vm00 bash[20701]: cluster 2026-03-10T07:31:46.142641+0000 mon.a (mon.0) 2240 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:46 vm00 bash[20701]: cluster 2026-03-10T07:31:46.142641+0000 mon.a (mon.0) 2240 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:46 vm00 bash[20701]: audit 2026-03-10T07:31:46.144211+0000 mon.a (mon.0) 2241 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:46 vm00 bash[20701]: audit 2026-03-10T07:31:46.144211+0000 mon.a (mon.0) 2241 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:48 vm00 bash[28005]: cluster 2026-03-10T07:31:46.625772+0000 mgr.y (mgr.24407) 270 : cluster [DBG] pgmap v392: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:48 vm00 bash[28005]: cluster 2026-03-10T07:31:46.625772+0000 mgr.y (mgr.24407) 270 : cluster [DBG] pgmap v392: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:48 vm00 bash[28005]: audit 2026-03-10T07:31:47.120501+0000 mon.a (mon.0) 2242 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59637-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59637-56"}]': finished 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:48 vm00 bash[28005]: audit 2026-03-10T07:31:47.120501+0000 mon.a (mon.0) 2242 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59637-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59637-56"}]': finished 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:48 vm00 bash[28005]: audit 2026-03-10T07:31:47.120598+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:48 vm00 bash[28005]: audit 2026-03-10T07:31:47.120598+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:48 vm00 bash[28005]: audit 2026-03-10T07:31:47.126985+0000 mon.b (mon.1) 314 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-59782-6","var": "pg_num","format": "json"}]: dispatch 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:48 vm00 bash[28005]: audit 2026-03-10T07:31:47.126985+0000 mon.b (mon.1) 314 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-59782-6","var": "pg_num","format": "json"}]: dispatch 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:48 vm00 bash[28005]: cluster 2026-03-10T07:31:47.127584+0000 mon.a (mon.0) 2244 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:48 vm00 bash[28005]: cluster 2026-03-10T07:31:47.127584+0000 mon.a (mon.0) 2244 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:48 vm00 bash[28005]: audit 2026-03-10T07:31:47.127842+0000 mon.b (mon.1) 315 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:48 vm00 bash[28005]: audit 2026-03-10T07:31:47.127842+0000 mon.b (mon.1) 315 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:48 vm00 bash[28005]: audit 2026-03-10T07:31:47.138593+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:48 vm00 bash[28005]: audit 2026-03-10T07:31:47.138593+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:48 vm00 bash[28005]: cluster 2026-03-10T07:31:47.189441+0000 mon.a (mon.0) 2246 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:48 vm00 bash[28005]: cluster 2026-03-10T07:31:47.189441+0000 mon.a (mon.0) 2246 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:48 vm00 bash[20701]: cluster 2026-03-10T07:31:46.625772+0000 mgr.y (mgr.24407) 270 : cluster [DBG] pgmap v392: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:48 vm00 bash[20701]: cluster 2026-03-10T07:31:46.625772+0000 mgr.y (mgr.24407) 270 : cluster [DBG] pgmap v392: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:48 vm00 bash[20701]: audit 2026-03-10T07:31:47.120501+0000 mon.a (mon.0) 2242 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59637-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59637-56"}]': finished 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:48 vm00 bash[20701]: audit 2026-03-10T07:31:47.120501+0000 mon.a (mon.0) 2242 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59637-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59637-56"}]': finished 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:48 vm00 bash[20701]: audit 2026-03-10T07:31:47.120598+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:48 vm00 bash[20701]: audit 2026-03-10T07:31:47.120598+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:48 vm00 bash[20701]: audit 2026-03-10T07:31:47.126985+0000 mon.b (mon.1) 314 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-59782-6","var": "pg_num","format": "json"}]: dispatch 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:48 vm00 bash[20701]: audit 2026-03-10T07:31:47.126985+0000 mon.b (mon.1) 314 : audit [DBG] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-59782-6","var": "pg_num","format": "json"}]: dispatch 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:48 vm00 bash[20701]: cluster 2026-03-10T07:31:47.127584+0000 mon.a (mon.0) 2244 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:48 vm00 bash[20701]: cluster 2026-03-10T07:31:47.127584+0000 mon.a (mon.0) 2244 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:48 vm00 bash[20701]: audit 2026-03-10T07:31:47.127842+0000 mon.b (mon.1) 315 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:48 vm00 bash[20701]: audit 2026-03-10T07:31:47.127842+0000 mon.b (mon.1) 315 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:48 vm00 bash[20701]: audit 2026-03-10T07:31:47.138593+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:48 vm00 bash[20701]: audit 2026-03-10T07:31:47.138593+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:48 vm00 bash[20701]: cluster 2026-03-10T07:31:47.189441+0000 mon.a (mon.0) 2246 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:48 vm00 bash[20701]: cluster 2026-03-10T07:31:47.189441+0000 mon.a (mon.0) 2246 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:48 vm03 bash[23382]: cluster 2026-03-10T07:31:46.625772+0000 mgr.y (mgr.24407) 270 : cluster [DBG] pgmap v392: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:48 vm03 bash[23382]: cluster 2026-03-10T07:31:46.625772+0000 mgr.y (mgr.24407) 270 : cluster [DBG] pgmap v392: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:48 vm03 bash[23382]: audit 2026-03-10T07:31:47.120501+0000 mon.a (mon.0) 2242 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59637-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59637-56"}]': finished 2026-03-10T07:31:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:48 vm03 bash[23382]: audit 2026-03-10T07:31:47.120501+0000 mon.a (mon.0) 2242 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm00-59637-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm00-59637-56"}]': finished 2026-03-10T07:31:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:48 vm03 bash[23382]: audit 2026-03-10T07:31:47.120598+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:31:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:48 vm03 bash[23382]: audit 2026-03-10T07:31:47.120598+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:31:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:48 vm03 bash[23382]: audit 2026-03-10T07:31:47.126985+0000 mon.b (mon.1) 314 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-59782-6","var": "pg_num","format": "json"}]: dispatch 2026-03-10T07:31:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:48 vm03 bash[23382]: audit 2026-03-10T07:31:47.126985+0000 mon.b (mon.1) 314 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-59782-6","var": "pg_num","format": "json"}]: dispatch 2026-03-10T07:31:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:48 vm03 bash[23382]: cluster 2026-03-10T07:31:47.127584+0000 mon.a (mon.0) 2244 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-10T07:31:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:48 vm03 bash[23382]: cluster 2026-03-10T07:31:47.127584+0000 mon.a (mon.0) 2244 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-10T07:31:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:48 vm03 bash[23382]: audit 2026-03-10T07:31:47.127842+0000 mon.b (mon.1) 315 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:48 vm03 bash[23382]: audit 2026-03-10T07:31:47.127842+0000 mon.b (mon.1) 315 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:48 vm03 bash[23382]: audit 2026-03-10T07:31:47.138593+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:48 vm03 bash[23382]: audit 2026-03-10T07:31:47.138593+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:48 vm03 bash[23382]: cluster 2026-03-10T07:31:47.189441+0000 mon.a (mon.0) 2246 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:48 vm03 bash[23382]: cluster 2026-03-10T07:31:47.189441+0000 mon.a (mon.0) 2246 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:49.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:49 vm03 bash[23382]: audit 2026-03-10T07:31:48.132138+0000 mon.a (mon.0) 2247 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:31:49.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:49 vm03 bash[23382]: audit 2026-03-10T07:31:48.132138+0000 mon.a (mon.0) 2247 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:31:49.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:49 vm03 bash[23382]: cluster 2026-03-10T07:31:48.136277+0000 mon.a (mon.0) 2248 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-10T07:31:49.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:49 vm03 bash[23382]: cluster 2026-03-10T07:31:48.136277+0000 mon.a (mon.0) 2248 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-10T07:31:49.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:49 vm03 bash[23382]: audit 2026-03-10T07:31:48.142782+0000 mon.b (mon.1) 316 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-10T07:31:49.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:49 vm03 bash[23382]: audit 2026-03-10T07:31:48.142782+0000 mon.b (mon.1) 316 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-10T07:31:49.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:49 vm03 bash[23382]: audit 2026-03-10T07:31:48.145649+0000 mon.a (mon.0) 2249 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-10T07:31:49.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:49 vm03 bash[23382]: audit 2026-03-10T07:31:48.145649+0000 mon.a (mon.0) 2249 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-10T07:31:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:49 vm00 bash[28005]: audit 2026-03-10T07:31:48.132138+0000 mon.a (mon.0) 2247 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:31:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:49 vm00 bash[28005]: audit 2026-03-10T07:31:48.132138+0000 mon.a (mon.0) 2247 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:31:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:49 vm00 bash[28005]: cluster 2026-03-10T07:31:48.136277+0000 mon.a (mon.0) 2248 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-10T07:31:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:49 vm00 bash[28005]: cluster 2026-03-10T07:31:48.136277+0000 mon.a (mon.0) 2248 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-10T07:31:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:49 vm00 bash[28005]: audit 2026-03-10T07:31:48.142782+0000 mon.b (mon.1) 316 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-10T07:31:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:49 vm00 bash[28005]: audit 2026-03-10T07:31:48.142782+0000 mon.b (mon.1) 316 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-10T07:31:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:49 vm00 bash[28005]: audit 2026-03-10T07:31:48.145649+0000 mon.a (mon.0) 2249 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-10T07:31:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:49 vm00 bash[28005]: audit 2026-03-10T07:31:48.145649+0000 mon.a (mon.0) 2249 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-10T07:31:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:49 vm00 bash[20701]: audit 2026-03-10T07:31:48.132138+0000 mon.a (mon.0) 2247 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:31:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:49 vm00 bash[20701]: audit 2026-03-10T07:31:48.132138+0000 mon.a (mon.0) 2247 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:31:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:49 vm00 bash[20701]: cluster 2026-03-10T07:31:48.136277+0000 mon.a (mon.0) 2248 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-10T07:31:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:49 vm00 bash[20701]: cluster 2026-03-10T07:31:48.136277+0000 mon.a (mon.0) 2248 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-10T07:31:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:49 vm00 bash[20701]: audit 2026-03-10T07:31:48.142782+0000 mon.b (mon.1) 316 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-10T07:31:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:49 vm00 bash[20701]: audit 2026-03-10T07:31:48.142782+0000 mon.b (mon.1) 316 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-10T07:31:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:49 vm00 bash[20701]: audit 2026-03-10T07:31:48.145649+0000 mon.a (mon.0) 2249 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-10T07:31:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:49 vm00 bash[20701]: audit 2026-03-10T07:31:48.145649+0000 mon.a (mon.0) 2249 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-10T07:31:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:50 vm03 bash[23382]: cluster 2026-03-10T07:31:48.626139+0000 mgr.y (mgr.24407) 271 : cluster [DBG] pgmap v395: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:50 vm03 bash[23382]: cluster 2026-03-10T07:31:48.626139+0000 mgr.y (mgr.24407) 271 : cluster [DBG] pgmap v395: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:50 vm03 bash[23382]: audit 2026-03-10T07:31:49.172349+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_count","val": "8"}]': finished 2026-03-10T07:31:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:50 vm03 bash[23382]: audit 2026-03-10T07:31:49.172349+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_count","val": "8"}]': finished 2026-03-10T07:31:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:50 vm03 bash[23382]: audit 2026-03-10T07:31:49.176375+0000 mon.b (mon.1) 317 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:31:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:50 vm03 bash[23382]: audit 2026-03-10T07:31:49.176375+0000 mon.b (mon.1) 317 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:31:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:50 vm03 bash[23382]: audit 2026-03-10T07:31:49.193145+0000 mon.b (mon.1) 318 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:50 vm03 bash[23382]: audit 2026-03-10T07:31:49.193145+0000 mon.b (mon.1) 318 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:50 vm03 bash[23382]: cluster 2026-03-10T07:31:49.194039+0000 mon.a (mon.0) 2251 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-10T07:31:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:50 vm03 bash[23382]: cluster 2026-03-10T07:31:49.194039+0000 mon.a (mon.0) 2251 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-10T07:31:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:50 vm03 bash[23382]: audit 2026-03-10T07:31:49.195450+0000 mon.a (mon.0) 2252 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:31:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:50 vm03 bash[23382]: audit 2026-03-10T07:31:49.195450+0000 mon.a (mon.0) 2252 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:31:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:50 vm03 bash[23382]: audit 2026-03-10T07:31:49.195517+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:50 vm03 bash[23382]: audit 2026-03-10T07:31:49.195517+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:50.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:50 vm00 bash[28005]: cluster 2026-03-10T07:31:48.626139+0000 mgr.y (mgr.24407) 271 : cluster [DBG] pgmap v395: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:50 vm00 bash[28005]: cluster 2026-03-10T07:31:48.626139+0000 mgr.y (mgr.24407) 271 : cluster [DBG] pgmap v395: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:50 vm00 bash[28005]: audit 2026-03-10T07:31:49.172349+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_count","val": "8"}]': finished 2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:50 vm00 bash[28005]: audit 2026-03-10T07:31:49.172349+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_count","val": "8"}]': finished 2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:50 vm00 bash[28005]: audit 2026-03-10T07:31:49.176375+0000 mon.b (mon.1) 317 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:50 vm00 bash[28005]: audit 2026-03-10T07:31:49.176375+0000 mon.b (mon.1) 317 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:50 vm00 bash[28005]: audit 2026-03-10T07:31:49.193145+0000 mon.b (mon.1) 318 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:50 vm00 bash[28005]: audit 2026-03-10T07:31:49.193145+0000 mon.b (mon.1) 318 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:50 vm00 bash[28005]: cluster 2026-03-10T07:31:49.194039+0000 mon.a (mon.0) 2251 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:50 vm00 bash[28005]: cluster 2026-03-10T07:31:49.194039+0000 mon.a (mon.0) 2251 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:50 vm00 bash[28005]: audit 2026-03-10T07:31:49.195450+0000 mon.a (mon.0) 2252 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:50 vm00 bash[28005]: audit 2026-03-10T07:31:49.195450+0000 mon.a (mon.0) 2252 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:50 vm00 bash[28005]: audit 2026-03-10T07:31:49.195517+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:50 vm00 bash[28005]: audit 2026-03-10T07:31:49.195517+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:50 vm00 bash[20701]: cluster 2026-03-10T07:31:48.626139+0000 mgr.y (mgr.24407) 271 : cluster [DBG] pgmap v395: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:50 vm00 bash[20701]: cluster 2026-03-10T07:31:48.626139+0000 mgr.y (mgr.24407) 271 : cluster [DBG] pgmap v395: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:50 vm00 bash[20701]: audit 2026-03-10T07:31:49.172349+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_count","val": "8"}]': finished 2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:50 vm00 bash[20701]: audit 2026-03-10T07:31:49.172349+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_count","val": "8"}]': finished 2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:50 vm00 bash[20701]: audit 2026-03-10T07:31:49.176375+0000 mon.b (mon.1) 317 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:50 vm00 bash[20701]: audit 2026-03-10T07:31:49.176375+0000 mon.b (mon.1) 317 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:50 vm00 bash[20701]: audit 2026-03-10T07:31:49.193145+0000 mon.b (mon.1) 318 : audit [INF] from='client.? 
2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:50 vm00 bash[20701]: cluster 2026-03-10T07:31:49.194039+0000 mon.a (mon.0) 2251 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in
2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:50 vm00 bash[20701]: audit 2026-03-10T07:31:49.195450+0000 mon.a (mon.0) 2252 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T07:31:50.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:50 vm00 bash[20701]: audit 2026-03-10T07:31:49.195517+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]: dispatch
2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:51 vm00 bash[28005]: audit 2026-03-10T07:31:50.175577+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_period","val": "600"}]': finished
2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:51 vm00 bash[28005]: audit 2026-03-10T07:31:50.175711+0000 mon.a (mon.0) 2255 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]': finished
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]': finished 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:51 vm00 bash[28005]: audit 2026-03-10T07:31:50.177899+0000 mon.b (mon.1) 319 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:51 vm00 bash[28005]: audit 2026-03-10T07:31:50.177899+0000 mon.b (mon.1) 319 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:51 vm00 bash[28005]: audit 2026-03-10T07:31:50.178129+0000 mon.b (mon.1) 320 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:51 vm00 bash[28005]: audit 2026-03-10T07:31:50.178129+0000 mon.b (mon.1) 320 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:51 vm00 bash[28005]: cluster 2026-03-10T07:31:50.180004+0000 mon.a (mon.0) 2256 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:51 vm00 bash[28005]: cluster 2026-03-10T07:31:50.180004+0000 mon.a (mon.0) 2256 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:51 vm00 bash[28005]: audit 2026-03-10T07:31:50.184309+0000 mon.a (mon.0) 2257 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:51 vm00 bash[28005]: audit 2026-03-10T07:31:50.184309+0000 mon.a (mon.0) 2257 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:51 vm00 bash[28005]: audit 2026-03-10T07:31:50.184383+0000 mon.a (mon.0) 2258 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:51 vm00 bash[28005]: audit 2026-03-10T07:31:50.184383+0000 mon.a (mon.0) 2258 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:51 vm00 bash[28005]: audit 2026-03-10T07:31:51.179220+0000 mon.a (mon.0) 2259 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:51 vm00 bash[28005]: audit 2026-03-10T07:31:51.179220+0000 mon.a (mon.0) 2259 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:51 vm00 bash[28005]: audit 2026-03-10T07:31:51.179413+0000 mon.a (mon.0) 2260 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]': finished 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:51 vm00 bash[28005]: audit 2026-03-10T07:31:51.179413+0000 mon.a (mon.0) 2260 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]': finished 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:51 vm00 bash[28005]: cluster 2026-03-10T07:31:51.189780+0000 mon.a (mon.0) 2261 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:51 vm00 bash[28005]: cluster 2026-03-10T07:31:51.189780+0000 mon.a (mon.0) 2261 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:31:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:31:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:51 vm00 bash[20701]: audit 2026-03-10T07:31:50.175577+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:51 vm00 bash[20701]: audit 2026-03-10T07:31:50.175577+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:51 vm00 bash[20701]: audit 2026-03-10T07:31:50.175711+0000 mon.a (mon.0) 2255 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]': finished 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:51 vm00 bash[20701]: audit 2026-03-10T07:31:50.175711+0000 mon.a (mon.0) 2255 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]': finished 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:51 vm00 bash[20701]: audit 2026-03-10T07:31:50.177899+0000 mon.b (mon.1) 319 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:51 vm00 bash[20701]: audit 2026-03-10T07:31:50.177899+0000 mon.b (mon.1) 319 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:51 vm00 bash[20701]: audit 2026-03-10T07:31:50.178129+0000 mon.b (mon.1) 320 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:51 vm00 bash[20701]: audit 2026-03-10T07:31:50.178129+0000 mon.b (mon.1) 320 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:51 vm00 bash[20701]: cluster 2026-03-10T07:31:50.180004+0000 mon.a (mon.0) 2256 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:51 vm00 bash[20701]: cluster 2026-03-10T07:31:50.180004+0000 mon.a (mon.0) 2256 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:51 vm00 bash[20701]: audit 2026-03-10T07:31:50.184309+0000 mon.a (mon.0) 2257 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:51 vm00 bash[20701]: audit 2026-03-10T07:31:50.184309+0000 mon.a (mon.0) 2257 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:51 vm00 bash[20701]: audit 2026-03-10T07:31:50.184383+0000 mon.a (mon.0) 2258 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:51 vm00 bash[20701]: audit 2026-03-10T07:31:50.184383+0000 mon.a (mon.0) 2258 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:51 vm00 bash[20701]: audit 2026-03-10T07:31:51.179220+0000 mon.a (mon.0) 2259 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:51 vm00 bash[20701]: audit 2026-03-10T07:31:51.179220+0000 mon.a (mon.0) 2259 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-10T07:31:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:51 vm00 bash[20701]: audit 2026-03-10T07:31:51.179413+0000 mon.a (mon.0) 2260 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]': finished 2026-03-10T07:31:51.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:51 vm00 bash[20701]: audit 2026-03-10T07:31:51.179413+0000 mon.a (mon.0) 2260 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]': finished 2026-03-10T07:31:51.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:51 vm00 bash[20701]: cluster 2026-03-10T07:31:51.189780+0000 mon.a (mon.0) 2261 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-10T07:31:51.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:51 vm00 bash[20701]: cluster 2026-03-10T07:31:51.189780+0000 mon.a (mon.0) 2261 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-10T07:31:51.479 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HitSetRead 2026-03-10T07:31:51.479 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: hmm, no HitSet yet 2026-03-10T07:31:51.479 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: ok, hit_set contains 266:602f83fe:::foo:head 2026-03-10T07:31:51.479 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HitSetRead (9091 ms) 2026-03-10T07:31:51.479 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HitSetWrite 2026-03-10T07:31:51.479 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg_num = 32 2026-03-10T07:31:51.479 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 0 ls 1773127912,0 2026-03-10T07:31:51.479 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 1 ls 1773127912,0 2026-03-10T07:31:51.479 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 2 ls 1773127912,0 2026-03-10T07:31:51.479 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 3 ls 1773127912,0 2026-03-10T07:31:51.479 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 4 ls 1773127912,0 2026-03-10T07:31:51.479 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 5 ls 1773127912,0 2026-03-10T07:31:51.479 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 6 ls 1773127912,0 2026-03-10T07:31:51.479 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 7 ls 1773127912,0 2026-03-10T07:31:51.479 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 8 ls 1773127912,0 2026-03-10T07:31:51.479 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 9 ls 1773127912,0 2026-03-10T07:31:51.479 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 10 ls 1773127912,0 2026-03-10T07:31:51.479 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 11 ls 1773127912,0 2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 12 ls 1773127912,0 2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 13 ls 1773127912,0 2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 14 ls 1773127912,0 2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 15 ls 1773127912,0 2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 16 ls 1773127912,0 2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 17 ls 1773127912,0 2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 18 ls 1773127912,0 2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 19 ls 1773127912,0 2026-03-10T07:31:51.480 
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 20 ls 1773127912,0
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 21 ls 1773127912,0
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 22 ls 1773127912,0
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 23 ls 1773127912,0
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 24 ls 1773127912,0
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 25 ls 1773127912,0
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 26 ls 1773127912,0
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 27 ls 1773127912,0
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 28 ls 1773127912,0
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 29 ls 1773127912,0
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 30 ls 1773127912,0
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg 31 ls 1773127912,0
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: pg_num = 32
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:6cac518f:::0:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:02547ec2:::1:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:f905c69b:::2:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:cfc208b3:::3:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:d83876eb:::4:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:b29083e3:::5:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:c4fdafeb:::6:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:5c6b0b28:::7:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:bd63b0f1:::8:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:e960b815:::9:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:52ea6a34:::10:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:89d3ae78:::11:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:de5d7c5f:::12:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:566253c9:::13:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:62a1935d:::14:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:863748b0:::15:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:3958e169:::16:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:4d4dabf9:::17:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:8391935d:::18:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:28883081:::19:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:69259c59:::20:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:4bdb80b7:::21:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:a11c5d71:::22:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:271af37b:::23:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:95b121be:::24:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:58d1031b:::25:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:0a050783:::26:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:c709704c:::27:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:cbe56eaf:::28:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:86b4b162:::29:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:70d89383:::30:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:dd450c7c:::31:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:6d5729b1:::32:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:c388f3fb:::33:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:56cfea31:::34:head
2026-03-10T07:31:51.480 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:9dbc1bf7:::35:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:40b74ccd:::36:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:4d5aaf42:::37:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:920f362c:::38:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:6cc53222:::39:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:9cad833f:::40:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:1ea84d41:::41:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:c4480ef6:::42:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:a694361e:::43:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:d1bd33e9:::44:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:ddc2cd5d:::45:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:2b782207:::46:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:7b187fca:::47:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:90ecdf6f:::48:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:a5ed95fe:::49:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:ea0eaa55:::50:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:f33ef17b:::51:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:a0d1b2f6:::52:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:60c5229e:::53:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:edcbc575:::54:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:102cf253:::55:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:efb7fb0b:::56:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:50d0a326:::57:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:d4dc5daf:::58:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:3a130462:::59:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:ec87ed71:::60:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:d5bc9454:::61:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:3ddfe313:::62:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:7c2816b9:::63:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:47e00e4d:::64:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:c6410c18:::65:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:b48ed237:::66:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:cd63ad31:::67:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:b179e92b:::68:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:0d9f741a:::69:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:6d3352ae:::70:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:c6d5c19e:::71:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:bc4729c3:::72:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:77e930b9:::73:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:0abeecfd:::74:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:b7c37e15:::75:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:b6378398:::76:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:02bd68de:::77:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:cc795d2d:::78:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:630d4fea:::79:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:e0d29ef5:::80:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:fd6f13d2:::81:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:606461d5:::82:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:eadbdc43:::83:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:8761d0bb:::84:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:9ef0186f:::85:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:e0d41294:::86:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:961de695:::87:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:1423148f:::88:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:633a8fa2:::89:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:a8653809:::90:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:3dac8b33:::91:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:35aad435:::92:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:f6dcc343:::93:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:dbbdad87:::94:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:1cb48ce0:::95:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:03cd461c:::96:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:17a4ea99:::97:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:9993c9a7:::98:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:6394211c:::99:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:94c7ae57:::100:head
2026-03-10T07:31:51.481 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:6fdee5bb:::101:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:9a477fd1:::102:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:eb850916:::103:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:affc56b9:::104:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:b42dc814:::105:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:f319f8f0:::106:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:9a40b9de:::107:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:8b524f28:::108:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:e3de589f:::109:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:90f90a5b:::110:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:a7b4f1d7:::111:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:af51766e:::112:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:b6f90bd1:::113:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:e0261208:::114:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:c9569ef7:::115:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:61bebe50:::116:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:fe93412b:::117:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:d3d38bee:::118:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:3100ba0c:::119:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:d0560ada:::120:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:f0ea8b35:::121:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:766f231a:::122:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:a07a2582:::123:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:bd7c6b3a:::124:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:fb2ddaff:::125:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:4408e1fe:::126:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:ee1df7a7:::127:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:c3002909:::128:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:4f48ffa9:::129:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:edf38733:::130:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:c08425c0:::131:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:5f902d98:::132:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:41ea2c93:::133:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:813cee13:::134:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:0131818d:::135:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:26ba5a85:::136:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:381b8a5a:::137:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:28797e47:::138:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:bfca7f22:::139:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:36807075:::140:head
2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:80b03975:::141:head
268:5c15709b:::142:head 2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:f39ea15e:::143:head 2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:ea992956:::144:head 2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:48887b1c:::145:head 2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:9f24a9dd:::146:head 2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:987f100b:::147:head 2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:d2dd3581:::148:head 2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:7fed1808:::149:head 2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:c80b70e9:::150:head 2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:85ed90f9:::151:head 2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:36428b24:::152:head 2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:d044c34a:::153:head 2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:7c18bf58:::154:head 2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:d1c21232:::155:head 2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:a7a3c575:::156:head 2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:87da0633:::157:head 2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:d5ac3822:::158:head 2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:3f20522d:::159:head 2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:6ca26563:::160:head 2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:532ce135:::161:head 2026-03-10T07:31:51.482 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:c78863e6:::162:head 2026-03-10T07:31:51.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:51 vm03 bash[23382]: audit 2026-03-10T07:31:50.175577+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:31:51.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:51 vm03 bash[23382]: audit 2026-03-10T07:31:50.175577+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:31:51.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:51 vm03 bash[23382]: audit 2026-03-10T07:31:50.175711+0000 mon.a (mon.0) 2255 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]': finished 2026-03-10T07:31:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:51 vm03 bash[23382]: audit 2026-03-10T07:31:50.175711+0000 mon.a (mon.0) 2255 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm00-59637-56"}]': finished 2026-03-10T07:31:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:51 vm03 bash[23382]: audit 2026-03-10T07:31:50.177899+0000 mon.b (mon.1) 319 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-10T07:31:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:51 vm03 bash[23382]: audit 2026-03-10T07:31:50.177899+0000 mon.b (mon.1) 319 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-10T07:31:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:51 vm03 bash[23382]: audit 2026-03-10T07:31:50.178129+0000 mon.b (mon.1) 320 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:51 vm03 bash[23382]: audit 2026-03-10T07:31:50.178129+0000 mon.b (mon.1) 320 : audit [INF] from='client.? 192.168.123.100:0/384144481' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:51 vm03 bash[23382]: cluster 2026-03-10T07:31:50.180004+0000 mon.a (mon.0) 2256 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-10T07:31:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:51 vm03 bash[23382]: cluster 2026-03-10T07:31:50.180004+0000 mon.a (mon.0) 2256 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-10T07:31:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:51 vm03 bash[23382]: audit 2026-03-10T07:31:50.184309+0000 mon.a (mon.0) 2257 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-10T07:31:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:51 vm03 bash[23382]: audit 2026-03-10T07:31:50.184309+0000 mon.a (mon.0) 2257 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-10T07:31:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:51 vm03 bash[23382]: audit 2026-03-10T07:31:50.184383+0000 mon.a (mon.0) 2258 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:51 vm03 bash[23382]: audit 2026-03-10T07:31:50.184383+0000 mon.a (mon.0) 2258 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]: dispatch 2026-03-10T07:31:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:51 vm03 bash[23382]: audit 2026-03-10T07:31:51.179220+0000 mon.a (mon.0) 2259 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-10T07:31:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:51 vm03 bash[23382]: audit 2026-03-10T07:31:51.179220+0000 mon.a (mon.0) 2259 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-10T07:31:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:51 vm03 bash[23382]: audit 2026-03-10T07:31:51.179413+0000 mon.a (mon.0) 2260 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]': finished 2026-03-10T07:31:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:51 vm03 bash[23382]: audit 2026-03-10T07:31:51.179413+0000 mon.a (mon.0) 2260 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm00-59637-56"}]': finished 2026-03-10T07:31:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:51 vm03 bash[23382]: cluster 2026-03-10T07:31:51.189780+0000 mon.a (mon.0) 2261 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-10T07:31:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:51 vm03 bash[23382]: cluster 2026-03-10T07:31:51.189780+0000 mon.a (mon.0) 2261 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-10T07:31:52.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: cluster 2026-03-10T07:31:50.626461+0000 mgr.y (mgr.24407) 272 : cluster [DBG] pgmap v398: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:52.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: cluster 2026-03-10T07:31:50.626461+0000 mgr.y (mgr.24407) 272 : cluster [DBG] pgmap v398: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:52.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.208792+0000 mon.b (mon.1) 321 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.208792+0000 mon.b (mon.1) 321 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.217777+0000 mon.a (mon.0) 2262 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.217777+0000 mon.a (mon.0) 2262 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.222105+0000 mon.b (mon.1) 322 : audit [INF] from='client.? 
192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.222105+0000 mon.b (mon.1) 322 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.235821+0000 mon.b (mon.1) 323 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59637-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.235821+0000 mon.b (mon.1) 323 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59637-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.236540+0000 mon.a (mon.0) 2263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.236540+0000 mon.a (mon.0) 2263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.238114+0000 mon.a (mon.0) 2264 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59637-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.238114+0000 mon.a (mon.0) 2264 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59637-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.484783+0000 mon.b (mon.1) 324 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-59782-43","var": "pg_num","format": "json"}]: dispatch 2026-03-10T07:31:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.484783+0000 mon.b (mon.1) 324 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-59782-43","var": "pg_num","format": "json"}]: dispatch 2026-03-10T07:31:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.551615+0000 mon.b (mon.1) 325 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.551615+0000 mon.b (mon.1) 325 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.552683+0000 mon.b (mon.1) 326 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43"}]: dispatch 2026-03-10T07:31:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.552683+0000 mon.b (mon.1) 326 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43"}]: dispatch 2026-03-10T07:31:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.553615+0000 mon.a (mon.0) 2265 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.553615+0000 mon.a (mon.0) 2265 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.554514+0000 mon.a (mon.0) 2266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43"}]: dispatch 2026-03-10T07:31:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:52 vm03 bash[23382]: audit 2026-03-10T07:31:51.554514+0000 mon.a (mon.0) 2266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43"}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: cluster 2026-03-10T07:31:50.626461+0000 mgr.y (mgr.24407) 272 : cluster [DBG] pgmap v398: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: cluster 2026-03-10T07:31:50.626461+0000 mgr.y (mgr.24407) 272 : cluster [DBG] pgmap v398: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.208792+0000 mon.b (mon.1) 321 : audit [INF] from='client.? 
192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.208792+0000 mon.b (mon.1) 321 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.217777+0000 mon.a (mon.0) 2262 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.217777+0000 mon.a (mon.0) 2262 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.222105+0000 mon.b (mon.1) 322 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.222105+0000 mon.b (mon.1) 322 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.235821+0000 mon.b (mon.1) 323 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59637-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.235821+0000 mon.b (mon.1) 323 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59637-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.236540+0000 mon.a (mon.0) 2263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.236540+0000 mon.a (mon.0) 2263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.238114+0000 mon.a (mon.0) 2264 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59637-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.238114+0000 mon.a (mon.0) 2264 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59637-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.484783+0000 mon.b (mon.1) 324 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-59782-43","var": "pg_num","format": "json"}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.484783+0000 mon.b (mon.1) 324 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-59782-43","var": "pg_num","format": "json"}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.551615+0000 mon.b (mon.1) 325 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.551615+0000 mon.b (mon.1) 325 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.552683+0000 mon.b (mon.1) 326 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43"}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.552683+0000 mon.b (mon.1) 326 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43"}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.553615+0000 mon.a (mon.0) 2265 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.553615+0000 mon.a (mon.0) 2265 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.554514+0000 mon.a (mon.0) 2266 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43"}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:52 vm00 bash[20701]: audit 2026-03-10T07:31:51.554514+0000 mon.a (mon.0) 2266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43"}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: cluster 2026-03-10T07:31:50.626461+0000 mgr.y (mgr.24407) 272 : cluster [DBG] pgmap v398: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: cluster 2026-03-10T07:31:50.626461+0000 mgr.y (mgr.24407) 272 : cluster [DBG] pgmap v398: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.208792+0000 mon.b (mon.1) 321 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.208792+0000 mon.b (mon.1) 321 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.217777+0000 mon.a (mon.0) 2262 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.217777+0000 mon.a (mon.0) 2262 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.222105+0000 mon.b (mon.1) 322 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.222105+0000 mon.b (mon.1) 322 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.235821+0000 mon.b (mon.1) 323 : audit [INF] from='client.? 
192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59637-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.235821+0000 mon.b (mon.1) 323 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59637-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.236540+0000 mon.a (mon.0) 2263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.236540+0000 mon.a (mon.0) 2263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.238114+0000 mon.a (mon.0) 2264 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59637-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.238114+0000 mon.a (mon.0) 2264 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59637-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.484783+0000 mon.b (mon.1) 324 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-59782-43","var": "pg_num","format": "json"}]: dispatch 2026-03-10T07:31:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.484783+0000 mon.b (mon.1) 324 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm00-59782-43","var": "pg_num","format": "json"}]: dispatch 2026-03-10T07:31:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.551615+0000 mon.b (mon.1) 325 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.551615+0000 mon.b (mon.1) 325 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.552683+0000 mon.b (mon.1) 326 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43"}]: dispatch 2026-03-10T07:31:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.552683+0000 mon.b (mon.1) 326 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43"}]: dispatch 2026-03-10T07:31:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.553615+0000 mon.a (mon.0) 2265 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.553615+0000 mon.a (mon.0) 2265 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:31:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.554514+0000 mon.a (mon.0) 2266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43"}]: dispatch 2026-03-10T07:31:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:52 vm00 bash[28005]: audit 2026-03-10T07:31:51.554514+0000 mon.a (mon.0) 2266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43"}]: dispatch 2026-03-10T07:31:53.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:31:53 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:31:53.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:53 vm03 bash[23382]: audit 2026-03-10T07:31:52.228184+0000 mon.a (mon.0) 2267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59637-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:31:53.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:53 vm03 bash[23382]: audit 2026-03-10T07:31:52.228184+0000 mon.a (mon.0) 2267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59637-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:31:53.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:53 vm03 bash[23382]: audit 2026-03-10T07:31:52.228306+0000 mon.a (mon.0) 2268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43"}]': finished 2026-03-10T07:31:53.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:53 vm03 bash[23382]: audit 2026-03-10T07:31:52.228306+0000 mon.a (mon.0) 2268 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43"}]': finished 2026-03-10T07:31:53.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:53 vm03 bash[23382]: audit 2026-03-10T07:31:52.230632+0000 mon.b (mon.1) 327 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59637-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:53.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:53 vm03 bash[23382]: audit 2026-03-10T07:31:52.230632+0000 mon.b (mon.1) 327 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59637-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:53.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:53 vm03 bash[23382]: cluster 2026-03-10T07:31:52.232286+0000 mon.a (mon.0) 2269 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-10T07:31:53.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:53 vm03 bash[23382]: cluster 2026-03-10T07:31:52.232286+0000 mon.a (mon.0) 2269 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-10T07:31:53.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:53 vm03 bash[23382]: audit 2026-03-10T07:31:52.234299+0000 mon.a (mon.0) 2270 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59637-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:53.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:53 vm03 bash[23382]: audit 2026-03-10T07:31:52.234299+0000 mon.a (mon.0) 2270 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59637-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:53.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:53 vm03 bash[23382]: cluster 2026-03-10T07:31:53.249116+0000 mon.a (mon.0) 2271 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-10T07:31:53.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:53 vm03 bash[23382]: cluster 2026-03-10T07:31:53.249116+0000 mon.a (mon.0) 2271 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-10T07:31:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:53 vm00 bash[28005]: audit 2026-03-10T07:31:52.228184+0000 mon.a (mon.0) 2267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59637-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:31:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:53 vm00 bash[28005]: audit 2026-03-10T07:31:52.228184+0000 mon.a (mon.0) 2267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59637-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:31:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:53 vm00 bash[28005]: audit 2026-03-10T07:31:52.228306+0000 mon.a (mon.0) 2268 : audit [INF] from='client.? 
2026-03-10T07:31:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:53 vm00 bash[28005]: audit 2026-03-10T07:31:52.228306+0000 mon.a (mon.0) 2268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43"}]': finished
2026-03-10T07:31:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:53 vm00 bash[28005]: audit 2026-03-10T07:31:52.230632+0000 mon.b (mon.1) 327 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59637-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59637-57"}]: dispatch
2026-03-10T07:31:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:53 vm00 bash[28005]: cluster 2026-03-10T07:31:52.232286+0000 mon.a (mon.0) 2269 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in
2026-03-10T07:31:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:53 vm00 bash[28005]: audit 2026-03-10T07:31:52.234299+0000 mon.a (mon.0) 2270 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59637-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59637-57"}]: dispatch
2026-03-10T07:31:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:53 vm00 bash[28005]: cluster 2026-03-10T07:31:53.249116+0000 mon.a (mon.0) 2271 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in
2026-03-10T07:31:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:53 vm00 bash[20701]: audit 2026-03-10T07:31:52.228184+0000 mon.a (mon.0) 2267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm00-59637-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:31:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:53 vm00 bash[20701]: audit 2026-03-10T07:31:52.228306+0000 mon.a (mon.0) 2268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-43"}]': finished
2026-03-10T07:31:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:53 vm00 bash[20701]: audit 2026-03-10T07:31:52.230632+0000 mon.b (mon.1) 327 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59637-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59637-57"}]: dispatch
2026-03-10T07:31:53.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:53 vm00 bash[20701]: cluster 2026-03-10T07:31:52.232286+0000 mon.a (mon.0) 2269 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in
2026-03-10T07:31:53.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:53 vm00 bash[20701]: audit 2026-03-10T07:31:52.234299+0000 mon.a (mon.0) 2270 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59637-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59637-57"}]: dispatch
2026-03-10T07:31:53.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:53 vm00 bash[20701]: cluster 2026-03-10T07:31:53.249116+0000 mon.a (mon.0) 2271 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in
2026-03-10T07:31:54.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:54 vm03 bash[23382]: cluster 2026-03-10T07:31:52.626755+0000 mgr.y (mgr.24407) 273 : cluster [DBG] pgmap v401: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:31:54.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:54 vm03 bash[23382]: audit 2026-03-10T07:31:53.126686+0000 mgr.y (mgr.24407) 274 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:31:54.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:54 vm00 bash[28005]: cluster 2026-03-10T07:31:52.626755+0000 mgr.y (mgr.24407) 273 : cluster [DBG] pgmap v401: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:31:54.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:54 vm00 bash[28005]: audit 2026-03-10T07:31:53.126686+0000 mgr.y (mgr.24407) 274 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:31:54.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:54 vm00 bash[20701]: cluster 2026-03-10T07:31:52.626755+0000 mgr.y (mgr.24407) 273 : cluster [DBG] pgmap v401: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:31:54.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:54 vm00 bash[20701]: audit 2026-03-10T07:31:53.126686+0000 mgr.y (mgr.24407) 274 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:31:55.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:55 vm03 bash[23382]: audit 2026-03-10T07:31:54.347516+0000 mon.c (mon.2) 270 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:31:55.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:55 vm03 bash[23382]: audit 2026-03-10T07:31:54.400130+0000 mon.a (mon.0) 2272 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59637-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59637-57"}]': finished
2026-03-10T07:31:55.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:55 vm03 bash[23382]: cluster 2026-03-10T07:31:54.418935+0000 mon.a (mon.0) 2273 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in
2026-03-10T07:31:55.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:55 vm03 bash[23382]: audit 2026-03-10T07:31:54.419528+0000 mon.b (mon.1) 328 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-45","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:55 vm03 bash[23382]: audit 2026-03-10T07:31:54.431354+0000 mon.a (mon.0) 2274 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-45","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:31:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:55 vm03 bash[23382]: cluster 2026-03-10T07:31:54.627186+0000 mgr.y (mgr.24407) 275 : cluster [DBG] pgmap v404: 300 pgs: 11 creating+peering, 29 unknown, 260 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:31:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:55 vm03 bash[23382]: audit 2026-03-10T07:31:55.403987+0000 mon.a (mon.0) 2275 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-45","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:31:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:55 vm03 bash[23382]: audit 2026-03-10T07:31:55.419485+0000 mon.b (mon.1) 329 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:31:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:55 vm03 bash[23382]: cluster 2026-03-10T07:31:55.422367+0000 mon.a (mon.0) 2276 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in
2026-03-10T07:31:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:55 vm03 bash[23382]: audit 2026-03-10T07:31:55.424125+0000 mon.a (mon.0) 2277 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45", "force_nonempty": "--force-nonempty" }]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:55 vm03 bash[23382]: audit 2026-03-10T07:31:55.424125+0000 mon.a (mon.0) 2277 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:55 vm00 bash[28005]: audit 2026-03-10T07:31:54.347516+0000 mon.c (mon.2) 270 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:55 vm00 bash[28005]: audit 2026-03-10T07:31:54.347516+0000 mon.c (mon.2) 270 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:55 vm00 bash[28005]: audit 2026-03-10T07:31:54.400130+0000 mon.a (mon.0) 2272 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59637-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59637-57"}]': finished 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:55 vm00 bash[28005]: audit 2026-03-10T07:31:54.400130+0000 mon.a (mon.0) 2272 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59637-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59637-57"}]': finished 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:55 vm00 bash[28005]: cluster 2026-03-10T07:31:54.418935+0000 mon.a (mon.0) 2273 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:55 vm00 bash[28005]: cluster 2026-03-10T07:31:54.418935+0000 mon.a (mon.0) 2273 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:55 vm00 bash[28005]: audit 2026-03-10T07:31:54.419528+0000 mon.b (mon.1) 328 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:55 vm00 bash[28005]: audit 2026-03-10T07:31:54.419528+0000 mon.b (mon.1) 328 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:55 vm00 bash[28005]: audit 2026-03-10T07:31:54.431354+0000 mon.a (mon.0) 2274 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:55 vm00 bash[28005]: audit 2026-03-10T07:31:54.431354+0000 mon.a (mon.0) 2274 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:55 vm00 bash[28005]: cluster 2026-03-10T07:31:54.627186+0000 mgr.y (mgr.24407) 275 : cluster [DBG] pgmap v404: 300 pgs: 11 creating+peering, 29 unknown, 260 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:55 vm00 bash[28005]: cluster 2026-03-10T07:31:54.627186+0000 mgr.y (mgr.24407) 275 : cluster [DBG] pgmap v404: 300 pgs: 11 creating+peering, 29 unknown, 260 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:55 vm00 bash[28005]: audit 2026-03-10T07:31:55.403987+0000 mon.a (mon.0) 2275 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:55 vm00 bash[28005]: audit 2026-03-10T07:31:55.403987+0000 mon.a (mon.0) 2275 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:55 vm00 bash[28005]: audit 2026-03-10T07:31:55.419485+0000 mon.b (mon.1) 329 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:55 vm00 bash[28005]: audit 2026-03-10T07:31:55.419485+0000 mon.b (mon.1) 329 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:55 vm00 bash[28005]: cluster 2026-03-10T07:31:55.422367+0000 mon.a (mon.0) 2276 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:55 vm00 bash[28005]: cluster 2026-03-10T07:31:55.422367+0000 mon.a (mon.0) 2276 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:55 vm00 bash[28005]: audit 2026-03-10T07:31:55.424125+0000 mon.a (mon.0) 2277 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:55 vm00 bash[28005]: audit 2026-03-10T07:31:55.424125+0000 mon.a (mon.0) 2277 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:55 vm00 bash[20701]: audit 2026-03-10T07:31:54.347516+0000 mon.c (mon.2) 270 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:55 vm00 bash[20701]: audit 2026-03-10T07:31:54.347516+0000 mon.c (mon.2) 270 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:55 vm00 bash[20701]: audit 2026-03-10T07:31:54.400130+0000 mon.a (mon.0) 2272 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59637-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59637-57"}]': finished 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:55 vm00 bash[20701]: audit 2026-03-10T07:31:54.400130+0000 mon.a (mon.0) 2272 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm00-59637-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm00-59637-57"}]': finished 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:55 vm00 bash[20701]: cluster 2026-03-10T07:31:54.418935+0000 mon.a (mon.0) 2273 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:55 vm00 bash[20701]: cluster 2026-03-10T07:31:54.418935+0000 mon.a (mon.0) 2273 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:55 vm00 bash[20701]: audit 2026-03-10T07:31:54.419528+0000 mon.b (mon.1) 328 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:55 vm00 bash[20701]: audit 2026-03-10T07:31:54.419528+0000 mon.b (mon.1) 328 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:55 vm00 bash[20701]: audit 2026-03-10T07:31:54.431354+0000 mon.a (mon.0) 2274 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:55 vm00 bash[20701]: audit 2026-03-10T07:31:54.431354+0000 mon.a (mon.0) 2274 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:31:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:55 vm00 bash[20701]: cluster 2026-03-10T07:31:54.627186+0000 mgr.y (mgr.24407) 275 : cluster [DBG] pgmap v404: 300 pgs: 11 creating+peering, 29 unknown, 260 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:31:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:55 vm00 bash[20701]: cluster 2026-03-10T07:31:54.627186+0000 mgr.y (mgr.24407) 275 : cluster [DBG] pgmap v404: 300 pgs: 11 creating+peering, 29 unknown, 260 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:31:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:55 vm00 bash[20701]: audit 2026-03-10T07:31:55.403987+0000 mon.a (mon.0) 2275 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:31:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:55 vm00 bash[20701]: audit 2026-03-10T07:31:55.403987+0000 mon.a (mon.0) 2275 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:31:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:55 vm00 bash[20701]: audit 2026-03-10T07:31:55.419485+0000 mon.b (mon.1) 329 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:55 vm00 bash[20701]: audit 2026-03-10T07:31:55.419485+0000 mon.b (mon.1) 329 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:55 vm00 bash[20701]: cluster 2026-03-10T07:31:55.422367+0000 mon.a (mon.0) 2276 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-10T07:31:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:55 vm00 bash[20701]: cluster 2026-03-10T07:31:55.422367+0000 mon.a (mon.0) 2276 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-10T07:31:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:55 vm00 bash[20701]: audit 2026-03-10T07:31:55.424125+0000 mon.a (mon.0) 2277 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:55 vm00 bash[20701]: audit 2026-03-10T07:31:55.424125+0000 mon.a (mon.0) 2277 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:31:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:56 vm03 bash[23382]: cluster 2026-03-10T07:31:55.440372+0000 mon.a (mon.0) 2278 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:56 vm03 bash[23382]: cluster 2026-03-10T07:31:55.440372+0000 mon.a (mon.0) 2278 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:31:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:56 vm03 bash[23382]: audit 2026-03-10T07:31:56.409551+0000 mon.a (mon.0) 2279 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:31:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:56 vm03 bash[23382]: audit 2026-03-10T07:31:56.409551+0000 mon.a (mon.0) 2279 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:31:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:56 vm03 bash[23382]: cluster 2026-03-10T07:31:56.414860+0000 mon.a (mon.0) 2280 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-10T07:31:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:56 vm03 bash[23382]: cluster 2026-03-10T07:31:56.414860+0000 mon.a (mon.0) 2280 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-10T07:31:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:56 vm03 bash[23382]: audit 2026-03-10T07:31:56.415854+0000 mon.b (mon.1) 330 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-10T07:31:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:56 vm03 bash[23382]: audit 2026-03-10T07:31:56.415854+0000 mon.b (mon.1) 330 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-10T07:31:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:56 vm03 bash[23382]: audit 2026-03-10T07:31:56.416230+0000 mon.b (mon.1) 331 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:56 vm03 bash[23382]: audit 2026-03-10T07:31:56.416230+0000 mon.b (mon.1) 331 : audit [INF] from='client.? 
2026-03-10T07:31:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:56 vm03 bash[23382]: audit 2026-03-10T07:31:56.418128+0000 mon.a (mon.0) 2281 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_count","val": "3"}]: dispatch
2026-03-10T07:31:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:56 vm03 bash[23382]: audit 2026-03-10T07:31:56.418219+0000 mon.a (mon.0) 2282 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]: dispatch
2026-03-10T07:31:56.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:56 vm00 bash[20701]: cluster 2026-03-10T07:31:55.440372+0000 mon.a (mon.0) 2278 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:31:56.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:56 vm00 bash[20701]: audit 2026-03-10T07:31:56.409551+0000 mon.a (mon.0) 2279 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:31:56.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:56 vm00 bash[20701]: cluster 2026-03-10T07:31:56.414860+0000 mon.a (mon.0) 2280 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in
2026-03-10T07:31:56.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:56 vm00 bash[20701]: audit 2026-03-10T07:31:56.415854+0000 mon.b (mon.1) 330 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_count","val": "3"}]: dispatch
2026-03-10T07:31:56.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:56 vm00 bash[20701]: audit 2026-03-10T07:31:56.416230+0000 mon.b (mon.1) 331 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]: dispatch
2026-03-10T07:31:56.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:56 vm00 bash[20701]: audit 2026-03-10T07:31:56.418128+0000 mon.a (mon.0) 2281 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_count","val": "3"}]: dispatch
2026-03-10T07:31:56.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:56 vm00 bash[20701]: audit 2026-03-10T07:31:56.418219+0000 mon.a (mon.0) 2282 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]: dispatch
2026-03-10T07:31:56.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:56 vm00 bash[28005]: cluster 2026-03-10T07:31:55.440372+0000 mon.a (mon.0) 2278 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:31:56.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:56 vm00 bash[28005]: audit 2026-03-10T07:31:56.409551+0000 mon.a (mon.0) 2279 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:31:56.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:56 vm00 bash[28005]: cluster 2026-03-10T07:31:56.414860+0000 mon.a (mon.0) 2280 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in
2026-03-10T07:31:56.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:56 vm00 bash[28005]: audit 2026-03-10T07:31:56.415854+0000 mon.b (mon.1) 330 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_count","val": "3"}]: dispatch
2026-03-10T07:31:56.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:56 vm00 bash[28005]: audit 2026-03-10T07:31:56.416230+0000 mon.b (mon.1) 331 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]: dispatch
2026-03-10T07:31:56.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:56 vm00 bash[28005]: audit 2026-03-10T07:31:56.418128+0000 mon.a (mon.0) 2281 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_count","val": "3"}]: dispatch
2026-03-10T07:31:56.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:56 vm00 bash[28005]: audit 2026-03-10T07:31:56.418219+0000 mon.a (mon.0) 2282 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:58.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:57 vm03 bash[23382]: cluster 2026-03-10T07:31:56.627491+0000 mgr.y (mgr.24407) 276 : cluster [DBG] pgmap v407: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:58.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:57 vm03 bash[23382]: cluster 2026-03-10T07:31:56.627491+0000 mgr.y (mgr.24407) 276 : cluster [DBG] pgmap v407: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:58.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:57 vm03 bash[23382]: audit 2026-03-10T07:31:57.413285+0000 mon.a (mon.0) 2283 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_count","val": "3"}]': finished 2026-03-10T07:31:58.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:57 vm03 bash[23382]: audit 2026-03-10T07:31:57.413285+0000 mon.a (mon.0) 2283 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_count","val": "3"}]': finished 2026-03-10T07:31:58.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:57 vm03 bash[23382]: audit 2026-03-10T07:31:57.413382+0000 mon.a (mon.0) 2284 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]': finished 2026-03-10T07:31:58.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:57 vm03 bash[23382]: audit 2026-03-10T07:31:57.413382+0000 mon.a (mon.0) 2284 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]': finished 2026-03-10T07:31:58.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:57 vm03 bash[23382]: cluster 2026-03-10T07:31:57.417589+0000 mon.a (mon.0) 2285 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-10T07:31:58.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:57 vm03 bash[23382]: cluster 2026-03-10T07:31:57.417589+0000 mon.a (mon.0) 2285 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-10T07:31:58.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:57 vm03 bash[23382]: audit 2026-03-10T07:31:57.417998+0000 mon.b (mon.1) 332 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-10T07:31:58.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:57 vm03 bash[23382]: audit 2026-03-10T07:31:57.417998+0000 mon.b (mon.1) 332 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-10T07:31:58.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:57 vm03 bash[23382]: audit 2026-03-10T07:31:57.418366+0000 mon.b (mon.1) 333 : audit [INF] from='client.? 
192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:58.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:57 vm03 bash[23382]: audit 2026-03-10T07:31:57.418366+0000 mon.b (mon.1) 333 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:58.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:57 vm03 bash[23382]: audit 2026-03-10T07:31:57.422217+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-10T07:31:58.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:57 vm03 bash[23382]: audit 2026-03-10T07:31:57.422217+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-10T07:31:58.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:57 vm03 bash[23382]: audit 2026-03-10T07:31:57.423712+0000 mon.a (mon.0) 2287 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:58.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:57 vm03 bash[23382]: audit 2026-03-10T07:31:57.423712+0000 mon.a (mon.0) 2287 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:57 vm00 bash[20701]: cluster 2026-03-10T07:31:56.627491+0000 mgr.y (mgr.24407) 276 : cluster [DBG] pgmap v407: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:57 vm00 bash[20701]: cluster 2026-03-10T07:31:56.627491+0000 mgr.y (mgr.24407) 276 : cluster [DBG] pgmap v407: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:57 vm00 bash[20701]: audit 2026-03-10T07:31:57.413285+0000 mon.a (mon.0) 2283 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_count","val": "3"}]': finished 2026-03-10T07:31:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:57 vm00 bash[20701]: audit 2026-03-10T07:31:57.413285+0000 mon.a (mon.0) 2283 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_count","val": "3"}]': finished 2026-03-10T07:31:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:57 vm00 bash[20701]: audit 2026-03-10T07:31:57.413382+0000 mon.a (mon.0) 2284 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]': finished 2026-03-10T07:31:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:57 vm00 bash[20701]: audit 2026-03-10T07:31:57.413382+0000 mon.a (mon.0) 2284 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]': finished 2026-03-10T07:31:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:57 vm00 bash[20701]: cluster 2026-03-10T07:31:57.417589+0000 mon.a (mon.0) 2285 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-10T07:31:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:57 vm00 bash[20701]: cluster 2026-03-10T07:31:57.417589+0000 mon.a (mon.0) 2285 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-10T07:31:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:57 vm00 bash[20701]: audit 2026-03-10T07:31:57.417998+0000 mon.b (mon.1) 332 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-10T07:31:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:57 vm00 bash[20701]: audit 2026-03-10T07:31:57.417998+0000 mon.b (mon.1) 332 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-10T07:31:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:57 vm00 bash[20701]: audit 2026-03-10T07:31:57.418366+0000 mon.b (mon.1) 333 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:57 vm00 bash[20701]: audit 2026-03-10T07:31:57.418366+0000 mon.b (mon.1) 333 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:57 vm00 bash[20701]: audit 2026-03-10T07:31:57.422217+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-10T07:31:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:57 vm00 bash[20701]: audit 2026-03-10T07:31:57.422217+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-10T07:31:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:57 vm00 bash[20701]: audit 2026-03-10T07:31:57.423712+0000 mon.a (mon.0) 2287 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:57 vm00 bash[20701]: audit 2026-03-10T07:31:57.423712+0000 mon.a (mon.0) 2287 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:58.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:57 vm00 bash[28005]: cluster 2026-03-10T07:31:56.627491+0000 mgr.y (mgr.24407) 276 : cluster [DBG] pgmap v407: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:58.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:57 vm00 bash[28005]: cluster 2026-03-10T07:31:56.627491+0000 mgr.y (mgr.24407) 276 : cluster [DBG] pgmap v407: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:58.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:57 vm00 bash[28005]: audit 2026-03-10T07:31:57.413285+0000 mon.a (mon.0) 2283 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_count","val": "3"}]': finished 2026-03-10T07:31:58.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:57 vm00 bash[28005]: audit 2026-03-10T07:31:57.413285+0000 mon.a (mon.0) 2283 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_count","val": "3"}]': finished 2026-03-10T07:31:58.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:57 vm00 bash[28005]: audit 2026-03-10T07:31:57.413382+0000 mon.a (mon.0) 2284 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]': finished 2026-03-10T07:31:58.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:57 vm00 bash[28005]: audit 2026-03-10T07:31:57.413382+0000 mon.a (mon.0) 2284 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm00-59637-57"}]': finished 2026-03-10T07:31:58.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:57 vm00 bash[28005]: cluster 2026-03-10T07:31:57.417589+0000 mon.a (mon.0) 2285 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-10T07:31:58.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:57 vm00 bash[28005]: cluster 2026-03-10T07:31:57.417589+0000 mon.a (mon.0) 2285 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-10T07:31:58.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:57 vm00 bash[28005]: audit 2026-03-10T07:31:57.417998+0000 mon.b (mon.1) 332 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-10T07:31:58.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:57 vm00 bash[28005]: audit 2026-03-10T07:31:57.417998+0000 mon.b (mon.1) 332 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-10T07:31:58.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:57 vm00 bash[28005]: audit 2026-03-10T07:31:57.418366+0000 mon.b (mon.1) 333 : audit [INF] from='client.? 
192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:58.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:57 vm00 bash[28005]: audit 2026-03-10T07:31:57.418366+0000 mon.b (mon.1) 333 : audit [INF] from='client.? 192.168.123.100:0/4074615355' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:58.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:57 vm00 bash[28005]: audit 2026-03-10T07:31:57.422217+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-10T07:31:58.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:57 vm00 bash[28005]: audit 2026-03-10T07:31:57.422217+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-10T07:31:58.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:57 vm00 bash[28005]: audit 2026-03-10T07:31:57.423712+0000 mon.a (mon.0) 2287 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:58.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:57 vm00 bash[28005]: audit 2026-03-10T07:31:57.423712+0000 mon.a (mon.0) 2287 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]: dispatch 2026-03-10T07:31:59.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:59 vm03 bash[23382]: audit 2026-03-10T07:31:58.420929+0000 mon.a (mon.0) 2288 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_period","val": "3"}]': finished 2026-03-10T07:31:59.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:59 vm03 bash[23382]: audit 2026-03-10T07:31:58.420929+0000 mon.a (mon.0) 2288 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_period","val": "3"}]': finished 2026-03-10T07:31:59.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:59 vm03 bash[23382]: audit 2026-03-10T07:31:58.421034+0000 mon.a (mon.0) 2289 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]': finished 2026-03-10T07:31:59.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:59 vm03 bash[23382]: audit 2026-03-10T07:31:58.421034+0000 mon.a (mon.0) 2289 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]': finished 2026-03-10T07:31:59.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:59 vm03 bash[23382]: cluster 2026-03-10T07:31:58.424396+0000 mon.a (mon.0) 2290 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-10T07:31:59.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:59 vm03 bash[23382]: cluster 2026-03-10T07:31:58.424396+0000 mon.a (mon.0) 2290 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-10T07:31:59.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:59 vm03 bash[23382]: audit 2026-03-10T07:31:58.426569+0000 mon.b (mon.1) 334 : audit [INF] from='client.? 
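In parallel with the hit_set tuning, the FlushPP test case cleans up after itself: the earlier "osd tier remove-overlay" and "osd tier remove" dispatches detach the cache tier, and the records above remove the test's CRUSH rule and erasure-code profile. A sketch of the matching CLI teardown, assuming the standard ceph CLI (names taken from the audit records):

    ceph osd tier remove-overlay test-rados-api-vm00-59782-6
    ceph osd tier remove test-rados-api-vm00-59782-6 test-rados-api-vm00-59782-43
    ceph osd crush rule rm FlushPP_vm00-59637-57
    ceph osd erasure-code-profile rm testprofile-FlushPP_vm00-59637-57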
2026-03-10T07:31:59.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:59 vm03 bash[23382]: audit 2026-03-10T07:31:58.430346+0000 mon.a (mon.0) 2291 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T07:31:59.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:59 vm03 bash[23382]: audit 2026-03-10T07:31:58.440643+0000 mon.b (mon.1) 335 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch
2026-03-10T07:31:59.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:59 vm03 bash[23382]: audit 2026-03-10T07:31:58.441547+0000 mon.b (mon.1) 336 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]: dispatch
2026-03-10T07:31:59.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:59 vm03 bash[23382]: audit 2026-03-10T07:31:58.442243+0000 mon.b (mon.1) 337 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59637-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:59.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:59 vm03 bash[23382]: audit 2026-03-10T07:31:58.442585+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch
2026-03-10T07:31:59.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:59 vm03 bash[23382]: audit 2026-03-10T07:31:58.443383+0000 mon.a (mon.0) 2293 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]: dispatch
2026-03-10T07:31:59.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:59 vm03 bash[23382]: audit 2026-03-10T07:31:58.444108+0000 mon.a (mon.0) 2294 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59637-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:59.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:31:59 vm03 bash[23382]: cluster 2026-03-10T07:31:58.627778+0000 mgr.y (mgr.24407) 277 : cluster [DBG] pgmap v410: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:31:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:59 vm00 bash[20701]: audit 2026-03-10T07:31:58.420929+0000 mon.a (mon.0) 2288 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_period","val": "3"}]': finished
2026-03-10T07:31:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:59 vm00 bash[20701]: audit 2026-03-10T07:31:58.421034+0000 mon.a (mon.0) 2289 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]': finished
2026-03-10T07:31:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:59 vm00 bash[20701]: cluster 2026-03-10T07:31:58.424396+0000 mon.a (mon.0) 2290 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in
2026-03-10T07:31:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:59 vm00 bash[20701]: audit 2026-03-10T07:31:58.426569+0000 mon.b (mon.1) 334 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T07:31:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:59 vm00 bash[20701]: audit 2026-03-10T07:31:58.430346+0000 mon.a (mon.0) 2291 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T07:31:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:59 vm00 bash[20701]: audit 2026-03-10T07:31:58.440643+0000 mon.b (mon.1) 335 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch
2026-03-10T07:31:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:59 vm00 bash[20701]: audit 2026-03-10T07:31:58.441547+0000 mon.b (mon.1) 336 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]: dispatch
2026-03-10T07:31:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:59 vm00 bash[20701]: audit 2026-03-10T07:31:58.442243+0000 mon.b (mon.1) 337 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59637-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:31:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:59 vm00 bash[28005]: audit 2026-03-10T07:31:58.420929+0000 mon.a (mon.0) 2288 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_period","val": "3"}]': finished
2026-03-10T07:31:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:59 vm00 bash[28005]: audit 2026-03-10T07:31:58.421034+0000 mon.a (mon.0) 2289 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm00-59637-57"}]': finished
2026-03-10T07:31:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:59 vm00 bash[28005]: cluster 2026-03-10T07:31:58.424396+0000 mon.a (mon.0) 2290 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in
2026-03-10T07:31:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:59 vm00 bash[28005]: audit 2026-03-10T07:31:58.426569+0000 mon.b (mon.1) 334 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T07:31:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:59 vm00 bash[28005]: audit 2026-03-10T07:31:58.430346+0000 mon.a (mon.0) 2291 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_type","val": "bloom"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:31:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:59 vm00 bash[28005]: audit 2026-03-10T07:31:58.440643+0000 mon.b (mon.1) 335 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:31:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:59 vm00 bash[28005]: audit 2026-03-10T07:31:58.440643+0000 mon.b (mon.1) 335 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:31:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:59 vm00 bash[28005]: audit 2026-03-10T07:31:58.441547+0000 mon.b (mon.1) 336 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:31:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:59 vm00 bash[28005]: audit 2026-03-10T07:31:58.441547+0000 mon.b (mon.1) 336 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:31:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:59 vm00 bash[28005]: audit 2026-03-10T07:31:58.442243+0000 mon.b (mon.1) 337 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59637-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:59.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:59 vm00 bash[28005]: audit 2026-03-10T07:31:58.442243+0000 mon.b (mon.1) 337 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59637-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:59.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:59 vm00 bash[28005]: audit 2026-03-10T07:31:58.442585+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:31:59.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:59 vm00 bash[28005]: audit 2026-03-10T07:31:58.442585+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:31:59.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:59 vm00 bash[28005]: audit 2026-03-10T07:31:58.443383+0000 mon.a (mon.0) 2293 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:31:59.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:59 vm00 bash[28005]: audit 2026-03-10T07:31:58.443383+0000 mon.a (mon.0) 2293 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:31:59.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:59 vm00 bash[28005]: audit 2026-03-10T07:31:58.444108+0000 mon.a (mon.0) 2294 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59637-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:59.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:59 vm00 bash[28005]: audit 2026-03-10T07:31:58.444108+0000 mon.a (mon.0) 2294 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59637-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:59.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:59 vm00 bash[28005]: cluster 2026-03-10T07:31:58.627778+0000 mgr.y (mgr.24407) 277 : cluster [DBG] pgmap v410: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:59.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:31:59 vm00 bash[28005]: cluster 2026-03-10T07:31:58.627778+0000 mgr.y (mgr.24407) 277 : cluster [DBG] pgmap v410: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:59 vm00 bash[20701]: audit 2026-03-10T07:31:58.442585+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:31:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:59 vm00 bash[20701]: audit 2026-03-10T07:31:58.442585+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:31:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:59 vm00 bash[20701]: audit 2026-03-10T07:31:58.443383+0000 mon.a (mon.0) 2293 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:31:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:59 vm00 bash[20701]: audit 2026-03-10T07:31:58.443383+0000 mon.a (mon.0) 2293 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:31:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:59 vm00 bash[20701]: audit 2026-03-10T07:31:58.444108+0000 mon.a (mon.0) 2294 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59637-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:59 vm00 bash[20701]: audit 2026-03-10T07:31:58.444108+0000 mon.a (mon.0) 2294 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59637-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:31:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:59 vm00 bash[20701]: cluster 2026-03-10T07:31:58.627778+0000 mgr.y (mgr.24407) 277 : cluster [DBG] pgmap v410: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:31:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:31:59 vm00 bash[20701]: cluster 2026-03-10T07:31:58.627778+0000 mgr.y (mgr.24407) 277 : cluster [DBG] pgmap v410: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:00.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:00 vm03 bash[23382]: audit 2026-03-10T07:31:59.429454+0000 mon.a (mon.0) 2295 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:32:00.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:00 vm03 bash[23382]: audit 2026-03-10T07:31:59.429454+0000 mon.a (mon.0) 2295 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:32:00.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:00 vm03 bash[23382]: audit 2026-03-10T07:31:59.429558+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59637-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:00.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:00 vm03 bash[23382]: audit 2026-03-10T07:31:59.429558+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59637-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:00.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:00 vm03 bash[23382]: audit 2026-03-10T07:31:59.438003+0000 mon.b (mon.1) 338 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T07:32:00.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:00 vm03 bash[23382]: audit 2026-03-10T07:31:59.438003+0000 mon.b (mon.1) 338 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T07:32:00.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:00 vm03 bash[23382]: audit 2026-03-10T07:31:59.438147+0000 mon.b (mon.1) 339 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59637-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:00.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:00 vm03 bash[23382]: audit 2026-03-10T07:31:59.438147+0000 mon.b (mon.1) 339 : audit [INF] from='client.? 
192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59637-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:00.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:00 vm03 bash[23382]: cluster 2026-03-10T07:31:59.438209+0000 mon.a (mon.0) 2297 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-10T07:32:00.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:00 vm03 bash[23382]: cluster 2026-03-10T07:31:59.438209+0000 mon.a (mon.0) 2297 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-10T07:32:00.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:00 vm03 bash[23382]: audit 2026-03-10T07:31:59.445412+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T07:32:00.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:00 vm03 bash[23382]: audit 2026-03-10T07:31:59.445412+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T07:32:00.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:00 vm03 bash[23382]: audit 2026-03-10T07:31:59.445503+0000 mon.a (mon.0) 2299 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59637-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:00.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:00 vm03 bash[23382]: audit 2026-03-10T07:31:59.445503+0000 mon.a (mon.0) 2299 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59637-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:00 vm00 bash[28005]: audit 2026-03-10T07:31:59.429454+0000 mon.a (mon.0) 2295 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:00 vm00 bash[28005]: audit 2026-03-10T07:31:59.429454+0000 mon.a (mon.0) 2295 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:00 vm00 bash[28005]: audit 2026-03-10T07:31:59.429558+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59637-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:00 vm00 bash[28005]: audit 2026-03-10T07:31:59.429558+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? 
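Note: entries 2295-2299 record the pool being prepared for the flush test: HitSet parameters are applied to the replicated test pool, and a small erasure-coded pool is created from the profile set above. A rough CLI equivalent follows; this is a sketch only, with the pool and profile names taken from this run:
    ceph osd pool set test-rados-api-vm00-59782-45 hit_set_type bloom
    ceph osd pool set test-rados-api-vm00-59782-45 hit_set_count 3
    ceph osd pool set test-rados-api-vm00-59782-45 hit_set_period 3
    ceph osd pool set test-rados-api-vm00-59782-45 hit_set_fpp .01
    # 8 PGs / 8 PGPs, erasure-coded, using the k=2/m=1 profile
    ceph osd pool create FlushAsyncPP_vm00-59637-58 8 8 erasure testprofile-FlushAsyncPP_vm00-59637-58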
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59637-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:00 vm00 bash[28005]: audit 2026-03-10T07:31:59.438003+0000 mon.b (mon.1) 338 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:00 vm00 bash[28005]: audit 2026-03-10T07:31:59.438003+0000 mon.b (mon.1) 338 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:00 vm00 bash[28005]: audit 2026-03-10T07:31:59.438147+0000 mon.b (mon.1) 339 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59637-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:00 vm00 bash[28005]: audit 2026-03-10T07:31:59.438147+0000 mon.b (mon.1) 339 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59637-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:00 vm00 bash[28005]: cluster 2026-03-10T07:31:59.438209+0000 mon.a (mon.0) 2297 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:00 vm00 bash[28005]: cluster 2026-03-10T07:31:59.438209+0000 mon.a (mon.0) 2297 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:00 vm00 bash[28005]: audit 2026-03-10T07:31:59.445412+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:00 vm00 bash[28005]: audit 2026-03-10T07:31:59.445412+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:00 vm00 bash[28005]: audit 2026-03-10T07:31:59.445503+0000 mon.a (mon.0) 2299 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59637-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:00 vm00 bash[28005]: audit 2026-03-10T07:31:59.445503+0000 mon.a (mon.0) 2299 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59637-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:00 vm00 bash[20701]: audit 2026-03-10T07:31:59.429454+0000 mon.a (mon.0) 2295 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:00 vm00 bash[20701]: audit 2026-03-10T07:31:59.429454+0000 mon.a (mon.0) 2295 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:00 vm00 bash[20701]: audit 2026-03-10T07:31:59.429558+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59637-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:00 vm00 bash[20701]: audit 2026-03-10T07:31:59.429558+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm00-59637-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:00 vm00 bash[20701]: audit 2026-03-10T07:31:59.438003+0000 mon.b (mon.1) 338 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:00 vm00 bash[20701]: audit 2026-03-10T07:31:59.438003+0000 mon.b (mon.1) 338 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:00 vm00 bash[20701]: audit 2026-03-10T07:31:59.438147+0000 mon.b (mon.1) 339 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59637-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:00 vm00 bash[20701]: audit 2026-03-10T07:31:59.438147+0000 mon.b (mon.1) 339 : audit [INF] from='client.? 
192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59637-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:00 vm00 bash[20701]: cluster 2026-03-10T07:31:59.438209+0000 mon.a (mon.0) 2297 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:00 vm00 bash[20701]: cluster 2026-03-10T07:31:59.438209+0000 mon.a (mon.0) 2297 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:00 vm00 bash[20701]: audit 2026-03-10T07:31:59.445412+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:00 vm00 bash[20701]: audit 2026-03-10T07:31:59.445412+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:00 vm00 bash[20701]: audit 2026-03-10T07:31:59.445503+0000 mon.a (mon.0) 2299 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59637-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:00.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:00 vm00 bash[20701]: audit 2026-03-10T07:31:59.445503+0000 mon.a (mon.0) 2299 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59637-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:01.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:32:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:32:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:32:01.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:01 vm00 bash[28005]: audit 2026-03-10T07:32:00.441419+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-10T07:32:01.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:01 vm00 bash[28005]: audit 2026-03-10T07:32:00.441419+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-10T07:32:01.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:01 vm00 bash[28005]: cluster 2026-03-10T07:32:00.460633+0000 mon.a (mon.0) 2301 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-10T07:32:01.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:01 vm00 bash[28005]: cluster 2026-03-10T07:32:00.460633+0000 mon.a (mon.0) 2301 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-10T07:32:01.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:01 vm00 bash[28005]: cluster 2026-03-10T07:32:00.628064+0000 mgr.y (mgr.24407) 278 : cluster [DBG] pgmap v413: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:01.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:01 vm00 bash[28005]: cluster 2026-03-10T07:32:00.628064+0000 mgr.y (mgr.24407) 278 : cluster [DBG] pgmap v413: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:01.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:01 vm00 bash[28005]: cluster 2026-03-10T07:32:01.116598+0000 mon.a (mon.0) 2302 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:01.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:01 vm00 bash[28005]: cluster 2026-03-10T07:32:01.116598+0000 mon.a (mon.0) 2302 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:01.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:01 vm00 bash[20701]: audit 2026-03-10T07:32:00.441419+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-10T07:32:01.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:01 vm00 bash[20701]: audit 2026-03-10T07:32:00.441419+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-10T07:32:01.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:01 vm00 bash[20701]: cluster 2026-03-10T07:32:00.460633+0000 mon.a (mon.0) 2301 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-10T07:32:01.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:01 vm00 bash[20701]: cluster 2026-03-10T07:32:00.460633+0000 mon.a (mon.0) 2301 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-10T07:32:01.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:01 vm00 bash[20701]: cluster 2026-03-10T07:32:00.628064+0000 mgr.y (mgr.24407) 278 : cluster [DBG] pgmap v413: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:01.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:01 vm00 bash[20701]: cluster 2026-03-10T07:32:00.628064+0000 mgr.y (mgr.24407) 278 : cluster [DBG] pgmap v413: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:01.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:01 vm00 bash[20701]: cluster 2026-03-10T07:32:01.116598+0000 mon.a (mon.0) 2302 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:01.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:01 vm00 bash[20701]: cluster 2026-03-10T07:32:01.116598+0000 mon.a (mon.0) 2302 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:02.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:01 vm03 bash[23382]: audit 2026-03-10T07:32:00.441419+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-10T07:32:02.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:01 vm03 bash[23382]: audit 2026-03-10T07:32:00.441419+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-10T07:32:02.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:01 vm03 bash[23382]: cluster 2026-03-10T07:32:00.460633+0000 mon.a (mon.0) 2301 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-10T07:32:02.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:01 vm03 bash[23382]: cluster 2026-03-10T07:32:00.460633+0000 mon.a (mon.0) 2301 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-10T07:32:02.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:01 vm03 bash[23382]: cluster 2026-03-10T07:32:00.628064+0000 mgr.y (mgr.24407) 278 : cluster [DBG] pgmap v413: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:02.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:01 vm03 bash[23382]: cluster 2026-03-10T07:32:00.628064+0000 mgr.y (mgr.24407) 278 : cluster [DBG] pgmap v413: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:02.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:01 vm03 bash[23382]: cluster 2026-03-10T07:32:01.116598+0000 mon.a (mon.0) 2302 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:02.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:01 vm03 bash[23382]: cluster 2026-03-10T07:32:01.116598+0000 mon.a (mon.0) 2302 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:02.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:02 vm00 bash[28005]: audit 2026-03-10T07:32:01.580223+0000 mon.a (mon.0) 2303 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59637-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59637-58"}]': finished 2026-03-10T07:32:02.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:02 vm00 bash[28005]: audit 2026-03-10T07:32:01.580223+0000 mon.a (mon.0) 2303 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59637-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59637-58"}]': finished 2026-03-10T07:32:02.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:02 vm00 bash[28005]: cluster 2026-03-10T07:32:01.591643+0000 mon.a (mon.0) 2304 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in 2026-03-10T07:32:02.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:02 vm00 bash[28005]: cluster 2026-03-10T07:32:01.591643+0000 mon.a (mon.0) 2304 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in 2026-03-10T07:32:02.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:02 vm00 bash[20701]: audit 2026-03-10T07:32:01.580223+0000 mon.a (mon.0) 2303 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59637-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59637-58"}]': finished 2026-03-10T07:32:02.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:02 vm00 bash[20701]: audit 2026-03-10T07:32:01.580223+0000 mon.a (mon.0) 2303 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59637-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59637-58"}]': finished 2026-03-10T07:32:02.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:02 vm00 bash[20701]: cluster 2026-03-10T07:32:01.591643+0000 mon.a (mon.0) 2304 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in 2026-03-10T07:32:02.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:02 vm00 bash[20701]: cluster 2026-03-10T07:32:01.591643+0000 mon.a (mon.0) 2304 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in 2026-03-10T07:32:03.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:02 vm03 bash[23382]: audit 2026-03-10T07:32:01.580223+0000 mon.a (mon.0) 2303 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59637-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59637-58"}]': finished 2026-03-10T07:32:03.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:02 vm03 bash[23382]: audit 2026-03-10T07:32:01.580223+0000 mon.a (mon.0) 2303 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm00-59637-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm00-59637-58"}]': finished 2026-03-10T07:32:03.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:02 vm03 bash[23382]: cluster 2026-03-10T07:32:01.591643+0000 mon.a (mon.0) 2304 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in 2026-03-10T07:32:03.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:02 vm03 bash[23382]: cluster 2026-03-10T07:32:01.591643+0000 mon.a (mon.0) 2304 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in 2026-03-10T07:32:03.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:32:03 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:32:03.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:03 vm00 bash[28005]: cluster 2026-03-10T07:32:02.599000+0000 mon.a (mon.0) 2305 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-10T07:32:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:03 vm00 bash[28005]: cluster 2026-03-10T07:32:02.599000+0000 mon.a (mon.0) 2305 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-10T07:32:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:03 vm00 bash[28005]: cluster 2026-03-10T07:32:02.628356+0000 mgr.y (mgr.24407) 279 : cluster [DBG] pgmap v416: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:03 vm00 bash[28005]: cluster 2026-03-10T07:32:02.628356+0000 mgr.y (mgr.24407) 279 : cluster [DBG] pgmap v416: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:03 vm00 bash[28005]: audit 2026-03-10T07:32:03.132875+0000 mgr.y (mgr.24407) 280 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:03 vm00 bash[28005]: audit 2026-03-10T07:32:03.132875+0000 mgr.y (mgr.24407) 280 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service 
status", "format": "json"}]: dispatch 2026-03-10T07:32:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:03 vm00 bash[20701]: cluster 2026-03-10T07:32:02.599000+0000 mon.a (mon.0) 2305 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-10T07:32:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:03 vm00 bash[20701]: cluster 2026-03-10T07:32:02.599000+0000 mon.a (mon.0) 2305 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-10T07:32:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:03 vm00 bash[20701]: cluster 2026-03-10T07:32:02.628356+0000 mgr.y (mgr.24407) 279 : cluster [DBG] pgmap v416: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:03 vm00 bash[20701]: cluster 2026-03-10T07:32:02.628356+0000 mgr.y (mgr.24407) 279 : cluster [DBG] pgmap v416: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:03 vm00 bash[20701]: audit 2026-03-10T07:32:03.132875+0000 mgr.y (mgr.24407) 280 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:03 vm00 bash[20701]: audit 2026-03-10T07:32:03.132875+0000 mgr.y (mgr.24407) 280 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:04.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:03 vm03 bash[23382]: cluster 2026-03-10T07:32:02.599000+0000 mon.a (mon.0) 2305 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-10T07:32:04.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:03 vm03 bash[23382]: cluster 2026-03-10T07:32:02.599000+0000 mon.a (mon.0) 2305 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-10T07:32:04.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:03 vm03 bash[23382]: cluster 2026-03-10T07:32:02.628356+0000 mgr.y (mgr.24407) 279 : cluster [DBG] pgmap v416: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:04.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:03 vm03 bash[23382]: cluster 2026-03-10T07:32:02.628356+0000 mgr.y (mgr.24407) 279 : cluster [DBG] pgmap v416: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:04.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:03 vm03 bash[23382]: audit 2026-03-10T07:32:03.132875+0000 mgr.y (mgr.24407) 280 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:04.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:03 vm03 bash[23382]: audit 2026-03-10T07:32:03.132875+0000 mgr.y (mgr.24407) 280 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:04.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:04 vm00 bash[28005]: audit 2026-03-10T07:32:03.616534+0000 mon.b (mon.1) 340 : audit [INF] from='client.? 
2026-03-10T07:32:04.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:04 vm00 bash[28005]: cluster 2026-03-10T07:32:03.617623+0000 mon.a (mon.0) 2306 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in
2026-03-10T07:32:04.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:04 vm00 bash[28005]: audit 2026-03-10T07:32:03.625849+0000 mon.a (mon.0) 2307 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch
2026-03-10T07:32:04.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:04 vm00 bash[20701]: audit 2026-03-10T07:32:03.616534+0000 mon.b (mon.1) 340 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch
2026-03-10T07:32:04.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:04 vm00 bash[20701]: cluster 2026-03-10T07:32:03.617623+0000 mon.a (mon.0) 2306 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in
2026-03-10T07:32:04.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:04 vm00 bash[20701]: audit 2026-03-10T07:32:03.625849+0000 mon.a (mon.0) 2307 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch
2026-03-10T07:32:05.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:04 vm03 bash[23382]: audit 2026-03-10T07:32:03.616534+0000 mon.b (mon.1) 340 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch
2026-03-10T07:32:05.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:04 vm03 bash[23382]: cluster 2026-03-10T07:32:03.617623+0000 mon.a (mon.0) 2306 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in
2026-03-10T07:32:05.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:04 vm03 bash[23382]: audit 2026-03-10T07:32:03.625849+0000 mon.a (mon.0) 2307 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59637-58"}]: dispatch
2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:05 vm00 bash[28005]: audit 2026-03-10T07:32:04.601610+0000 mon.a (mon.0) 2308 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59637-58"}]': finished
2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:05 vm00 bash[28005]: cluster 2026-03-10T07:32:04.604420+0000 mon.a (mon.0) 2309 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in
2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:05 vm00 bash[28005]: audit 2026-03-10T07:32:04.606070+0000 mon.b (mon.1) 341 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]: dispatch
2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:05 vm00 bash[28005]: audit 2026-03-10T07:32:04.617686+0000 mon.a (mon.0) 2310 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:05 vm00 bash[28005]: audit 2026-03-10T07:32:04.617686+0000 mon.a (mon.0) 2310 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:05 vm00 bash[28005]: cluster 2026-03-10T07:32:04.628794+0000 mgr.y (mgr.24407) 281 : cluster [DBG] pgmap v419: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:05 vm00 bash[28005]: cluster 2026-03-10T07:32:04.628794+0000 mgr.y (mgr.24407) 281 : cluster [DBG] pgmap v419: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:05 vm00 bash[28005]: audit 2026-03-10T07:32:05.035259+0000 mon.c (mon.2) 271 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:05 vm00 bash[28005]: audit 2026-03-10T07:32:05.035259+0000 mon.c (mon.2) 271 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:05 vm00 bash[28005]: audit 2026-03-10T07:32:05.340901+0000 mon.c (mon.2) 272 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:05 vm00 bash[28005]: audit 2026-03-10T07:32:05.340901+0000 mon.c (mon.2) 272 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:05 vm00 bash[28005]: audit 2026-03-10T07:32:05.341732+0000 mon.c (mon.2) 273 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:05 vm00 bash[28005]: audit 2026-03-10T07:32:05.341732+0000 mon.c (mon.2) 273 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:05 vm00 bash[28005]: audit 2026-03-10T07:32:05.348164+0000 mon.a (mon.0) 2311 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:05 vm00 bash[28005]: audit 2026-03-10T07:32:05.348164+0000 mon.a (mon.0) 2311 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:05 vm00 bash[20701]: audit 2026-03-10T07:32:04.601610+0000 mon.a (mon.0) 2308 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59637-58"}]': finished 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:05 vm00 bash[20701]: audit 2026-03-10T07:32:04.601610+0000 mon.a (mon.0) 2308 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59637-58"}]': finished 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:05 vm00 bash[20701]: cluster 2026-03-10T07:32:04.604420+0000 mon.a (mon.0) 2309 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:05 vm00 bash[20701]: cluster 2026-03-10T07:32:04.604420+0000 mon.a (mon.0) 2309 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:05 vm00 bash[20701]: audit 2026-03-10T07:32:04.606070+0000 mon.b (mon.1) 341 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:05 vm00 bash[20701]: audit 2026-03-10T07:32:04.606070+0000 mon.b (mon.1) 341 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:05 vm00 bash[20701]: audit 2026-03-10T07:32:04.617686+0000 mon.a (mon.0) 2310 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:05 vm00 bash[20701]: audit 2026-03-10T07:32:04.617686+0000 mon.a (mon.0) 2310 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:05 vm00 bash[20701]: cluster 2026-03-10T07:32:04.628794+0000 mgr.y (mgr.24407) 281 : cluster [DBG] pgmap v419: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:05 vm00 bash[20701]: cluster 2026-03-10T07:32:04.628794+0000 mgr.y (mgr.24407) 281 : cluster [DBG] pgmap v419: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:05 vm00 bash[20701]: audit 2026-03-10T07:32:05.035259+0000 mon.c (mon.2) 271 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:05 vm00 bash[20701]: audit 2026-03-10T07:32:05.035259+0000 mon.c (mon.2) 271 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:05 vm00 bash[20701]: audit 2026-03-10T07:32:05.340901+0000 mon.c (mon.2) 272 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:05 vm00 bash[20701]: audit 2026-03-10T07:32:05.340901+0000 mon.c (mon.2) 272 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:05 vm00 bash[20701]: audit 2026-03-10T07:32:05.341732+0000 mon.c (mon.2) 273 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:05 vm00 bash[20701]: audit 2026-03-10T07:32:05.341732+0000 mon.c (mon.2) 273 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:05 vm00 bash[20701]: audit 2026-03-10T07:32:05.348164+0000 mon.a (mon.0) 2311 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:32:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:05 vm00 bash[20701]: audit 2026-03-10T07:32:05.348164+0000 mon.a (mon.0) 2311 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:32:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:05 vm03 bash[23382]: audit 2026-03-10T07:32:04.601610+0000 mon.a (mon.0) 2308 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59637-58"}]': finished 2026-03-10T07:32:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:05 vm03 bash[23382]: audit 2026-03-10T07:32:04.601610+0000 mon.a (mon.0) 2308 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm00-59637-58"}]': finished 2026-03-10T07:32:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:05 vm03 bash[23382]: cluster 2026-03-10T07:32:04.604420+0000 mon.a (mon.0) 2309 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-10T07:32:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:05 vm03 bash[23382]: cluster 2026-03-10T07:32:04.604420+0000 mon.a (mon.0) 2309 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-10T07:32:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:05 vm03 bash[23382]: audit 2026-03-10T07:32:04.606070+0000 mon.b (mon.1) 341 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:05 vm03 bash[23382]: audit 2026-03-10T07:32:04.606070+0000 mon.b (mon.1) 341 : audit [INF] from='client.? 192.168.123.100:0/158624832' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:05 vm03 bash[23382]: audit 2026-03-10T07:32:04.617686+0000 mon.a (mon.0) 2310 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:05 vm03 bash[23382]: audit 2026-03-10T07:32:04.617686+0000 mon.a (mon.0) 2310 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]: dispatch 2026-03-10T07:32:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:05 vm03 bash[23382]: cluster 2026-03-10T07:32:04.628794+0000 mgr.y (mgr.24407) 281 : cluster [DBG] pgmap v419: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:05 vm03 bash[23382]: cluster 2026-03-10T07:32:04.628794+0000 mgr.y (mgr.24407) 281 : cluster [DBG] pgmap v419: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:05 vm03 bash[23382]: audit 2026-03-10T07:32:05.035259+0000 mon.c (mon.2) 271 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:32:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:05 vm03 bash[23382]: audit 2026-03-10T07:32:05.035259+0000 mon.c (mon.2) 271 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:32:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:05 vm03 bash[23382]: audit 2026-03-10T07:32:05.340901+0000 mon.c (mon.2) 272 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:32:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:05 vm03 bash[23382]: audit 2026-03-10T07:32:05.340901+0000 mon.c (mon.2) 272 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:32:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:05 vm03 bash[23382]: audit 
2026-03-10T07:32:05.341732+0000 mon.c (mon.2) 273 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:32:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:05 vm03 bash[23382]: audit 2026-03-10T07:32:05.341732+0000 mon.c (mon.2) 273 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:32:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:05 vm03 bash[23382]: audit 2026-03-10T07:32:05.348164+0000 mon.a (mon.0) 2311 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:32:06.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:05 vm03 bash[23382]: audit 2026-03-10T07:32:05.348164+0000 mon.a (mon.0) 2311 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:32:07.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:06 vm03 bash[23382]: audit 2026-03-10T07:32:05.628819+0000 mon.a (mon.0) 2312 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]': finished 2026-03-10T07:32:07.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:06 vm03 bash[23382]: audit 2026-03-10T07:32:05.628819+0000 mon.a (mon.0) 2312 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]': finished 2026-03-10T07:32:07.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:06 vm03 bash[23382]: cluster 2026-03-10T07:32:05.638775+0000 mon.a (mon.0) 2313 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in 2026-03-10T07:32:07.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:06 vm03 bash[23382]: cluster 2026-03-10T07:32:05.638775+0000 mon.a (mon.0) 2313 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in 2026-03-10T07:32:07.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:06 vm03 bash[23382]: audit 2026-03-10T07:32:05.648571+0000 mon.a (mon.0) 2314 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59637-59"}]: dispatch 2026-03-10T07:32:07.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:06 vm03 bash[23382]: audit 2026-03-10T07:32:05.648571+0000 mon.a (mon.0) 2314 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59637-59"}]: dispatch 2026-03-10T07:32:07.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:06 vm03 bash[23382]: audit 2026-03-10T07:32:05.649800+0000 mon.a (mon.0) 2315 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59637-59"}]: dispatch 2026-03-10T07:32:07.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:06 vm03 bash[23382]: audit 2026-03-10T07:32:05.649800+0000 mon.a (mon.0) 2315 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59637-59"}]: dispatch 2026-03-10T07:32:07.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:06 vm03 bash[23382]: audit 2026-03-10T07:32:05.649990+0000 mon.a (mon.0) 2316 : audit [INF] from='client.? 
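Reading the journal flood above: each command the test client issues shows up three times in the monitor audit log. The monitor that receives it (here often a peon, mon.b) logs a 'dispatch' entry carrying the client address; the command is forwarded to the leader mon.a, which logs its own 'dispatch' with the address field blank (from='client.? '); once the resulting map change commits, the leader logs 'finished', and each committed change bumps the osdmap epoch (e305, e306, ...). The audit payloads map directly onto ceph CLI calls; as a sketch, entries 340/2307-2310 correspond to the per-test cleanup the librados teardown issues:

  ceph osd erasure-code-profile rm testprofile-FlushAsyncPP_vm00-59637-58
  ceph osd crush rule rm FlushAsyncPP_vm00-59637-58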
2026-03-10T07:32:07.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:06 vm00 bash[28005]: audit 2026-03-10T07:32:05.628819+0000 mon.a (mon.0) 2312 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]': finished
2026-03-10T07:32:07.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:06 vm00 bash[28005]: cluster 2026-03-10T07:32:05.638775+0000 mon.a (mon.0) 2313 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in
2026-03-10T07:32:07.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:06 vm00 bash[28005]: audit 2026-03-10T07:32:05.648571+0000 mon.a (mon.0) 2314 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59637-59"}]: dispatch
2026-03-10T07:32:07.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:06 vm00 bash[28005]: audit 2026-03-10T07:32:05.649800+0000 mon.a (mon.0) 2315 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59637-59"}]: dispatch
2026-03-10T07:32:07.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:06 vm00 bash[28005]: audit 2026-03-10T07:32:05.649990+0000 mon.a (mon.0) 2316 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59637-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:32:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:06 vm00 bash[20701]: audit 2026-03-10T07:32:05.628819+0000 mon.a (mon.0) 2312 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm00-59637-58"}]': finished
2026-03-10T07:32:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:06 vm00 bash[20701]: cluster 2026-03-10T07:32:05.638775+0000 mon.a (mon.0) 2313 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in
2026-03-10T07:32:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:06 vm00 bash[20701]: audit 2026-03-10T07:32:05.648571+0000 mon.a (mon.0) 2314 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59637-59"}]: dispatch
2026-03-10T07:32:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:06 vm00 bash[20701]: audit 2026-03-10T07:32:05.649800+0000 mon.a (mon.0) 2315 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59637-59"}]: dispatch
2026-03-10T07:32:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:06 vm00 bash[20701]: audit 2026-03-10T07:32:05.649990+0000 mon.a (mon.0) 2316 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59637-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:32:08.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:07 vm03 bash[23382]: cluster 2026-03-10T07:32:06.629137+0000 mgr.y (mgr.24407) 282 : cluster [DBG] pgmap v421: 292 pgs: 292 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T07:32:08.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:07 vm03 bash[23382]: audit 2026-03-10T07:32:06.632298+0000 mon.a (mon.0) 2317 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59637-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:32:08.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:07 vm03 bash[23382]: cluster 2026-03-10T07:32:06.638330+0000 mon.a (mon.0) 2318 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in
2026-03-10T07:32:08.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:07 vm03 bash[23382]: audit 2026-03-10T07:32:06.641913+0000 mon.a (mon.0) 2319 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59637-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59637-59"}]: dispatch
2026-03-10T07:32:08.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:07 vm03 bash[23382]: cluster 2026-03-10T07:32:07.651198+0000 mon.a (mon.0) 2320 : cluster [DBG] osdmap e309: 8 total, 8 up, 8 in
2026-03-10T07:32:08.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:07 vm00 bash[28005]: cluster 2026-03-10T07:32:06.629137+0000 mgr.y (mgr.24407) 282 : cluster [DBG] pgmap v421: 292 pgs: 292 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T07:32:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:07 vm00 bash[28005]: audit 2026-03-10T07:32:06.632298+0000 mon.a (mon.0) 2317 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59637-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:32:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:07 vm00 bash[28005]: cluster 2026-03-10T07:32:06.638330+0000 mon.a (mon.0) 2318 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in
2026-03-10T07:32:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:07 vm00 bash[28005]: audit 2026-03-10T07:32:06.641913+0000 mon.a (mon.0) 2319 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59637-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59637-59"}]: dispatch
2026-03-10T07:32:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:07 vm00 bash[28005]: cluster 2026-03-10T07:32:07.651198+0000 mon.a (mon.0) 2320 : cluster [DBG] osdmap e309: 8 total, 8 up, 8 in
2026-03-10T07:32:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:07 vm00 bash[20701]: cluster 2026-03-10T07:32:06.629137+0000 mgr.y (mgr.24407) 282 : cluster [DBG] pgmap v421: 292 pgs: 292 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T07:32:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:07 vm00 bash[20701]: audit 2026-03-10T07:32:06.632298+0000 mon.a (mon.0) 2317 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm00-59637-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:32:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:07 vm00 bash[20701]: cluster 2026-03-10T07:32:06.638330+0000 mon.a (mon.0) 2318 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in
2026-03-10T07:32:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:07 vm00 bash[20701]: audit 2026-03-10T07:32:06.641913+0000 mon.a (mon.0) 2319 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59637-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59637-59"}]: dispatch
2026-03-10T07:32:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:07 vm00 bash[20701]: cluster 2026-03-10T07:32:07.651198+0000 mon.a (mon.0) 2320 : cluster [DBG] osdmap e309: 8 total, 8 up, 8 in
2026-03-10T07:32:10.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:09 vm03 bash[23382]: cluster 2026-03-10T07:32:08.629500+0000 mgr.y (mgr.24407) 283 : cluster [DBG] pgmap v424: 292 pgs: 292 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T07:32:10.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:09 vm03 bash[23382]: audit 2026-03-10T07:32:08.693362+0000 mon.a (mon.0) 2321 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59637-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59637-59"}]': finished
2026-03-10T07:32:10.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:09 vm03 bash[23382]: cluster 2026-03-10T07:32:08.696255+0000 mon.a (mon.0) 2322 : cluster [DBG] osdmap e310: 8 total, 8 up, 8 in
2026-03-10T07:32:10.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:09 vm03 bash[23382]: audit 2026-03-10T07:32:09.431686+0000 mon.c (mon.2) 274 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:32:10.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:09 vm00 bash[28005]: cluster 2026-03-10T07:32:08.629500+0000 mgr.y (mgr.24407) 283 : cluster [DBG] pgmap v424: 292 pgs: 292 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T07:32:10.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:09 vm00 bash[28005]: audit 2026-03-10T07:32:08.693362+0000 mon.a (mon.0) 2321 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59637-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59637-59"}]': finished
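Audit entries 2316-2321 are the setup for the next EC test case: an erasure-code profile with k=2, m=1 and crush-failure-domain=osd (each object is split into two data chunks plus one coding chunk, so the pool survives the loss of any single OSD), followed by an 8-PG erasure pool bound to that profile. A minimal sketch of the same sequence with the standard CLI, using the names from the audit payloads:

  ceph osd erasure-code-profile set testprofile-RoundTripWriteFullPP_vm00-59637-59 k=2 m=1 crush-failure-domain=osd
  ceph osd pool create RoundTripWriteFullPP_vm00-59637-59 8 8 erasure testprofile-RoundTripWriteFullPP_vm00-59637-59

Each step is acknowledged with a new osdmap epoch (e308 after the profile set, e309-e310 around the pool create) before the client's 'finished' is logged.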
2026-03-10T07:32:10.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:09 vm00 bash[28005]: cluster 2026-03-10T07:32:08.696255+0000 mon.a (mon.0) 2322 : cluster [DBG] osdmap e310: 8 total, 8 up, 8 in
2026-03-10T07:32:10.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:09 vm00 bash[28005]: audit 2026-03-10T07:32:09.431686+0000 mon.c (mon.2) 274 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:32:10.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:09 vm00 bash[20701]: cluster 2026-03-10T07:32:08.629500+0000 mgr.y (mgr.24407) 283 : cluster [DBG] pgmap v424: 292 pgs: 292 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T07:32:10.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:09 vm00 bash[20701]: audit 2026-03-10T07:32:08.693362+0000 mon.a (mon.0) 2321 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm00-59637-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm00-59637-59"}]': finished
2026-03-10T07:32:10.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:09 vm00 bash[20701]: cluster 2026-03-10T07:32:08.696255+0000 mon.a (mon.0) 2322 : cluster [DBG] osdmap e310: 8 total, 8 up, 8 in
2026-03-10T07:32:10.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:09 vm00 bash[20701]: audit 2026-03-10T07:32:09.431686+0000 mon.c (mon.2) 274 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:32:11.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:10 vm03 bash[23382]: cluster 2026-03-10T07:32:09.715244+0000 mon.a (mon.0) 2323 : cluster [DBG] osdmap e311: 8 total, 8 up, 8 in
2026-03-10T07:32:11.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:10 vm00 bash[28005]: cluster 2026-03-10T07:32:09.715244+0000 mon.a (mon.0) 2323 : cluster [DBG] osdmap e311: 8 total, 8 up, 8 in
2026-03-10T07:32:11.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:10 vm00 bash[20701]: cluster 2026-03-10T07:32:09.715244+0000 mon.a (mon.0) 2323 : cluster [DBG] osdmap e311: 8 total, 8 up, 8 in
2026-03-10T07:32:11.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:32:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:32:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:32:12.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:11 vm03 bash[23382]: cluster 2026-03-10T07:32:10.629996+0000 mgr.y (mgr.24407) 284 : cluster [DBG] pgmap v427: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s
2026-03-10T07:32:12.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:11 vm03 bash[23382]: cluster 2026-03-10T07:32:10.705552+0000 mon.a (mon.0) 2324 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:32:12.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:11 vm03 bash[23382]: cluster 2026-03-10T07:32:10.733107+0000 mon.a (mon.0) 2325 : cluster [DBG] osdmap e312: 8 total, 8 up, 8 in
2026-03-10T07:32:12.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:11 vm03 bash[23382]: audit 2026-03-10T07:32:10.736094+0000 mon.a (mon.0) 2326 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59637-59"}]: dispatch
2026-03-10T07:32:12.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:11 vm00 bash[28005]: cluster 2026-03-10T07:32:10.629996+0000 mgr.y (mgr.24407) 284 : cluster [DBG] pgmap v427: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s
2026-03-10T07:32:12.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:11 vm00 bash[28005]: cluster 2026-03-10T07:32:10.705552+0000 mon.a (mon.0) 2324 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:32:12.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:11 vm00 bash[28005]: cluster 2026-03-10T07:32:10.733107+0000 mon.a (mon.0) 2325 : cluster [DBG] osdmap e312: 8 total, 8 up, 8 in
2026-03-10T07:32:12.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:11 vm00 bash[28005]: audit 2026-03-10T07:32:10.736094+0000 mon.a (mon.0) 2326 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59637-59"}]: dispatch
2026-03-10T07:32:12.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:11 vm00 bash[20701]: cluster 2026-03-10T07:32:10.629996+0000 mgr.y (mgr.24407) 284 : cluster [DBG] pgmap v427: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s
2026-03-10T07:32:12.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:11 vm00 bash[20701]: cluster 2026-03-10T07:32:10.705552+0000 mon.a (mon.0) 2324 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:32:12.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:11 vm00 bash[20701]: cluster 2026-03-10T07:32:10.733107+0000 mon.a (mon.0) 2325 : cluster [DBG] osdmap e312: 8 total, 8 up, 8 in
2026-03-10T07:32:12.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:11 vm00 bash[20701]: audit 2026-03-10T07:32:10.736094+0000 mon.a (mon.0) 2326 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59637-59"}]: dispatch
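The POOL_APP_NOT_ENABLED health warning is expected here: the short-lived pools the API tests create are never tagged with an application, and suites like this typically ignorelist the warning rather than fail on it. On a long-lived cluster the same warning is cleared per pool by tagging it, e.g. (pool name is a placeholder):

  ceph osd pool application enable <pool-name> rados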
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:7462ddf6:.RoundTripAppendPP (3009 ms)
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RacingRemovePP
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.RacingRemovePP (2991 ms)
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripCmpExtPP
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripCmpExtPP (3101 ms)
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripCmpExtPP2
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripCmpExtPP2 (3226 ms)
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.PoolEIOFlag
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: setting pool EIO
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: max_success 101, min_failed 102
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.PoolEIOFlag (4085 ms)
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAio.MultiReads
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAio.MultiReads (2996 ms)
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [----------] 32 tests from LibRadosAio (120169 ms total)
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp:
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [----------] 4 tests from LibRadosAioPP
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.ReadIntoBufferlist
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioPP.ReadIntoBufferlist (3048 ms)
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.XattrsRoundTripPP
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioPP.XattrsRoundTripPP (9016 ms)
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.RmXattrPP
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioPP.RmXattrPP (15213 ms)
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.RemoveTestPP
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioPP.RemoveTestPP (3052 ms)
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [----------] 4 tests from LibRadosAioPP (30329 ms total)
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp:
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [----------] 1 test from LibRadosIoPP
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosIoPP.XattrListPP
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosIoPP.XattrListPP (3046 ms)
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [----------] 1 test from LibRadosIoPP (3046 ms total)
2026-03-10T07:32:12.726 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp:
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [----------] 20 tests from LibRadosAioEC
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.SimpleWritePP
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.SimpleWritePP (13505 ms)
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.WaitForSafePP
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.WaitForSafePP (7143 ms)
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripPP
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripPP (7050 ms)
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripPP2
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripPP2 (7078 ms)
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripPP3
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripPP3 (3071 ms)
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripSparseReadPP
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripSparseReadPP (7205 ms)
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripAppendPP
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripAppendPP (7357 ms)
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.IsCompletePP
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.IsCompletePP (7213 ms)
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.IsSafePP
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.IsSafePP (7023 ms)
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.ReturnValuePP
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.ReturnValuePP (7025 ms)
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.FlushPP
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.FlushPP (7238 ms)
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.FlushAsyncPP
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.FlushAsyncPP (7211 ms)
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripWriteFullPP
2026-03-10T07:32:12.727 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripWriteFullPP (7090 ms)
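The api_aio_pp lines above are googletest output from the librados C++ AIO suite; the rados/test.sh workunit runs the api_* test binaries in parallel and prefixes every output line with the binary's short name, which is why gtest results interleave with the monitor journal here. To reproduce a single case against a running cluster outside teuthology, something like the following should work (assuming the gtest binary from the ceph-test package is named ceph_test_rados_api_aio_pp; --gtest_filter is standard googletest):

  ceph_test_rados_api_aio_pp --gtest_filter='LibRadosAioEC.RoundTripWriteFullPP'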
2026-03-10T07:32:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:12 vm03 bash[23382]: audit 2026-03-10T07:32:11.718569+0000 mon.a (mon.0) 2327 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59637-59"}]': finished
2026-03-10T07:32:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:12 vm03 bash[23382]: cluster 2026-03-10T07:32:11.721348+0000 mon.a (mon.0) 2328 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in
2026-03-10T07:32:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:12 vm03 bash[23382]: audit 2026-03-10T07:32:11.722041+0000 mon.a (mon.0) 2329 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59637-59"}]: dispatch
2026-03-10T07:32:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:12 vm03 bash[23382]: audit 2026-03-10T07:32:12.523529+0000 mon.b (mon.1) 342 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:32:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:12 vm03 bash[23382]: audit 2026-03-10T07:32:12.524529+0000 mon.b (mon.1) 343 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45"}]: dispatch
2026-03-10T07:32:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:12 vm03 bash[23382]: audit 2026-03-10T07:32:12.525648+0000 mon.a (mon.0) 2330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:32:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:12 vm03 bash[23382]: audit 2026-03-10T07:32:12.526453+0000 mon.a (mon.0) 2331 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45"}]: dispatch
2026-03-10T07:32:13.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:12 vm03 bash[23382]: audit 2026-03-10T07:32:12.723070+0000 mon.a (mon.0) 2332 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59637-59"}]': finished
2026-03-10T07:32:13.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:12 vm03 bash[23382]: audit 2026-03-10T07:32:12.723136+0000 mon.a (mon.0) 2333 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45"}]': finished
2026-03-10T07:32:13.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:12 vm03 bash[23382]: cluster 2026-03-10T07:32:12.726463+0000 mon.a (mon.0) 2334 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in
2026-03-10T07:32:13.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:12 vm00 bash[28005]: audit 2026-03-10T07:32:11.718569+0000 mon.a (mon.0) 2327 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59637-59"}]': finished
192.168.123.100:0/1520915901' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59637-59"}]': finished 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:12 vm00 bash[28005]: cluster 2026-03-10T07:32:11.721348+0000 mon.a (mon.0) 2328 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:12 vm00 bash[28005]: cluster 2026-03-10T07:32:11.721348+0000 mon.a (mon.0) 2328 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:12 vm00 bash[28005]: audit 2026-03-10T07:32:11.722041+0000 mon.a (mon.0) 2329 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59637-59"}]: dispatch 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:12 vm00 bash[28005]: audit 2026-03-10T07:32:11.722041+0000 mon.a (mon.0) 2329 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59637-59"}]: dispatch 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:12 vm00 bash[28005]: audit 2026-03-10T07:32:12.523529+0000 mon.b (mon.1) 342 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:12 vm00 bash[28005]: audit 2026-03-10T07:32:12.523529+0000 mon.b (mon.1) 342 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:12 vm00 bash[28005]: audit 2026-03-10T07:32:12.524529+0000 mon.b (mon.1) 343 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45"}]: dispatch 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:12 vm00 bash[28005]: audit 2026-03-10T07:32:12.524529+0000 mon.b (mon.1) 343 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45"}]: dispatch 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:12 vm00 bash[28005]: audit 2026-03-10T07:32:12.525648+0000 mon.a (mon.0) 2330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:12 vm00 bash[28005]: audit 2026-03-10T07:32:12.525648+0000 mon.a (mon.0) 2330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:12 vm00 bash[28005]: audit 2026-03-10T07:32:12.526453+0000 mon.a (mon.0) 2331 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45"}]: dispatch 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:12 vm00 bash[28005]: audit 2026-03-10T07:32:12.526453+0000 mon.a (mon.0) 2331 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45"}]: dispatch 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:12 vm00 bash[28005]: audit 2026-03-10T07:32:12.723070+0000 mon.a (mon.0) 2332 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59637-59"}]': finished 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:12 vm00 bash[28005]: audit 2026-03-10T07:32:12.723070+0000 mon.a (mon.0) 2332 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59637-59"}]': finished 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:12 vm00 bash[28005]: audit 2026-03-10T07:32:12.723136+0000 mon.a (mon.0) 2333 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45"}]': finished 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:12 vm00 bash[28005]: audit 2026-03-10T07:32:12.723136+0000 mon.a (mon.0) 2333 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45"}]': finished 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:12 vm00 bash[28005]: cluster 2026-03-10T07:32:12.726463+0000 mon.a (mon.0) 2334 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:12 vm00 bash[28005]: cluster 2026-03-10T07:32:12.726463+0000 mon.a (mon.0) 2334 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:12 vm00 bash[20701]: audit 2026-03-10T07:32:11.718569+0000 mon.a (mon.0) 2327 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59637-59"}]': finished 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:12 vm00 bash[20701]: audit 2026-03-10T07:32:11.718569+0000 mon.a (mon.0) 2327 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm00-59637-59"}]': finished 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:12 vm00 bash[20701]: cluster 2026-03-10T07:32:11.721348+0000 mon.a (mon.0) 2328 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:12 vm00 bash[20701]: cluster 2026-03-10T07:32:11.721348+0000 mon.a (mon.0) 2328 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:12 vm00 bash[20701]: audit 2026-03-10T07:32:11.722041+0000 mon.a (mon.0) 2329 : audit [INF] from='client.? 
192.168.123.100:0/1520915901' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59637-59"}]: dispatch 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:12 vm00 bash[20701]: audit 2026-03-10T07:32:11.722041+0000 mon.a (mon.0) 2329 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59637-59"}]: dispatch 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:12 vm00 bash[20701]: audit 2026-03-10T07:32:12.523529+0000 mon.b (mon.1) 342 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:12 vm00 bash[20701]: audit 2026-03-10T07:32:12.523529+0000 mon.b (mon.1) 342 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:12 vm00 bash[20701]: audit 2026-03-10T07:32:12.524529+0000 mon.b (mon.1) 343 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45"}]: dispatch 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:12 vm00 bash[20701]: audit 2026-03-10T07:32:12.524529+0000 mon.b (mon.1) 343 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45"}]: dispatch 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:12 vm00 bash[20701]: audit 2026-03-10T07:32:12.525648+0000 mon.a (mon.0) 2330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:12 vm00 bash[20701]: audit 2026-03-10T07:32:12.525648+0000 mon.a (mon.0) 2330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:12 vm00 bash[20701]: audit 2026-03-10T07:32:12.526453+0000 mon.a (mon.0) 2331 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45"}]: dispatch 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:12 vm00 bash[20701]: audit 2026-03-10T07:32:12.526453+0000 mon.a (mon.0) 2331 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45"}]: dispatch 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:12 vm00 bash[20701]: audit 2026-03-10T07:32:12.723070+0000 mon.a (mon.0) 2332 : audit [INF] from='client.? 
192.168.123.100:0/1520915901' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59637-59"}]': finished 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:12 vm00 bash[20701]: audit 2026-03-10T07:32:12.723070+0000 mon.a (mon.0) 2332 : audit [INF] from='client.? 192.168.123.100:0/1520915901' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm00-59637-59"}]': finished 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:12 vm00 bash[20701]: audit 2026-03-10T07:32:12.723136+0000 mon.a (mon.0) 2333 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45"}]': finished 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:12 vm00 bash[20701]: audit 2026-03-10T07:32:12.723136+0000 mon.a (mon.0) 2333 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-45"}]': finished 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:12 vm00 bash[20701]: cluster 2026-03-10T07:32:12.726463+0000 mon.a (mon.0) 2334 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in 2026-03-10T07:32:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:12 vm00 bash[20701]: cluster 2026-03-10T07:32:12.726463+0000 mon.a (mon.0) 2334 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in 2026-03-10T07:32:13.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:32:13 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:32:14.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:13 vm00 bash[28005]: cluster 2026-03-10T07:32:12.630356+0000 mgr.y (mgr.24407) 285 : cluster [DBG] pgmap v430: 292 pgs: 292 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-10T07:32:14.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:13 vm00 bash[28005]: cluster 2026-03-10T07:32:12.630356+0000 mgr.y (mgr.24407) 285 : cluster [DBG] pgmap v430: 292 pgs: 292 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-10T07:32:14.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:13 vm00 bash[28005]: audit 2026-03-10T07:32:13.137809+0000 mgr.y (mgr.24407) 286 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:14.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:13 vm00 bash[28005]: audit 2026-03-10T07:32:13.137809+0000 mgr.y (mgr.24407) 286 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:14.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:13 vm00 bash[20701]: cluster 2026-03-10T07:32:12.630356+0000 mgr.y (mgr.24407) 285 : cluster [DBG] pgmap v430: 292 pgs: 292 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-10T07:32:14.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:13 vm00 bash[20701]: cluster 2026-03-10T07:32:12.630356+0000 mgr.y (mgr.24407) 285 : cluster [DBG] pgmap v430: 292 pgs: 292 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-10T07:32:14.129 
2026-03-10T07:32:14.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:13 vm03 bash[23382]: cluster 2026-03-10T07:32:12.630356+0000 mgr.y (mgr.24407) 285 : cluster [DBG] pgmap v430: 292 pgs: 292 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s
2026-03-10T07:32:14.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:13 vm03 bash[23382]: audit 2026-03-10T07:32:13.137809+0000 mgr.y (mgr.24407) 286 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:32:15.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:14 vm00 bash[28005]: cluster 2026-03-10T07:32:13.782808+0000 mon.a (mon.0) 2335 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in
2026-03-10T07:32:15.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:14 vm00 bash[28005]: audit 2026-03-10T07:32:13.784571+0000 mon.b (mon.1) 344 : audit [INF] from='client.? 192.168.123.100:0/255029354' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-60","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:32:15.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:14 vm00 bash[28005]: audit 2026-03-10T07:32:13.788193+0000 mon.a (mon.0) 2336 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-60","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:32:15.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:14 vm00 bash[28005]: audit 2026-03-10T07:32:14.763629+0000 mon.a (mon.0) 2337 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-60","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:32:15.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:14 vm00 bash[28005]: cluster 2026-03-10T07:32:14.772659+0000 mon.a (mon.0) 2338 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in
2026-03-10T07:32:15.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:14 vm00 bash[28005]: audit 2026-03-10T07:32:14.792719+0000 mon.b (mon.1) 345 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-47","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:32:15.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:14 vm00 bash[28005]: audit 2026-03-10T07:32:14.795615+0000 mon.a (mon.0) 2339 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-47","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:32:15.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:14 vm00 bash[20701]: cluster 2026-03-10T07:32:13.782808+0000 mon.a (mon.0) 2335 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in
2026-03-10T07:32:15.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:14 vm00 bash[20701]: audit 2026-03-10T07:32:13.784571+0000 mon.b (mon.1) 344 : audit [INF] from='client.? 192.168.123.100:0/255029354' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-60","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:32:15.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:14 vm00 bash[20701]: audit 2026-03-10T07:32:13.788193+0000 mon.a (mon.0) 2336 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-60","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:32:15.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:14 vm00 bash[20701]: audit 2026-03-10T07:32:14.763629+0000 mon.a (mon.0) 2337 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-60","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:32:15.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:14 vm00 bash[20701]: cluster 2026-03-10T07:32:14.772659+0000 mon.a (mon.0) 2338 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in
2026-03-10T07:32:15.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:14 vm00 bash[20701]: audit 2026-03-10T07:32:14.792719+0000 mon.b (mon.1) 345 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-47","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:32:15.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:14 vm00 bash[20701]: audit 2026-03-10T07:32:14.795615+0000 mon.a (mon.0) 2339 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-47","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:32:15.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:14 vm03 bash[23382]: cluster 2026-03-10T07:32:13.782808+0000 mon.a (mon.0) 2335 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in
2026-03-10T07:32:15.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:14 vm03 bash[23382]: audit 2026-03-10T07:32:13.784571+0000 mon.b (mon.1) 344 : audit [INF] from='client.? 192.168.123.100:0/255029354' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-60","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:32:15.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:14 vm03 bash[23382]: audit 2026-03-10T07:32:13.788193+0000 mon.a (mon.0) 2336 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-60","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:32:15.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:14 vm03 bash[23382]: audit 2026-03-10T07:32:14.763629+0000 mon.a (mon.0) 2337 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm00-59637-60","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:32:15.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:14 vm03 bash[23382]: cluster 2026-03-10T07:32:14.772659+0000 mon.a (mon.0) 2338 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in
2026-03-10T07:32:15.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:14 vm03 bash[23382]: audit 2026-03-10T07:32:14.792719+0000 mon.b (mon.1) 345 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-47","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:32:15.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:14 vm03 bash[23382]: audit 2026-03-10T07:32:14.795615+0000 mon.a (mon.0) 2339 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-47","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:32:16.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:15 vm00 bash[28005]: cluster 2026-03-10T07:32:14.630748+0000 mgr.y (mgr.24407) 287 : cluster [DBG] pgmap v433: 292 pgs: 8 creating+peering, 24 unknown, 260 active+clean; 8.3 MiB data, 713 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:32:16.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:15 vm00 bash[28005]: audit 2026-03-10T07:32:15.767315+0000 mon.a (mon.0) 2340 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-47","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:32:16.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:15 vm00 bash[28005]: cluster 2026-03-10T07:32:15.775394+0000 mon.a (mon.0) 2341 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in
2026-03-10T07:32:16.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:15 vm00 bash[28005]: audit 2026-03-10T07:32:15.790266+0000 mon.b (mon.1) 346 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch
2026-03-10T07:32:16.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:15 vm00 bash[28005]: audit 2026-03-10T07:32:15.801955+0000 mon.b (mon.1) 347 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]: dispatch
2026-03-10T07:32:16.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:15 vm00 bash[28005]: audit 2026-03-10T07:32:15.802898+0000 mon.a (mon.0) 2342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch
2026-03-10T07:32:16.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:15 vm00 bash[28005]: audit 2026-03-10T07:32:15.804182+0000 mon.b (mon.1) 348 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59637-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:32:16.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:15 vm00 bash[28005]: audit 2026-03-10T07:32:15.805333+0000 mon.a (mon.0) 2343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]: dispatch
2026-03-10T07:32:16.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:15 vm00 bash[28005]: audit 2026-03-10T07:32:15.807203+0000 mon.a (mon.0) 2344 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59637-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:32:16.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:15 vm00 bash[20701]: cluster 2026-03-10T07:32:14.630748+0000 mgr.y (mgr.24407) 287 : cluster [DBG] pgmap v433: 292 pgs: 8 creating+peering, 24 unknown, 260 active+clean; 8.3 MiB data, 713 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:32:16.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:15 vm00 bash[20701]: audit 2026-03-10T07:32:15.767315+0000 mon.a (mon.0) 2340 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-47","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:32:16.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:15 vm00 bash[20701]: cluster 2026-03-10T07:32:15.775394+0000 mon.a (mon.0) 2341 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in
2026-03-10T07:32:16.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:15 vm00 bash[20701]: audit 2026-03-10T07:32:15.790266+0000 mon.b (mon.1) 346 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch
2026-03-10T07:32:16.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:15 vm00 bash[20701]: audit 2026-03-10T07:32:15.801955+0000 mon.b (mon.1) 347 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]: dispatch
2026-03-10T07:32:16.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:15 vm00 bash[20701]: audit 2026-03-10T07:32:15.802898+0000 mon.a (mon.0) 2342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch
2026-03-10T07:32:16.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:15 vm00 bash[20701]: audit 2026-03-10T07:32:15.804182+0000 mon.b (mon.1) 348 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59637-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:32:16.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:15 vm00 bash[20701]: audit 2026-03-10T07:32:15.805333+0000 mon.a (mon.0) 2343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]: dispatch
2026-03-10T07:32:16.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:15 vm00 bash[20701]: audit 2026-03-10T07:32:15.807203+0000 mon.a (mon.0) 2344 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59637-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:32:16.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:15 vm03 bash[23382]: cluster 2026-03-10T07:32:14.630748+0000 mgr.y (mgr.24407) 287 : cluster [DBG] pgmap v433: 292 pgs: 8 creating+peering, 24 unknown, 260 active+clean; 8.3 MiB data, 713 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:32:16.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:15 vm03 bash[23382]: audit 2026-03-10T07:32:15.767315+0000 mon.a (mon.0) 2340 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-47","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:32:16.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:15 vm03 bash[23382]: cluster 2026-03-10T07:32:15.775394+0000 mon.a (mon.0) 2341 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in
2026-03-10T07:32:16.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:15 vm03 bash[23382]: audit 2026-03-10T07:32:15.790266+0000 mon.b (mon.1) 346 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch
2026-03-10T07:32:16.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:15 vm03 bash[23382]: audit 2026-03-10T07:32:15.801955+0000 mon.b (mon.1) 347 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]: dispatch
2026-03-10T07:32:16.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:15 vm03 bash[23382]: audit 2026-03-10T07:32:15.802898+0000 mon.a (mon.0) 2342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch
2026-03-10T07:32:16.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:15 vm03 bash[23382]: audit 2026-03-10T07:32:15.804182+0000 mon.b (mon.1) 348 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59637-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:32:16.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:15 vm03 bash[23382]: audit 2026-03-10T07:32:15.805333+0000 mon.a (mon.0) 2343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]: dispatch
2026-03-10T07:32:16.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:15 vm03 bash[23382]: audit 2026-03-10T07:32:15.807203+0000 mon.a (mon.0) 2344 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59637-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:32:17.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:16 vm00 bash[28005]: audit 2026-03-10T07:32:15.867034+0000 mon.b (mon.1) 349 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:32:17.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:16 vm00 bash[28005]: audit 2026-03-10T07:32:15.869146+0000 mon.a (mon.0) 2345 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:32:17.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:16 vm00 bash[28005]: audit 2026-03-10T07:32:16.770647+0000 mon.a (mon.0) 2346 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59637-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:32:17.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:16 vm00 bash[28005]: audit 2026-03-10T07:32:16.770682+0000 mon.a (mon.0) 2347 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:32:17.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:16 vm00 bash[28005]: cluster 2026-03-10T07:32:16.773496+0000 mon.a (mon.0) 2348 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in
2026-03-10T07:32:17.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:16 vm00 bash[28005]: audit 2026-03-10T07:32:16.779133+0000 mon.b (mon.1) 350 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-47"}]: dispatch
2026-03-10T07:32:17.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:16 vm00 bash[28005]: audit 2026-03-10T07:32:16.779612+0000 mon.b (mon.1) 351 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59637-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch
2026-03-10T07:32:17.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:16 vm00 bash[28005]: audit 2026-03-10T07:32:16.781135+0000 mon.a (mon.0) 2349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-47"}]: dispatch
2026-03-10T07:32:17.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:16 vm00 bash[28005]: audit 2026-03-10T07:32:16.781591+0000 mon.a (mon.0) 2350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59637-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch
2026-03-10T07:32:17.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:16 vm00 bash[20701]: audit 2026-03-10T07:32:15.867034+0000 mon.b (mon.1) 349 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:32:17.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:16 vm00 bash[20701]: audit 2026-03-10T07:32:15.869146+0000 mon.a (mon.0) 2345 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:32:17.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:16 vm00 bash[20701]: audit 2026-03-10T07:32:16.770647+0000 mon.a (mon.0) 2346 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59637-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:32:17.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:16 vm00 bash[20701]: audit 2026-03-10T07:32:16.770682+0000 mon.a (mon.0) 2347 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:32:17.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:16 vm00 bash[20701]: cluster 2026-03-10T07:32:16.773496+0000 mon.a (mon.0) 2348 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in
2026-03-10T07:32:17.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:16 vm00 bash[20701]: audit 2026-03-10T07:32:16.779133+0000 mon.b (mon.1) 350 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-47"}]: dispatch
2026-03-10T07:32:17.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:16 vm00 bash[20701]: audit 2026-03-10T07:32:16.779612+0000 mon.b (mon.1) 351 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59637-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch
2026-03-10T07:32:17.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:16 vm00 bash[20701]: audit 2026-03-10T07:32:16.781135+0000 mon.a (mon.0) 2349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-47"}]: dispatch
2026-03-10T07:32:17.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:16 vm00 bash[20701]: audit 2026-03-10T07:32:16.781591+0000 mon.a (mon.0) 2350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59637-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch
2026-03-10T07:32:17.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:16 vm03 bash[23382]: audit 2026-03-10T07:32:15.867034+0000 mon.b (mon.1) 349 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:32:17.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:16 vm03 bash[23382]: audit 2026-03-10T07:32:15.869146+0000 mon.a (mon.0) 2345 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:32:17.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:16 vm03 bash[23382]: audit 2026-03-10T07:32:16.770647+0000 mon.a (mon.0) 2346 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59637-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:17.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:16 vm03 bash[23382]: audit 2026-03-10T07:32:16.770647+0000 mon.a (mon.0) 2346 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm00-59637-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:17.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:16 vm03 bash[23382]: audit 2026-03-10T07:32:16.770682+0000 mon.a (mon.0) 2347 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:32:17.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:16 vm03 bash[23382]: audit 2026-03-10T07:32:16.770682+0000 mon.a (mon.0) 2347 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:32:17.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:16 vm03 bash[23382]: cluster 2026-03-10T07:32:16.773496+0000 mon.a (mon.0) 2348 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-10T07:32:17.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:16 vm03 bash[23382]: cluster 2026-03-10T07:32:16.773496+0000 mon.a (mon.0) 2348 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-10T07:32:17.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:16 vm03 bash[23382]: audit 2026-03-10T07:32:16.779133+0000 mon.b (mon.1) 350 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:17.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:16 vm03 bash[23382]: audit 2026-03-10T07:32:16.779133+0000 mon.b (mon.1) 350 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:17.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:16 vm03 bash[23382]: audit 2026-03-10T07:32:16.779612+0000 mon.b (mon.1) 351 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59637-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:17.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:16 vm03 bash[23382]: audit 2026-03-10T07:32:16.779612+0000 mon.b (mon.1) 351 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59637-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:17.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:16 vm03 bash[23382]: audit 2026-03-10T07:32:16.781135+0000 mon.a (mon.0) 2349 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:17.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:16 vm03 bash[23382]: audit 2026-03-10T07:32:16.781135+0000 mon.a (mon.0) 2349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:17.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:16 vm03 bash[23382]: audit 2026-03-10T07:32:16.781591+0000 mon.a (mon.0) 2350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59637-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:17.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:16 vm03 bash[23382]: audit 2026-03-10T07:32:16.781591+0000 mon.a (mon.0) 2350 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59637-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:18.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:17 vm00 bash[28005]: cluster 2026-03-10T07:32:16.631048+0000 mgr.y (mgr.24407) 288 : cluster [DBG] pgmap v436: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:18.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:17 vm00 bash[28005]: cluster 2026-03-10T07:32:16.631048+0000 mgr.y (mgr.24407) 288 : cluster [DBG] pgmap v436: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:18.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:17 vm00 bash[28005]: cluster 2026-03-10T07:32:16.817685+0000 mon.a (mon.0) 2351 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:18.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:17 vm00 bash[28005]: cluster 2026-03-10T07:32:16.817685+0000 mon.a (mon.0) 2351 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:18.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:17 vm00 bash[28005]: audit 2026-03-10T07:32:17.774119+0000 mon.a (mon.0) 2352 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-47"}]': finished 2026-03-10T07:32:18.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:17 vm00 bash[28005]: audit 2026-03-10T07:32:17.774119+0000 mon.a (mon.0) 2352 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-47"}]': finished 2026-03-10T07:32:18.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:17 vm00 bash[28005]: audit 2026-03-10T07:32:17.780139+0000 mon.b (mon.1) 352 : audit [INF] from='client.? 
2026-03-10T07:32:18.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:17 vm00 bash[28005]: cluster 2026-03-10T07:32:17.784797+0000 mon.a (mon.0) 2353 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in
2026-03-10T07:32:18.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:17 vm00 bash[28005]: audit 2026-03-10T07:32:17.787618+0000 mon.a (mon.0) 2354 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-47", "mode": "writeback"}]: dispatch
2026-03-10T07:32:18.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:17 vm00 bash[20701]: cluster 2026-03-10T07:32:16.631048+0000 mgr.y (mgr.24407) 288 : cluster [DBG] pgmap v436: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:32:18.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:17 vm00 bash[20701]: cluster 2026-03-10T07:32:16.817685+0000 mon.a (mon.0) 2351 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:32:18.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:17 vm00 bash[20701]: audit 2026-03-10T07:32:17.774119+0000 mon.a (mon.0) 2352 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-47"}]': finished
2026-03-10T07:32:18.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:17 vm00 bash[20701]: audit 2026-03-10T07:32:17.780139+0000 mon.b (mon.1) 352 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-47", "mode": "writeback"}]: dispatch
2026-03-10T07:32:18.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:17 vm00 bash[20701]: cluster 2026-03-10T07:32:17.784797+0000 mon.a (mon.0) 2353 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in
2026-03-10T07:32:18.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:17 vm00 bash[20701]: audit 2026-03-10T07:32:17.787618+0000 mon.a (mon.0) 2354 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-47", "mode": "writeback"}]: dispatch
2026-03-10T07:32:18.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:17 vm03 bash[23382]: cluster 2026-03-10T07:32:16.631048+0000 mgr.y (mgr.24407) 288 : cluster [DBG] pgmap v436: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:32:18.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:17 vm03 bash[23382]: cluster 2026-03-10T07:32:16.817685+0000 mon.a (mon.0) 2351 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:32:18.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:17 vm03 bash[23382]: audit 2026-03-10T07:32:17.774119+0000 mon.a (mon.0) 2352 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-47"}]': finished
2026-03-10T07:32:18.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:17 vm03 bash[23382]: audit 2026-03-10T07:32:17.780139+0000 mon.b (mon.1) 352 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-47", "mode": "writeback"}]: dispatch
2026-03-10T07:32:18.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:17 vm03 bash[23382]: cluster 2026-03-10T07:32:17.784797+0000 mon.a (mon.0) 2353 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in
2026-03-10T07:32:18.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:17 vm03 bash[23382]: audit 2026-03-10T07:32:17.787618+0000 mon.a (mon.0) 2354 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-47", "mode": "writeback"}]: dispatch
2026-03-10T07:32:19.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:18 vm00 bash[28005]: cluster 2026-03-10T07:32:18.774053+0000 mon.a (mon.0) 2355 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:32:19.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:18 vm00 bash[28005]: audit 2026-03-10T07:32:18.777607+0000 mon.a (mon.0) 2356 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59637-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59637-61"}]': finished
2026-03-10T07:32:19.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:18 vm00 bash[28005]: audit 2026-03-10T07:32:18.777653+0000 mon.a (mon.0) 2357 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-47", "mode": "writeback"}]': finished
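With the overlay in place, the test switches the cache pool into writeback mode; until hit_sets are configured on that pool, the mon raises the CACHE_POOL_NO_HIT_SET warning seen above (it clears again further down once the hit_set parameters land). The CLI equivalent of the dispatched command, with an illustrative pool name:
    # writeback: clients read and write through the cache pool, and dirty
    # objects are flushed to the base pool in the background
    ceph osd tier cache-mode cachepool writeback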
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-47", "mode": "writeback"}]': finished 2026-03-10T07:32:19.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:18 vm00 bash[28005]: audit 2026-03-10T07:32:18.780014+0000 mon.b (mon.1) 353 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:32:19.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:18 vm00 bash[28005]: audit 2026-03-10T07:32:18.780014+0000 mon.b (mon.1) 353 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:32:19.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:18 vm00 bash[28005]: cluster 2026-03-10T07:32:18.784412+0000 mon.a (mon.0) 2358 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-10T07:32:19.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:18 vm00 bash[28005]: cluster 2026-03-10T07:32:18.784412+0000 mon.a (mon.0) 2358 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-10T07:32:19.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:18 vm00 bash[28005]: audit 2026-03-10T07:32:18.794007+0000 mon.a (mon.0) 2359 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:32:19.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:18 vm00 bash[28005]: audit 2026-03-10T07:32:18.794007+0000 mon.a (mon.0) 2359 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:32:19.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:18 vm00 bash[20701]: cluster 2026-03-10T07:32:18.774053+0000 mon.a (mon.0) 2355 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:32:19.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:18 vm00 bash[20701]: cluster 2026-03-10T07:32:18.774053+0000 mon.a (mon.0) 2355 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:32:19.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:18 vm00 bash[20701]: audit 2026-03-10T07:32:18.777607+0000 mon.a (mon.0) 2356 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59637-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59637-61"}]': finished 2026-03-10T07:32:19.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:18 vm00 bash[20701]: audit 2026-03-10T07:32:18.777607+0000 mon.a (mon.0) 2356 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59637-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59637-61"}]': finished 2026-03-10T07:32:19.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:18 vm00 bash[20701]: audit 2026-03-10T07:32:18.777653+0000 mon.a (mon.0) 2357 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-47", "mode": "writeback"}]': finished 2026-03-10T07:32:19.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:18 vm00 bash[20701]: audit 2026-03-10T07:32:18.777653+0000 mon.a (mon.0) 2357 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-47", "mode": "writeback"}]': finished 2026-03-10T07:32:19.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:18 vm00 bash[20701]: audit 2026-03-10T07:32:18.780014+0000 mon.b (mon.1) 353 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:32:19.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:18 vm00 bash[20701]: audit 2026-03-10T07:32:18.780014+0000 mon.b (mon.1) 353 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:32:19.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:18 vm00 bash[20701]: cluster 2026-03-10T07:32:18.784412+0000 mon.a (mon.0) 2358 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-10T07:32:19.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:18 vm00 bash[20701]: cluster 2026-03-10T07:32:18.784412+0000 mon.a (mon.0) 2358 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-10T07:32:19.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:18 vm00 bash[20701]: audit 2026-03-10T07:32:18.794007+0000 mon.a (mon.0) 2359 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:32:19.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:18 vm00 bash[20701]: audit 2026-03-10T07:32:18.794007+0000 mon.a (mon.0) 2359 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:32:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:18 vm03 bash[23382]: cluster 2026-03-10T07:32:18.774053+0000 mon.a (mon.0) 2355 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:32:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:18 vm03 bash[23382]: cluster 2026-03-10T07:32:18.774053+0000 mon.a (mon.0) 2355 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:32:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:18 vm03 bash[23382]: audit 2026-03-10T07:32:18.777607+0000 mon.a (mon.0) 2356 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59637-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59637-61"}]': finished 2026-03-10T07:32:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:18 vm03 bash[23382]: audit 2026-03-10T07:32:18.777607+0000 mon.a (mon.0) 2356 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm00-59637-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm00-59637-61"}]': finished 2026-03-10T07:32:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:18 vm03 bash[23382]: audit 2026-03-10T07:32:18.777653+0000 mon.a (mon.0) 2357 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-47", "mode": "writeback"}]': finished 2026-03-10T07:32:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:18 vm03 bash[23382]: audit 2026-03-10T07:32:18.777653+0000 mon.a (mon.0) 2357 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-47", "mode": "writeback"}]': finished 2026-03-10T07:32:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:18 vm03 bash[23382]: audit 2026-03-10T07:32:18.780014+0000 mon.b (mon.1) 353 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:32:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:18 vm03 bash[23382]: audit 2026-03-10T07:32:18.780014+0000 mon.b (mon.1) 353 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:32:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:18 vm03 bash[23382]: cluster 2026-03-10T07:32:18.784412+0000 mon.a (mon.0) 2358 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-10T07:32:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:18 vm03 bash[23382]: cluster 2026-03-10T07:32:18.784412+0000 mon.a (mon.0) 2358 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-10T07:32:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:18 vm03 bash[23382]: audit 2026-03-10T07:32:18.794007+0000 mon.a (mon.0) 2359 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:32:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:18 vm03 bash[23382]: audit 2026-03-10T07:32:18.794007+0000 mon.a (mon.0) 2359 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:32:20.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:19 vm00 bash[28005]: cluster 2026-03-10T07:32:18.631340+0000 mgr.y (mgr.24407) 289 : cluster [DBG] pgmap v439: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:20.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:19 vm00 bash[28005]: cluster 2026-03-10T07:32:18.631340+0000 mgr.y (mgr.24407) 289 : cluster [DBG] pgmap v439: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:20.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:19 vm00 bash[28005]: audit 2026-03-10T07:32:19.782023+0000 mon.a (mon.0) 2360 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:32:20.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:19 vm00 bash[28005]: audit 2026-03-10T07:32:19.782023+0000 mon.a (mon.0) 2360 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:32:20.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:19 vm00 bash[28005]: audit 2026-03-10T07:32:19.784067+0000 mon.b (mon.1) 354 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:32:20.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:19 vm00 bash[28005]: audit 2026-03-10T07:32:19.784067+0000 mon.b (mon.1) 354 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:32:20.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:19 vm00 bash[28005]: cluster 2026-03-10T07:32:19.789832+0000 mon.a (mon.0) 2361 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in 2026-03-10T07:32:20.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:19 vm00 bash[28005]: cluster 2026-03-10T07:32:19.789832+0000 mon.a (mon.0) 2361 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in 2026-03-10T07:32:20.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:19 vm00 bash[28005]: audit 2026-03-10T07:32:19.798836+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:32:20.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:19 vm00 bash[28005]: audit 2026-03-10T07:32:19.798836+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:32:20.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:19 vm00 bash[20701]: cluster 2026-03-10T07:32:18.631340+0000 mgr.y (mgr.24407) 289 : cluster [DBG] pgmap v439: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:20.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:19 vm00 bash[20701]: cluster 2026-03-10T07:32:18.631340+0000 mgr.y (mgr.24407) 289 : cluster [DBG] pgmap v439: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:20.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:19 vm00 bash[20701]: audit 2026-03-10T07:32:19.782023+0000 mon.a (mon.0) 2360 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:32:20.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:19 vm00 bash[20701]: audit 2026-03-10T07:32:19.782023+0000 mon.a (mon.0) 2360 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:32:20.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:19 vm00 bash[20701]: audit 2026-03-10T07:32:19.784067+0000 mon.b (mon.1) 354 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:32:20.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:19 vm00 bash[20701]: audit 2026-03-10T07:32:19.784067+0000 mon.b (mon.1) 354 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:32:20.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:19 vm00 bash[20701]: cluster 2026-03-10T07:32:19.789832+0000 mon.a (mon.0) 2361 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in 2026-03-10T07:32:20.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:19 vm00 bash[20701]: cluster 2026-03-10T07:32:19.789832+0000 mon.a (mon.0) 2361 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in 2026-03-10T07:32:20.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:19 vm00 bash[20701]: audit 2026-03-10T07:32:19.798836+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:32:20.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:19 vm00 bash[20701]: audit 2026-03-10T07:32:19.798836+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:32:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:19 vm03 bash[23382]: cluster 2026-03-10T07:32:18.631340+0000 mgr.y (mgr.24407) 289 : cluster [DBG] pgmap v439: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:19 vm03 bash[23382]: cluster 2026-03-10T07:32:18.631340+0000 mgr.y (mgr.24407) 289 : cluster [DBG] pgmap v439: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:19 vm03 bash[23382]: audit 2026-03-10T07:32:19.782023+0000 mon.a (mon.0) 2360 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:32:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:19 vm03 bash[23382]: audit 2026-03-10T07:32:19.782023+0000 mon.a (mon.0) 2360 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:32:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:19 vm03 bash[23382]: audit 2026-03-10T07:32:19.784067+0000 mon.b (mon.1) 354 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:32:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:19 vm03 bash[23382]: audit 2026-03-10T07:32:19.784067+0000 mon.b (mon.1) 354 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:32:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:19 vm03 bash[23382]: cluster 2026-03-10T07:32:19.789832+0000 mon.a (mon.0) 2361 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in 2026-03-10T07:32:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:19 vm03 bash[23382]: cluster 2026-03-10T07:32:19.789832+0000 mon.a (mon.0) 2361 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in 2026-03-10T07:32:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:19 vm03 bash[23382]: audit 2026-03-10T07:32:19.798836+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:32:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:19 vm03 bash[23382]: audit 2026-03-10T07:32:19.798836+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:32:21.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:32:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:32:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:21 vm00 bash[28005]: cluster 2026-03-10T07:32:20.631640+0000 mgr.y (mgr.24407) 290 : cluster [DBG] pgmap v442: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:21 vm00 bash[28005]: cluster 2026-03-10T07:32:20.631640+0000 mgr.y (mgr.24407) 290 : cluster [DBG] pgmap v442: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:21 vm00 bash[28005]: audit 2026-03-10T07:32:20.785514+0000 mon.a (mon.0) 2363 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:21 vm00 bash[28005]: audit 2026-03-10T07:32:20.785514+0000 mon.a (mon.0) 2363 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:21 vm00 bash[28005]: audit 2026-03-10T07:32:20.790305+0000 mon.b (mon.1) 355 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:21 vm00 bash[28005]: audit 2026-03-10T07:32:20.790305+0000 mon.b (mon.1) 355 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:21 vm00 bash[28005]: audit 2026-03-10T07:32:20.794399+0000 mon.b (mon.1) 356 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:21 vm00 bash[28005]: audit 2026-03-10T07:32:20.794399+0000 mon.b (mon.1) 356 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:21 vm00 bash[28005]: cluster 2026-03-10T07:32:20.794901+0000 mon.a (mon.0) 2364 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:21 vm00 bash[28005]: cluster 2026-03-10T07:32:20.794901+0000 mon.a (mon.0) 2364 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:21 vm00 bash[28005]: audit 2026-03-10T07:32:20.795887+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:21 vm00 bash[28005]: audit 2026-03-10T07:32:20.795887+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:21 vm00 bash[28005]: audit 2026-03-10T07:32:20.796277+0000 mon.a (mon.0) 2366 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:21 vm00 bash[28005]: audit 2026-03-10T07:32:20.796277+0000 mon.a (mon.0) 2366 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:21 vm00 bash[20701]: cluster 2026-03-10T07:32:20.631640+0000 mgr.y (mgr.24407) 290 : cluster [DBG] pgmap v442: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:21 vm00 bash[20701]: cluster 2026-03-10T07:32:20.631640+0000 mgr.y (mgr.24407) 290 : cluster [DBG] pgmap v442: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:21 vm00 bash[20701]: audit 2026-03-10T07:32:20.785514+0000 mon.a (mon.0) 2363 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:21 vm00 bash[20701]: audit 2026-03-10T07:32:20.785514+0000 mon.a (mon.0) 2363 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:21 vm00 bash[20701]: audit 2026-03-10T07:32:20.790305+0000 mon.b (mon.1) 355 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:21 vm00 bash[20701]: audit 2026-03-10T07:32:20.790305+0000 mon.b (mon.1) 355 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:21 vm00 bash[20701]: audit 2026-03-10T07:32:20.794399+0000 mon.b (mon.1) 356 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:21 vm00 bash[20701]: audit 2026-03-10T07:32:20.794399+0000 mon.b (mon.1) 356 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:21 vm00 bash[20701]: cluster 2026-03-10T07:32:20.794901+0000 mon.a (mon.0) 2364 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:21 vm00 bash[20701]: cluster 2026-03-10T07:32:20.794901+0000 mon.a (mon.0) 2364 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:21 vm00 bash[20701]: audit 2026-03-10T07:32:20.795887+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:21 vm00 bash[20701]: audit 2026-03-10T07:32:20.795887+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:21 vm00 bash[20701]: audit 2026-03-10T07:32:20.796277+0000 mon.a (mon.0) 2366 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:22.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:21 vm00 bash[20701]: audit 2026-03-10T07:32:20.796277+0000 mon.a (mon.0) 2366 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:22.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:21 vm03 bash[23382]: cluster 2026-03-10T07:32:20.631640+0000 mgr.y (mgr.24407) 290 : cluster [DBG] pgmap v442: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T07:32:22.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:21 vm03 bash[23382]: cluster 2026-03-10T07:32:20.631640+0000 mgr.y (mgr.24407) 290 : cluster [DBG] pgmap v442: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T07:32:22.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:21 vm03 bash[23382]: audit 2026-03-10T07:32:20.785514+0000 mon.a (mon.0) 2363 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:32:22.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:21 vm03 bash[23382]: audit 2026-03-10T07:32:20.785514+0000 mon.a (mon.0) 2363 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:32:22.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:21 vm03 bash[23382]: audit 2026-03-10T07:32:20.790305+0000 mon.b (mon.1) 355 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:32:22.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:21 vm03 bash[23382]: audit 2026-03-10T07:32:20.790305+0000 mon.b (mon.1) 355 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:32:22.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:21 vm03 bash[23382]: audit 2026-03-10T07:32:20.794399+0000 mon.b (mon.1) 356 : audit [INF] from='client.? 
192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:22.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:21 vm03 bash[23382]: audit 2026-03-10T07:32:20.794399+0000 mon.b (mon.1) 356 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:22.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:21 vm03 bash[23382]: cluster 2026-03-10T07:32:20.794901+0000 mon.a (mon.0) 2364 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-10T07:32:22.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:21 vm03 bash[23382]: cluster 2026-03-10T07:32:20.794901+0000 mon.a (mon.0) 2364 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-10T07:32:22.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:21 vm03 bash[23382]: audit 2026-03-10T07:32:20.795887+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:32:22.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:21 vm03 bash[23382]: audit 2026-03-10T07:32:20.795887+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:32:22.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:21 vm03 bash[23382]: audit 2026-03-10T07:32:20.796277+0000 mon.a (mon.0) 2366 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:22.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:21 vm03 bash[23382]: audit 2026-03-10T07:32:20.796277+0000 mon.a (mon.0) 2366 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:22 vm00 bash[28005]: cluster 2026-03-10T07:32:21.785503+0000 mon.a (mon.0) 2367 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:22 vm00 bash[28005]: cluster 2026-03-10T07:32:21.785503+0000 mon.a (mon.0) 2367 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:22 vm00 bash[28005]: audit 2026-03-10T07:32:21.789435+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:22 vm00 bash[28005]: audit 2026-03-10T07:32:21.789435+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:22 vm00 bash[28005]: audit 2026-03-10T07:32:21.789474+0000 mon.a (mon.0) 2369 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]': finished 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:22 vm00 bash[28005]: audit 2026-03-10T07:32:21.789474+0000 mon.a (mon.0) 2369 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]': finished 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:22 vm00 bash[28005]: cluster 2026-03-10T07:32:21.793140+0000 mon.a (mon.0) 2370 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:22 vm00 bash[28005]: cluster 2026-03-10T07:32:21.793140+0000 mon.a (mon.0) 2370 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:22 vm00 bash[28005]: audit 2026-03-10T07:32:21.796472+0000 mon.b (mon.1) 357 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:22 vm00 bash[28005]: audit 2026-03-10T07:32:21.796472+0000 mon.b (mon.1) 357 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:22 vm00 bash[28005]: audit 2026-03-10T07:32:21.796839+0000 mon.b (mon.1) 358 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:22 vm00 bash[28005]: audit 2026-03-10T07:32:21.796839+0000 mon.b (mon.1) 358 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:22 vm00 bash[28005]: audit 2026-03-10T07:32:21.804364+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:22 vm00 bash[28005]: audit 2026-03-10T07:32:21.804364+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:22 vm00 bash[28005]: audit 2026-03-10T07:32:21.804534+0000 mon.a (mon.0) 2372 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:22 vm00 bash[28005]: audit 2026-03-10T07:32:21.804534+0000 mon.a (mon.0) 2372 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:22 vm00 bash[20701]: cluster 2026-03-10T07:32:21.785503+0000 mon.a (mon.0) 2367 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:22 vm00 bash[20701]: cluster 2026-03-10T07:32:21.785503+0000 mon.a (mon.0) 2367 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:22 vm00 bash[20701]: audit 2026-03-10T07:32:21.789435+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:22 vm00 bash[20701]: audit 2026-03-10T07:32:21.789435+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:22 vm00 bash[20701]: audit 2026-03-10T07:32:21.789474+0000 mon.a (mon.0) 2369 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]': finished 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:22 vm00 bash[20701]: audit 2026-03-10T07:32:21.789474+0000 mon.a (mon.0) 2369 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]': finished 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:22 vm00 bash[20701]: cluster 2026-03-10T07:32:21.793140+0000 mon.a (mon.0) 2370 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:22 vm00 bash[20701]: cluster 2026-03-10T07:32:21.793140+0000 mon.a (mon.0) 2370 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:22 vm00 bash[20701]: audit 2026-03-10T07:32:21.796472+0000 mon.b (mon.1) 357 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:22 vm00 bash[20701]: audit 2026-03-10T07:32:21.796472+0000 mon.b (mon.1) 357 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:22 vm00 bash[20701]: audit 2026-03-10T07:32:21.796839+0000 mon.b (mon.1) 358 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:22 vm00 bash[20701]: audit 2026-03-10T07:32:21.796839+0000 mon.b (mon.1) 358 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:22 vm00 bash[20701]: audit 2026-03-10T07:32:21.804364+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:22 vm00 bash[20701]: audit 2026-03-10T07:32:21.804364+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:22 vm00 bash[20701]: audit 2026-03-10T07:32:21.804534+0000 mon.a (mon.0) 2372 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:32:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:22 vm00 bash[20701]: audit 2026-03-10T07:32:21.804534+0000 mon.a (mon.0) 2372 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:32:23.131 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:22 vm03 bash[23382]: cluster 2026-03-10T07:32:21.785503+0000 mon.a (mon.0) 2367 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:32:23.131 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:22 vm03 bash[23382]: cluster 2026-03-10T07:32:21.785503+0000 mon.a (mon.0) 2367 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:32:23.132 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:22 vm03 bash[23382]: audit 2026-03-10T07:32:21.789435+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:32:23.132 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:22 vm03 bash[23382]: audit 2026-03-10T07:32:21.789435+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:32:23.132 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:22 vm03 bash[23382]: audit 2026-03-10T07:32:21.789474+0000 mon.a (mon.0) 2369 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]': finished 2026-03-10T07:32:23.132 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:22 vm03 bash[23382]: audit 2026-03-10T07:32:21.789474+0000 mon.a (mon.0) 2369 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm00-59637-61"}]': finished 2026-03-10T07:32:23.132 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:22 vm03 bash[23382]: cluster 2026-03-10T07:32:21.793140+0000 mon.a (mon.0) 2370 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-10T07:32:23.132 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:22 vm03 bash[23382]: cluster 2026-03-10T07:32:21.793140+0000 mon.a (mon.0) 2370 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-10T07:32:23.132 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:22 vm03 bash[23382]: audit 2026-03-10T07:32:21.796472+0000 mon.b (mon.1) 357 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:23.132 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:22 vm03 bash[23382]: audit 2026-03-10T07:32:21.796472+0000 mon.b (mon.1) 357 : audit [INF] from='client.? 192.168.123.100:0/3869758459' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:23.132 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:22 vm03 bash[23382]: audit 2026-03-10T07:32:21.796839+0000 mon.b (mon.1) 358 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:32:23.132 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:22 vm03 bash[23382]: audit 2026-03-10T07:32:21.796839+0000 mon.b (mon.1) 358 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:32:23.132 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:22 vm03 bash[23382]: audit 2026-03-10T07:32:21.804364+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:23.132 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:22 vm03 bash[23382]: audit 2026-03-10T07:32:21.804364+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]: dispatch 2026-03-10T07:32:23.132 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:22 vm03 bash[23382]: audit 2026-03-10T07:32:21.804534+0000 mon.a (mon.0) 2372 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:32:23.132 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:22 vm03 bash[23382]: audit 2026-03-10T07:32:21.804534+0000 mon.a (mon.0) 2372 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:32:23.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:32:23 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: cluster 2026-03-10T07:32:22.631968+0000 mgr.y (mgr.24407) 291 : cluster [DBG] pgmap v445: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: cluster 2026-03-10T07:32:22.631968+0000 mgr.y (mgr.24407) 291 : cluster [DBG] pgmap v445: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:22.794542+0000 mon.a (mon.0) 2373 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]': finished 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:22.794542+0000 mon.a (mon.0) 2373 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]': finished 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:22.794593+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:22.794593+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:22.799399+0000 mon.b (mon.1) 359 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:22.799399+0000 mon.b (mon.1) 359 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: cluster 2026-03-10T07:32:22.802172+0000 mon.a (mon.0) 2375 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: cluster 2026-03-10T07:32:22.802172+0000 mon.a (mon.0) 2375 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: cluster 2026-03-10T07:32:22.809140+0000 mon.a (mon.0) 2376 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: cluster 2026-03-10T07:32:22.809140+0000 mon.a (mon.0) 2376 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:22.812142+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:22.812142+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:22.815333+0000 mon.a (mon.0) 2378 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:22.815333+0000 mon.a (mon.0) 2378 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:22.821180+0000 mon.a (mon.0) 2379 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:22.821180+0000 mon.a (mon.0) 2379 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:22.821394+0000 mon.a (mon.0) 2380 : audit [INF] from='client.? 
192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59637-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:22.821394+0000 mon.a (mon.0) 2380 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59637-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:23.138890+0000 mgr.y (mgr.24407) 292 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:23.138890+0000 mgr.y (mgr.24407) 292 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:23.798343+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:23.798343+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:23.798438+0000 mon.a (mon.0) 2382 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59637-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:23.798438+0000 mon.a (mon.0) 2382 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59637-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:23.802246+0000 mon.b (mon.1) 360 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:23.802246+0000 mon.b (mon.1) 360 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: cluster 2026-03-10T07:32:23.804933+0000 mon.a (mon.0) 2383 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: cluster 2026-03-10T07:32:23.804933+0000 mon.a (mon.0) 2383 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:23.806626+0000 mon.a (mon.0) 2384 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59637-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:24.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:23.806626+0000 mon.a (mon.0) 2384 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59637-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:23.806814+0000 mon.a (mon.0) 2385 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:23 vm00 bash[28005]: audit 2026-03-10T07:32:23.806814+0000 mon.a (mon.0) 2385 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: cluster 2026-03-10T07:32:22.631968+0000 mgr.y (mgr.24407) 291 : cluster [DBG] pgmap v445: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: cluster 2026-03-10T07:32:22.631968+0000 mgr.y (mgr.24407) 291 : cluster [DBG] pgmap v445: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:22.794542+0000 mon.a (mon.0) 2373 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]': finished 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:22.794542+0000 mon.a (mon.0) 2373 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]': finished 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:22.794593+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:22.794593+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:22.799399+0000 mon.b (mon.1) 359 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:22.799399+0000 mon.b (mon.1) 359 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: cluster 2026-03-10T07:32:22.802172+0000 mon.a (mon.0) 2375 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: cluster 2026-03-10T07:32:22.802172+0000 mon.a (mon.0) 2375 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: cluster 2026-03-10T07:32:22.809140+0000 mon.a (mon.0) 2376 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: cluster 2026-03-10T07:32:22.809140+0000 mon.a (mon.0) 2376 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:22.812142+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:22.812142+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:22.815333+0000 mon.a (mon.0) 2378 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:22.815333+0000 mon.a (mon.0) 2378 : audit [INF] from='client.? 
192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:22.821180+0000 mon.a (mon.0) 2379 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:22.821180+0000 mon.a (mon.0) 2379 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:22.821394+0000 mon.a (mon.0) 2380 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59637-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:22.821394+0000 mon.a (mon.0) 2380 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59637-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:23.138890+0000 mgr.y (mgr.24407) 292 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:23.138890+0000 mgr.y (mgr.24407) 292 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:23.798343+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:23.798343+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:23.798438+0000 mon.a (mon.0) 2382 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59637-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:23.798438+0000 mon.a (mon.0) 2382 : audit [INF] from='client.? 
192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59637-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:23.802246+0000 mon.b (mon.1) 360 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:23.802246+0000 mon.b (mon.1) 360 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: cluster 2026-03-10T07:32:23.804933+0000 mon.a (mon.0) 2383 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: cluster 2026-03-10T07:32:23.804933+0000 mon.a (mon.0) 2383 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:23.806626+0000 mon.a (mon.0) 2384 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59637-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:23.806626+0000 mon.a (mon.0) 2384 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59637-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:23.806814+0000 mon.a (mon.0) 2385 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T07:32:24.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:23 vm00 bash[20701]: audit 2026-03-10T07:32:23.806814+0000 mon.a (mon.0) 2385 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T07:32:24.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: cluster 2026-03-10T07:32:22.631968+0000 mgr.y (mgr.24407) 291 : cluster [DBG] pgmap v445: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T07:32:24.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: cluster 2026-03-10T07:32:22.631968+0000 mgr.y (mgr.24407) 291 : cluster [DBG] pgmap v445: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T07:32:24.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:22.794542+0000 mon.a (mon.0) 2373 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]': finished 2026-03-10T07:32:24.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:22.794542+0000 mon.a (mon.0) 2373 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm00-59637-61"}]': finished 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:22.794593+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:22.794593+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:22.799399+0000 mon.b (mon.1) 359 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:22.799399+0000 mon.b (mon.1) 359 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: cluster 2026-03-10T07:32:22.802172+0000 mon.a (mon.0) 2375 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: cluster 2026-03-10T07:32:22.802172+0000 mon.a (mon.0) 2375 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: cluster 2026-03-10T07:32:22.809140+0000 mon.a (mon.0) 2376 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: cluster 2026-03-10T07:32:22.809140+0000 mon.a (mon.0) 2376 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:22.812142+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:22.812142+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:22.815333+0000 mon.a (mon.0) 2378 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:22.815333+0000 mon.a (mon.0) 2378 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:22.821180+0000 mon.a (mon.0) 2379 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:22.821180+0000 mon.a (mon.0) 2379 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:22.821394+0000 mon.a (mon.0) 2380 : audit [INF] from='client.? 
192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59637-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:22.821394+0000 mon.a (mon.0) 2380 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59637-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:23.138890+0000 mgr.y (mgr.24407) 292 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:23.138890+0000 mgr.y (mgr.24407) 292 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:23.798343+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:23.798343+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:23.798438+0000 mon.a (mon.0) 2382 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59637-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:23.798438+0000 mon.a (mon.0) 2382 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm00-59637-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:23.802246+0000 mon.b (mon.1) 360 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:23.802246+0000 mon.b (mon.1) 360 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: cluster 2026-03-10T07:32:23.804933+0000 mon.a (mon.0) 2383 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: cluster 2026-03-10T07:32:23.804933+0000 mon.a (mon.0) 2383 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:23.806626+0000 mon.a (mon.0) 2384 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59637-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:23.806626+0000 mon.a (mon.0) 2384 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59637-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:23.806814+0000 mon.a (mon.0) 2385 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T07:32:24.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:23 vm03 bash[23382]: audit 2026-03-10T07:32:23.806814+0000 mon.a (mon.0) 2385 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T07:32:25.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:24 vm00 bash[28005]: audit 2026-03-10T07:32:24.437847+0000 mon.c (mon.2) 275 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:32:25.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:24 vm00 bash[28005]: audit 2026-03-10T07:32:24.437847+0000 mon.c (mon.2) 275 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:32:25.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:24 vm00 bash[28005]: audit 2026-03-10T07:32:24.801727+0000 mon.a (mon.0) 2386 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-10T07:32:25.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:24 vm00 bash[28005]: audit 2026-03-10T07:32:24.801727+0000 mon.a (mon.0) 2386 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-10T07:32:25.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:24 vm00 bash[28005]: cluster 2026-03-10T07:32:24.812158+0000 mon.a (mon.0) 2387 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-10T07:32:25.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:24 vm00 bash[28005]: cluster 2026-03-10T07:32:24.812158+0000 mon.a (mon.0) 2387 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-10T07:32:25.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:24 vm00 bash[20701]: audit 2026-03-10T07:32:24.437847+0000 mon.c (mon.2) 275 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:32:25.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:24 vm00 bash[20701]: audit 2026-03-10T07:32:24.437847+0000 mon.c (mon.2) 275 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:32:25.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:24 vm00 bash[20701]: audit 2026-03-10T07:32:24.801727+0000 mon.a (mon.0) 2386 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-10T07:32:25.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:24 vm00 bash[20701]: audit 2026-03-10T07:32:24.801727+0000 mon.a (mon.0) 2386 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-10T07:32:25.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:24 vm00 bash[20701]: cluster 2026-03-10T07:32:24.812158+0000 mon.a (mon.0) 2387 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-10T07:32:25.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:24 vm00 bash[20701]: cluster 2026-03-10T07:32:24.812158+0000 mon.a (mon.0) 2387 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-10T07:32:25.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:24 vm03 bash[23382]: audit 2026-03-10T07:32:24.437847+0000 mon.c (mon.2) 275 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:32:25.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:24 vm03 bash[23382]: audit 2026-03-10T07:32:24.437847+0000 mon.c (mon.2) 275 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:32:25.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:24 vm03 bash[23382]: audit 2026-03-10T07:32:24.801727+0000 mon.a (mon.0) 2386 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-10T07:32:25.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:24 vm03 bash[23382]: audit 2026-03-10T07:32:24.801727+0000 mon.a (mon.0) 2386 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-47","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-10T07:32:25.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:24 vm03 bash[23382]: cluster 2026-03-10T07:32:24.812158+0000 mon.a (mon.0) 2387 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-10T07:32:25.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:24 vm03 bash[23382]: cluster 2026-03-10T07:32:24.812158+0000 mon.a (mon.0) 2387 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:25 vm00 bash[28005]: cluster 2026-03-10T07:32:24.632486+0000 mgr.y (mgr.24407) 293 : cluster [DBG] pgmap v448: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:25 vm00 bash[28005]: cluster 2026-03-10T07:32:24.632486+0000 mgr.y (mgr.24407) 293 : cluster [DBG] pgmap v448: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:25 vm00 bash[28005]: audit 2026-03-10T07:32:24.854264+0000 mon.b (mon.1) 361 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:25 vm00 bash[28005]: audit 2026-03-10T07:32:24.854264+0000 mon.b (mon.1) 361 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:25 vm00 bash[28005]: audit 2026-03-10T07:32:24.856357+0000 mon.a (mon.0) 2388 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:25 vm00 bash[28005]: audit 2026-03-10T07:32:24.856357+0000 mon.a (mon.0) 2388 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:25 vm00 bash[28005]: audit 2026-03-10T07:32:25.805236+0000 mon.a (mon.0) 2389 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59637-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59637-62"}]': finished 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:25 vm00 bash[28005]: audit 2026-03-10T07:32:25.805236+0000 mon.a (mon.0) 2389 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59637-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59637-62"}]': finished 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:25 vm00 bash[28005]: audit 2026-03-10T07:32:25.805323+0000 mon.a (mon.0) 2390 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:25 vm00 bash[28005]: audit 2026-03-10T07:32:25.805323+0000 mon.a (mon.0) 2390 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:25 vm00 bash[28005]: audit 2026-03-10T07:32:25.810823+0000 mon.b (mon.1) 362 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:25 vm00 bash[28005]: audit 2026-03-10T07:32:25.810823+0000 mon.b (mon.1) 362 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:25 vm00 bash[28005]: cluster 2026-03-10T07:32:25.814951+0000 mon.a (mon.0) 2391 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:25 vm00 bash[28005]: cluster 2026-03-10T07:32:25.814951+0000 mon.a (mon.0) 2391 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:25 vm00 bash[28005]: audit 2026-03-10T07:32:25.826483+0000 mon.a (mon.0) 2392 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:25 vm00 bash[28005]: audit 2026-03-10T07:32:25.826483+0000 mon.a (mon.0) 2392 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:25 vm00 bash[20701]: cluster 2026-03-10T07:32:24.632486+0000 mgr.y (mgr.24407) 293 : cluster [DBG] pgmap v448: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:25 vm00 bash[20701]: cluster 2026-03-10T07:32:24.632486+0000 mgr.y (mgr.24407) 293 : cluster [DBG] pgmap v448: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:25 vm00 bash[20701]: audit 2026-03-10T07:32:24.854264+0000 mon.b (mon.1) 361 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:25 vm00 bash[20701]: audit 2026-03-10T07:32:24.854264+0000 mon.b (mon.1) 361 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:25 vm00 bash[20701]: audit 2026-03-10T07:32:24.856357+0000 mon.a (mon.0) 2388 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:25 vm00 bash[20701]: audit 2026-03-10T07:32:24.856357+0000 mon.a (mon.0) 2388 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:25 vm00 bash[20701]: audit 2026-03-10T07:32:25.805236+0000 mon.a (mon.0) 2389 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59637-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59637-62"}]': finished 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:25 vm00 bash[20701]: audit 2026-03-10T07:32:25.805236+0000 mon.a (mon.0) 2389 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59637-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59637-62"}]': finished 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:25 vm00 bash[20701]: audit 2026-03-10T07:32:25.805323+0000 mon.a (mon.0) 2390 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:25 vm00 bash[20701]: audit 2026-03-10T07:32:25.805323+0000 mon.a (mon.0) 2390 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:25 vm00 bash[20701]: audit 2026-03-10T07:32:25.810823+0000 mon.b (mon.1) 362 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:25 vm00 bash[20701]: audit 2026-03-10T07:32:25.810823+0000 mon.b (mon.1) 362 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:25 vm00 bash[20701]: cluster 2026-03-10T07:32:25.814951+0000 mon.a (mon.0) 2391 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:25 vm00 bash[20701]: cluster 2026-03-10T07:32:25.814951+0000 mon.a (mon.0) 2391 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:25 vm00 bash[20701]: audit 2026-03-10T07:32:25.826483+0000 mon.a (mon.0) 2392 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:26.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:25 vm00 bash[20701]: audit 2026-03-10T07:32:25.826483+0000 mon.a (mon.0) 2392 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:26.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:25 vm03 bash[23382]: cluster 2026-03-10T07:32:24.632486+0000 mgr.y (mgr.24407) 293 : cluster [DBG] pgmap v448: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:26.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:25 vm03 bash[23382]: cluster 2026-03-10T07:32:24.632486+0000 mgr.y (mgr.24407) 293 : cluster [DBG] pgmap v448: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:26.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:25 vm03 bash[23382]: audit 2026-03-10T07:32:24.854264+0000 mon.b (mon.1) 361 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:26.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:25 vm03 bash[23382]: audit 2026-03-10T07:32:24.854264+0000 mon.b (mon.1) 361 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:26.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:25 vm03 bash[23382]: audit 2026-03-10T07:32:24.856357+0000 mon.a (mon.0) 2388 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:26.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:25 vm03 bash[23382]: audit 2026-03-10T07:32:24.856357+0000 mon.a (mon.0) 2388 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:26.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:25 vm03 bash[23382]: audit 2026-03-10T07:32:25.805236+0000 mon.a (mon.0) 2389 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59637-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59637-62"}]': finished 2026-03-10T07:32:26.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:25 vm03 bash[23382]: audit 2026-03-10T07:32:25.805236+0000 mon.a (mon.0) 2389 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm00-59637-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm00-59637-62"}]': finished 2026-03-10T07:32:26.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:25 vm03 bash[23382]: audit 2026-03-10T07:32:25.805323+0000 mon.a (mon.0) 2390 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:32:26.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:25 vm03 bash[23382]: audit 2026-03-10T07:32:25.805323+0000 mon.a (mon.0) 2390 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:32:26.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:25 vm03 bash[23382]: audit 2026-03-10T07:32:25.810823+0000 mon.b (mon.1) 362 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:26.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:25 vm03 bash[23382]: audit 2026-03-10T07:32:25.810823+0000 mon.b (mon.1) 362 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:26.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:25 vm03 bash[23382]: cluster 2026-03-10T07:32:25.814951+0000 mon.a (mon.0) 2391 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-10T07:32:26.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:25 vm03 bash[23382]: cluster 2026-03-10T07:32:25.814951+0000 mon.a (mon.0) 2391 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-10T07:32:26.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:25 vm03 bash[23382]: audit 2026-03-10T07:32:25.826483+0000 mon.a (mon.0) 2392 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:26.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:25 vm03 bash[23382]: audit 2026-03-10T07:32:25.826483+0000 mon.a (mon.0) 2392 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:27 vm00 bash[28005]: cluster 2026-03-10T07:32:26.632781+0000 mgr.y (mgr.24407) 294 : cluster [DBG] pgmap v451: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:27 vm00 bash[28005]: cluster 2026-03-10T07:32:26.632781+0000 mgr.y (mgr.24407) 294 : cluster [DBG] pgmap v451: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:27 vm00 bash[28005]: audit 2026-03-10T07:32:26.808574+0000 mon.a (mon.0) 2393 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]': finished 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:27 vm00 bash[28005]: audit 2026-03-10T07:32:26.808574+0000 mon.a (mon.0) 2393 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]': finished 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:27 vm00 bash[28005]: cluster 2026-03-10T07:32:26.824708+0000 mon.a (mon.0) 2394 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:27 vm00 bash[28005]: cluster 2026-03-10T07:32:26.824708+0000 mon.a (mon.0) 2394 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:27 vm00 bash[28005]: audit 2026-03-10T07:32:26.852146+0000 mon.b (mon.1) 363 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:27 vm00 bash[28005]: audit 2026-03-10T07:32:26.852146+0000 mon.b (mon.1) 363 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:27 vm00 bash[28005]: audit 2026-03-10T07:32:26.852756+0000 mon.b (mon.1) 364 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:27 vm00 bash[28005]: audit 2026-03-10T07:32:26.852756+0000 mon.b (mon.1) 364 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:27 vm00 bash[28005]: audit 2026-03-10T07:32:26.854097+0000 mon.a (mon.0) 2395 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:27 vm00 bash[28005]: audit 2026-03-10T07:32:26.854097+0000 mon.a (mon.0) 2395 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:27 vm00 bash[28005]: audit 2026-03-10T07:32:26.854647+0000 mon.a (mon.0) 2396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:27 vm00 bash[28005]: audit 2026-03-10T07:32:26.854647+0000 mon.a (mon.0) 2396 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:27 vm00 bash[20701]: cluster 2026-03-10T07:32:26.632781+0000 mgr.y (mgr.24407) 294 : cluster [DBG] pgmap v451: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:27 vm00 bash[20701]: cluster 2026-03-10T07:32:26.632781+0000 mgr.y (mgr.24407) 294 : cluster [DBG] pgmap v451: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:27 vm00 bash[20701]: audit 2026-03-10T07:32:26.808574+0000 mon.a (mon.0) 2393 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]': finished 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:27 vm00 bash[20701]: audit 2026-03-10T07:32:26.808574+0000 mon.a (mon.0) 2393 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]': finished 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:27 vm00 bash[20701]: cluster 2026-03-10T07:32:26.824708+0000 mon.a (mon.0) 2394 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:27 vm00 bash[20701]: cluster 2026-03-10T07:32:26.824708+0000 mon.a (mon.0) 2394 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:27 vm00 bash[20701]: audit 2026-03-10T07:32:26.852146+0000 mon.b (mon.1) 363 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:27 vm00 bash[20701]: audit 2026-03-10T07:32:26.852146+0000 mon.b (mon.1) 363 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:27 vm00 bash[20701]: audit 2026-03-10T07:32:26.852756+0000 mon.b (mon.1) 364 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:27 vm00 bash[20701]: audit 2026-03-10T07:32:26.852756+0000 mon.b (mon.1) 364 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:27 vm00 bash[20701]: audit 2026-03-10T07:32:26.854097+0000 mon.a (mon.0) 2395 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:27 vm00 bash[20701]: audit 2026-03-10T07:32:26.854097+0000 mon.a (mon.0) 2395 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:27 vm00 bash[20701]: audit 2026-03-10T07:32:26.854647+0000 mon.a (mon.0) 2396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:28.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:27 vm00 bash[20701]: audit 2026-03-10T07:32:26.854647+0000 mon.a (mon.0) 2396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:28.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:27 vm03 bash[23382]: cluster 2026-03-10T07:32:26.632781+0000 mgr.y (mgr.24407) 294 : cluster [DBG] pgmap v451: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:28.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:27 vm03 bash[23382]: cluster 2026-03-10T07:32:26.632781+0000 mgr.y (mgr.24407) 294 : cluster [DBG] pgmap v451: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:28.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:27 vm03 bash[23382]: audit 2026-03-10T07:32:26.808574+0000 mon.a (mon.0) 2393 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]': finished 2026-03-10T07:32:28.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:27 vm03 bash[23382]: audit 2026-03-10T07:32:26.808574+0000 mon.a (mon.0) 2393 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]': finished 2026-03-10T07:32:28.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:27 vm03 bash[23382]: cluster 2026-03-10T07:32:26.824708+0000 mon.a (mon.0) 2394 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-10T07:32:28.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:27 vm03 bash[23382]: cluster 2026-03-10T07:32:26.824708+0000 mon.a (mon.0) 2394 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-10T07:32:28.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:27 vm03 bash[23382]: audit 2026-03-10T07:32:26.852146+0000 mon.b (mon.1) 363 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:28.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:27 vm03 bash[23382]: audit 2026-03-10T07:32:26.852146+0000 mon.b (mon.1) 363 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:28.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:27 vm03 bash[23382]: audit 2026-03-10T07:32:26.852756+0000 mon.b (mon.1) 364 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:28.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:27 vm03 bash[23382]: audit 2026-03-10T07:32:26.852756+0000 mon.b (mon.1) 364 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:28.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:27 vm03 bash[23382]: audit 2026-03-10T07:32:26.854097+0000 mon.a (mon.0) 2395 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:28.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:27 vm03 bash[23382]: audit 2026-03-10T07:32:26.854097+0000 mon.a (mon.0) 2395 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:28.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:27 vm03 bash[23382]: audit 2026-03-10T07:32:26.854647+0000 mon.a (mon.0) 2396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:28.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:27 vm03 bash[23382]: audit 2026-03-10T07:32:26.854647+0000 mon.a (mon.0) 2396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-47"}]: dispatch 2026-03-10T07:32:29.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:28 vm00 bash[28005]: cluster 2026-03-10T07:32:27.829835+0000 mon.a (mon.0) 2397 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-10T07:32:29.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:28 vm00 bash[28005]: cluster 2026-03-10T07:32:27.829835+0000 mon.a (mon.0) 2397 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-10T07:32:29.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:28 vm00 bash[28005]: audit 2026-03-10T07:32:27.839726+0000 mon.a (mon.0) 2398 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:29.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:28 vm00 bash[28005]: audit 2026-03-10T07:32:27.839726+0000 mon.a (mon.0) 2398 : audit [INF] from='client.? 
192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:29.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:28 vm00 bash[20701]: cluster 2026-03-10T07:32:27.829835+0000 mon.a (mon.0) 2397 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-10T07:32:29.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:28 vm00 bash[20701]: cluster 2026-03-10T07:32:27.829835+0000 mon.a (mon.0) 2397 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-10T07:32:29.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:28 vm00 bash[20701]: audit 2026-03-10T07:32:27.839726+0000 mon.a (mon.0) 2398 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:29.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:28 vm00 bash[20701]: audit 2026-03-10T07:32:27.839726+0000 mon.a (mon.0) 2398 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:29.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:28 vm03 bash[23382]: cluster 2026-03-10T07:32:27.829835+0000 mon.a (mon.0) 2397 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-10T07:32:29.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:28 vm03 bash[23382]: cluster 2026-03-10T07:32:27.829835+0000 mon.a (mon.0) 2397 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-10T07:32:29.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:28 vm03 bash[23382]: audit 2026-03-10T07:32:27.839726+0000 mon.a (mon.0) 2398 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:29.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:28 vm03 bash[23382]: audit 2026-03-10T07:32:27.839726+0000 mon.a (mon.0) 2398 : audit [INF] from='client.? 
192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:30.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:29 vm00 bash[28005]: cluster 2026-03-10T07:32:28.633684+0000 mgr.y (mgr.24407) 295 : cluster [DBG] pgmap v454: 260 pgs: 260 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:30.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:29 vm00 bash[28005]: cluster 2026-03-10T07:32:28.633684+0000 mgr.y (mgr.24407) 295 : cluster [DBG] pgmap v454: 260 pgs: 260 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:30.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:29 vm00 bash[28005]: cluster 2026-03-10T07:32:28.819800+0000 mon.a (mon.0) 2399 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:30.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:29 vm00 bash[28005]: cluster 2026-03-10T07:32:28.819800+0000 mon.a (mon.0) 2399 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:30.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:29 vm00 bash[28005]: audit 2026-03-10T07:32:28.828479+0000 mon.a (mon.0) 2400 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59637-62"}]': finished 2026-03-10T07:32:30.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:29 vm00 bash[28005]: audit 2026-03-10T07:32:28.828479+0000 mon.a (mon.0) 2400 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59637-62"}]': finished 2026-03-10T07:32:30.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:29 vm00 bash[28005]: cluster 2026-03-10T07:32:28.833922+0000 mon.a (mon.0) 2401 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-10T07:32:30.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:29 vm00 bash[28005]: cluster 2026-03-10T07:32:28.833922+0000 mon.a (mon.0) 2401 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-10T07:32:30.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:29 vm00 bash[28005]: audit 2026-03-10T07:32:28.848157+0000 mon.a (mon.0) 2402 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:30.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:29 vm00 bash[28005]: audit 2026-03-10T07:32:28.848157+0000 mon.a (mon.0) 2402 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:30.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:29 vm00 bash[28005]: audit 2026-03-10T07:32:28.858544+0000 mon.b (mon.1) 365 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:30.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:29 vm00 bash[28005]: audit 2026-03-10T07:32:28.858544+0000 mon.b (mon.1) 365 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:30.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:29 vm00 bash[28005]: audit 2026-03-10T07:32:28.860509+0000 mon.a (mon.0) 2403 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:30.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:29 vm00 bash[28005]: audit 2026-03-10T07:32:28.860509+0000 mon.a (mon.0) 2403 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:30.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:29 vm00 bash[20701]: cluster 2026-03-10T07:32:28.633684+0000 mgr.y (mgr.24407) 295 : cluster [DBG] pgmap v454: 260 pgs: 260 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:30.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:29 vm00 bash[20701]: cluster 2026-03-10T07:32:28.633684+0000 mgr.y (mgr.24407) 295 : cluster [DBG] pgmap v454: 260 pgs: 260 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:30.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:29 vm00 bash[20701]: cluster 2026-03-10T07:32:28.819800+0000 mon.a (mon.0) 2399 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:30.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:29 vm00 bash[20701]: cluster 2026-03-10T07:32:28.819800+0000 mon.a (mon.0) 2399 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:30.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:29 vm00 bash[20701]: audit 2026-03-10T07:32:28.828479+0000 mon.a (mon.0) 2400 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59637-62"}]': finished 2026-03-10T07:32:30.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:29 vm00 bash[20701]: audit 2026-03-10T07:32:28.828479+0000 mon.a (mon.0) 2400 : audit [INF] from='client.? 
192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59637-62"}]': finished 2026-03-10T07:32:30.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:29 vm00 bash[20701]: cluster 2026-03-10T07:32:28.833922+0000 mon.a (mon.0) 2401 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-10T07:32:30.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:29 vm00 bash[20701]: cluster 2026-03-10T07:32:28.833922+0000 mon.a (mon.0) 2401 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-10T07:32:30.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:29 vm00 bash[20701]: audit 2026-03-10T07:32:28.848157+0000 mon.a (mon.0) 2402 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:30.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:29 vm00 bash[20701]: audit 2026-03-10T07:32:28.848157+0000 mon.a (mon.0) 2402 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:30.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:29 vm00 bash[20701]: audit 2026-03-10T07:32:28.858544+0000 mon.b (mon.1) 365 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:30.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:29 vm00 bash[20701]: audit 2026-03-10T07:32:28.858544+0000 mon.b (mon.1) 365 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:30.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:29 vm00 bash[20701]: audit 2026-03-10T07:32:28.860509+0000 mon.a (mon.0) 2403 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:30.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:29 vm00 bash[20701]: audit 2026-03-10T07:32:28.860509+0000 mon.a (mon.0) 2403 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:30.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:29 vm03 bash[23382]: cluster 2026-03-10T07:32:28.633684+0000 mgr.y (mgr.24407) 295 : cluster [DBG] pgmap v454: 260 pgs: 260 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:30.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:29 vm03 bash[23382]: cluster 2026-03-10T07:32:28.633684+0000 mgr.y (mgr.24407) 295 : cluster [DBG] pgmap v454: 260 pgs: 260 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:30.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:29 vm03 bash[23382]: cluster 2026-03-10T07:32:28.819800+0000 mon.a (mon.0) 2399 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:30.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:29 vm03 bash[23382]: cluster 2026-03-10T07:32:28.819800+0000 mon.a (mon.0) 2399 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:30.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:29 vm03 bash[23382]: audit 2026-03-10T07:32:28.828479+0000 mon.a (mon.0) 2400 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59637-62"}]': finished 2026-03-10T07:32:30.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:29 vm03 bash[23382]: audit 2026-03-10T07:32:28.828479+0000 mon.a (mon.0) 2400 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm00-59637-62"}]': finished 2026-03-10T07:32:30.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:29 vm03 bash[23382]: cluster 2026-03-10T07:32:28.833922+0000 mon.a (mon.0) 2401 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-10T07:32:30.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:29 vm03 bash[23382]: cluster 2026-03-10T07:32:28.833922+0000 mon.a (mon.0) 2401 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-10T07:32:30.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:29 vm03 bash[23382]: audit 2026-03-10T07:32:28.848157+0000 mon.a (mon.0) 2402 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:30.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:29 vm03 bash[23382]: audit 2026-03-10T07:32:28.848157+0000 mon.a (mon.0) 2402 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59637-62"}]: dispatch 2026-03-10T07:32:30.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:29 vm03 bash[23382]: audit 2026-03-10T07:32:28.858544+0000 mon.b (mon.1) 365 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:30.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:29 vm03 bash[23382]: audit 2026-03-10T07:32:28.858544+0000 mon.b (mon.1) 365 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:30.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:29 vm03 bash[23382]: audit 2026-03-10T07:32:28.860509+0000 mon.a (mon.0) 2403 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:30.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:29 vm03 bash[23382]: audit 2026-03-10T07:32:28.860509+0000 mon.a (mon.0) 2403 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:31.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: audit 2026-03-10T07:32:29.839451+0000 mon.a (mon.0) 2404 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59637-62"}]': finished 2026-03-10T07:32:31.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: audit 2026-03-10T07:32:29.839451+0000 mon.a (mon.0) 2404 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59637-62"}]': finished 2026-03-10T07:32:31.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: audit 2026-03-10T07:32:29.839546+0000 mon.a (mon.0) 2405 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:32:31.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: audit 2026-03-10T07:32:29.839546+0000 mon.a (mon.0) 2405 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:32:31.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: cluster 2026-03-10T07:32:29.847209+0000 mon.a (mon.0) 2406 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-10T07:32:31.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: cluster 2026-03-10T07:32:29.847209+0000 mon.a (mon.0) 2406 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-10T07:32:31.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: audit 2026-03-10T07:32:29.870876+0000 mon.b (mon.1) 366 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: audit 2026-03-10T07:32:29.870876+0000 mon.b (mon.1) 366 : audit [INF] from='client.? 
192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: audit 2026-03-10T07:32:29.873029+0000 mon.b (mon.1) 367 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: audit 2026-03-10T07:32:29.873029+0000 mon.b (mon.1) 367 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: audit 2026-03-10T07:32:29.873400+0000 mon.a (mon.0) 2407 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: audit 2026-03-10T07:32:29.873400+0000 mon.a (mon.0) 2407 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: audit 2026-03-10T07:32:29.883079+0000 mon.b (mon.1) 368 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59637-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:31.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: audit 2026-03-10T07:32:29.883079+0000 mon.b (mon.1) 368 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59637-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:31.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: audit 2026-03-10T07:32:29.883981+0000 mon.a (mon.0) 2408 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: audit 2026-03-10T07:32:29.883981+0000 mon.a (mon.0) 2408 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: audit 2026-03-10T07:32:29.885623+0000 mon.a (mon.0) 2409 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59637-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:31.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: audit 2026-03-10T07:32:29.885623+0000 mon.a (mon.0) 2409 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59637-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: audit 2026-03-10T07:32:29.887680+0000 mon.b (mon.1) 369 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: audit 2026-03-10T07:32:29.887680+0000 mon.b (mon.1) 369 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: audit 2026-03-10T07:32:29.891674+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:30 vm00 bash[28005]: audit 2026-03-10T07:32:29.891674+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:32:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:32:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: audit 2026-03-10T07:32:29.839451+0000 mon.a (mon.0) 2404 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59637-62"}]': finished 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: audit 2026-03-10T07:32:29.839451+0000 mon.a (mon.0) 2404 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59637-62"}]': finished 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: audit 2026-03-10T07:32:29.839546+0000 mon.a (mon.0) 2405 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: audit 2026-03-10T07:32:29.839546+0000 mon.a (mon.0) 2405 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: cluster 2026-03-10T07:32:29.847209+0000 mon.a (mon.0) 2406 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: cluster 2026-03-10T07:32:29.847209+0000 mon.a (mon.0) 2406 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: audit 2026-03-10T07:32:29.870876+0000 mon.b (mon.1) 366 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: audit 2026-03-10T07:32:29.870876+0000 mon.b (mon.1) 366 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: audit 2026-03-10T07:32:29.873029+0000 mon.b (mon.1) 367 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: audit 2026-03-10T07:32:29.873029+0000 mon.b (mon.1) 367 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: audit 2026-03-10T07:32:29.873400+0000 mon.a (mon.0) 2407 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: audit 2026-03-10T07:32:29.873400+0000 mon.a (mon.0) 2407 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: audit 2026-03-10T07:32:29.883079+0000 mon.b (mon.1) 368 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59637-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: audit 2026-03-10T07:32:29.883079+0000 mon.b (mon.1) 368 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59637-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: audit 2026-03-10T07:32:29.883981+0000 mon.a (mon.0) 2408 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: audit 2026-03-10T07:32:29.883981+0000 mon.a (mon.0) 2408 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: audit 2026-03-10T07:32:29.885623+0000 mon.a (mon.0) 2409 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59637-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: audit 2026-03-10T07:32:29.885623+0000 mon.a (mon.0) 2409 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59637-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: audit 2026-03-10T07:32:29.887680+0000 mon.b (mon.1) 369 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: audit 2026-03-10T07:32:29.887680+0000 mon.b (mon.1) 369 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: audit 2026-03-10T07:32:29.891674+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:30 vm00 bash[20701]: audit 2026-03-10T07:32:29.891674+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:31.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: audit 2026-03-10T07:32:29.839451+0000 mon.a (mon.0) 2404 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59637-62"}]': finished 2026-03-10T07:32:31.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: audit 2026-03-10T07:32:29.839451+0000 mon.a (mon.0) 2404 : audit [INF] from='client.? 192.168.123.100:0/1806838067' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm00-59637-62"}]': finished 2026-03-10T07:32:31.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: audit 2026-03-10T07:32:29.839546+0000 mon.a (mon.0) 2405 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:32:31.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: audit 2026-03-10T07:32:29.839546+0000 mon.a (mon.0) 2405 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:32:31.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: cluster 2026-03-10T07:32:29.847209+0000 mon.a (mon.0) 2406 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-10T07:32:31.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: cluster 2026-03-10T07:32:29.847209+0000 mon.a (mon.0) 2406 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-10T07:32:31.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: audit 2026-03-10T07:32:29.870876+0000 mon.b (mon.1) 366 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: audit 2026-03-10T07:32:29.870876+0000 mon.b (mon.1) 366 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: audit 2026-03-10T07:32:29.873029+0000 mon.b (mon.1) 367 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: audit 2026-03-10T07:32:29.873029+0000 mon.b (mon.1) 367 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: audit 2026-03-10T07:32:29.873400+0000 mon.a (mon.0) 2407 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: audit 2026-03-10T07:32:29.873400+0000 mon.a (mon.0) 2407 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: audit 2026-03-10T07:32:29.883079+0000 mon.b (mon.1) 368 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59637-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:31.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: audit 2026-03-10T07:32:29.883079+0000 mon.b (mon.1) 368 : audit [INF] from='client.? 
192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59637-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:31.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: audit 2026-03-10T07:32:29.883981+0000 mon.a (mon.0) 2408 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: audit 2026-03-10T07:32:29.883981+0000 mon.a (mon.0) 2408 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:31.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: audit 2026-03-10T07:32:29.885623+0000 mon.a (mon.0) 2409 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59637-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:31.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: audit 2026-03-10T07:32:29.885623+0000 mon.a (mon.0) 2409 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59637-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:31.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: audit 2026-03-10T07:32:29.887680+0000 mon.b (mon.1) 369 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:31.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: audit 2026-03-10T07:32:29.887680+0000 mon.b (mon.1) 369 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:31.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: audit 2026-03-10T07:32:29.891674+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:31.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:30 vm03 bash[23382]: audit 2026-03-10T07:32:29.891674+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:31 vm00 bash[28005]: cluster 2026-03-10T07:32:30.633987+0000 mgr.y (mgr.24407) 296 : cluster [DBG] pgmap v457: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:31 vm00 bash[28005]: cluster 2026-03-10T07:32:30.633987+0000 mgr.y (mgr.24407) 296 : cluster [DBG] pgmap v457: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:31 vm00 bash[28005]: audit 2026-03-10T07:32:30.848586+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59637-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:31 vm00 bash[28005]: audit 2026-03-10T07:32:30.848586+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59637-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:31 vm00 bash[28005]: audit 2026-03-10T07:32:30.848754+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:31 vm00 bash[28005]: audit 2026-03-10T07:32:30.848754+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:31 vm00 bash[28005]: audit 2026-03-10T07:32:30.854813+0000 mon.b (mon.1) 370 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59637-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:31 vm00 bash[28005]: audit 2026-03-10T07:32:30.854813+0000 mon.b (mon.1) 370 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59637-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:31 vm00 bash[28005]: audit 2026-03-10T07:32:30.854872+0000 mon.b (mon.1) 371 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:31 vm00 bash[28005]: audit 2026-03-10T07:32:30.854872+0000 mon.b (mon.1) 371 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:31 vm00 bash[28005]: cluster 2026-03-10T07:32:30.871265+0000 mon.a (mon.0) 2413 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:31 vm00 bash[28005]: cluster 2026-03-10T07:32:30.871265+0000 mon.a (mon.0) 2413 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:31 vm00 bash[28005]: audit 2026-03-10T07:32:30.872206+0000 mon.a (mon.0) 2414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59637-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:31 vm00 bash[28005]: audit 2026-03-10T07:32:30.872206+0000 mon.a (mon.0) 2414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59637-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:31 vm00 bash[28005]: audit 2026-03-10T07:32:30.872387+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:31 vm00 bash[28005]: audit 2026-03-10T07:32:30.872387+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:31 vm00 bash[20701]: cluster 2026-03-10T07:32:30.633987+0000 mgr.y (mgr.24407) 296 : cluster [DBG] pgmap v457: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:31 vm00 bash[20701]: cluster 2026-03-10T07:32:30.633987+0000 mgr.y (mgr.24407) 296 : cluster [DBG] pgmap v457: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:31 vm00 bash[20701]: audit 2026-03-10T07:32:30.848586+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59637-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:31 vm00 bash[20701]: audit 2026-03-10T07:32:30.848586+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59637-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:31 vm00 bash[20701]: audit 2026-03-10T07:32:30.848754+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:31 vm00 bash[20701]: audit 2026-03-10T07:32:30.848754+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:31 vm00 bash[20701]: audit 2026-03-10T07:32:30.854813+0000 mon.b (mon.1) 370 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59637-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:31 vm00 bash[20701]: audit 2026-03-10T07:32:30.854813+0000 mon.b (mon.1) 370 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59637-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:31 vm00 bash[20701]: audit 2026-03-10T07:32:30.854872+0000 mon.b (mon.1) 371 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:31 vm00 bash[20701]: audit 2026-03-10T07:32:30.854872+0000 mon.b (mon.1) 371 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:31 vm00 bash[20701]: cluster 2026-03-10T07:32:30.871265+0000 mon.a (mon.0) 2413 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:31 vm00 bash[20701]: cluster 2026-03-10T07:32:30.871265+0000 mon.a (mon.0) 2413 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:31 vm00 bash[20701]: audit 2026-03-10T07:32:30.872206+0000 mon.a (mon.0) 2414 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59637-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:31 vm00 bash[20701]: audit 2026-03-10T07:32:30.872206+0000 mon.a (mon.0) 2414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59637-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:31 vm00 bash[20701]: audit 2026-03-10T07:32:30.872387+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:32.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:31 vm00 bash[20701]: audit 2026-03-10T07:32:30.872387+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:31 vm03 bash[23382]: cluster 2026-03-10T07:32:30.633987+0000 mgr.y (mgr.24407) 296 : cluster [DBG] pgmap v457: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:32:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:31 vm03 bash[23382]: cluster 2026-03-10T07:32:30.633987+0000 mgr.y (mgr.24407) 296 : cluster [DBG] pgmap v457: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:32:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:31 vm03 bash[23382]: audit 2026-03-10T07:32:30.848586+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59637-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:31 vm03 bash[23382]: audit 2026-03-10T07:32:30.848586+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm00-59637-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:31 vm03 bash[23382]: audit 2026-03-10T07:32:30.848754+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:32:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:31 vm03 bash[23382]: audit 2026-03-10T07:32:30.848754+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:32:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:31 vm03 bash[23382]: audit 2026-03-10T07:32:30.854813+0000 mon.b (mon.1) 370 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59637-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:31 vm03 bash[23382]: audit 2026-03-10T07:32:30.854813+0000 mon.b (mon.1) 370 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59637-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:31 vm03 bash[23382]: audit 2026-03-10T07:32:30.854872+0000 mon.b (mon.1) 371 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:31 vm03 bash[23382]: audit 2026-03-10T07:32:30.854872+0000 mon.b (mon.1) 371 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:31 vm03 bash[23382]: cluster 2026-03-10T07:32:30.871265+0000 mon.a (mon.0) 2413 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-10T07:32:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:31 vm03 bash[23382]: cluster 2026-03-10T07:32:30.871265+0000 mon.a (mon.0) 2413 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-10T07:32:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:31 vm03 bash[23382]: audit 2026-03-10T07:32:30.872206+0000 mon.a (mon.0) 2414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59637-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:31 vm03 bash[23382]: audit 2026-03-10T07:32:30.872206+0000 mon.a (mon.0) 2414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59637-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:31 vm03 bash[23382]: audit 2026-03-10T07:32:30.872387+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:31 vm03 bash[23382]: audit 2026-03-10T07:32:30.872387+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:33.263 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:32:33 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:32:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:33 vm03 bash[23382]: audit 2026-03-10T07:32:31.857910+0000 mon.a (mon.0) 2416 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-49"}]': finished 2026-03-10T07:32:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:33 vm03 bash[23382]: audit 2026-03-10T07:32:31.857910+0000 mon.a (mon.0) 2416 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-49"}]': finished 2026-03-10T07:32:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:33 vm03 bash[23382]: audit 2026-03-10T07:32:31.862879+0000 mon.b (mon.1) 372 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-49", "mode": "readproxy"}]: dispatch 2026-03-10T07:32:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:33 vm03 bash[23382]: audit 2026-03-10T07:32:31.862879+0000 mon.b (mon.1) 372 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-49", "mode": "readproxy"}]: dispatch 2026-03-10T07:32:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:33 vm03 bash[23382]: cluster 2026-03-10T07:32:31.872932+0000 mon.a (mon.0) 2417 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-10T07:32:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:33 vm03 bash[23382]: cluster 2026-03-10T07:32:31.872932+0000 mon.a (mon.0) 2417 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-10T07:32:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:33 vm03 bash[23382]: audit 2026-03-10T07:32:31.874125+0000 mon.a (mon.0) 2418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-49", "mode": "readproxy"}]: dispatch 2026-03-10T07:32:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:33 vm03 bash[23382]: audit 2026-03-10T07:32:31.874125+0000 mon.a (mon.0) 2418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-49", "mode": "readproxy"}]: dispatch 2026-03-10T07:32:33.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:33 vm00 bash[28005]: audit 2026-03-10T07:32:31.857910+0000 mon.a (mon.0) 2416 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-49"}]': finished 2026-03-10T07:32:33.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:33 vm00 bash[28005]: audit 2026-03-10T07:32:31.857910+0000 mon.a (mon.0) 2416 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-49"}]': finished 2026-03-10T07:32:33.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:33 vm00 bash[28005]: audit 2026-03-10T07:32:31.862879+0000 mon.b (mon.1) 372 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-49", "mode": "readproxy"}]: dispatch 2026-03-10T07:32:33.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:33 vm00 bash[28005]: audit 2026-03-10T07:32:31.862879+0000 mon.b (mon.1) 372 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-49", "mode": "readproxy"}]: dispatch 2026-03-10T07:32:33.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:33 vm00 bash[28005]: cluster 2026-03-10T07:32:31.872932+0000 mon.a (mon.0) 2417 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-10T07:32:33.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:33 vm00 bash[28005]: cluster 2026-03-10T07:32:31.872932+0000 mon.a (mon.0) 2417 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-10T07:32:33.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:33 vm00 bash[28005]: audit 2026-03-10T07:32:31.874125+0000 mon.a (mon.0) 2418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-49", "mode": "readproxy"}]: dispatch 2026-03-10T07:32:33.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:33 vm00 bash[28005]: audit 2026-03-10T07:32:31.874125+0000 mon.a (mon.0) 2418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-49", "mode": "readproxy"}]: dispatch 2026-03-10T07:32:33.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:33 vm00 bash[20701]: audit 2026-03-10T07:32:31.857910+0000 mon.a (mon.0) 2416 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-49"}]': finished 2026-03-10T07:32:33.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:33 vm00 bash[20701]: audit 2026-03-10T07:32:31.857910+0000 mon.a (mon.0) 2416 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-49"}]': finished 2026-03-10T07:32:33.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:33 vm00 bash[20701]: audit 2026-03-10T07:32:31.862879+0000 mon.b (mon.1) 372 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-49", "mode": "readproxy"}]: dispatch 2026-03-10T07:32:33.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:33 vm00 bash[20701]: audit 2026-03-10T07:32:31.862879+0000 mon.b (mon.1) 372 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-49", "mode": "readproxy"}]: dispatch 2026-03-10T07:32:33.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:33 vm00 bash[20701]: cluster 2026-03-10T07:32:31.872932+0000 mon.a (mon.0) 2417 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-10T07:32:33.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:33 vm00 bash[20701]: cluster 2026-03-10T07:32:31.872932+0000 mon.a (mon.0) 2417 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-10T07:32:33.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:33 vm00 bash[20701]: audit 2026-03-10T07:32:31.874125+0000 mon.a (mon.0) 2418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-49", "mode": "readproxy"}]: dispatch 2026-03-10T07:32:33.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:33 vm00 bash[20701]: audit 2026-03-10T07:32:31.874125+0000 mon.a (mon.0) 2418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-49", "mode": "readproxy"}]: dispatch 2026-03-10T07:32:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:34 vm03 bash[23382]: cluster 2026-03-10T07:32:32.634306+0000 mgr.y (mgr.24407) 297 : cluster [DBG] pgmap v460: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:32:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:34 vm03 bash[23382]: cluster 2026-03-10T07:32:32.634306+0000 mgr.y (mgr.24407) 297 : cluster [DBG] pgmap v460: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:32:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:34 vm03 bash[23382]: cluster 2026-03-10T07:32:32.857837+0000 mon.a (mon.0) 2419 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:32:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:34 vm03 bash[23382]: cluster 2026-03-10T07:32:32.857837+0000 mon.a (mon.0) 2419 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:32:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:34 vm03 bash[23382]: audit 2026-03-10T07:32:32.985749+0000 mon.a (mon.0) 2420 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59637-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59637-63"}]': finished 2026-03-10T07:32:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:34 vm03 bash[23382]: audit 2026-03-10T07:32:32.985749+0000 mon.a (mon.0) 2420 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59637-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59637-63"}]': finished 2026-03-10T07:32:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:34 vm03 bash[23382]: audit 2026-03-10T07:32:32.985798+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-49", "mode": "readproxy"}]': finished 2026-03-10T07:32:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:34 vm03 bash[23382]: audit 2026-03-10T07:32:32.985798+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-49", "mode": "readproxy"}]': finished 2026-03-10T07:32:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:34 vm03 bash[23382]: cluster 2026-03-10T07:32:32.988329+0000 mon.a (mon.0) 2422 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-10T07:32:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:34 vm03 bash[23382]: cluster 2026-03-10T07:32:32.988329+0000 mon.a (mon.0) 2422 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-10T07:32:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:34 vm03 bash[23382]: audit 2026-03-10T07:32:33.148402+0000 mgr.y (mgr.24407) 298 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:34 vm03 bash[23382]: audit 2026-03-10T07:32:33.148402+0000 mgr.y (mgr.24407) 298 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:34 vm03 bash[23382]: cluster 2026-03-10T07:32:33.993511+0000 mon.a (mon.0) 2423 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-10T07:32:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:34 vm03 bash[23382]: cluster 2026-03-10T07:32:33.993511+0000 mon.a (mon.0) 2423 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:34 vm00 bash[28005]: cluster 2026-03-10T07:32:32.634306+0000 mgr.y (mgr.24407) 297 : cluster [DBG] pgmap v460: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:34 vm00 bash[28005]: cluster 2026-03-10T07:32:32.634306+0000 mgr.y (mgr.24407) 297 : cluster [DBG] pgmap v460: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:34 vm00 bash[28005]: cluster 2026-03-10T07:32:32.857837+0000 mon.a (mon.0) 2419 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:34 vm00 bash[28005]: cluster 2026-03-10T07:32:32.857837+0000 mon.a (mon.0) 2419 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:34 vm00 bash[28005]: audit 2026-03-10T07:32:32.985749+0000 mon.a (mon.0) 2420 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59637-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59637-63"}]': finished 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:34 vm00 bash[28005]: audit 2026-03-10T07:32:32.985749+0000 mon.a (mon.0) 2420 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59637-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59637-63"}]': finished 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:34 vm00 bash[28005]: audit 2026-03-10T07:32:32.985798+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-49", "mode": "readproxy"}]': finished 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:34 vm00 bash[28005]: audit 2026-03-10T07:32:32.985798+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-49", "mode": "readproxy"}]': finished 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:34 vm00 bash[28005]: cluster 2026-03-10T07:32:32.988329+0000 mon.a (mon.0) 2422 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:34 vm00 bash[28005]: cluster 2026-03-10T07:32:32.988329+0000 mon.a (mon.0) 2422 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:34 vm00 bash[28005]: audit 2026-03-10T07:32:33.148402+0000 mgr.y (mgr.24407) 298 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:34 vm00 bash[28005]: audit 2026-03-10T07:32:33.148402+0000 mgr.y (mgr.24407) 298 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:34 vm00 bash[28005]: cluster 2026-03-10T07:32:33.993511+0000 mon.a (mon.0) 2423 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:34 vm00 bash[28005]: cluster 2026-03-10T07:32:33.993511+0000 mon.a (mon.0) 2423 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:34 vm00 bash[20701]: cluster 2026-03-10T07:32:32.634306+0000 mgr.y (mgr.24407) 297 : cluster [DBG] pgmap v460: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:34 vm00 bash[20701]: cluster 2026-03-10T07:32:32.634306+0000 mgr.y (mgr.24407) 297 : cluster [DBG] pgmap v460: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:34 vm00 bash[20701]: cluster 2026-03-10T07:32:32.857837+0000 mon.a (mon.0) 2419 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:32:34.379 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:34 vm00 bash[20701]: cluster 2026-03-10T07:32:32.857837+0000 mon.a (mon.0) 2419 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:34 vm00 bash[20701]: audit 2026-03-10T07:32:32.985749+0000 mon.a (mon.0) 2420 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59637-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59637-63"}]': finished 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:34 vm00 bash[20701]: audit 2026-03-10T07:32:32.985749+0000 mon.a (mon.0) 2420 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm00-59637-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm00-59637-63"}]': finished 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:34 vm00 bash[20701]: audit 2026-03-10T07:32:32.985798+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-49", "mode": "readproxy"}]': finished 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:34 vm00 bash[20701]: audit 2026-03-10T07:32:32.985798+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-49", "mode": "readproxy"}]': finished 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:34 vm00 bash[20701]: cluster 2026-03-10T07:32:32.988329+0000 mon.a (mon.0) 2422 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:34 vm00 bash[20701]: cluster 2026-03-10T07:32:32.988329+0000 mon.a (mon.0) 2422 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:34 vm00 bash[20701]: audit 2026-03-10T07:32:33.148402+0000 mgr.y (mgr.24407) 298 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:34 vm00 bash[20701]: audit 2026-03-10T07:32:33.148402+0000 mgr.y (mgr.24407) 298 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:34 vm00 bash[20701]: cluster 2026-03-10T07:32:33.993511+0000 mon.a (mon.0) 2423 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-10T07:32:34.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:34 vm00 bash[20701]: cluster 2026-03-10T07:32:33.993511+0000 mon.a (mon.0) 2423 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-10T07:32:35.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:35 vm00 bash[28005]: cluster 2026-03-10T07:32:35.012671+0000 mon.a (mon.0) 2424 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:35.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:35 vm00 bash[28005]: cluster 2026-03-10T07:32:35.012671+0000 mon.a (mon.0) 2424 : cluster [WRN] Health check update: 4 pool(s) do not have an 
application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:35.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:35 vm00 bash[28005]: cluster 2026-03-10T07:32:35.026825+0000 mon.a (mon.0) 2425 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-10T07:32:35.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:35 vm00 bash[28005]: cluster 2026-03-10T07:32:35.026825+0000 mon.a (mon.0) 2425 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-10T07:32:35.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:35 vm00 bash[28005]: audit 2026-03-10T07:32:35.033554+0000 mon.b (mon.1) 373 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:35.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:35 vm00 bash[28005]: audit 2026-03-10T07:32:35.033554+0000 mon.b (mon.1) 373 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:35.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:35 vm00 bash[28005]: audit 2026-03-10T07:32:35.038826+0000 mon.a (mon.0) 2426 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:35.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:35 vm00 bash[28005]: audit 2026-03-10T07:32:35.038826+0000 mon.a (mon.0) 2426 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:35.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:35 vm00 bash[20701]: cluster 2026-03-10T07:32:35.012671+0000 mon.a (mon.0) 2424 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:35.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:35 vm00 bash[20701]: cluster 2026-03-10T07:32:35.012671+0000 mon.a (mon.0) 2424 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:35.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:35 vm00 bash[20701]: cluster 2026-03-10T07:32:35.026825+0000 mon.a (mon.0) 2425 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-10T07:32:35.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:35 vm00 bash[20701]: cluster 2026-03-10T07:32:35.026825+0000 mon.a (mon.0) 2425 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-10T07:32:35.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:35 vm00 bash[20701]: audit 2026-03-10T07:32:35.033554+0000 mon.b (mon.1) 373 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:35.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:35 vm00 bash[20701]: audit 2026-03-10T07:32:35.033554+0000 mon.b (mon.1) 373 : audit [INF] from='client.? 
192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:35.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:35 vm00 bash[20701]: audit 2026-03-10T07:32:35.038826+0000 mon.a (mon.0) 2426 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:35.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:35 vm00 bash[20701]: audit 2026-03-10T07:32:35.038826+0000 mon.a (mon.0) 2426 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:35 vm03 bash[23382]: cluster 2026-03-10T07:32:35.012671+0000 mon.a (mon.0) 2424 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:35 vm03 bash[23382]: cluster 2026-03-10T07:32:35.012671+0000 mon.a (mon.0) 2424 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:35 vm03 bash[23382]: cluster 2026-03-10T07:32:35.026825+0000 mon.a (mon.0) 2425 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-10T07:32:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:35 vm03 bash[23382]: cluster 2026-03-10T07:32:35.026825+0000 mon.a (mon.0) 2425 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-10T07:32:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:35 vm03 bash[23382]: audit 2026-03-10T07:32:35.033554+0000 mon.b (mon.1) 373 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:35 vm03 bash[23382]: audit 2026-03-10T07:32:35.033554+0000 mon.b (mon.1) 373 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:35 vm03 bash[23382]: audit 2026-03-10T07:32:35.038826+0000 mon.a (mon.0) 2426 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:35 vm03 bash[23382]: audit 2026-03-10T07:32:35.038826+0000 mon.a (mon.0) 2426 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:36.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:36 vm03 bash[23382]: cluster 2026-03-10T07:32:34.634733+0000 mgr.y (mgr.24407) 299 : cluster [DBG] pgmap v463: 300 pgs: 6 unknown, 28 creating+peering, 266 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:36.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:36 vm03 bash[23382]: cluster 2026-03-10T07:32:34.634733+0000 mgr.y (mgr.24407) 299 : cluster [DBG] pgmap v463: 300 pgs: 6 unknown, 28 creating+peering, 266 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:36.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:36 vm00 bash[28005]: cluster 2026-03-10T07:32:34.634733+0000 mgr.y (mgr.24407) 299 : cluster [DBG] pgmap v463: 300 pgs: 6 unknown, 28 creating+peering, 266 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:36.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:36 vm00 bash[28005]: cluster 2026-03-10T07:32:34.634733+0000 mgr.y (mgr.24407) 299 : cluster [DBG] pgmap v463: 300 pgs: 6 unknown, 28 creating+peering, 266 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:36 vm00 bash[20701]: cluster 2026-03-10T07:32:34.634733+0000 mgr.y (mgr.24407) 299 : cluster [DBG] pgmap v463: 300 pgs: 6 unknown, 28 creating+peering, 266 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:36 vm00 bash[20701]: cluster 2026-03-10T07:32:34.634733+0000 mgr.y (mgr.24407) 299 : cluster [DBG] pgmap v463: 300 pgs: 6 unknown, 28 creating+peering, 266 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:37.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:37 vm03 bash[23382]: audit 2026-03-10T07:32:36.158872+0000 mon.a (mon.0) 2427 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]': finished 2026-03-10T07:32:37.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:37 vm03 bash[23382]: audit 2026-03-10T07:32:36.158872+0000 mon.a (mon.0) 2427 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]': finished 2026-03-10T07:32:37.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:37 vm03 bash[23382]: audit 2026-03-10T07:32:36.161805+0000 mon.b (mon.1) 374 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:37.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:37 vm03 bash[23382]: audit 2026-03-10T07:32:36.161805+0000 mon.b (mon.1) 374 : audit [INF] from='client.? 
192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:37.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:37 vm03 bash[23382]: cluster 2026-03-10T07:32:36.163734+0000 mon.a (mon.0) 2428 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-10T07:32:37.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:37 vm03 bash[23382]: cluster 2026-03-10T07:32:36.163734+0000 mon.a (mon.0) 2428 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-10T07:32:37.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:37 vm03 bash[23382]: audit 2026-03-10T07:32:36.164994+0000 mon.a (mon.0) 2429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:37.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:37 vm03 bash[23382]: audit 2026-03-10T07:32:36.164994+0000 mon.a (mon.0) 2429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:37.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:37 vm03 bash[23382]: audit 2026-03-10T07:32:37.162279+0000 mon.a (mon.0) 2430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]': finished 2026-03-10T07:32:37.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:37 vm03 bash[23382]: audit 2026-03-10T07:32:37.162279+0000 mon.a (mon.0) 2430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]': finished 2026-03-10T07:32:37.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:37 vm00 bash[28005]: audit 2026-03-10T07:32:36.158872+0000 mon.a (mon.0) 2427 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]': finished 2026-03-10T07:32:37.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:37 vm00 bash[28005]: audit 2026-03-10T07:32:36.158872+0000 mon.a (mon.0) 2427 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]': finished 2026-03-10T07:32:37.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:37 vm00 bash[28005]: audit 2026-03-10T07:32:36.161805+0000 mon.b (mon.1) 374 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:37.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:37 vm00 bash[28005]: audit 2026-03-10T07:32:36.161805+0000 mon.b (mon.1) 374 : audit [INF] from='client.? 
192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:37.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:37 vm00 bash[28005]: cluster 2026-03-10T07:32:36.163734+0000 mon.a (mon.0) 2428 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-10T07:32:37.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:37 vm00 bash[28005]: cluster 2026-03-10T07:32:36.163734+0000 mon.a (mon.0) 2428 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-10T07:32:37.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:37 vm00 bash[28005]: audit 2026-03-10T07:32:36.164994+0000 mon.a (mon.0) 2429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:37.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:37 vm00 bash[28005]: audit 2026-03-10T07:32:36.164994+0000 mon.a (mon.0) 2429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:37.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:37 vm00 bash[28005]: audit 2026-03-10T07:32:37.162279+0000 mon.a (mon.0) 2430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]': finished 2026-03-10T07:32:37.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:37 vm00 bash[28005]: audit 2026-03-10T07:32:37.162279+0000 mon.a (mon.0) 2430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]': finished 2026-03-10T07:32:37.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:37 vm00 bash[20701]: audit 2026-03-10T07:32:36.158872+0000 mon.a (mon.0) 2427 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]': finished 2026-03-10T07:32:37.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:37 vm00 bash[20701]: audit 2026-03-10T07:32:36.158872+0000 mon.a (mon.0) 2427 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm00-59637-63"}]': finished 2026-03-10T07:32:37.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:37 vm00 bash[20701]: audit 2026-03-10T07:32:36.161805+0000 mon.b (mon.1) 374 : audit [INF] from='client.? 192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:37.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:37 vm00 bash[20701]: audit 2026-03-10T07:32:36.161805+0000 mon.b (mon.1) 374 : audit [INF] from='client.? 
192.168.123.100:0/3745773172' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:37.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:37 vm00 bash[20701]: cluster 2026-03-10T07:32:36.163734+0000 mon.a (mon.0) 2428 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-10T07:32:37.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:37 vm00 bash[20701]: cluster 2026-03-10T07:32:36.163734+0000 mon.a (mon.0) 2428 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-10T07:32:37.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:37 vm00 bash[20701]: audit 2026-03-10T07:32:36.164994+0000 mon.a (mon.0) 2429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:37.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:37 vm00 bash[20701]: audit 2026-03-10T07:32:36.164994+0000 mon.a (mon.0) 2429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]: dispatch 2026-03-10T07:32:37.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:37 vm00 bash[20701]: audit 2026-03-10T07:32:37.162279+0000 mon.a (mon.0) 2430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]': finished 2026-03-10T07:32:37.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:37 vm00 bash[20701]: audit 2026-03-10T07:32:37.162279+0000 mon.a (mon.0) 2430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm00-59637-63"}]': finished 2026-03-10T07:32:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:38 vm03 bash[23382]: cluster 2026-03-10T07:32:36.635080+0000 mgr.y (mgr.24407) 300 : cluster [DBG] pgmap v466: 292 pgs: 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:32:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:38 vm03 bash[23382]: cluster 2026-03-10T07:32:36.635080+0000 mgr.y (mgr.24407) 300 : cluster [DBG] pgmap v466: 292 pgs: 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:32:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:38 vm03 bash[23382]: cluster 2026-03-10T07:32:37.168757+0000 mon.a (mon.0) 2431 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-10T07:32:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:38 vm03 bash[23382]: cluster 2026-03-10T07:32:37.168757+0000 mon.a (mon.0) 2431 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-10T07:32:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:38 vm03 bash[23382]: audit 2026-03-10T07:32:37.184706+0000 mon.a (mon.0) 2432 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59637-64"}]: dispatch 2026-03-10T07:32:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:38 vm03 bash[23382]: audit 2026-03-10T07:32:37.184706+0000 mon.a (mon.0) 2432 : audit [INF] from='client.? 
2026-03-10T07:32:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:38 vm03 bash[23382]: audit 2026-03-10T07:32:37.189626+0000 mon.a (mon.0) 2433 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59637-64"}]: dispatch
2026-03-10T07:32:38.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:38 vm03 bash[23382]: audit 2026-03-10T07:32:37.191144+0000 mon.a (mon.0) 2434 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm00-59637-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:32:38.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:38 vm03 bash[23382]: audit 2026-03-10T07:32:38.188802+0000 mon.a (mon.0) 2435 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm00-59637-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:32:38.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:38 vm03 bash[23382]: cluster 2026-03-10T07:32:38.193711+0000 mon.a (mon.0) 2436 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in
2026-03-10T07:32:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:38 vm00 bash[28005]: cluster 2026-03-10T07:32:36.635080+0000 mgr.y (mgr.24407) 300 : cluster [DBG] pgmap v466: 292 pgs: 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:32:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:38 vm00 bash[28005]: cluster 2026-03-10T07:32:37.168757+0000 mon.a (mon.0) 2431 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in
2026-03-10T07:32:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:38 vm00 bash[28005]: audit 2026-03-10T07:32:37.184706+0000 mon.a (mon.0) 2432 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59637-64"}]: dispatch
2026-03-10T07:32:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:38 vm00 bash[28005]: audit 2026-03-10T07:32:37.189626+0000 mon.a (mon.0) 2433 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59637-64"}]: dispatch
2026-03-10T07:32:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:38 vm00 bash[28005]: audit 2026-03-10T07:32:37.191144+0000 mon.a (mon.0) 2434 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm00-59637-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:32:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:38 vm00 bash[28005]: audit 2026-03-10T07:32:38.188802+0000 mon.a (mon.0) 2435 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm00-59637-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:32:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:38 vm00 bash[28005]: cluster 2026-03-10T07:32:38.193711+0000 mon.a (mon.0) 2436 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in
2026-03-10T07:32:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:38 vm00 bash[20701]: cluster 2026-03-10T07:32:36.635080+0000 mgr.y (mgr.24407) 300 : cluster [DBG] pgmap v466: 292 pgs: 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:32:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:38 vm00 bash[20701]: cluster 2026-03-10T07:32:37.168757+0000 mon.a (mon.0) 2431 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in
2026-03-10T07:32:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:38 vm00 bash[20701]: audit 2026-03-10T07:32:37.184706+0000 mon.a (mon.0) 2432 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59637-64"}]: dispatch
2026-03-10T07:32:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:38 vm00 bash[20701]: audit 2026-03-10T07:32:37.189626+0000 mon.a (mon.0) 2433 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59637-64"}]: dispatch
2026-03-10T07:32:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:38 vm00 bash[20701]: audit 2026-03-10T07:32:37.191144+0000 mon.a (mon.0) 2434 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm00-59637-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:32:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:38 vm00 bash[20701]: audit 2026-03-10T07:32:38.188802+0000 mon.a (mon.0) 2435 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm00-59637-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:32:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:38 vm00 bash[20701]: cluster 2026-03-10T07:32:38.193711+0000 mon.a (mon.0) 2436 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in
2026-03-10T07:32:39.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:39 vm03 bash[23382]: audit 2026-03-10T07:32:38.194205+0000 mon.a (mon.0) 2437 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm00-59637-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm00-59637-64"}]: dispatch
2026-03-10T07:32:39.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:39 vm03 bash[23382]: cluster 2026-03-10T07:32:39.209200+0000 mon.a (mon.0) 2438 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in
2026-03-10T07:32:39.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:39 vm00 bash[28005]: audit 2026-03-10T07:32:38.194205+0000 mon.a (mon.0) 2437 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm00-59637-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm00-59637-64"}]: dispatch
2026-03-10T07:32:39.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:39 vm00 bash[28005]: cluster 2026-03-10T07:32:39.209200+0000 mon.a (mon.0) 2438 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in
2026-03-10T07:32:39.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:39 vm00 bash[20701]: audit 2026-03-10T07:32:38.194205+0000 mon.a (mon.0) 2437 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm00-59637-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm00-59637-64"}]: dispatch
2026-03-10T07:32:39.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:39 vm00 bash[20701]: cluster 2026-03-10T07:32:39.209200+0000 mon.a (mon.0) 2438 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in
2026-03-10T07:32:40.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:40 vm03 bash[23382]: cluster 2026-03-10T07:32:38.635363+0000 mgr.y (mgr.24407) 301 : cluster [DBG] pgmap v469: 292 pgs: 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:32:40.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:40 vm03 bash[23382]: audit 2026-03-10T07:32:39.449672+0000 mon.a (mon.0) 2439 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:32:40.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:40 vm03 bash[23382]: audit 2026-03-10T07:32:39.450353+0000 mon.c (mon.2) 276 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:32:40.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:40 vm03 bash[23382]: audit 2026-03-10T07:32:40.195790+0000 mon.a (mon.0) 2440 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm00-59637-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm00-59637-64"}]': finished
2026-03-10T07:32:40.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:40 vm03 bash[23382]: cluster 2026-03-10T07:32:40.205636+0000 mon.a (mon.0) 2441 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in
2026-03-10T07:32:40.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:40 vm00 bash[28005]: cluster 2026-03-10T07:32:38.635363+0000 mgr.y (mgr.24407) 301 : cluster [DBG] pgmap v469: 292 pgs: 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:32:40.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:40 vm00 bash[28005]: audit 2026-03-10T07:32:39.449672+0000 mon.a (mon.0) 2439 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:32:40.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:40 vm00 bash[28005]: audit 2026-03-10T07:32:39.450353+0000 mon.c (mon.2) 276 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:32:40.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:40 vm00 bash[28005]: audit 2026-03-10T07:32:40.195790+0000 mon.a (mon.0) 2440 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm00-59637-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm00-59637-64"}]': finished
2026-03-10T07:32:40.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:40 vm00 bash[28005]: cluster 2026-03-10T07:32:40.205636+0000 mon.a (mon.0) 2441 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in
2026-03-10T07:32:40.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:40 vm00 bash[20701]: cluster 2026-03-10T07:32:38.635363+0000 mgr.y (mgr.24407) 301 : cluster [DBG] pgmap v469: 292 pgs: 292 active+clean; 8.3 MiB data, 721 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:32:40.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:40 vm00 bash[20701]: audit 2026-03-10T07:32:39.449672+0000 mon.a (mon.0) 2439 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:32:40.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:40 vm00 bash[20701]: audit 2026-03-10T07:32:39.450353+0000 mon.c (mon.2) 276 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:32:40.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:40 vm00 bash[20701]: audit 2026-03-10T07:32:40.195790+0000 mon.a (mon.0) 2440 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm00-59637-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm00-59637-64"}]': finished
2026-03-10T07:32:40.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:40 vm00 bash[20701]: cluster 2026-03-10T07:32:40.205636+0000 mon.a (mon.0) 2441 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in
2026-03-10T07:32:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:41 vm00 bash[28005]: cluster 2026-03-10T07:32:41.155951+0000 mon.a (mon.0) 2442 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:32:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:41 vm00 bash[28005]: cluster 2026-03-10T07:32:41.203488+0000 mon.a (mon.0) 2443 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in
2026-03-10T07:32:41.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:32:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:32:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:32:41.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:41 vm00 bash[20701]: cluster 2026-03-10T07:32:41.155951+0000 mon.a (mon.0) 2442 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:32:41.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:41 vm00 bash[20701]: cluster 2026-03-10T07:32:41.203488+0000 mon.a (mon.0) 2443 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in
2026-03-10T07:32:41.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:41 vm03 bash[23382]: cluster 2026-03-10T07:32:41.155951+0000 mon.a (mon.0) 2442 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:32:41.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:41 vm03 bash[23382]: cluster 2026-03-10T07:32:41.203488+0000 mon.a (mon.0) 2443 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in
2026-03-10T07:32:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:42 vm03 bash[23382]: cluster 2026-03-10T07:32:40.635722+0000 mgr.y (mgr.24407) 302 : cluster [DBG] pgmap v472: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:32:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:42 vm03 bash[23382]: cluster 2026-03-10T07:32:42.215261+0000 mon.a (mon.0) 2444 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in
2026-03-10T07:32:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:42 vm03 bash[23382]: audit 2026-03-10T07:32:42.216566+0000 mon.a (mon.0) 2445 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59637-64"}]: dispatch
2026-03-10T07:32:42.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:42 vm00 bash[28005]: cluster 2026-03-10T07:32:40.635722+0000 mgr.y (mgr.24407) 302 : cluster [DBG] pgmap v472: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:32:42.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:42 vm00 bash[28005]: cluster 2026-03-10T07:32:42.215261+0000 mon.a (mon.0) 2444 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in
2026-03-10T07:32:42.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:42 vm00 bash[28005]: audit 2026-03-10T07:32:42.216566+0000 mon.a (mon.0) 2445 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59637-64"}]: dispatch
2026-03-10T07:32:42.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:42 vm00 bash[20701]: cluster 2026-03-10T07:32:40.635722+0000 mgr.y (mgr.24407) 302 : cluster [DBG] pgmap v472: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:32:42.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:42 vm00 bash[20701]: cluster 2026-03-10T07:32:42.215261+0000 mon.a (mon.0) 2444 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in
2026-03-10T07:32:42.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:42 vm00 bash[20701]: audit 2026-03-10T07:32:42.216566+0000 mon.a (mon.0) 2445 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59637-64"}]: dispatch
2026-03-10T07:32:43.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:32:43 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:32:43.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:43 vm03 bash[23382]: audit 2026-03-10T07:32:43.126823+0000 mon.b (mon.1) 375 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:32:43.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:43 vm03 bash[23382]: audit 2026-03-10T07:32:43.129052+0000 mon.a (mon.0) 2446 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:32:43.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:43 vm03 bash[23382]: audit 2026-03-10T07:32:43.206295+0000 mon.a (mon.0) 2447 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59637-64"}]': finished
2026-03-10T07:32:43.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:43 vm03 bash[23382]: audit 2026-03-10T07:32:43.206426+0000 mon.a (mon.0) 2448 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:32:43.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:43 vm03 bash[23382]: audit 2026-03-10T07:32:43.207540+0000 mon.b (mon.1) 376 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]: dispatch
2026-03-10T07:32:43.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:43 vm03 bash[23382]: cluster 2026-03-10T07:32:43.210293+0000 mon.a (mon.0) 2449 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in
2026-03-10T07:32:43.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:43 vm03 bash[23382]: audit 2026-03-10T07:32:43.210719+0000 mon.a (mon.0) 2450 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59637-64"}]: dispatch
2026-03-10T07:32:43.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:43 vm03 bash[23382]: audit 2026-03-10T07:32:43.217534+0000 mon.a (mon.0) 2451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:43 vm00 bash[28005]: audit 2026-03-10T07:32:43.126823+0000 mon.b (mon.1) 375 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:43 vm00 bash[28005]: audit 2026-03-10T07:32:43.126823+0000 mon.b (mon.1) 375 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:43 vm00 bash[28005]: audit 2026-03-10T07:32:43.129052+0000 mon.a (mon.0) 2446 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:43 vm00 bash[28005]: audit 2026-03-10T07:32:43.129052+0000 mon.a (mon.0) 2446 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:43 vm00 bash[28005]: audit 2026-03-10T07:32:43.206295+0000 mon.a (mon.0) 2447 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59637-64"}]': finished 2026-03-10T07:32:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:43 vm00 bash[28005]: audit 2026-03-10T07:32:43.206295+0000 mon.a (mon.0) 2447 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59637-64"}]': finished 2026-03-10T07:32:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:43 vm00 bash[28005]: audit 2026-03-10T07:32:43.206426+0000 mon.a (mon.0) 2448 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:32:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:43 vm00 bash[28005]: audit 2026-03-10T07:32:43.206426+0000 mon.a (mon.0) 2448 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:32:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:43 vm00 bash[28005]: audit 2026-03-10T07:32:43.207540+0000 mon.b (mon.1) 376 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:43 vm00 bash[28005]: audit 2026-03-10T07:32:43.207540+0000 mon.b (mon.1) 376 : audit [INF] from='client.? 
2026-03-10T07:32:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:43 vm00 bash[28005]: cluster 2026-03-10T07:32:43.210293+0000 mon.a (mon.0) 2449 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in
2026-03-10T07:32:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:43 vm00 bash[28005]: audit 2026-03-10T07:32:43.210719+0000 mon.a (mon.0) 2450 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59637-64"}]: dispatch
2026-03-10T07:32:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:43 vm00 bash[28005]: audit 2026-03-10T07:32:43.217534+0000 mon.a (mon.0) 2451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]: dispatch
2026-03-10T07:32:43.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:43 vm00 bash[20701]: audit 2026-03-10T07:32:43.126823+0000 mon.b (mon.1) 375 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:32:43.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:43 vm00 bash[20701]: audit 2026-03-10T07:32:43.129052+0000 mon.a (mon.0) 2446 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:32:43.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:43 vm00 bash[20701]: audit 2026-03-10T07:32:43.206295+0000 mon.a (mon.0) 2447 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm00-59637-64"}]': finished
2026-03-10T07:32:43.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:43 vm00 bash[20701]: audit 2026-03-10T07:32:43.206426+0000 mon.a (mon.0) 2448 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:32:43.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:43 vm00 bash[20701]: audit 2026-03-10T07:32:43.207540+0000 mon.b (mon.1) 376 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]: dispatch
2026-03-10T07:32:43.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:43 vm00 bash[20701]: cluster 2026-03-10T07:32:43.210293+0000 mon.a (mon.0) 2449 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in
2026-03-10T07:32:43.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:43 vm00 bash[20701]: audit 2026-03-10T07:32:43.210719+0000 mon.a (mon.0) 2450 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59637-64"}]: dispatch
2026-03-10T07:32:43.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:43 vm00 bash[20701]: audit 2026-03-10T07:32:43.217534+0000 mon.a (mon.0) 2451 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:44.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: cluster 2026-03-10T07:32:42.636046+0000 mgr.y (mgr.24407) 303 : cluster [DBG] pgmap v475: 292 pgs: 292 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:44.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: cluster 2026-03-10T07:32:42.636046+0000 mgr.y (mgr.24407) 303 : cluster [DBG] pgmap v475: 292 pgs: 292 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:44.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: audit 2026-03-10T07:32:43.158999+0000 mgr.y (mgr.24407) 304 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: audit 2026-03-10T07:32:43.158999+0000 mgr.y (mgr.24407) 304 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: cluster 2026-03-10T07:32:44.206204+0000 mon.a (mon.0) 2452 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:32:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: cluster 2026-03-10T07:32:44.206204+0000 mon.a (mon.0) 2452 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:32:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: audit 2026-03-10T07:32:44.209503+0000 mon.a (mon.0) 2453 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59637-64"}]': finished 2026-03-10T07:32:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: audit 2026-03-10T07:32:44.209503+0000 mon.a (mon.0) 2453 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59637-64"}]': finished 2026-03-10T07:32:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: audit 2026-03-10T07:32:44.209865+0000 mon.a (mon.0) 2454 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]': finished 2026-03-10T07:32:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: audit 2026-03-10T07:32:44.209865+0000 mon.a (mon.0) 2454 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]': finished 2026-03-10T07:32:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: cluster 2026-03-10T07:32:44.216303+0000 mon.a (mon.0) 2455 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-10T07:32:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: cluster 2026-03-10T07:32:44.216303+0000 mon.a (mon.0) 2455 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-10T07:32:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: audit 2026-03-10T07:32:44.233248+0000 mon.c (mon.2) 277 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: audit 2026-03-10T07:32:44.233248+0000 mon.c (mon.2) 277 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: audit 2026-03-10T07:32:44.233938+0000 mon.a (mon.0) 2456 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: audit 2026-03-10T07:32:44.233938+0000 mon.a (mon.0) 2456 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: audit 2026-03-10T07:32:44.234349+0000 mon.c (mon.2) 278 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: audit 2026-03-10T07:32:44.234349+0000 mon.c (mon.2) 278 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: audit 2026-03-10T07:32:44.239244+0000 mon.a (mon.0) 2457 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: audit 2026-03-10T07:32:44.239244+0000 mon.a (mon.0) 2457 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: audit 2026-03-10T07:32:44.239654+0000 mon.c (mon.2) 279 : audit [INF] from='client.? 
2026-03-10T07:32:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: audit 2026-03-10T07:32:44.239654+0000 mon.c (mon.2) 279 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59637-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:32:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:44 vm03 bash[23382]: audit 2026-03-10T07:32:44.239862+0000 mon.a (mon.0) 2458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59637-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:32:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:44 vm00 bash[28005]: cluster 2026-03-10T07:32:42.636046+0000 mgr.y (mgr.24407) 303 : cluster [DBG] pgmap v475: 292 pgs: 292 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:32:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:44 vm00 bash[28005]: audit 2026-03-10T07:32:43.158999+0000 mgr.y (mgr.24407) 304 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:32:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:44 vm00 bash[28005]: cluster 2026-03-10T07:32:44.206204+0000 mon.a (mon.0) 2452 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:32:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:44 vm00 bash[28005]: audit 2026-03-10T07:32:44.209503+0000 mon.a (mon.0) 2453 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59637-64"}]': finished
2026-03-10T07:32:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:44 vm00 bash[28005]: audit 2026-03-10T07:32:44.209865+0000 mon.a (mon.0) 2454 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]': finished
2026-03-10T07:32:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:44 vm00 bash[28005]: cluster 2026-03-10T07:32:44.216303+0000 mon.a (mon.0) 2455 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in
2026-03-10T07:32:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:44 vm00 bash[28005]: audit 2026-03-10T07:32:44.233248+0000 mon.c (mon.2) 277 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59637-65"}]: dispatch
2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:44 vm00 bash[28005]: audit 2026-03-10T07:32:44.233938+0000 mon.a (mon.0) 2456 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59637-65"}]: dispatch
2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:44 vm00 bash[28005]: audit 2026-03-10T07:32:44.234349+0000 mon.c (mon.2) 278 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59637-65"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:44 vm00 bash[28005]: audit 2026-03-10T07:32:44.239244+0000 mon.a (mon.0) 2457 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:44 vm00 bash[28005]: audit 2026-03-10T07:32:44.239654+0000 mon.c (mon.2) 279 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59637-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:44 vm00 bash[28005]: audit 2026-03-10T07:32:44.239654+0000 mon.c (mon.2) 279 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59637-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:44 vm00 bash[28005]: audit 2026-03-10T07:32:44.239862+0000 mon.a (mon.0) 2458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59637-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:44 vm00 bash[28005]: audit 2026-03-10T07:32:44.239862+0000 mon.a (mon.0) 2458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59637-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: cluster 2026-03-10T07:32:42.636046+0000 mgr.y (mgr.24407) 303 : cluster [DBG] pgmap v475: 292 pgs: 292 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: cluster 2026-03-10T07:32:42.636046+0000 mgr.y (mgr.24407) 303 : cluster [DBG] pgmap v475: 292 pgs: 292 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: audit 2026-03-10T07:32:43.158999+0000 mgr.y (mgr.24407) 304 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: audit 2026-03-10T07:32:43.158999+0000 mgr.y (mgr.24407) 304 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: cluster 2026-03-10T07:32:44.206204+0000 mon.a (mon.0) 2452 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: cluster 2026-03-10T07:32:44.206204+0000 mon.a (mon.0) 2452 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are 
missing hit_sets) 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: audit 2026-03-10T07:32:44.209503+0000 mon.a (mon.0) 2453 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59637-64"}]': finished 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: audit 2026-03-10T07:32:44.209503+0000 mon.a (mon.0) 2453 : audit [INF] from='client.? 192.168.123.100:0/545101067' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm00-59637-64"}]': finished 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: audit 2026-03-10T07:32:44.209865+0000 mon.a (mon.0) 2454 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]': finished 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: audit 2026-03-10T07:32:44.209865+0000 mon.a (mon.0) 2454 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]': finished 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: cluster 2026-03-10T07:32:44.216303+0000 mon.a (mon.0) 2455 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: cluster 2026-03-10T07:32:44.216303+0000 mon.a (mon.0) 2455 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: audit 2026-03-10T07:32:44.233248+0000 mon.c (mon.2) 277 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: audit 2026-03-10T07:32:44.233248+0000 mon.c (mon.2) 277 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: audit 2026-03-10T07:32:44.233938+0000 mon.a (mon.0) 2456 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: audit 2026-03-10T07:32:44.233938+0000 mon.a (mon.0) 2456 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: audit 2026-03-10T07:32:44.234349+0000 mon.c (mon.2) 278 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: audit 2026-03-10T07:32:44.234349+0000 mon.c (mon.2) 278 : audit [INF] from='client.? 
192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: audit 2026-03-10T07:32:44.239244+0000 mon.a (mon.0) 2457 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: audit 2026-03-10T07:32:44.239244+0000 mon.a (mon.0) 2457 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: audit 2026-03-10T07:32:44.239654+0000 mon.c (mon.2) 279 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59637-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: audit 2026-03-10T07:32:44.239654+0000 mon.c (mon.2) 279 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59637-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: audit 2026-03-10T07:32:44.239862+0000 mon.a (mon.0) 2458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59637-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:44 vm00 bash[20701]: audit 2026-03-10T07:32:44.239862+0000 mon.a (mon.0) 2458 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59637-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:32:45.214 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripWriteFullPP2::163:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:5d165639:::164:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:f43765fc:::165:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:b4c720e9:::166:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:e694b040:::167:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:afa38db2:::168:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:77ba9f53:::169:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:87495034:::170:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:7c96bf0e:::171:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:dbe346cc:::172:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:e943ec24:::173:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:f97a9c0c:::174:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:6f26e74d:::175:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:4f95e106:::176:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:0e6f2f8f:::177:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:05db05f1:::178:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:38a78d66:::179:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:d095610b:::180:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:a1a9d709:::181:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:1e5d39db:::182:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:f7df4fb9:::183:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:03a7f161:::184:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:ba70721e:::185:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:28e5662d:::186:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:973d52de:::187:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:4303eb1c:::188:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:b990b48e:::189:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:29b8165b:::190:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 
268:3547f197:::191:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:7e260936:::192:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:1abec7b1:::193:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:10fdda93:::194:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:15817eea:::195:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:770bab57:::196:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:ed9e13e7:::197:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:71471a8f:::198:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: checking for 268:10fb1d02:::199:head 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HitSetWrite (8030 ms) 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HitSetTrim 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773127921,0 2026-03-10T07:32:45.215 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: first is 1773127921 2026-03-10T07:32:45.216 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773127921,0 2026-03-10T07:32:45.216 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773127921,0 2026-03-10T07:32:45.216 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773127921,1773127923,1773127924,0 2026-03-10T07:32:45.216 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773127921,1773127923,1773127924,0 2026-03-10T07:32:45.216 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773127921,1773127923,1773127924,0 2026-03-10T07:32:45.216 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773127921,1773127923,1773127924,1773127926,1773127927,0 2026-03-10T07:32:45.216 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773127921,1773127923,1773127924,1773127926,1773127927,0 2026-03-10T07:32:45.216 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773127921,1773127923,1773127924,1773127926,1773127927,0 2026-03-10T07:32:45.216 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773127921,1773127923,1773127924,1773127926,1773127927,1773127929,1773127930,0 2026-03-10T07:32:45.216 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773127921,1773127923,1773127924,1773127926,1773127927,1773127929,1773127930,0 2026-03-10T07:32:45.216 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773127921,1773127923,1773127924,1773127926,1773127927,1773127929,1773127930,0 2026-03-10T07:32:45.216 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773127924,1773127926,1773127927,1773127929,1773127930,1773127932,1773127933,0 2026-03-10T07:32:45.216 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: first now 1773127924, trimmed 2026-03-10T07:32:45.216 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HitSetTrim (20535 ms) 2026-03-10T07:32:45.216 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteOn2ndRead 2026-03-10T07:32:45.216 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: foo0 2026-03-10T07:32:45.216 INFO:tasks.workunit.client.0.vm00.stdout: 
api_tier_pp: verifying foo0 is eventually promoted 2026-03-10T07:32:45.216 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteOn2ndRead (14045 ms) 2026-03-10T07:32:45.216 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ProxyRead 2026-03-10T07:32:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:45 vm03 bash[23382]: audit 2026-03-10T07:32:44.275395+0000 mon.b (mon.1) 377 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:45 vm03 bash[23382]: audit 2026-03-10T07:32:44.275395+0000 mon.b (mon.1) 377 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:45 vm03 bash[23382]: audit 2026-03-10T07:32:44.276191+0000 mon.b (mon.1) 378 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:45 vm03 bash[23382]: audit 2026-03-10T07:32:44.276191+0000 mon.b (mon.1) 378 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:45 vm03 bash[23382]: audit 2026-03-10T07:32:44.277409+0000 mon.a (mon.0) 2459 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:45 vm03 bash[23382]: audit 2026-03-10T07:32:44.277409+0000 mon.a (mon.0) 2459 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:45 vm03 bash[23382]: audit 2026-03-10T07:32:44.278153+0000 mon.a (mon.0) 2460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:45 vm03 bash[23382]: audit 2026-03-10T07:32:44.278153+0000 mon.a (mon.0) 2460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:45 vm03 bash[23382]: audit 2026-03-10T07:32:45.212640+0000 mon.a (mon.0) 2461 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59637-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:45 vm03 bash[23382]: audit 2026-03-10T07:32:45.212640+0000 mon.a (mon.0) 2461 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59637-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:45 vm03 bash[23382]: cluster 2026-03-10T07:32:45.219360+0000 mon.a (mon.0) 2462 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-10T07:32:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:45 vm03 bash[23382]: cluster 2026-03-10T07:32:45.219360+0000 mon.a (mon.0) 2462 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-10T07:32:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:45 vm03 bash[23382]: audit 2026-03-10T07:32:45.235257+0000 mon.c (mon.2) 280 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59637-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:45 vm03 bash[23382]: audit 2026-03-10T07:32:45.235257+0000 mon.c (mon.2) 280 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59637-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:45 vm03 bash[23382]: audit 2026-03-10T07:32:45.235582+0000 mon.a (mon.0) 2463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59637-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:45.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:45 vm03 bash[23382]: audit 2026-03-10T07:32:45.235582+0000 mon.a (mon.0) 2463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59637-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:45 vm00 bash[28005]: audit 2026-03-10T07:32:44.275395+0000 mon.b (mon.1) 377 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:45 vm00 bash[28005]: audit 2026-03-10T07:32:44.275395+0000 mon.b (mon.1) 377 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:45 vm00 bash[28005]: audit 2026-03-10T07:32:44.276191+0000 mon.b (mon.1) 378 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:45 vm00 bash[28005]: audit 2026-03-10T07:32:44.276191+0000 mon.b (mon.1) 378 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:45 vm00 bash[28005]: audit 2026-03-10T07:32:44.277409+0000 mon.a (mon.0) 2459 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:45 vm00 bash[28005]: audit 2026-03-10T07:32:44.277409+0000 mon.a (mon.0) 2459 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:45 vm00 bash[28005]: audit 2026-03-10T07:32:44.278153+0000 mon.a (mon.0) 2460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:45 vm00 bash[28005]: audit 2026-03-10T07:32:44.278153+0000 mon.a (mon.0) 2460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:45 vm00 bash[28005]: audit 2026-03-10T07:32:45.212640+0000 mon.a (mon.0) 2461 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59637-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:45 vm00 bash[28005]: audit 2026-03-10T07:32:45.212640+0000 mon.a (mon.0) 2461 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59637-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:45 vm00 bash[28005]: cluster 2026-03-10T07:32:45.219360+0000 mon.a (mon.0) 2462 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:45 vm00 bash[28005]: cluster 2026-03-10T07:32:45.219360+0000 mon.a (mon.0) 2462 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:45 vm00 bash[28005]: audit 2026-03-10T07:32:45.235257+0000 mon.c (mon.2) 280 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59637-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:45 vm00 bash[28005]: audit 2026-03-10T07:32:45.235257+0000 mon.c (mon.2) 280 : audit [INF] from='client.? 
192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59637-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:45 vm00 bash[28005]: audit 2026-03-10T07:32:45.235582+0000 mon.a (mon.0) 2463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59637-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:45 vm00 bash[28005]: audit 2026-03-10T07:32:45.235582+0000 mon.a (mon.0) 2463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59637-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:45 vm00 bash[20701]: audit 2026-03-10T07:32:44.275395+0000 mon.b (mon.1) 377 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:45 vm00 bash[20701]: audit 2026-03-10T07:32:44.275395+0000 mon.b (mon.1) 377 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:45 vm00 bash[20701]: audit 2026-03-10T07:32:44.276191+0000 mon.b (mon.1) 378 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:45 vm00 bash[20701]: audit 2026-03-10T07:32:44.276191+0000 mon.b (mon.1) 378 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:45 vm00 bash[20701]: audit 2026-03-10T07:32:44.277409+0000 mon.a (mon.0) 2459 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:45 vm00 bash[20701]: audit 2026-03-10T07:32:44.277409+0000 mon.a (mon.0) 2459 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:45 vm00 bash[20701]: audit 2026-03-10T07:32:44.278153+0000 mon.a (mon.0) 2460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:45 vm00 bash[20701]: audit 2026-03-10T07:32:44.278153+0000 mon.a (mon.0) 2460 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-49"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:45 vm00 bash[20701]: audit 2026-03-10T07:32:45.212640+0000 mon.a (mon.0) 2461 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59637-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:45 vm00 bash[20701]: audit 2026-03-10T07:32:45.212640+0000 mon.a (mon.0) 2461 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm00-59637-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:45 vm00 bash[20701]: cluster 2026-03-10T07:32:45.219360+0000 mon.a (mon.0) 2462 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:45 vm00 bash[20701]: cluster 2026-03-10T07:32:45.219360+0000 mon.a (mon.0) 2462 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:45 vm00 bash[20701]: audit 2026-03-10T07:32:45.235257+0000 mon.c (mon.2) 280 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59637-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:45 vm00 bash[20701]: audit 2026-03-10T07:32:45.235257+0000 mon.c (mon.2) 280 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59637-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:45 vm00 bash[20701]: audit 2026-03-10T07:32:45.235582+0000 mon.a (mon.0) 2463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59637-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:45 vm00 bash[20701]: audit 2026-03-10T07:32:45.235582+0000 mon.a (mon.0) 2463 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59637-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:46 vm03 bash[23382]: cluster 2026-03-10T07:32:44.636479+0000 mgr.y (mgr.24407) 305 : cluster [DBG] pgmap v478: 292 pgs: 292 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:46 vm03 bash[23382]: cluster 2026-03-10T07:32:44.636479+0000 mgr.y (mgr.24407) 305 : cluster [DBG] pgmap v478: 292 pgs: 292 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:46 vm03 bash[23382]: audit 2026-03-10T07:32:46.222407+0000 mon.b (mon.1) 379 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:46 vm03 bash[23382]: audit 2026-03-10T07:32:46.222407+0000 mon.b (mon.1) 379 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:46 vm03 bash[23382]: cluster 2026-03-10T07:32:46.241904+0000 mon.a (mon.0) 2464 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-10T07:32:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:46 vm03 bash[23382]: cluster 2026-03-10T07:32:46.241904+0000 mon.a (mon.0) 2464 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-10T07:32:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:46 vm03 bash[23382]: audit 2026-03-10T07:32:46.246464+0000 mon.a (mon.0) 2465 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:46 vm03 bash[23382]: audit 2026-03-10T07:32:46.246464+0000 mon.a (mon.0) 2465 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:46 vm00 bash[28005]: cluster 2026-03-10T07:32:44.636479+0000 mgr.y (mgr.24407) 305 : cluster [DBG] pgmap v478: 292 pgs: 292 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:46 vm00 bash[28005]: cluster 2026-03-10T07:32:44.636479+0000 mgr.y (mgr.24407) 305 : cluster [DBG] pgmap v478: 292 pgs: 292 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:46 vm00 bash[28005]: audit 2026-03-10T07:32:46.222407+0000 mon.b (mon.1) 379 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:46 vm00 bash[28005]: audit 2026-03-10T07:32:46.222407+0000 mon.b (mon.1) 379 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:46 vm00 bash[28005]: cluster 2026-03-10T07:32:46.241904+0000 mon.a (mon.0) 2464 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-10T07:32:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:46 vm00 bash[28005]: cluster 2026-03-10T07:32:46.241904+0000 mon.a (mon.0) 2464 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-10T07:32:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:46 vm00 bash[28005]: audit 2026-03-10T07:32:46.246464+0000 mon.a (mon.0) 2465 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:46 vm00 bash[28005]: audit 2026-03-10T07:32:46.246464+0000 mon.a (mon.0) 2465 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:46 vm00 bash[20701]: cluster 2026-03-10T07:32:44.636479+0000 mgr.y (mgr.24407) 305 : cluster [DBG] pgmap v478: 292 pgs: 292 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:46 vm00 bash[20701]: cluster 2026-03-10T07:32:44.636479+0000 mgr.y (mgr.24407) 305 : cluster [DBG] pgmap v478: 292 pgs: 292 active+clean; 8.3 MiB data, 725 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:32:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:46 vm00 bash[20701]: audit 2026-03-10T07:32:46.222407+0000 mon.b (mon.1) 379 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:46 vm00 bash[20701]: audit 2026-03-10T07:32:46.222407+0000 mon.b (mon.1) 379 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:46 vm00 bash[20701]: cluster 2026-03-10T07:32:46.241904+0000 mon.a (mon.0) 2464 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-10T07:32:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:46 vm00 bash[20701]: cluster 2026-03-10T07:32:46.241904+0000 mon.a (mon.0) 2464 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-10T07:32:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:46 vm00 bash[20701]: audit 2026-03-10T07:32:46.246464+0000 mon.a (mon.0) 2465 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:46 vm00 bash[20701]: audit 2026-03-10T07:32:46.246464+0000 mon.a (mon.0) 2465 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:32:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:48 vm03 bash[23382]: cluster 2026-03-10T07:32:46.636976+0000 mgr.y (mgr.24407) 306 : cluster [DBG] pgmap v481: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 726 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:48 vm03 bash[23382]: cluster 2026-03-10T07:32:46.636976+0000 mgr.y (mgr.24407) 306 : cluster [DBG] pgmap v481: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 726 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:48 vm03 bash[23382]: audit 2026-03-10T07:32:47.219993+0000 mon.a (mon.0) 2466 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59637-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59637-65"}]': finished 2026-03-10T07:32:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:48 vm03 bash[23382]: audit 2026-03-10T07:32:47.219993+0000 mon.a (mon.0) 2466 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59637-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59637-65"}]': finished 2026-03-10T07:32:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:48 vm03 bash[23382]: audit 2026-03-10T07:32:47.220059+0000 mon.a (mon.0) 2467 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:32:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:48 vm03 bash[23382]: audit 2026-03-10T07:32:47.220059+0000 mon.a (mon.0) 2467 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:32:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:48 vm03 bash[23382]: cluster 2026-03-10T07:32:47.237647+0000 mon.a (mon.0) 2468 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-10T07:32:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:48 vm03 bash[23382]: cluster 2026-03-10T07:32:47.237647+0000 mon.a (mon.0) 2468 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-10T07:32:48.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:48 vm03 bash[23382]: cluster 2026-03-10T07:32:47.255553+0000 mon.a (mon.0) 2469 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:48.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:48 vm03 bash[23382]: cluster 2026-03-10T07:32:47.255553+0000 mon.a (mon.0) 2469 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:48.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:48 vm03 bash[23382]: audit 2026-03-10T07:32:47.274470+0000 mon.b (mon.1) 380 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:48.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:48 vm03 bash[23382]: audit 2026-03-10T07:32:47.274470+0000 mon.b (mon.1) 380 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:48.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:48 vm03 bash[23382]: audit 2026-03-10T07:32:47.276522+0000 mon.a (mon.0) 2470 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:48.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:48 vm03 bash[23382]: audit 2026-03-10T07:32:47.276522+0000 mon.a (mon.0) 2470 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:48 vm00 bash[28005]: cluster 2026-03-10T07:32:46.636976+0000 mgr.y (mgr.24407) 306 : cluster [DBG] pgmap v481: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 726 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:48 vm00 bash[28005]: cluster 2026-03-10T07:32:46.636976+0000 mgr.y (mgr.24407) 306 : cluster [DBG] pgmap v481: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 726 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:48 vm00 bash[28005]: audit 2026-03-10T07:32:47.219993+0000 mon.a (mon.0) 2466 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59637-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59637-65"}]': finished 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:48 vm00 bash[28005]: audit 2026-03-10T07:32:47.219993+0000 mon.a (mon.0) 2466 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59637-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59637-65"}]': finished 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:48 vm00 bash[28005]: audit 2026-03-10T07:32:47.220059+0000 mon.a (mon.0) 2467 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:48 vm00 bash[28005]: audit 2026-03-10T07:32:47.220059+0000 mon.a (mon.0) 2467 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:48 vm00 bash[28005]: cluster 2026-03-10T07:32:47.237647+0000 mon.a (mon.0) 2468 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:48 vm00 bash[28005]: cluster 2026-03-10T07:32:47.237647+0000 mon.a (mon.0) 2468 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:48 vm00 bash[28005]: cluster 2026-03-10T07:32:47.255553+0000 mon.a (mon.0) 2469 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:48 vm00 bash[28005]: cluster 2026-03-10T07:32:47.255553+0000 mon.a (mon.0) 2469 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:48 vm00 bash[28005]: audit 2026-03-10T07:32:47.274470+0000 mon.b (mon.1) 380 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:48 vm00 bash[28005]: audit 2026-03-10T07:32:47.274470+0000 mon.b (mon.1) 380 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:48 vm00 bash[28005]: audit 2026-03-10T07:32:47.276522+0000 mon.a (mon.0) 2470 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:48 vm00 bash[28005]: audit 2026-03-10T07:32:47.276522+0000 mon.a (mon.0) 2470 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:48 vm00 bash[20701]: cluster 2026-03-10T07:32:46.636976+0000 mgr.y (mgr.24407) 306 : cluster [DBG] pgmap v481: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 726 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:48 vm00 bash[20701]: cluster 2026-03-10T07:32:46.636976+0000 mgr.y (mgr.24407) 306 : cluster [DBG] pgmap v481: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 726 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:48 vm00 bash[20701]: audit 2026-03-10T07:32:47.219993+0000 mon.a (mon.0) 2466 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59637-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59637-65"}]': finished 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:48 vm00 bash[20701]: audit 2026-03-10T07:32:47.219993+0000 mon.a (mon.0) 2466 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm00-59637-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm00-59637-65"}]': finished 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:48 vm00 bash[20701]: audit 2026-03-10T07:32:47.220059+0000 mon.a (mon.0) 2467 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:48 vm00 bash[20701]: audit 2026-03-10T07:32:47.220059+0000 mon.a (mon.0) 2467 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:48 vm00 bash[20701]: cluster 2026-03-10T07:32:47.237647+0000 mon.a (mon.0) 2468 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:48 vm00 bash[20701]: cluster 2026-03-10T07:32:47.237647+0000 mon.a (mon.0) 2468 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:48 vm00 bash[20701]: cluster 2026-03-10T07:32:47.255553+0000 mon.a (mon.0) 2469 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:48 vm00 bash[20701]: cluster 2026-03-10T07:32:47.255553+0000 mon.a (mon.0) 2469 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:48 vm00 bash[20701]: audit 2026-03-10T07:32:47.274470+0000 mon.b (mon.1) 380 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:48 vm00 bash[20701]: audit 2026-03-10T07:32:47.274470+0000 mon.b (mon.1) 380 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:48 vm00 bash[20701]: audit 2026-03-10T07:32:47.276522+0000 mon.a (mon.0) 2470 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:48 vm00 bash[20701]: audit 2026-03-10T07:32:47.276522+0000 mon.a (mon.0) 2470 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:32:49.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:49 vm03 bash[23382]: audit 2026-03-10T07:32:48.232250+0000 mon.a (mon.0) 2471 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:32:49.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:49 vm03 bash[23382]: audit 2026-03-10T07:32:48.232250+0000 mon.a (mon.0) 2471 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:32:49.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:49 vm03 bash[23382]: audit 2026-03-10T07:32:48.236148+0000 mon.b (mon.1) 381 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:32:49.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:49 vm03 bash[23382]: audit 2026-03-10T07:32:48.236148+0000 mon.b (mon.1) 381 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:32:49.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:49 vm03 bash[23382]: cluster 2026-03-10T07:32:48.241089+0000 mon.a (mon.0) 2472 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-10T07:32:49.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:49 vm03 bash[23382]: cluster 2026-03-10T07:32:48.241089+0000 mon.a (mon.0) 2472 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-10T07:32:49.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:49 vm03 bash[23382]: audit 2026-03-10T07:32:48.247008+0000 mon.a (mon.0) 2473 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:32:49.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:49 vm03 bash[23382]: audit 2026-03-10T07:32:48.247008+0000 mon.a (mon.0) 2473 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:32:49.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:49 vm03 bash[23382]: audit 2026-03-10T07:32:49.235316+0000 mon.a (mon.0) 2474 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-51"}]': finished 2026-03-10T07:32:49.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:49 vm03 bash[23382]: audit 2026-03-10T07:32:49.235316+0000 mon.a (mon.0) 2474 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-51"}]': finished 2026-03-10T07:32:49.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:49 vm03 bash[23382]: audit 2026-03-10T07:32:49.237772+0000 mon.b (mon.1) 382 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-51", "mode": "writeback"}]: dispatch 2026-03-10T07:32:49.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:49 vm03 bash[23382]: audit 2026-03-10T07:32:49.237772+0000 mon.b (mon.1) 382 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-51", "mode": "writeback"}]: dispatch 2026-03-10T07:32:49.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:49 vm03 bash[23382]: cluster 2026-03-10T07:32:49.239259+0000 mon.a (mon.0) 2475 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-10T07:32:49.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:49 vm03 bash[23382]: cluster 2026-03-10T07:32:49.239259+0000 mon.a (mon.0) 2475 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-10T07:32:49.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:49 vm03 bash[23382]: audit 2026-03-10T07:32:49.240853+0000 mon.c (mon.2) 281 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:49.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:49 vm03 bash[23382]: audit 2026-03-10T07:32:49.240853+0000 mon.c (mon.2) 281 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:49.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:49 vm03 bash[23382]: audit 2026-03-10T07:32:49.241100+0000 mon.a (mon.0) 2476 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:49.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:49 vm03 bash[23382]: audit 2026-03-10T07:32:49.241100+0000 mon.a (mon.0) 2476 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59637-65"}]: dispatch 2026-03-10T07:32:49.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:49 vm03 bash[23382]: audit 2026-03-10T07:32:49.243316+0000 mon.a (mon.0) 2477 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-51", "mode": "writeback"}]: dispatch 2026-03-10T07:32:49.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:49 vm03 bash[23382]: audit 2026-03-10T07:32:49.243316+0000 mon.a (mon.0) 2477 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-51", "mode": "writeback"}]: dispatch 2026-03-10T07:32:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:49 vm00 bash[28005]: audit 2026-03-10T07:32:48.232250+0000 mon.a (mon.0) 2471 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:32:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:49 vm00 bash[28005]: audit 2026-03-10T07:32:48.232250+0000 mon.a (mon.0) 2471 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:32:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:49 vm00 bash[28005]: audit 2026-03-10T07:32:48.236148+0000 mon.b (mon.1) 381 : audit [INF] from='client.? 
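Audit entries 2470-2477 above are the rados_api_tests workunit building a cache tier: pool test-rados-api-vm00-59782-51 is attached as a tier of base pool test-rados-api-vm00-59782-6, set as its overlay, and switched to writeback mode. A minimal sketch of the equivalent admin CLI, with the pool names taken from the log (assumes a host with a client.admin keyring):

    # attach the cache pool to the base pool, route client I/O through it,
    # then enable writeback caching on the tier
    ceph osd tier add test-rados-api-vm00-59782-6 test-rados-api-vm00-59782-51 --force-nonempty
    ceph osd tier set-overlay test-rados-api-vm00-59782-6 test-rados-api-vm00-59782-51
    ceph osd tier cache-mode test-rados-api-vm00-59782-51 writeback

Each command is audited as 'dispatch' on the monitor that received it (and again when forwarded to the leader, mon.a) and as 'finished' once the map update commits, which is why every step is followed by an osdmap epoch bump (e348 through e350 above).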
2026-03-10T07:32:50.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:50 vm03 bash[23382]: cluster 2026-03-10T07:32:48.637510+0000 mgr.y (mgr.24407) 307 : cluster [DBG] pgmap v484: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 726 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:32:50.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:50 vm03 bash[23382]: cluster 2026-03-10T07:32:50.235540+0000 mon.a (mon.0) 2478 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:32:51.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:32:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:32:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:32:51.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:51 vm03 bash[23382]: audit 2026-03-10T07:32:50.373753+0000 mon.a (mon.0) 2479 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm00-59637-65"}]': finished
2026-03-10T07:32:51.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:51 vm03 bash[23382]: audit 2026-03-10T07:32:50.373919+0000 mon.a (mon.0) 2480 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-51", "mode": "writeback"}]': finished
2026-03-10T07:32:51.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:51 vm03 bash[23382]: audit 2026-03-10T07:32:50.380428+0000 mon.c (mon.2) 282 : audit [INF] from='client.? 192.168.123.100:0/67571121' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59637-65"}]: dispatch
2026-03-10T07:32:51.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:51 vm03 bash[23382]: cluster 2026-03-10T07:32:50.380945+0000 mon.a (mon.0) 2481 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in
2026-03-10T07:32:51.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:51 vm03 bash[23382]: audit 2026-03-10T07:32:50.389922+0000 mon.a (mon.0) 2482 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59637-65"}]: dispatch
2026-03-10T07:32:51.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:51 vm03 bash[23382]: audit 2026-03-10T07:32:50.426220+0000 mon.b (mon.1) 383 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T07:32:51.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:51 vm03 bash[23382]: audit 2026-03-10T07:32:50.428321+0000 mon.a (mon.0) 2483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T07:32:51.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:51 vm03 bash[23382]: cluster 2026-03-10T07:32:50.637837+0000 mgr.y (mgr.24407) 308 : cluster [DBG] pgmap v487: 292 pgs: 292 active+clean; 8.3 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s
2026-03-10T07:32:51.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:51 vm03 bash[23382]: audit 2026-03-10T07:32:51.377089+0000 mon.a (mon.0) 2484 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm00-59637-65"}]': finished
2026-03-10T07:32:51.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:51 vm03 bash[23382]: audit 2026-03-10T07:32:51.377139+0000 mon.a (mon.0) 2485 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "hit_set_count","val": "2"}]': finished
2026-03-10T07:32:51.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:51 vm03 bash[23382]: cluster 2026-03-10T07:32:51.380614+0000 mon.a (mon.0) 2486 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in
2026-03-10T07:32:51.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:51 vm03 bash[23382]: audit 2026-03-10T07:32:51.383815+0000 mon.b (mon.1) 384 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T07:32:51.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:51 vm03 bash[23382]: audit 2026-03-10T07:32:51.387284+0000 mon.a (mon.0) 2487 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "hit_set_period","val": "600"}]: dispatch
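Entries 2483-2487 are the follow-up to the CACHE_POOL_NO_HIT_SET warning raised at entry 2478: the test configures hit sets on the new cache pool (hit_set_type is set to bloom a moment later, at entries 385/2494 below). Roughly the equivalent CLI, using the values from the audit records:

    # track recently accessed objects so the tiering agent can make eviction decisions
    ceph osd pool set test-rados-api-vm00-59782-51 hit_set_count 2
    ceph osd pool set test-rados-api-vm00-59782-51 hit_set_period 600
    ceph osd pool set test-rados-api-vm00-59782-51 hit_set_type bloom

With these settings the cache pool keeps two bloom-filter hit sets covering 600 seconds each, which is what clears the health warning.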
2026-03-10T07:32:52.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:52 vm03 bash[23382]: audit 2026-03-10T07:32:51.406046+0000 mon.c (mon.2) 283 : audit [INF] from='client.? 192.168.123.100:0/772410834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59637-66"}]: dispatch
2026-03-10T07:32:52.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:52 vm03 bash[23382]: audit 2026-03-10T07:32:51.419624+0000 mon.a (mon.0) 2488 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59637-66"}]: dispatch
2026-03-10T07:32:52.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:52 vm03 bash[23382]: audit 2026-03-10T07:32:51.420291+0000 mon.c (mon.2) 284 : audit [INF] from='client.? 192.168.123.100:0/772410834' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59637-66"}]: dispatch
2026-03-10T07:32:52.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:52 vm03 bash[23382]: audit 2026-03-10T07:32:51.420424+0000 mon.a (mon.0) 2489 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59637-66"}]: dispatch
2026-03-10T07:32:52.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:52 vm03 bash[23382]: audit 2026-03-10T07:32:51.420908+0000 mon.c (mon.2) 285 : audit [INF] from='client.? 192.168.123.100:0/772410834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm00-59637-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:32:52.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:52 vm03 bash[23382]: audit 2026-03-10T07:32:51.421038+0000 mon.a (mon.0) 2490 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm00-59637-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:32:52.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:52 vm03 bash[23382]: audit 2026-03-10T07:32:52.380653+0000 mon.a (mon.0) 2491 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "hit_set_period","val": "600"}]': finished
2026-03-10T07:32:52.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:52 vm03 bash[23382]: audit 2026-03-10T07:32:52.380704+0000 mon.a (mon.0) 2492 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm00-59637-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:32:52.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:52 vm03 bash[23382]: cluster 2026-03-10T07:32:52.383189+0000 mon.a (mon.0) 2493 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in
2026-03-10T07:32:52.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:52 vm03 bash[23382]: audit 2026-03-10T07:32:52.385780+0000 mon.b (mon.1) 385 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T07:32:52.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:52 vm03 bash[23382]: audit 2026-03-10T07:32:52.389637+0000 mon.a (mon.0) 2494 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T07:32:52.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:52 vm03 bash[23382]: audit 2026-03-10T07:32:52.390384+0000 mon.c (mon.2) 286 : audit [INF] from='client.? 192.168.123.100:0/772410834' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm00-59637-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm00-59637-66"}]: dispatch
2026-03-10T07:32:52.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:52 vm03 bash[23382]: audit 2026-03-10T07:32:52.390617+0000 mon.a (mon.0) 2495 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm00-59637-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm00-59637-66"}]: dispatch
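Interleaved with the cache-tier work, a second test client (192.168.123.100:0/772410834) is recycling an erasure-coded pool: entries 283-286 remove the stale MultiWritePP profile and CRUSH rule, define a k=2/m=1 profile with an osd failure domain, and create an 8-PG erasure pool from it. A sketch of the same sequence via the CLI, with all names taken from the log:

    # rebuild the EC profile from scratch, then create the pool on top of it
    ceph osd erasure-code-profile rm testprofile-MultiWritePP_vm00-59637-66
    ceph osd crush rule rm MultiWritePP_vm00-59637-66
    ceph osd erasure-code-profile set testprofile-MultiWritePP_vm00-59637-66 k=2 m=1 crush-failure-domain=osd
    ceph osd pool create MultiWritePP_vm00-59637-66 8 8 erasure testprofile-MultiWritePP_vm00-59637-66

With k=2 and m=1 each object is stored as three shards and survives the loss of any one; crush-failure-domain=osd is what lets those three shards be placed on this 8-OSD, two-host (vm00/vm03) cluster, where a host failure domain would need three hosts.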
2026-03-10T07:32:52.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:52 vm00 bash[20701]: audit 2026-03-10T07:32:52.380704+0000 mon.a (mon.0) 2492 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm00-59637-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:32:52.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:52 vm00 bash[20701]: cluster 2026-03-10T07:32:52.383189+0000 mon.a (mon.0) 2493 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-10T07:32:52.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:52 vm00 bash[20701]: cluster 2026-03-10T07:32:52.383189+0000 mon.a (mon.0) 2493 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-10T07:32:52.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:52 vm00 bash[20701]: audit 2026-03-10T07:32:52.385780+0000 mon.b (mon.1) 385 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:32:52.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:52 vm00 bash[20701]: audit 2026-03-10T07:32:52.385780+0000 mon.b (mon.1) 385 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:32:52.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:52 vm00 bash[20701]: audit 2026-03-10T07:32:52.389637+0000 mon.a (mon.0) 2494 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:32:52.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:52 vm00 bash[20701]: audit 2026-03-10T07:32:52.389637+0000 mon.a (mon.0) 2494 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:32:52.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:52 vm00 bash[20701]: audit 2026-03-10T07:32:52.390384+0000 mon.c (mon.2) 286 : audit [INF] from='client.? 192.168.123.100:0/772410834' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm00-59637-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm00-59637-66"}]: dispatch 2026-03-10T07:32:52.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:52 vm00 bash[20701]: audit 2026-03-10T07:32:52.390384+0000 mon.c (mon.2) 286 : audit [INF] from='client.? 192.168.123.100:0/772410834' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm00-59637-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm00-59637-66"}]: dispatch 2026-03-10T07:32:52.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:52 vm00 bash[20701]: audit 2026-03-10T07:32:52.390617+0000 mon.a (mon.0) 2495 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm00-59637-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm00-59637-66"}]: dispatch 2026-03-10T07:32:52.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:52 vm00 bash[20701]: audit 2026-03-10T07:32:52.390617+0000 mon.a (mon.0) 2495 : audit [INF] from='client.? 
2026-03-10T07:32:53.411 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:32:53 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:32:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:53 vm03 bash[23382]: cluster 2026-03-10T07:32:52.638210+0000 mgr.y (mgr.24407) 309 : cluster [DBG] pgmap v490: 292 pgs: 292 active+clean; 8.3 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s
2026-03-10T07:32:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:53 vm03 bash[23382]: audit 2026-03-10T07:32:53.163743+0000 mgr.y (mgr.24407) 310 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:32:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:53 vm03 bash[23382]: cluster 2026-03-10T07:32:53.381396+0000 mon.a (mon.0) 2496 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:32:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:53 vm03 bash[23382]: audit 2026-03-10T07:32:53.386734+0000 mon.a (mon.0) 2497 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "hit_set_type","val": "bloom"}]': finished
2026-03-10T07:32:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:53 vm03 bash[23382]: audit 2026-03-10T07:32:53.389783+0000 mon.b (mon.1) 386 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch
2026-03-10T07:32:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:53 vm03 bash[23382]: cluster 2026-03-10T07:32:53.397768+0000 mon.a (mon.0) 2498 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in
2026-03-10T07:32:53.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:53 vm03 bash[23382]: audit 2026-03-10T07:32:53.401752+0000 mon.a (mon.0) 2499 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch
2026-03-10T07:32:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:53 vm00 bash[28005]: cluster 2026-03-10T07:32:52.638210+0000 mgr.y (mgr.24407) 309 : cluster [DBG] pgmap v490: 292 pgs: 292 active+clean; 8.3 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s
2026-03-10T07:32:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:53 vm00 bash[28005]: audit 2026-03-10T07:32:53.163743+0000 mgr.y (mgr.24407) 310 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:32:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:53 vm00 bash[28005]: cluster 2026-03-10T07:32:53.381396+0000 mon.a (mon.0) 2496 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:32:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:53 vm00 bash[28005]: audit 2026-03-10T07:32:53.386734+0000 mon.a (mon.0) 2497 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "hit_set_type","val": "bloom"}]': finished
2026-03-10T07:32:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:53 vm00 bash[28005]: audit 2026-03-10T07:32:53.389783+0000 mon.b (mon.1) 386 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch
2026-03-10T07:32:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:53 vm00 bash[28005]: cluster 2026-03-10T07:32:53.397768+0000 mon.a (mon.0) 2498 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in
2026-03-10T07:32:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:53 vm00 bash[28005]: audit 2026-03-10T07:32:53.401752+0000 mon.a (mon.0) 2499 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch
2026-03-10T07:32:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:53 vm00 bash[20701]: cluster 2026-03-10T07:32:52.638210+0000 mgr.y (mgr.24407) 309 : cluster [DBG] pgmap v490: 292 pgs: 292 active+clean; 8.3 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s
2026-03-10T07:32:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:53 vm00 bash[20701]: audit 2026-03-10T07:32:53.163743+0000 mgr.y (mgr.24407) 310 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:32:53.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:53 vm00 bash[20701]: cluster 2026-03-10T07:32:53.381396+0000 mon.a (mon.0) 2496 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:32:53.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:53 vm00 bash[20701]: audit 2026-03-10T07:32:53.386734+0000 mon.a (mon.0) 2497 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "hit_set_type","val": "bloom"}]': finished
2026-03-10T07:32:53.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:53 vm00 bash[20701]: audit 2026-03-10T07:32:53.389783+0000 mon.b (mon.1) 386 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch
2026-03-10T07:32:53.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:53 vm00 bash[20701]: cluster 2026-03-10T07:32:53.397768+0000 mon.a (mon.0) 2498 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in
2026-03-10T07:32:53.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:53 vm00 bash[20701]: audit 2026-03-10T07:32:53.401752+0000 mon.a (mon.0) 2499 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch
2026-03-10T07:32:55.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:55 vm03 bash[23382]: audit 2026-03-10T07:32:54.390467+0000 mon.a (mon.0) 2500 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm00-59637-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm00-59637-66"}]': finished
2026-03-10T07:32:55.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:55 vm03 bash[23382]: audit 2026-03-10T07:32:54.390518+0000 mon.a (mon.0) 2501 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "min_read_recency_for_promote","val": "1"}]': finished
2026-03-10T07:32:55.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:55 vm03 bash[23382]: audit 2026-03-10T07:32:54.394362+0000 mon.b (mon.1) 387 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "target_max_objects","val": "1"}]: dispatch
2026-03-10T07:32:55.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:55 vm03 bash[23382]: cluster 2026-03-10T07:32:54.402639+0000 mon.a (mon.0) 2502 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in
2026-03-10T07:32:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:55 vm03 bash[23382]: audit 2026-03-10T07:32:54.415285+0000 mon.a (mon.0) 2503 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "target_max_objects","val": "1"}]: dispatch
2026-03-10T07:32:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:55 vm03 bash[23382]: audit 2026-03-10T07:32:54.460652+0000 mon.a (mon.0) 2504 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:32:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:55 vm03 bash[23382]: audit 2026-03-10T07:32:54.462888+0000 mon.c (mon.2) 287 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:32:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:55 vm03 bash[23382]: cluster 2026-03-10T07:32:54.638797+0000 mgr.y (mgr.24407) 311 : cluster [DBG] pgmap v493: 300 pgs: 1 creating+peering, 7 unknown, 292 active+clean; 8.3 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 op/s
2026-03-10T07:32:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:55 vm00 bash[28005]: audit 2026-03-10T07:32:54.390467+0000 mon.a (mon.0) 2500 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm00-59637-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm00-59637-66"}]': finished
2026-03-10T07:32:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:55 vm00 bash[28005]: audit 2026-03-10T07:32:54.390518+0000 mon.a (mon.0) 2501 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "min_read_recency_for_promote","val": "1"}]': finished
2026-03-10T07:32:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:55 vm00 bash[28005]: audit 2026-03-10T07:32:54.394362+0000 mon.b (mon.1) 387 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "target_max_objects","val": "1"}]: dispatch
2026-03-10T07:32:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:55 vm00 bash[28005]: cluster 2026-03-10T07:32:54.402639+0000 mon.a (mon.0) 2502 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in
2026-03-10T07:32:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:55 vm00 bash[28005]: audit 2026-03-10T07:32:54.415285+0000 mon.a (mon.0) 2503 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "target_max_objects","val": "1"}]: dispatch
2026-03-10T07:32:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:55 vm00 bash[28005]: audit 2026-03-10T07:32:54.460652+0000 mon.a (mon.0) 2504 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:32:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:55 vm00 bash[28005]: audit 2026-03-10T07:32:54.462888+0000 mon.c (mon.2) 287 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:32:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:55 vm00 bash[28005]: cluster 2026-03-10T07:32:54.638797+0000 mgr.y (mgr.24407) 311 : cluster [DBG] pgmap v493: 300 pgs: 1 creating+peering, 7 unknown, 292 active+clean; 8.3 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 op/s
2026-03-10T07:32:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:55 vm00 bash[20701]: audit 2026-03-10T07:32:54.390467+0000 mon.a (mon.0) 2500 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm00-59637-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm00-59637-66"}]': finished
2026-03-10T07:32:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:55 vm00 bash[20701]: audit 2026-03-10T07:32:54.390518+0000 mon.a (mon.0) 2501 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "min_read_recency_for_promote","val": "1"}]': finished
2026-03-10T07:32:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:55 vm00 bash[20701]: audit 2026-03-10T07:32:54.394362+0000 mon.b (mon.1) 387 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "target_max_objects","val": "1"}]: dispatch
2026-03-10T07:32:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:55 vm00 bash[20701]: cluster 2026-03-10T07:32:54.402639+0000 mon.a (mon.0) 2502 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in
2026-03-10T07:32:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:55 vm00 bash[20701]: audit 2026-03-10T07:32:54.415285+0000 mon.a (mon.0) 2503 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "target_max_objects","val": "1"}]: dispatch
2026-03-10T07:32:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:55 vm00 bash[20701]: audit 2026-03-10T07:32:54.460652+0000 mon.a (mon.0) 2504 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:32:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:55 vm00 bash[20701]: audit 2026-03-10T07:32:54.462888+0000 mon.c (mon.2) 287 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:32:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:55 vm00 bash[20701]: cluster 2026-03-10T07:32:54.638797+0000 mgr.y (mgr.24407) 311 : cluster [DBG] pgmap v493: 300 pgs: 1 creating+peering, 7 unknown, 292 active+clean; 8.3 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 op/s
2026-03-10T07:32:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:56 vm03 bash[23382]: audit 2026-03-10T07:32:55.393536+0000 mon.a (mon.0) 2505 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "target_max_objects","val": "1"}]': finished
2026-03-10T07:32:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:56 vm03 bash[23382]: cluster 2026-03-10T07:32:55.401112+0000 mon.a (mon.0) 2506 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in
2026-03-10T07:32:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:56 vm03 bash[23382]: cluster 2026-03-10T07:32:55.459054+0000 mon.a (mon.0) 2507 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:32:56.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:56 vm00 bash[28005]: audit 2026-03-10T07:32:55.393536+0000 mon.a (mon.0) 2505 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "target_max_objects","val": "1"}]': finished
2026-03-10T07:32:56.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:56 vm00 bash[28005]: cluster 2026-03-10T07:32:55.401112+0000 mon.a (mon.0) 2506 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in
2026-03-10T07:32:56.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:56 vm00 bash[28005]: cluster 2026-03-10T07:32:55.459054+0000 mon.a (mon.0) 2507 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:32:56.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:56 vm00 bash[20701]: audit 2026-03-10T07:32:55.393536+0000 mon.a (mon.0) 2505 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-51","var": "target_max_objects","val": "1"}]': finished
2026-03-10T07:32:56.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:56 vm00 bash[20701]: cluster 2026-03-10T07:32:55.401112+0000 mon.a (mon.0) 2506 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in
2026-03-10T07:32:56.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:56 vm00 bash[20701]: cluster 2026-03-10T07:32:55.459054+0000 mon.a (mon.0) 2507 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:32:57.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:57 vm03 bash[23382]: cluster 2026-03-10T07:32:56.430533+0000 mon.a (mon.0) 2508 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in
2026-03-10T07:32:57.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:57 vm03 bash[23382]: audit 2026-03-10T07:32:56.432548+0000 mon.c (mon.2) 288 : audit [INF] from='client.? 192.168.123.100:0/772410834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59637-66"}]: dispatch
2026-03-10T07:32:57.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:57 vm03 bash[23382]: audit 2026-03-10T07:32:56.433539+0000 mon.a (mon.0) 2509 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59637-66"}]: dispatch
2026-03-10T07:32:57.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:57 vm03 bash[23382]: cluster 2026-03-10T07:32:56.639152+0000 mgr.y (mgr.24407) 312 : cluster [DBG] pgmap v496: 292 pgs: 292 active+clean; 8.3 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
2026-03-10T07:32:57.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:57 vm00 bash[28005]: cluster 2026-03-10T07:32:56.430533+0000 mon.a (mon.0) 2508 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in
2026-03-10T07:32:57.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:57 vm00 bash[28005]: audit 2026-03-10T07:32:56.432548+0000 mon.c (mon.2) 288 : audit [INF] from='client.? 192.168.123.100:0/772410834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59637-66"}]: dispatch
2026-03-10T07:32:57.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:57 vm00 bash[28005]: audit 2026-03-10T07:32:56.433539+0000 mon.a (mon.0) 2509 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59637-66"}]: dispatch
2026-03-10T07:32:57.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:57 vm00 bash[28005]: cluster 2026-03-10T07:32:56.639152+0000 mgr.y (mgr.24407) 312 : cluster [DBG] pgmap v496: 292 pgs: 292 active+clean; 8.3 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
2026-03-10T07:32:57.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:57 vm00 bash[20701]: cluster 2026-03-10T07:32:56.430533+0000 mon.a (mon.0) 2508 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in
2026-03-10T07:32:57.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:57 vm00 bash[20701]: audit 2026-03-10T07:32:56.432548+0000 mon.c (mon.2) 288 : audit [INF] from='client.? 192.168.123.100:0/772410834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59637-66"}]: dispatch
2026-03-10T07:32:57.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:57 vm00 bash[20701]: audit 2026-03-10T07:32:56.433539+0000 mon.a (mon.0) 2509 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59637-66"}]: dispatch
2026-03-10T07:32:57.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:57 vm00 bash[20701]: cluster 2026-03-10T07:32:56.639152+0000 mgr.y (mgr.24407) 312 : cluster [DBG] pgmap v496: 292 pgs: 292 active+clean; 8.3 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
2026-03-10T07:32:58.450 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [
2026-03-10T07:32:58.451 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripWriteFullPP2 (3048 ms)
2026-03-10T07:32:58.451 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.SimpleStatPP
2026-03-10T07:32:58.451 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.SimpleStatPP (7026 ms)
2026-03-10T07:32:58.451 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.SimpleStatPPNS
2026-03-10T07:32:58.451 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.SimpleStatPPNS (7055 ms)
2026-03-10T07:32:58.451 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.StatRemovePP
2026-03-10T07:32:58.451 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.StatRemovePP (7311 ms)
2026-03-10T07:32:58.451 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.ExecuteClassPP
2026-03-10T07:32:58.451 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.ExecuteClassPP (7048 ms)
2026-03-10T07:32:58.451 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.OmapPP
2026-03-10T07:32:58.451 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.OmapPP (7171 ms)
2026-03-10T07:32:58.451 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.MultiWritePP
2026-03-10T07:32:58.451 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ OK ] LibRadosAioEC.MultiWritePP (7066 ms)
2026-03-10T07:32:58.451 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [----------] 20 tests from LibRadosAioEC (140935 ms total)
2026-03-10T07:32:58.451 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp:
2026-03-10T07:32:58.451 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [----------] Global test environment tear-down
2026-03-10T07:32:58.451 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [==========] 57 tests from 4 test suites ran. (294479 ms total)
2026-03-10T07:32:58.451 INFO:tasks.workunit.client.0.vm00.stdout: api_aio_pp: [ PASSED ] 57 tests.
2026-03-10T07:32:58.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:58 vm03 bash[23382]: cluster 2026-03-10T07:32:57.422840+0000 mon.a (mon.0) 2510 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL)
2026-03-10T07:32:58.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:58 vm03 bash[23382]: audit 2026-03-10T07:32:57.431005+0000 mon.a (mon.0) 2511 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59637-66"}]': finished
2026-03-10T07:32:58.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:58 vm03 bash[23382]: cluster 2026-03-10T07:32:57.438233+0000 mon.a (mon.0) 2512 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in
2026-03-10T07:32:58.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:58 vm03 bash[23382]: audit 2026-03-10T07:32:57.438426+0000 mon.c (mon.2) 289 : audit [INF] from='client.? 192.168.123.100:0/772410834' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59637-66"}]: dispatch
2026-03-10T07:32:58.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:58 vm03 bash[23382]: audit 2026-03-10T07:32:57.444821+0000 mon.a (mon.0) 2513 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59637-66"}]: dispatch
2026-03-10T07:32:58.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:58 vm00 bash[28005]: cluster 2026-03-10T07:32:57.422840+0000 mon.a (mon.0) 2510 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL)
2026-03-10T07:32:58.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:58 vm00 bash[28005]: audit 2026-03-10T07:32:57.431005+0000 mon.a (mon.0) 2511 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59637-66"}]': finished
2026-03-10T07:32:58.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:58 vm00 bash[28005]: cluster 2026-03-10T07:32:57.438233+0000 mon.a (mon.0) 2512 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in
2026-03-10T07:32:58.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:58 vm00 bash[28005]: audit 2026-03-10T07:32:57.438426+0000 mon.c (mon.2) 289 : audit [INF] from='client.? 192.168.123.100:0/772410834' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59637-66"}]: dispatch
2026-03-10T07:32:58.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:58 vm00 bash[28005]: audit 2026-03-10T07:32:57.444821+0000 mon.a (mon.0) 2513 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59637-66"}]: dispatch
2026-03-10T07:32:58.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:58 vm00 bash[20701]: cluster 2026-03-10T07:32:57.422840+0000 mon.a (mon.0) 2510 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL)
2026-03-10T07:32:58.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:58 vm00 bash[20701]: audit 2026-03-10T07:32:57.431005+0000 mon.a (mon.0) 2511 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59637-66"}]': finished
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm00-59637-66"}]': finished 2026-03-10T07:32:58.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:58 vm00 bash[20701]: cluster 2026-03-10T07:32:57.438233+0000 mon.a (mon.0) 2512 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in 2026-03-10T07:32:58.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:58 vm00 bash[20701]: cluster 2026-03-10T07:32:57.438233+0000 mon.a (mon.0) 2512 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in 2026-03-10T07:32:58.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:58 vm00 bash[20701]: audit 2026-03-10T07:32:57.438426+0000 mon.c (mon.2) 289 : audit [INF] from='client.? 192.168.123.100:0/772410834' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59637-66"}]: dispatch 2026-03-10T07:32:58.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:58 vm00 bash[20701]: audit 2026-03-10T07:32:57.438426+0000 mon.c (mon.2) 289 : audit [INF] from='client.? 192.168.123.100:0/772410834' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59637-66"}]: dispatch 2026-03-10T07:32:58.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:58 vm00 bash[20701]: audit 2026-03-10T07:32:57.444821+0000 mon.a (mon.0) 2513 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59637-66"}]: dispatch 2026-03-10T07:32:58.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:58 vm00 bash[20701]: audit 2026-03-10T07:32:57.444821+0000 mon.a (mon.0) 2513 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59637-66"}]: dispatch 2026-03-10T07:32:59.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:59 vm03 bash[23382]: audit 2026-03-10T07:32:58.433761+0000 mon.a (mon.0) 2514 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59637-66"}]': finished 2026-03-10T07:32:59.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:59 vm03 bash[23382]: audit 2026-03-10T07:32:58.433761+0000 mon.a (mon.0) 2514 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59637-66"}]': finished 2026-03-10T07:32:59.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:59 vm03 bash[23382]: cluster 2026-03-10T07:32:58.436600+0000 mon.a (mon.0) 2515 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-10T07:32:59.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:59 vm03 bash[23382]: cluster 2026-03-10T07:32:58.436600+0000 mon.a (mon.0) 2515 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-10T07:32:59.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:59 vm03 bash[23382]: cluster 2026-03-10T07:32:58.639454+0000 mgr.y (mgr.24407) 313 : cluster [DBG] pgmap v499: 292 pgs: 292 active+clean; 8.3 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T07:32:59.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:32:59 vm03 bash[23382]: cluster 2026-03-10T07:32:58.639454+0000 mgr.y (mgr.24407) 313 : cluster [DBG] pgmap v499: 292 pgs: 292 active+clean; 8.3 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T07:32:59.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:59 vm00 bash[28005]: audit 2026-03-10T07:32:58.433761+0000 mon.a (mon.0) 2514 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59637-66"}]': finished 2026-03-10T07:32:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:59 vm00 bash[28005]: audit 2026-03-10T07:32:58.433761+0000 mon.a (mon.0) 2514 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59637-66"}]': finished 2026-03-10T07:32:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:59 vm00 bash[28005]: cluster 2026-03-10T07:32:58.436600+0000 mon.a (mon.0) 2515 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-10T07:32:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:59 vm00 bash[28005]: cluster 2026-03-10T07:32:58.436600+0000 mon.a (mon.0) 2515 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-10T07:32:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:59 vm00 bash[28005]: cluster 2026-03-10T07:32:58.639454+0000 mgr.y (mgr.24407) 313 : cluster [DBG] pgmap v499: 292 pgs: 292 active+clean; 8.3 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T07:32:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:32:59 vm00 bash[28005]: cluster 2026-03-10T07:32:58.639454+0000 mgr.y (mgr.24407) 313 : cluster [DBG] pgmap v499: 292 pgs: 292 active+clean; 8.3 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T07:32:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:59 vm00 bash[20701]: audit 2026-03-10T07:32:58.433761+0000 mon.a (mon.0) 2514 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59637-66"}]': finished 2026-03-10T07:32:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:59 vm00 bash[20701]: audit 2026-03-10T07:32:58.433761+0000 mon.a (mon.0) 2514 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm00-59637-66"}]': finished 2026-03-10T07:32:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:59 vm00 bash[20701]: cluster 2026-03-10T07:32:58.436600+0000 mon.a (mon.0) 2515 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-10T07:32:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:59 vm00 bash[20701]: cluster 2026-03-10T07:32:58.436600+0000 mon.a (mon.0) 2515 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-10T07:32:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:59 vm00 bash[20701]: cluster 2026-03-10T07:32:58.639454+0000 mgr.y (mgr.24407) 313 : cluster [DBG] pgmap v499: 292 pgs: 292 active+clean; 8.3 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T07:32:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:32:59 vm00 bash[20701]: cluster 2026-03-10T07:32:58.639454+0000 mgr.y (mgr.24407) 313 : cluster [DBG] pgmap v499: 292 pgs: 292 active+clean; 8.3 MiB data, 727 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T07:33:01.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:33:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:33:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:33:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:01 vm03 bash[23382]: cluster 2026-03-10T07:33:00.640232+0000 mgr.y (mgr.24407) 314 : cluster [DBG] pgmap v500: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 976 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:33:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:01 vm03 bash[23382]: cluster 2026-03-10T07:33:00.640232+0000 mgr.y (mgr.24407) 314 : cluster [DBG] pgmap v500: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 976 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:33:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:01 vm03 bash[23382]: cluster 2026-03-10T07:33:01.158767+0000 mon.a (mon.0) 2516 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:33:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:01 vm03 bash[23382]: cluster 2026-03-10T07:33:01.158767+0000 mon.a (mon.0) 2516 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:33:02.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:01 vm00 bash[20701]: cluster 2026-03-10T07:33:00.640232+0000 mgr.y (mgr.24407) 314 : cluster [DBG] pgmap v500: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 976 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:33:02.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:01 vm00 bash[20701]: cluster 2026-03-10T07:33:00.640232+0000 mgr.y (mgr.24407) 314 : cluster [DBG] pgmap v500: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 976 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:33:02.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:01 vm00 bash[20701]: cluster 2026-03-10T07:33:01.158767+0000 mon.a (mon.0) 2516 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:33:02.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:01 vm00 bash[20701]: cluster 2026-03-10T07:33:01.158767+0000 mon.a (mon.0) 2516 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 
2026-03-10T07:33:02.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:01 vm00 bash[28005]: cluster 2026-03-10T07:33:00.640232+0000 mgr.y (mgr.24407) 314 : cluster [DBG] pgmap v500: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 976 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:33:02.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:01 vm00 bash[28005]: cluster 2026-03-10T07:33:00.640232+0000 mgr.y (mgr.24407) 314 : cluster [DBG] pgmap v500: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 976 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:33:02.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:01 vm00 bash[28005]: cluster 2026-03-10T07:33:01.158767+0000 mon.a (mon.0) 2516 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:33:02.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:01 vm00 bash[28005]: cluster 2026-03-10T07:33:01.158767+0000 mon.a (mon.0) 2516 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:33:03.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:33:03 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:33:04.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:03 vm03 bash[23382]: cluster 2026-03-10T07:33:02.640624+0000 mgr.y (mgr.24407) 315 : cluster [DBG] pgmap v501: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 823 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:33:04.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:03 vm03 bash[23382]: cluster 2026-03-10T07:33:02.640624+0000 mgr.y (mgr.24407) 315 : cluster [DBG] pgmap v501: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 823 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:33:04.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:03 vm03 bash[23382]: audit 2026-03-10T07:33:03.166702+0000 mgr.y (mgr.24407) 316 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:33:04.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:03 vm03 bash[23382]: audit 2026-03-10T07:33:03.166702+0000 mgr.y (mgr.24407) 316 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:33:04.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:03 vm00 bash[20701]: cluster 2026-03-10T07:33:02.640624+0000 mgr.y (mgr.24407) 315 : cluster [DBG] pgmap v501: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 823 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:33:04.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:03 vm00 bash[20701]: cluster 2026-03-10T07:33:02.640624+0000 mgr.y (mgr.24407) 315 : cluster [DBG] pgmap v501: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 823 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:33:04.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:03 vm00 bash[20701]: audit 2026-03-10T07:33:03.166702+0000 mgr.y (mgr.24407) 316 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:33:04.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:03 vm00 bash[20701]: audit 2026-03-10T07:33:03.166702+0000 mgr.y (mgr.24407) 316 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service 
status", "format": "json"}]: dispatch 2026-03-10T07:33:04.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:03 vm00 bash[28005]: cluster 2026-03-10T07:33:02.640624+0000 mgr.y (mgr.24407) 315 : cluster [DBG] pgmap v501: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 823 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:33:04.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:03 vm00 bash[28005]: cluster 2026-03-10T07:33:02.640624+0000 mgr.y (mgr.24407) 315 : cluster [DBG] pgmap v501: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 823 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:33:04.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:03 vm00 bash[28005]: audit 2026-03-10T07:33:03.166702+0000 mgr.y (mgr.24407) 316 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:33:04.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:03 vm00 bash[28005]: audit 2026-03-10T07:33:03.166702+0000 mgr.y (mgr.24407) 316 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:33:06.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:05 vm03 bash[23382]: cluster 2026-03-10T07:33:04.641130+0000 mgr.y (mgr.24407) 317 : cluster [DBG] pgmap v502: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 B/s wr, 0 op/s 2026-03-10T07:33:06.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:05 vm03 bash[23382]: cluster 2026-03-10T07:33:04.641130+0000 mgr.y (mgr.24407) 317 : cluster [DBG] pgmap v502: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 B/s wr, 0 op/s 2026-03-10T07:33:06.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:05 vm03 bash[23382]: audit 2026-03-10T07:33:05.387669+0000 mon.c (mon.2) 290 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:33:06.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:05 vm03 bash[23382]: audit 2026-03-10T07:33:05.387669+0000 mon.c (mon.2) 290 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:33:06.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:05 vm03 bash[23382]: audit 2026-03-10T07:33:05.410214+0000 mon.b (mon.1) 388 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:06.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:05 vm03 bash[23382]: audit 2026-03-10T07:33:05.410214+0000 mon.b (mon.1) 388 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:06.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:05 vm03 bash[23382]: audit 2026-03-10T07:33:05.412329+0000 mon.a (mon.0) 2517 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:06.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:05 vm03 bash[23382]: audit 2026-03-10T07:33:05.412329+0000 mon.a (mon.0) 2517 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:06.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:05 vm03 bash[23382]: audit 2026-03-10T07:33:05.730302+0000 mon.c (mon.2) 291 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:33:06.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:05 vm03 bash[23382]: audit 2026-03-10T07:33:05.730302+0000 mon.c (mon.2) 291 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:33:06.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:05 vm03 bash[23382]: audit 2026-03-10T07:33:05.731394+0000 mon.c (mon.2) 292 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:33:06.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:05 vm03 bash[23382]: audit 2026-03-10T07:33:05.731394+0000 mon.c (mon.2) 292 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:33:06.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:05 vm03 bash[23382]: audit 2026-03-10T07:33:05.738206+0000 mon.a (mon.0) 2518 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:33:06.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:05 vm03 bash[23382]: audit 2026-03-10T07:33:05.738206+0000 mon.a (mon.0) 2518 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:05 vm00 bash[28005]: cluster 2026-03-10T07:33:04.641130+0000 mgr.y (mgr.24407) 317 : cluster [DBG] pgmap v502: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 B/s wr, 0 op/s 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:05 vm00 bash[28005]: cluster 2026-03-10T07:33:04.641130+0000 mgr.y (mgr.24407) 317 : cluster [DBG] pgmap v502: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 B/s wr, 0 op/s 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:05 vm00 bash[28005]: audit 2026-03-10T07:33:05.387669+0000 mon.c (mon.2) 290 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:05 vm00 bash[28005]: audit 2026-03-10T07:33:05.387669+0000 mon.c (mon.2) 290 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:05 vm00 bash[28005]: audit 2026-03-10T07:33:05.410214+0000 mon.b (mon.1) 388 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:05 vm00 bash[28005]: audit 2026-03-10T07:33:05.410214+0000 mon.b (mon.1) 388 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:05 vm00 bash[28005]: audit 2026-03-10T07:33:05.412329+0000 mon.a (mon.0) 2517 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:05 vm00 bash[28005]: audit 2026-03-10T07:33:05.412329+0000 mon.a (mon.0) 2517 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:05 vm00 bash[28005]: audit 2026-03-10T07:33:05.730302+0000 mon.c (mon.2) 291 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:05 vm00 bash[28005]: audit 2026-03-10T07:33:05.730302+0000 mon.c (mon.2) 291 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:05 vm00 bash[28005]: audit 2026-03-10T07:33:05.731394+0000 mon.c (mon.2) 292 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:05 vm00 bash[28005]: audit 2026-03-10T07:33:05.731394+0000 mon.c (mon.2) 292 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:05 vm00 bash[28005]: audit 2026-03-10T07:33:05.738206+0000 mon.a (mon.0) 2518 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:05 vm00 bash[28005]: audit 2026-03-10T07:33:05.738206+0000 mon.a (mon.0) 2518 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:05 vm00 bash[20701]: cluster 2026-03-10T07:33:04.641130+0000 mgr.y (mgr.24407) 317 : cluster [DBG] pgmap v502: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 B/s wr, 0 op/s 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:05 vm00 bash[20701]: cluster 2026-03-10T07:33:04.641130+0000 mgr.y (mgr.24407) 317 : cluster [DBG] pgmap v502: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 B/s wr, 0 op/s 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:05 vm00 bash[20701]: audit 2026-03-10T07:33:05.387669+0000 mon.c (mon.2) 290 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:05 vm00 bash[20701]: audit 2026-03-10T07:33:05.387669+0000 mon.c (mon.2) 290 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:05 vm00 
bash[20701]: audit 2026-03-10T07:33:05.410214+0000 mon.b (mon.1) 388 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:05 vm00 bash[20701]: audit 2026-03-10T07:33:05.410214+0000 mon.b (mon.1) 388 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:05 vm00 bash[20701]: audit 2026-03-10T07:33:05.412329+0000 mon.a (mon.0) 2517 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:05 vm00 bash[20701]: audit 2026-03-10T07:33:05.412329+0000 mon.a (mon.0) 2517 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:05 vm00 bash[20701]: audit 2026-03-10T07:33:05.730302+0000 mon.c (mon.2) 291 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:05 vm00 bash[20701]: audit 2026-03-10T07:33:05.730302+0000 mon.c (mon.2) 291 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:05 vm00 bash[20701]: audit 2026-03-10T07:33:05.731394+0000 mon.c (mon.2) 292 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:05 vm00 bash[20701]: audit 2026-03-10T07:33:05.731394+0000 mon.c (mon.2) 292 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:05 vm00 bash[20701]: audit 2026-03-10T07:33:05.738206+0000 mon.a (mon.0) 2518 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:33:06.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:05 vm00 bash[20701]: audit 2026-03-10T07:33:05.738206+0000 mon.a (mon.0) 2518 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:33:07.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:06 vm03 bash[23382]: audit 2026-03-10T07:33:05.927651+0000 mon.a (mon.0) 2519 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:33:07.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:06 vm03 bash[23382]: audit 2026-03-10T07:33:05.927651+0000 mon.a (mon.0) 2519 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:33:07.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:06 vm03 bash[23382]: audit 2026-03-10T07:33:05.933869+0000 mon.b (mon.1) 389 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:07.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:06 vm03 bash[23382]: audit 2026-03-10T07:33:05.933869+0000 mon.b (mon.1) 389 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:07.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:06 vm03 bash[23382]: cluster 2026-03-10T07:33:05.935212+0000 mon.a (mon.0) 2520 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-10T07:33:07.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:06 vm03 bash[23382]: cluster 2026-03-10T07:33:05.935212+0000 mon.a (mon.0) 2520 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-10T07:33:07.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:06 vm03 bash[23382]: audit 2026-03-10T07:33:05.938047+0000 mon.a (mon.0) 2521 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:07.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:06 vm03 bash[23382]: audit 2026-03-10T07:33:05.938047+0000 mon.a (mon.0) 2521 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:07.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:06 vm03 bash[23382]: audit 2026-03-10T07:33:06.931573+0000 mon.a (mon.0) 2522 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]': finished 2026-03-10T07:33:07.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:06 vm03 bash[23382]: audit 2026-03-10T07:33:06.931573+0000 mon.a (mon.0) 2522 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]': finished 2026-03-10T07:33:07.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:06 vm03 bash[23382]: cluster 2026-03-10T07:33:06.942589+0000 mon.a (mon.0) 2523 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-10T07:33:07.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:06 vm03 bash[23382]: cluster 2026-03-10T07:33:06.942589+0000 mon.a (mon.0) 2523 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:06 vm00 bash[28005]: audit 2026-03-10T07:33:05.927651+0000 mon.a (mon.0) 2519 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:06 vm00 bash[28005]: audit 2026-03-10T07:33:05.927651+0000 mon.a (mon.0) 2519 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:06 vm00 bash[28005]: audit 2026-03-10T07:33:05.933869+0000 mon.b (mon.1) 389 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:06 vm00 bash[28005]: audit 2026-03-10T07:33:05.933869+0000 mon.b (mon.1) 389 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:06 vm00 bash[28005]: cluster 2026-03-10T07:33:05.935212+0000 mon.a (mon.0) 2520 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:06 vm00 bash[28005]: cluster 2026-03-10T07:33:05.935212+0000 mon.a (mon.0) 2520 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:06 vm00 bash[28005]: audit 2026-03-10T07:33:05.938047+0000 mon.a (mon.0) 2521 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:06 vm00 bash[28005]: audit 2026-03-10T07:33:05.938047+0000 mon.a (mon.0) 2521 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:06 vm00 bash[28005]: audit 2026-03-10T07:33:06.931573+0000 mon.a (mon.0) 2522 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]': finished 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:06 vm00 bash[28005]: audit 2026-03-10T07:33:06.931573+0000 mon.a (mon.0) 2522 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]': finished 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:06 vm00 bash[28005]: cluster 2026-03-10T07:33:06.942589+0000 mon.a (mon.0) 2523 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:06 vm00 bash[28005]: cluster 2026-03-10T07:33:06.942589+0000 mon.a (mon.0) 2523 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:06 vm00 bash[20701]: audit 2026-03-10T07:33:05.927651+0000 mon.a (mon.0) 2519 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:06 vm00 bash[20701]: audit 2026-03-10T07:33:05.927651+0000 mon.a (mon.0) 2519 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:06 vm00 bash[20701]: audit 2026-03-10T07:33:05.933869+0000 mon.b (mon.1) 389 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:06 vm00 bash[20701]: audit 2026-03-10T07:33:05.933869+0000 mon.b (mon.1) 389 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:06 vm00 bash[20701]: cluster 2026-03-10T07:33:05.935212+0000 mon.a (mon.0) 2520 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:06 vm00 bash[20701]: cluster 2026-03-10T07:33:05.935212+0000 mon.a (mon.0) 2520 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:06 vm00 bash[20701]: audit 2026-03-10T07:33:05.938047+0000 mon.a (mon.0) 2521 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:06 vm00 bash[20701]: audit 2026-03-10T07:33:05.938047+0000 mon.a (mon.0) 2521 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:06 vm00 bash[20701]: audit 2026-03-10T07:33:06.931573+0000 mon.a (mon.0) 2522 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]': finished 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:06 vm00 bash[20701]: audit 2026-03-10T07:33:06.931573+0000 mon.a (mon.0) 2522 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]': finished 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:06 vm00 bash[20701]: cluster 2026-03-10T07:33:06.942589+0000 mon.a (mon.0) 2523 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-10T07:33:07.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:06 vm00 bash[20701]: cluster 2026-03-10T07:33:06.942589+0000 mon.a (mon.0) 2523 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-10T07:33:08.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:07 vm03 bash[23382]: cluster 2026-03-10T07:33:06.641537+0000 mgr.y (mgr.24407) 318 : cluster [DBG] pgmap v504: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:33:08.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:07 vm03 bash[23382]: cluster 2026-03-10T07:33:06.641537+0000 mgr.y (mgr.24407) 318 : cluster [DBG] pgmap v504: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:33:08.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:07 vm03 bash[23382]: audit 2026-03-10T07:33:06.980252+0000 mon.b (mon.1) 390 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:08.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:07 vm03 bash[23382]: audit 2026-03-10T07:33:06.980252+0000 mon.b (mon.1) 390 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:08.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:07 vm03 bash[23382]: audit 2026-03-10T07:33:06.981114+0000 mon.b (mon.1) 391 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:08.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:07 vm03 bash[23382]: audit 2026-03-10T07:33:06.981114+0000 mon.b (mon.1) 391 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:08.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:07 vm03 bash[23382]: audit 2026-03-10T07:33:06.982422+0000 mon.a (mon.0) 2524 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:08.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:07 vm03 bash[23382]: audit 2026-03-10T07:33:06.982422+0000 mon.a (mon.0) 2524 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:08.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:07 vm03 bash[23382]: audit 2026-03-10T07:33:06.983074+0000 mon.a (mon.0) 2525 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:08.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:07 vm03 bash[23382]: audit 2026-03-10T07:33:06.983074+0000 mon.a (mon.0) 2525 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:08.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:07 vm00 bash[28005]: cluster 2026-03-10T07:33:06.641537+0000 mgr.y (mgr.24407) 318 : cluster [DBG] pgmap v504: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:33:08.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:07 vm00 bash[28005]: cluster 2026-03-10T07:33:06.641537+0000 mgr.y (mgr.24407) 318 : cluster [DBG] pgmap v504: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:33:08.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:07 vm00 bash[28005]: audit 2026-03-10T07:33:06.980252+0000 mon.b (mon.1) 390 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:08.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:07 vm00 bash[28005]: audit 2026-03-10T07:33:06.980252+0000 mon.b (mon.1) 390 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:08.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:07 vm00 bash[28005]: audit 2026-03-10T07:33:06.981114+0000 mon.b (mon.1) 391 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:08.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:07 vm00 bash[28005]: audit 2026-03-10T07:33:06.981114+0000 mon.b (mon.1) 391 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:08.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:07 vm00 bash[28005]: audit 2026-03-10T07:33:06.982422+0000 mon.a (mon.0) 2524 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:08.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:07 vm00 bash[28005]: audit 2026-03-10T07:33:06.982422+0000 mon.a (mon.0) 2524 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:08.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:07 vm00 bash[28005]: audit 2026-03-10T07:33:06.983074+0000 mon.a (mon.0) 2525 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:08.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:07 vm00 bash[28005]: audit 2026-03-10T07:33:06.983074+0000 mon.a (mon.0) 2525 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:08.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:07 vm00 bash[20701]: cluster 2026-03-10T07:33:06.641537+0000 mgr.y (mgr.24407) 318 : cluster [DBG] pgmap v504: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:33:08.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:07 vm00 bash[20701]: cluster 2026-03-10T07:33:06.641537+0000 mgr.y (mgr.24407) 318 : cluster [DBG] pgmap v504: 292 pgs: 292 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:33:08.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:08 vm00 bash[20701]: audit 2026-03-10T07:33:06.980252+0000 mon.b (mon.1) 390 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:08.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:08 vm00 bash[20701]: audit 2026-03-10T07:33:06.980252+0000 mon.b (mon.1) 390 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:08.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:08 vm00 bash[20701]: audit 2026-03-10T07:33:06.981114+0000 mon.b (mon.1) 391 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:08.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:08 vm00 bash[20701]: audit 2026-03-10T07:33:06.981114+0000 mon.b (mon.1) 391 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:08.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:08 vm00 bash[20701]: audit 2026-03-10T07:33:06.982422+0000 mon.a (mon.0) 2524 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:08.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:08 vm00 bash[20701]: audit 2026-03-10T07:33:06.982422+0000 mon.a (mon.0) 2524 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:08.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:08 vm00 bash[20701]: audit 2026-03-10T07:33:06.983074+0000 mon.a (mon.0) 2525 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:08.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:08 vm00 bash[20701]: audit 2026-03-10T07:33:06.983074+0000 mon.a (mon.0) 2525 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-51"}]: dispatch 2026-03-10T07:33:09.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:08 vm03 bash[23382]: cluster 2026-03-10T07:33:07.979812+0000 mon.a (mon.0) 2526 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-10T07:33:09.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:08 vm03 bash[23382]: cluster 2026-03-10T07:33:07.979812+0000 mon.a (mon.0) 2526 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-10T07:33:09.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:09 vm03 bash[23382]: cluster 2026-03-10T07:33:08.986786+0000 mon.a (mon.0) 2527 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-10T07:33:09.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:09 vm03 bash[23382]: cluster 2026-03-10T07:33:08.986786+0000 mon.a (mon.0) 2527 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-10T07:33:09.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:09 vm03 bash[23382]: audit 2026-03-10T07:33:08.988754+0000 mon.b (mon.1) 392 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:09.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:09 vm03 bash[23382]: audit 2026-03-10T07:33:08.988754+0000 mon.b (mon.1) 392 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:09.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:09 vm03 bash[23382]: audit 2026-03-10T07:33:08.991355+0000 mon.a (mon.0) 2528 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:09.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:09 vm03 bash[23382]: audit 2026-03-10T07:33:08.991355+0000 mon.a (mon.0) 2528 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:09.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:09 vm00 bash[28005]: cluster 2026-03-10T07:33:07.979812+0000 mon.a (mon.0) 2526 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-10T07:33:09.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:09 vm00 bash[28005]: cluster 2026-03-10T07:33:07.979812+0000 mon.a (mon.0) 2526 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-10T07:33:09.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:09 vm00 bash[28005]: cluster 2026-03-10T07:33:08.986786+0000 mon.a (mon.0) 2527 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-10T07:33:09.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:09 vm00 bash[28005]: cluster 2026-03-10T07:33:08.986786+0000 mon.a (mon.0) 2527 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-10T07:33:09.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:09 vm00 bash[28005]: audit 2026-03-10T07:33:08.988754+0000 mon.b (mon.1) 392 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:09.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:09 vm00 bash[28005]: audit 2026-03-10T07:33:08.988754+0000 mon.b (mon.1) 392 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:09.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:09 vm00 bash[28005]: audit 2026-03-10T07:33:08.991355+0000 mon.a (mon.0) 2528 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:09.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:09 vm00 bash[28005]: audit 2026-03-10T07:33:08.991355+0000 mon.a (mon.0) 2528 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:09.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:09 vm00 bash[20701]: cluster 2026-03-10T07:33:07.979812+0000 mon.a (mon.0) 2526 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-10T07:33:09.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:09 vm00 bash[20701]: cluster 2026-03-10T07:33:07.979812+0000 mon.a (mon.0) 2526 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-10T07:33:09.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:09 vm00 bash[20701]: cluster 2026-03-10T07:33:08.986786+0000 mon.a (mon.0) 2527 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-10T07:33:09.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:09 vm00 bash[20701]: cluster 2026-03-10T07:33:08.986786+0000 mon.a (mon.0) 2527 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-10T07:33:09.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:09 vm00 bash[20701]: audit 2026-03-10T07:33:08.988754+0000 mon.b (mon.1) 392 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:09.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:09 vm00 bash[20701]: audit 2026-03-10T07:33:08.988754+0000 mon.b (mon.1) 392 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:09.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:09 vm00 bash[20701]: audit 2026-03-10T07:33:08.991355+0000 mon.a (mon.0) 2528 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:09.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:09 vm00 bash[20701]: audit 2026-03-10T07:33:08.991355+0000 mon.a (mon.0) 2528 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:10.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:10 vm03 bash[23382]: cluster 2026-03-10T07:33:08.641881+0000 mgr.y (mgr.24407) 319 : cluster [DBG] pgmap v507: 260 pgs: 260 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:33:10.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:10 vm03 bash[23382]: cluster 2026-03-10T07:33:08.641881+0000 mgr.y (mgr.24407) 319 : cluster [DBG] pgmap v507: 260 pgs: 260 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:33:10.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:10 vm03 bash[23382]: cluster 2026-03-10T07:33:08.993778+0000 mon.a (mon.0) 2529 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-10T07:33:10.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:10 vm03 bash[23382]: cluster 2026-03-10T07:33:08.993778+0000 mon.a (mon.0) 2529 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-10T07:33:10.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:10 vm03 bash[23382]: audit 2026-03-10T07:33:09.470138+0000 mon.c (mon.2) 293 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:33:10.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:10 vm03 bash[23382]: audit 2026-03-10T07:33:09.470138+0000 mon.c (mon.2) 293 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:33:10.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:10 vm03 bash[23382]: audit 2026-03-10T07:33:09.971980+0000 mon.a (mon.0) 2530 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:33:10.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:10 vm03 bash[23382]: audit 2026-03-10T07:33:09.971980+0000 mon.a (mon.0) 2530 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:33:10.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:10 vm03 bash[23382]: cluster 2026-03-10T07:33:09.975708+0000 mon.a (mon.0) 2531 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-10T07:33:10.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:10 vm03 bash[23382]: cluster 2026-03-10T07:33:09.975708+0000 mon.a (mon.0) 2531 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-10T07:33:10.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:10 vm00 bash[28005]: cluster 2026-03-10T07:33:08.641881+0000 mgr.y (mgr.24407) 319 : cluster [DBG] pgmap v507: 260 pgs: 260 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:33:10.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:10 vm00 bash[28005]: cluster 2026-03-10T07:33:08.641881+0000 mgr.y (mgr.24407) 319 : cluster [DBG] pgmap v507: 260 pgs: 260 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:33:10.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:10 vm00 bash[28005]: cluster 2026-03-10T07:33:08.993778+0000 mon.a (mon.0) 2529 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-10T07:33:10.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:10 vm00 bash[28005]: cluster 2026-03-10T07:33:08.993778+0000 mon.a (mon.0) 2529 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-10T07:33:10.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:10 vm00 bash[28005]: audit 2026-03-10T07:33:09.470138+0000 mon.c (mon.2) 293 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:33:10.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:10 vm00 bash[28005]: audit 2026-03-10T07:33:09.470138+0000 mon.c (mon.2) 293 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:33:10.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:10 vm00 bash[28005]: audit 2026-03-10T07:33:09.971980+0000 mon.a (mon.0) 2530 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:33:10.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:10 vm00 bash[28005]: audit 2026-03-10T07:33:09.971980+0000 mon.a (mon.0) 2530 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:33:10.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:10 vm00 bash[28005]: cluster 2026-03-10T07:33:09.975708+0000 mon.a (mon.0) 2531 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-10T07:33:10.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:10 vm00 bash[28005]: cluster 2026-03-10T07:33:09.975708+0000 mon.a (mon.0) 2531 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-10T07:33:10.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:10 vm00 bash[20701]: cluster 2026-03-10T07:33:08.641881+0000 mgr.y (mgr.24407) 319 : cluster [DBG] pgmap v507: 260 pgs: 260 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:33:10.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:10 vm00 bash[20701]: cluster 2026-03-10T07:33:08.641881+0000 mgr.y (mgr.24407) 319 : cluster [DBG] pgmap v507: 260 pgs: 260 active+clean; 8.3 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:33:10.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:10 vm00 bash[20701]: cluster 2026-03-10T07:33:08.993778+0000 mon.a (mon.0) 2529 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-10T07:33:10.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:10 vm00 bash[20701]: cluster 2026-03-10T07:33:08.993778+0000 mon.a (mon.0) 2529 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-10T07:33:10.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:10 vm00 bash[20701]: audit 2026-03-10T07:33:09.470138+0000 mon.c (mon.2) 293 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:33:10.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:10 vm00 bash[20701]: audit 2026-03-10T07:33:09.470138+0000 mon.c (mon.2) 293 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:33:10.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:10 vm00 bash[20701]: audit 2026-03-10T07:33:09.971980+0000 mon.a (mon.0) 2530 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:33:10.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:10 vm00 bash[20701]: audit 2026-03-10T07:33:09.971980+0000 mon.a (mon.0) 2530 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:33:10.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:10 vm00 bash[20701]: cluster 2026-03-10T07:33:09.975708+0000 mon.a (mon.0) 2531 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-10T07:33:10.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:10 vm00 bash[20701]: cluster 2026-03-10T07:33:09.975708+0000 mon.a (mon.0) 2531 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-10T07:33:11.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:11 vm03 bash[23382]: audit 2026-03-10T07:33:10.028635+0000 mon.b (mon.1) 393 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:11.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:11 vm03 bash[23382]: audit 2026-03-10T07:33:10.028635+0000 mon.b (mon.1) 393 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:11.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:11 vm03 bash[23382]: audit 2026-03-10T07:33:10.029498+0000 mon.b (mon.1) 394 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-53"}]: dispatch 2026-03-10T07:33:11.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:11 vm03 bash[23382]: audit 2026-03-10T07:33:10.029498+0000 mon.b (mon.1) 394 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-53"}]: dispatch 2026-03-10T07:33:11.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:11 vm03 bash[23382]: audit 2026-03-10T07:33:10.030720+0000 mon.a (mon.0) 2532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:11.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:11 vm03 bash[23382]: audit 2026-03-10T07:33:10.030720+0000 mon.a (mon.0) 2532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:11.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:11 vm03 bash[23382]: audit 2026-03-10T07:33:10.031469+0000 mon.a (mon.0) 2533 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-53"}]: dispatch 2026-03-10T07:33:11.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:11 vm03 bash[23382]: audit 2026-03-10T07:33:10.031469+0000 mon.a (mon.0) 2533 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-53"}]: dispatch 2026-03-10T07:33:11.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:11 vm03 bash[23382]: cluster 2026-03-10T07:33:10.977764+0000 mon.a (mon.0) 2534 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-10T07:33:11.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:11 vm03 bash[23382]: cluster 2026-03-10T07:33:10.977764+0000 mon.a (mon.0) 2534 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-10T07:33:11.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:11 vm00 bash[28005]: audit 2026-03-10T07:33:10.028635+0000 mon.b (mon.1) 393 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:11.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:11 vm00 bash[28005]: audit 2026-03-10T07:33:10.028635+0000 mon.b (mon.1) 393 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:11.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:11 vm00 bash[28005]: audit 2026-03-10T07:33:10.029498+0000 mon.b (mon.1) 394 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-53"}]: dispatch 2026-03-10T07:33:11.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:11 vm00 bash[28005]: audit 2026-03-10T07:33:10.029498+0000 mon.b (mon.1) 394 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-53"}]: dispatch 2026-03-10T07:33:11.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:11 vm00 bash[28005]: audit 2026-03-10T07:33:10.030720+0000 mon.a (mon.0) 2532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:11.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:11 vm00 bash[28005]: audit 2026-03-10T07:33:10.030720+0000 mon.a (mon.0) 2532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:11.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:11 vm00 bash[28005]: audit 2026-03-10T07:33:10.031469+0000 mon.a (mon.0) 2533 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-53"}]: dispatch 2026-03-10T07:33:11.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:11 vm00 bash[28005]: audit 2026-03-10T07:33:10.031469+0000 mon.a (mon.0) 2533 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-53"}]: dispatch 2026-03-10T07:33:11.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:11 vm00 bash[28005]: cluster 2026-03-10T07:33:10.977764+0000 mon.a (mon.0) 2534 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-10T07:33:11.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:11 vm00 bash[28005]: cluster 2026-03-10T07:33:10.977764+0000 mon.a (mon.0) 2534 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-10T07:33:11.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:33:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:33:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:33:11.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:11 vm00 bash[20701]: audit 2026-03-10T07:33:10.028635+0000 mon.b (mon.1) 393 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:11.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:11 vm00 bash[20701]: audit 2026-03-10T07:33:10.028635+0000 mon.b (mon.1) 393 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:11.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:11 vm00 bash[20701]: audit 2026-03-10T07:33:10.029498+0000 mon.b (mon.1) 394 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-53"}]: dispatch 2026-03-10T07:33:11.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:11 vm00 bash[20701]: audit 2026-03-10T07:33:10.029498+0000 mon.b (mon.1) 394 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-53"}]: dispatch 2026-03-10T07:33:11.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:11 vm00 bash[20701]: audit 2026-03-10T07:33:10.030720+0000 mon.a (mon.0) 2532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:11.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:11 vm00 bash[20701]: audit 2026-03-10T07:33:10.030720+0000 mon.a (mon.0) 2532 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:11.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:11 vm00 bash[20701]: audit 2026-03-10T07:33:10.031469+0000 mon.a (mon.0) 2533 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-53"}]: dispatch 2026-03-10T07:33:11.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:11 vm00 bash[20701]: audit 2026-03-10T07:33:10.031469+0000 mon.a (mon.0) 2533 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-53"}]: dispatch 2026-03-10T07:33:11.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:11 vm00 bash[20701]: cluster 2026-03-10T07:33:10.977764+0000 mon.a (mon.0) 2534 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-10T07:33:11.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:11 vm00 bash[20701]: cluster 2026-03-10T07:33:10.977764+0000 mon.a (mon.0) 2534 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-10T07:33:12.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:12 vm03 bash[23382]: cluster 2026-03-10T07:33:10.642304+0000 mgr.y (mgr.24407) 320 : cluster [DBG] pgmap v510: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T07:33:12.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:12 vm03 bash[23382]: cluster 2026-03-10T07:33:10.642304+0000 mgr.y (mgr.24407) 320 : cluster [DBG] pgmap v510: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T07:33:12.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:12 vm03 bash[23382]: cluster 2026-03-10T07:33:11.982728+0000 mon.a (mon.0) 2535 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-10T07:33:12.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:12 vm03 bash[23382]: cluster 2026-03-10T07:33:11.982728+0000 mon.a (mon.0) 2535 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-10T07:33:12.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:12 vm03 bash[23382]: audit 2026-03-10T07:33:11.988966+0000 mon.b (mon.1) 395 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:12.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:12 vm03 bash[23382]: audit 2026-03-10T07:33:11.988966+0000 mon.b (mon.1) 395 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:12.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:12 vm03 bash[23382]: audit 2026-03-10T07:33:11.996450+0000 mon.a (mon.0) 2536 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:12.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:12 vm03 bash[23382]: audit 2026-03-10T07:33:11.996450+0000 mon.a (mon.0) 2536 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:12.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:12 vm00 bash[20701]: cluster 2026-03-10T07:33:10.642304+0000 mgr.y (mgr.24407) 320 : cluster [DBG] pgmap v510: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T07:33:12.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:12 vm00 bash[20701]: cluster 2026-03-10T07:33:10.642304+0000 mgr.y (mgr.24407) 320 : cluster [DBG] pgmap v510: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T07:33:12.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:12 vm00 bash[20701]: cluster 2026-03-10T07:33:11.982728+0000 mon.a (mon.0) 2535 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-10T07:33:12.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:12 vm00 bash[20701]: cluster 2026-03-10T07:33:11.982728+0000 mon.a (mon.0) 2535 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-10T07:33:12.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:12 vm00 bash[20701]: audit 2026-03-10T07:33:11.988966+0000 mon.b (mon.1) 395 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:12.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:12 vm00 bash[20701]: audit 2026-03-10T07:33:11.988966+0000 mon.b (mon.1) 395 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:12.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:12 vm00 bash[20701]: audit 2026-03-10T07:33:11.996450+0000 mon.a (mon.0) 2536 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:12.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:12 vm00 bash[20701]: audit 2026-03-10T07:33:11.996450+0000 mon.a (mon.0) 2536 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:12.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:12 vm00 bash[28005]: cluster 2026-03-10T07:33:10.642304+0000 mgr.y (mgr.24407) 320 : cluster [DBG] pgmap v510: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T07:33:12.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:12 vm00 bash[28005]: cluster 2026-03-10T07:33:10.642304+0000 mgr.y (mgr.24407) 320 : cluster [DBG] pgmap v510: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T07:33:12.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:12 vm00 bash[28005]: cluster 2026-03-10T07:33:11.982728+0000 mon.a (mon.0) 2535 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-10T07:33:12.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:12 vm00 bash[28005]: cluster 2026-03-10T07:33:11.982728+0000 mon.a (mon.0) 2535 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-10T07:33:12.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:12 vm00 bash[28005]: audit 2026-03-10T07:33:11.988966+0000 mon.b (mon.1) 395 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:12.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:12 vm00 bash[28005]: audit 2026-03-10T07:33:11.988966+0000 mon.b (mon.1) 395 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:12.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:12 vm00 bash[28005]: audit 2026-03-10T07:33:11.996450+0000 mon.a (mon.0) 2536 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:12.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:12 vm00 bash[28005]: audit 2026-03-10T07:33:11.996450+0000 mon.a (mon.0) 2536 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:13.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:33:13 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:33:14.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:13 vm03 bash[23382]: cluster 2026-03-10T07:33:12.642687+0000 mgr.y (mgr.24407) 321 : cluster [DBG] pgmap v513: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:33:14.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:13 vm03 bash[23382]: cluster 2026-03-10T07:33:12.642687+0000 mgr.y (mgr.24407) 321 : cluster [DBG] pgmap v513: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:33:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:13 vm03 bash[23382]: audit 2026-03-10T07:33:12.983594+0000 mon.a (mon.0) 2537 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-55","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:33:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:13 vm03 bash[23382]: audit 2026-03-10T07:33:12.983594+0000 mon.a (mon.0) 2537 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-55","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:33:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:13 vm03 bash[23382]: cluster 2026-03-10T07:33:12.987131+0000 mon.a (mon.0) 2538 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-10T07:33:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:13 vm03 bash[23382]: cluster 2026-03-10T07:33:12.987131+0000 mon.a (mon.0) 2538 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-10T07:33:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:13 vm03 bash[23382]: audit 2026-03-10T07:33:12.996555+0000 mon.b (mon.1) 396 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:33:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:13 vm03 bash[23382]: audit 2026-03-10T07:33:12.996555+0000 mon.b (mon.1) 396 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:33:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:13 vm03 bash[23382]: cluster 2026-03-10T07:33:13.016482+0000 mon.a (mon.0) 2539 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:33:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:13 vm03 bash[23382]: cluster 2026-03-10T07:33:13.016482+0000 mon.a (mon.0) 2539 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:33:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:13 vm03 bash[23382]: audit 2026-03-10T07:33:13.059435+0000 mon.b (mon.1) 397 : audit [INF] from='client.? 
2026-03-10T07:33:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:13 vm00 bash[28005]: cluster 2026-03-10T07:33:12.642687+0000 mgr.y (mgr.24407) 321 : cluster [DBG] pgmap v513: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:33:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:14 vm00 bash[28005]: audit 2026-03-10T07:33:12.983594+0000 mon.a (mon.0) 2537 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-55","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:33:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:14 vm00 bash[28005]: cluster 2026-03-10T07:33:12.987131+0000 mon.a (mon.0) 2538 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in
2026-03-10T07:33:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:14 vm00 bash[28005]: audit 2026-03-10T07:33:12.996555+0000 mon.b (mon.1) 396 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:33:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:14 vm00 bash[28005]: cluster 2026-03-10T07:33:13.016482+0000 mon.a (mon.0) 2539 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:33:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:14 vm00 bash[28005]: audit 2026-03-10T07:33:13.059435+0000 mon.b (mon.1) 397 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:33:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:14 vm00 bash[28005]: audit 2026-03-10T07:33:13.060048+0000 mon.b (mon.1) 398 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-55"}]: dispatch
2026-03-10T07:33:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:14 vm00 bash[28005]: audit 2026-03-10T07:33:13.061474+0000 mon.a (mon.0) 2540 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:33:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:14 vm00 bash[28005]: audit 2026-03-10T07:33:13.061980+0000 mon.a (mon.0) 2541 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-55"}]: dispatch
2026-03-10T07:33:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:14 vm00 bash[28005]: audit 2026-03-10T07:33:13.173842+0000 mgr.y (mgr.24407) 322 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:33:14.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:13 vm00 bash[20701]: cluster 2026-03-10T07:33:12.642687+0000 mgr.y (mgr.24407) 321 : cluster [DBG] pgmap v513: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 785 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:33:14.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:14 vm00 bash[20701]: audit 2026-03-10T07:33:12.983594+0000 mon.a (mon.0) 2537 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-55","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:33:14.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:14 vm00 bash[20701]: cluster 2026-03-10T07:33:12.987131+0000 mon.a (mon.0) 2538 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in
2026-03-10T07:33:14.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:14 vm00 bash[20701]: audit 2026-03-10T07:33:12.996555+0000 mon.b (mon.1) 396 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:33:14.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:14 vm00 bash[20701]: cluster 2026-03-10T07:33:13.016482+0000 mon.a (mon.0) 2539 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:33:14.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:14 vm00 bash[20701]: audit 2026-03-10T07:33:13.059435+0000 mon.b (mon.1) 397 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:33:14.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:14 vm00 bash[20701]: audit 2026-03-10T07:33:13.060048+0000 mon.b (mon.1) 398 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-55"}]: dispatch
2026-03-10T07:33:14.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:14 vm00 bash[20701]: audit 2026-03-10T07:33:13.061474+0000 mon.a (mon.0) 2540 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:33:14.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:14 vm00 bash[20701]: audit 2026-03-10T07:33:13.061980+0000 mon.a (mon.0) 2541 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-55"}]: dispatch
2026-03-10T07:33:14.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:14 vm00 bash[20701]: audit 2026-03-10T07:33:13.173842+0000 mgr.y (mgr.24407) 322 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:33:15.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:15 vm03 bash[23382]: cluster 2026-03-10T07:33:13.990949+0000 mon.a (mon.0) 2542 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in
2026-03-10T07:33:15.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:15 vm00 bash[28005]: cluster 2026-03-10T07:33:13.990949+0000 mon.a (mon.0) 2542 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in
2026-03-10T07:33:15.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:15 vm00 bash[20701]: cluster 2026-03-10T07:33:13.990949+0000 mon.a (mon.0) 2542 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in
2026-03-10T07:33:16.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:16 vm03 bash[23382]: cluster 2026-03-10T07:33:14.643188+0000 mgr.y (mgr.24407) 323 : cluster [DBG] pgmap v516: 260 pgs: 260 active+clean; 8.3 MiB data, 803 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 0 op/s
2026-03-10T07:33:16.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:16 vm03 bash[23382]: cluster 2026-03-10T07:33:15.013854+0000 mon.a (mon.0) 2543 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in
2026-03-10T07:33:16.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:16 vm03 bash[23382]: audit 2026-03-10T07:33:15.036023+0000 mon.b (mon.1) 399 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-57","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:33:16.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:16 vm03 bash[23382]: audit 2026-03-10T07:33:15.038720+0000 mon.a (mon.0) 2544 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-57","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:33:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:16 vm00 bash[28005]: cluster 2026-03-10T07:33:14.643188+0000 mgr.y (mgr.24407) 323 : cluster [DBG] pgmap v516: 260 pgs: 260 active+clean; 8.3 MiB data, 803 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 0 op/s
2026-03-10T07:33:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:16 vm00 bash[28005]: cluster 2026-03-10T07:33:15.013854+0000 mon.a (mon.0) 2543 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in
2026-03-10T07:33:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:16 vm00 bash[28005]: audit 2026-03-10T07:33:15.036023+0000 mon.b (mon.1) 399 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-57","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:33:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:16 vm00 bash[28005]: audit 2026-03-10T07:33:15.038720+0000 mon.a (mon.0) 2544 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-57","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:33:16.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:16 vm00 bash[20701]: cluster 2026-03-10T07:33:14.643188+0000 mgr.y (mgr.24407) 323 : cluster [DBG] pgmap v516: 260 pgs: 260 active+clean; 8.3 MiB data, 803 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 0 op/s
2026-03-10T07:33:16.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:16 vm00 bash[20701]: cluster 2026-03-10T07:33:15.013854+0000 mon.a (mon.0) 2543 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in
2026-03-10T07:33:16.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:16 vm00 bash[20701]: audit 2026-03-10T07:33:15.036023+0000 mon.b (mon.1) 399 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-57","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:33:16.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:16 vm00 bash[20701]: audit 2026-03-10T07:33:15.038720+0000 mon.a (mon.0) 2544 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-57","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:33:17.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:17 vm03 bash[23382]: audit 2026-03-10T07:33:16.117779+0000 mon.a (mon.0) 2545 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-57","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:33:17.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:17 vm03 bash[23382]: cluster 2026-03-10T07:33:16.120858+0000 mon.a (mon.0) 2546 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in
2026-03-10T07:33:17.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:17 vm03 bash[23382]: audit 2026-03-10T07:33:16.182784+0000 mon.b (mon.1) 400 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:33:17.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:17 vm03 bash[23382]: audit 2026-03-10T07:33:16.183549+0000 mon.b (mon.1) 401 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-57"}]: dispatch
2026-03-10T07:33:17.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:17 vm03 bash[23382]: audit 2026-03-10T07:33:16.184869+0000 mon.a (mon.0) 2547 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:33:17.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:17 vm03 bash[23382]: audit 2026-03-10T07:33:16.185577+0000 mon.a (mon.0) 2548 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-57"}]: dispatch
2026-03-10T07:33:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:17 vm00 bash[28005]: audit 2026-03-10T07:33:16.117779+0000 mon.a (mon.0) 2545 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-57","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:33:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:17 vm00 bash[28005]: cluster 2026-03-10T07:33:16.120858+0000 mon.a (mon.0) 2546 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in
2026-03-10T07:33:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:17 vm00 bash[28005]: audit 2026-03-10T07:33:16.182784+0000 mon.b (mon.1) 400 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:33:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:17 vm00 bash[28005]: audit 2026-03-10T07:33:16.183549+0000 mon.b (mon.1) 401 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-57"}]: dispatch
2026-03-10T07:33:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:17 vm00 bash[28005]: audit 2026-03-10T07:33:16.184869+0000 mon.a (mon.0) 2547 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:33:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:17 vm00 bash[28005]: audit 2026-03-10T07:33:16.185577+0000 mon.a (mon.0) 2548 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-57"}]: dispatch
2026-03-10T07:33:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:17 vm00 bash[20701]: audit 2026-03-10T07:33:16.117779+0000 mon.a (mon.0) 2545 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-57","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:33:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:17 vm00 bash[20701]: cluster 2026-03-10T07:33:16.120858+0000 mon.a (mon.0) 2546 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in
2026-03-10T07:33:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:17 vm00 bash[20701]: audit 2026-03-10T07:33:16.182784+0000 mon.b (mon.1) 400 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:33:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:17 vm00 bash[20701]: audit 2026-03-10T07:33:16.183549+0000 mon.b (mon.1) 401 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-57"}]: dispatch
2026-03-10T07:33:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:17 vm00 bash[20701]: audit 2026-03-10T07:33:16.184869+0000 mon.a (mon.0) 2547 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:33:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:17 vm00 bash[20701]: audit 2026-03-10T07:33:16.185577+0000 mon.a (mon.0) 2548 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-57"}]: dispatch
2026-03-10T07:33:18.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:18 vm03 bash[23382]: cluster 2026-03-10T07:33:16.643564+0000 mgr.y (mgr.24407) 324 : cluster [DBG] pgmap v519: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 843 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.5 KiB/s wr, 3 op/s
2026-03-10T07:33:18.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:18 vm03 bash[23382]: cluster 2026-03-10T07:33:17.168058+0000 mon.a (mon.0) 2549 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in
2026-03-10T07:33:18.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:18 vm00 bash[28005]: cluster 2026-03-10T07:33:16.643564+0000 mgr.y (mgr.24407) 324 : cluster [DBG] pgmap v519: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 843 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.5 KiB/s wr, 3 op/s
2026-03-10T07:33:18.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:18 vm00 bash[28005]: cluster 2026-03-10T07:33:17.168058+0000 mon.a (mon.0) 2549 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in
2026-03-10T07:33:18.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:18 vm00 bash[20701]: cluster 2026-03-10T07:33:16.643564+0000 mgr.y (mgr.24407) 324 : cluster [DBG] pgmap v519: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 843 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.5 KiB/s wr, 3 op/s
2026-03-10T07:33:18.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:18 vm00 bash[20701]: cluster 2026-03-10T07:33:17.168058+0000 mon.a (mon.0) 2549 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in
2026-03-10T07:33:18.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:18 vm00 bash[20701]: cluster 2026-03-10T07:33:17.168058+0000 mon.a (mon.0) 2549
: cluster [DBG] osdmap e371: 8 total, 8 up, 8 in 2026-03-10T07:33:19.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:19 vm03 bash[23382]: cluster 2026-03-10T07:33:18.173392+0000 mon.a (mon.0) 2550 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in 2026-03-10T07:33:19.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:19 vm03 bash[23382]: cluster 2026-03-10T07:33:18.173392+0000 mon.a (mon.0) 2550 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in 2026-03-10T07:33:19.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:19 vm03 bash[23382]: audit 2026-03-10T07:33:18.182836+0000 mon.b (mon.1) 402 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:19.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:19 vm03 bash[23382]: audit 2026-03-10T07:33:18.182836+0000 mon.b (mon.1) 402 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:19.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:19 vm03 bash[23382]: audit 2026-03-10T07:33:18.186770+0000 mon.a (mon.0) 2551 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:19.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:19 vm03 bash[23382]: audit 2026-03-10T07:33:18.186770+0000 mon.a (mon.0) 2551 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:19.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:19 vm00 bash[28005]: cluster 2026-03-10T07:33:18.173392+0000 mon.a (mon.0) 2550 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in 2026-03-10T07:33:19.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:19 vm00 bash[28005]: cluster 2026-03-10T07:33:18.173392+0000 mon.a (mon.0) 2550 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in 2026-03-10T07:33:19.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:19 vm00 bash[28005]: audit 2026-03-10T07:33:18.182836+0000 mon.b (mon.1) 402 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:19.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:19 vm00 bash[28005]: audit 2026-03-10T07:33:18.182836+0000 mon.b (mon.1) 402 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:19.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:19 vm00 bash[28005]: audit 2026-03-10T07:33:18.186770+0000 mon.a (mon.0) 2551 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:19.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:19 vm00 bash[28005]: audit 2026-03-10T07:33:18.186770+0000 mon.a (mon.0) 2551 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:19.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:19 vm00 bash[20701]: cluster 2026-03-10T07:33:18.173392+0000 mon.a (mon.0) 2550 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in 2026-03-10T07:33:19.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:19 vm00 bash[20701]: cluster 2026-03-10T07:33:18.173392+0000 mon.a (mon.0) 2550 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in 2026-03-10T07:33:19.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:19 vm00 bash[20701]: audit 2026-03-10T07:33:18.182836+0000 mon.b (mon.1) 402 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:19.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:19 vm00 bash[20701]: audit 2026-03-10T07:33:18.182836+0000 mon.b (mon.1) 402 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:19.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:19 vm00 bash[20701]: audit 2026-03-10T07:33:18.186770+0000 mon.a (mon.0) 2551 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:19.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:19 vm00 bash[20701]: audit 2026-03-10T07:33:18.186770+0000 mon.a (mon.0) 2551 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:20.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:20 vm03 bash[23382]: cluster 2026-03-10T07:33:18.644016+0000 mgr.y (mgr.24407) 325 : cluster [DBG] pgmap v522: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 843 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:33:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:20 vm03 bash[23382]: cluster 2026-03-10T07:33:18.644016+0000 mgr.y (mgr.24407) 325 : cluster [DBG] pgmap v522: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 843 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:33:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:20 vm03 bash[23382]: cluster 2026-03-10T07:33:19.167495+0000 mon.a (mon.0) 2552 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:33:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:20 vm03 bash[23382]: cluster 2026-03-10T07:33:19.167495+0000 mon.a (mon.0) 2552 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:33:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:20 vm03 bash[23382]: audit 2026-03-10T07:33:19.169870+0000 mon.a (mon.0) 2553 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:33:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:20 vm03 bash[23382]: audit 2026-03-10T07:33:19.169870+0000 mon.a (mon.0) 2553 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:33:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:20 vm03 bash[23382]: cluster 2026-03-10T07:33:19.177325+0000 mon.a (mon.0) 2554 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in 2026-03-10T07:33:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:20 vm03 bash[23382]: cluster 2026-03-10T07:33:19.177325+0000 mon.a (mon.0) 2554 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in 2026-03-10T07:33:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:20 vm03 bash[23382]: audit 2026-03-10T07:33:19.179516+0000 mon.b (mon.1) 403 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:33:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:20 vm03 bash[23382]: audit 2026-03-10T07:33:19.179516+0000 mon.b (mon.1) 403 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:33:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:20 vm03 bash[23382]: audit 2026-03-10T07:33:19.240522+0000 mon.b (mon.1) 404 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:20 vm03 bash[23382]: audit 2026-03-10T07:33:19.240522+0000 mon.b (mon.1) 404 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:20 vm03 bash[23382]: audit 2026-03-10T07:33:19.241224+0000 mon.b (mon.1) 405 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-59"}]: dispatch 2026-03-10T07:33:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:20 vm03 bash[23382]: audit 2026-03-10T07:33:19.241224+0000 mon.b (mon.1) 405 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-59"}]: dispatch 2026-03-10T07:33:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:20 vm03 bash[23382]: audit 2026-03-10T07:33:19.242555+0000 mon.a (mon.0) 2555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:20 vm03 bash[23382]: audit 2026-03-10T07:33:19.242555+0000 mon.a (mon.0) 2555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:20 vm03 bash[23382]: audit 2026-03-10T07:33:19.243205+0000 mon.a (mon.0) 2556 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-59"}]: dispatch 2026-03-10T07:33:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:20 vm03 bash[23382]: audit 2026-03-10T07:33:19.243205+0000 mon.a (mon.0) 2556 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-59"}]: dispatch 2026-03-10T07:33:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:20 vm03 bash[23382]: cluster 2026-03-10T07:33:20.176704+0000 mon.a (mon.0) 2557 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in 2026-03-10T07:33:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:20 vm03 bash[23382]: cluster 2026-03-10T07:33:20.176704+0000 mon.a (mon.0) 2557 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:20 vm00 bash[28005]: cluster 2026-03-10T07:33:18.644016+0000 mgr.y (mgr.24407) 325 : cluster [DBG] pgmap v522: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 843 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:20 vm00 bash[28005]: cluster 2026-03-10T07:33:18.644016+0000 mgr.y (mgr.24407) 325 : cluster [DBG] pgmap v522: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 843 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:20 vm00 bash[28005]: cluster 2026-03-10T07:33:19.167495+0000 mon.a (mon.0) 2552 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:20 vm00 bash[28005]: cluster 2026-03-10T07:33:19.167495+0000 mon.a (mon.0) 2552 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:20 vm00 bash[28005]: audit 2026-03-10T07:33:19.169870+0000 mon.a (mon.0) 2553 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:20 vm00 bash[28005]: audit 2026-03-10T07:33:19.169870+0000 mon.a (mon.0) 2553 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:20 vm00 bash[28005]: cluster 2026-03-10T07:33:19.177325+0000 mon.a (mon.0) 2554 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:20 vm00 bash[28005]: cluster 2026-03-10T07:33:19.177325+0000 mon.a (mon.0) 2554 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:20 vm00 bash[28005]: audit 2026-03-10T07:33:19.179516+0000 mon.b (mon.1) 403 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:20 vm00 bash[28005]: audit 2026-03-10T07:33:19.179516+0000 mon.b (mon.1) 403 : audit [DBG] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:20 vm00 bash[28005]: audit 2026-03-10T07:33:19.240522+0000 mon.b (mon.1) 404 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:20 vm00 bash[28005]: audit 2026-03-10T07:33:19.240522+0000 mon.b (mon.1) 404 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:20 vm00 bash[28005]: audit 2026-03-10T07:33:19.241224+0000 mon.b (mon.1) 405 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-59"}]: dispatch 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:20 vm00 bash[28005]: audit 2026-03-10T07:33:19.241224+0000 mon.b (mon.1) 405 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-59"}]: dispatch 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:20 vm00 bash[28005]: audit 2026-03-10T07:33:19.242555+0000 mon.a (mon.0) 2555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:20 vm00 bash[28005]: audit 2026-03-10T07:33:19.242555+0000 mon.a (mon.0) 2555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:20 vm00 bash[28005]: audit 2026-03-10T07:33:19.243205+0000 mon.a (mon.0) 2556 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-59"}]: dispatch 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:20 vm00 bash[28005]: audit 2026-03-10T07:33:19.243205+0000 mon.a (mon.0) 2556 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-59"}]: dispatch 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:20 vm00 bash[28005]: cluster 2026-03-10T07:33:20.176704+0000 mon.a (mon.0) 2557 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:20 vm00 bash[28005]: cluster 2026-03-10T07:33:20.176704+0000 mon.a (mon.0) 2557 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:20 vm00 bash[20701]: cluster 2026-03-10T07:33:18.644016+0000 mgr.y (mgr.24407) 325 : cluster [DBG] pgmap v522: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 843 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:20 vm00 bash[20701]: cluster 2026-03-10T07:33:18.644016+0000 mgr.y (mgr.24407) 325 : cluster [DBG] pgmap v522: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 843 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:20 vm00 bash[20701]: cluster 2026-03-10T07:33:19.167495+0000 mon.a (mon.0) 2552 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:20 vm00 bash[20701]: cluster 2026-03-10T07:33:19.167495+0000 mon.a (mon.0) 2552 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:20 vm00 bash[20701]: audit 2026-03-10T07:33:19.169870+0000 mon.a (mon.0) 2553 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:20 vm00 bash[20701]: audit 2026-03-10T07:33:19.169870+0000 mon.a (mon.0) 2553 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:33:20.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:20 vm00 bash[20701]: cluster 2026-03-10T07:33:19.177325+0000 mon.a (mon.0) 2554 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in 2026-03-10T07:33:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:20 vm00 bash[20701]: cluster 2026-03-10T07:33:19.177325+0000 mon.a (mon.0) 2554 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in 2026-03-10T07:33:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:20 vm00 bash[20701]: audit 2026-03-10T07:33:19.179516+0000 mon.b (mon.1) 403 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:33:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:20 vm00 bash[20701]: audit 2026-03-10T07:33:19.179516+0000 mon.b (mon.1) 403 : audit [DBG] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:33:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:20 vm00 bash[20701]: audit 2026-03-10T07:33:19.240522+0000 mon.b (mon.1) 404 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:20 vm00 bash[20701]: audit 2026-03-10T07:33:19.240522+0000 mon.b (mon.1) 404 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:20 vm00 bash[20701]: audit 2026-03-10T07:33:19.241224+0000 mon.b (mon.1) 405 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-59"}]: dispatch 2026-03-10T07:33:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:20 vm00 bash[20701]: audit 2026-03-10T07:33:19.241224+0000 mon.b (mon.1) 405 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-59"}]: dispatch 2026-03-10T07:33:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:20 vm00 bash[20701]: audit 2026-03-10T07:33:19.242555+0000 mon.a (mon.0) 2555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:20 vm00 bash[20701]: audit 2026-03-10T07:33:19.242555+0000 mon.a (mon.0) 2555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:20 vm00 bash[20701]: audit 2026-03-10T07:33:19.243205+0000 mon.a (mon.0) 2556 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-59"}]: dispatch 2026-03-10T07:33:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:20 vm00 bash[20701]: audit 2026-03-10T07:33:19.243205+0000 mon.a (mon.0) 2556 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-59"}]: dispatch 2026-03-10T07:33:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:20 vm00 bash[20701]: cluster 2026-03-10T07:33:20.176704+0000 mon.a (mon.0) 2557 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in 2026-03-10T07:33:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:20 vm00 bash[20701]: cluster 2026-03-10T07:33:20.176704+0000 mon.a (mon.0) 2557 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in 2026-03-10T07:33:21.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:33:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:33:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:33:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:22 vm03 bash[23382]: cluster 2026-03-10T07:33:20.644407+0000 mgr.y (mgr.24407) 326 : cluster [DBG] pgmap v525: 260 pgs: 260 active+clean; 8.3 MiB data, 861 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-10T07:33:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:22 vm03 bash[23382]: cluster 2026-03-10T07:33:20.644407+0000 mgr.y (mgr.24407) 326 : cluster [DBG] pgmap v525: 260 pgs: 260 active+clean; 8.3 MiB data, 861 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-10T07:33:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:22 vm03 bash[23382]: cluster 2026-03-10T07:33:21.187433+0000 mon.a (mon.0) 2558 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in 2026-03-10T07:33:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:22 vm03 bash[23382]: cluster 2026-03-10T07:33:21.187433+0000 mon.a (mon.0) 2558 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in 2026-03-10T07:33:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:22 vm03 bash[23382]: audit 2026-03-10T07:33:21.200597+0000 mon.b (mon.1) 406 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:22 vm03 bash[23382]: audit 2026-03-10T07:33:21.200597+0000 mon.b (mon.1) 406 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:22 vm03 bash[23382]: audit 2026-03-10T07:33:21.204971+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:22 vm03 bash[23382]: audit 2026-03-10T07:33:21.204971+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:22.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:22 vm00 bash[20701]: cluster 2026-03-10T07:33:20.644407+0000 mgr.y (mgr.24407) 326 : cluster [DBG] pgmap v525: 260 pgs: 260 active+clean; 8.3 MiB data, 861 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-10T07:33:22.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:22 vm00 bash[20701]: cluster 2026-03-10T07:33:20.644407+0000 mgr.y (mgr.24407) 326 : cluster [DBG] pgmap v525: 260 pgs: 260 active+clean; 8.3 MiB data, 861 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-10T07:33:22.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:22 vm00 bash[20701]: cluster 2026-03-10T07:33:21.187433+0000 mon.a (mon.0) 2558 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in 2026-03-10T07:33:22.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:22 vm00 bash[20701]: cluster 2026-03-10T07:33:21.187433+0000 mon.a (mon.0) 2558 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in 2026-03-10T07:33:22.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:22 vm00 bash[20701]: audit 2026-03-10T07:33:21.200597+0000 mon.b (mon.1) 406 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:22.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:22 vm00 bash[20701]: audit 2026-03-10T07:33:21.200597+0000 mon.b (mon.1) 406 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:22.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:22 vm00 bash[20701]: audit 2026-03-10T07:33:21.204971+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:22.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:22 vm00 bash[20701]: audit 2026-03-10T07:33:21.204971+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:22.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:22 vm00 bash[28005]: cluster 2026-03-10T07:33:20.644407+0000 mgr.y (mgr.24407) 326 : cluster [DBG] pgmap v525: 260 pgs: 260 active+clean; 8.3 MiB data, 861 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-10T07:33:22.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:22 vm00 bash[28005]: cluster 2026-03-10T07:33:20.644407+0000 mgr.y (mgr.24407) 326 : cluster [DBG] pgmap v525: 260 pgs: 260 active+clean; 8.3 MiB data, 861 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-10T07:33:22.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:22 vm00 bash[28005]: cluster 2026-03-10T07:33:21.187433+0000 mon.a (mon.0) 2558 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in 2026-03-10T07:33:22.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:22 vm00 bash[28005]: cluster 2026-03-10T07:33:21.187433+0000 mon.a (mon.0) 2558 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in 2026-03-10T07:33:22.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:22 vm00 bash[28005]: audit 2026-03-10T07:33:21.200597+0000 mon.b (mon.1) 406 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:22.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:22 vm00 bash[28005]: audit 2026-03-10T07:33:21.200597+0000 mon.b (mon.1) 406 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:22.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:22 vm00 bash[28005]: audit 2026-03-10T07:33:21.204971+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:22.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:22 vm00 bash[28005]: audit 2026-03-10T07:33:21.204971+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:33:23.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:33:23 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:33:23.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:23 vm03 bash[23382]: audit 2026-03-10T07:33:22.201515+0000 mon.a (mon.0) 2560 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:33:23.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:23 vm03 bash[23382]: audit 2026-03-10T07:33:22.201515+0000 mon.a (mon.0) 2560 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:33:23.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:23 vm03 bash[23382]: cluster 2026-03-10T07:33:22.222121+0000 mon.a (mon.0) 2561 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in 2026-03-10T07:33:23.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:23 vm03 bash[23382]: cluster 2026-03-10T07:33:22.222121+0000 mon.a (mon.0) 2561 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in 2026-03-10T07:33:23.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:23 vm03 bash[23382]: audit 2026-03-10T07:33:22.227834+0000 mon.b (mon.1) 407 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:33:23.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:23 vm03 bash[23382]: audit 2026-03-10T07:33:22.227834+0000 mon.b (mon.1) 407 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:33:23.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:23 vm03 bash[23382]: audit 2026-03-10T07:33:22.236434+0000 mon.b (mon.1) 408 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:33:23.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:23 vm03 bash[23382]: audit 2026-03-10T07:33:22.236434+0000 mon.b (mon.1) 408 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:33:23.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:23 vm03 bash[23382]: audit 2026-03-10T07:33:22.243288+0000 mon.a (mon.0) 2562 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:33:23.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:23 vm03 bash[23382]: audit 2026-03-10T07:33:22.243288+0000 mon.a (mon.0) 2562 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:33:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:23 vm00 bash[28005]: audit 2026-03-10T07:33:22.201515+0000 mon.a (mon.0) 2560 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:33:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:23 vm00 bash[28005]: audit 2026-03-10T07:33:22.201515+0000 mon.a (mon.0) 2560 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:33:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:23 vm00 bash[28005]: cluster 2026-03-10T07:33:22.222121+0000 mon.a (mon.0) 2561 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in 2026-03-10T07:33:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:23 vm00 bash[28005]: cluster 2026-03-10T07:33:22.222121+0000 mon.a (mon.0) 2561 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in 2026-03-10T07:33:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:23 vm00 bash[28005]: audit 2026-03-10T07:33:22.227834+0000 mon.b (mon.1) 407 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:33:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:23 vm00 bash[28005]: audit 2026-03-10T07:33:22.227834+0000 mon.b (mon.1) 407 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:33:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:23 vm00 bash[28005]: audit 2026-03-10T07:33:22.236434+0000 mon.b (mon.1) 408 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:33:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:23 vm00 bash[28005]: audit 2026-03-10T07:33:22.236434+0000 mon.b (mon.1) 408 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:33:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:23 vm00 bash[28005]: audit 2026-03-10T07:33:22.243288+0000 mon.a (mon.0) 2562 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:33:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:23 vm00 bash[28005]: audit 2026-03-10T07:33:22.243288+0000 mon.a (mon.0) 2562 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:33:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:23 vm00 bash[20701]: audit 2026-03-10T07:33:22.201515+0000 mon.a (mon.0) 2560 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:33:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:23 vm00 bash[20701]: audit 2026-03-10T07:33:22.201515+0000 mon.a (mon.0) 2560 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:33:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:23 vm00 bash[20701]: cluster 2026-03-10T07:33:22.222121+0000 mon.a (mon.0) 2561 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in 2026-03-10T07:33:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:23 vm00 bash[20701]: cluster 2026-03-10T07:33:22.222121+0000 mon.a (mon.0) 2561 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in 2026-03-10T07:33:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:23 vm00 bash[20701]: audit 2026-03-10T07:33:22.227834+0000 mon.b (mon.1) 407 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:33:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:23 vm00 bash[20701]: audit 2026-03-10T07:33:22.227834+0000 mon.b (mon.1) 407 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:33:23.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:23 vm00 bash[20701]: audit 2026-03-10T07:33:22.236434+0000 mon.b (mon.1) 408 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:33:23.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:23 vm00 bash[20701]: audit 2026-03-10T07:33:22.236434+0000 mon.b (mon.1) 408 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:33:23.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:23 vm00 bash[20701]: audit 2026-03-10T07:33:22.243288+0000 mon.a (mon.0) 2562 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:33:23.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:23 vm00 bash[20701]: audit 2026-03-10T07:33:22.243288+0000 mon.a (mon.0) 2562 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:33:24.241 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:24 vm00 bash[28005]: cluster 2026-03-10T07:33:22.644831+0000 mgr.y (mgr.24407) 327 : cluster [DBG] pgmap v528: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 861 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-10T07:33:24.241 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:24 vm00 bash[28005]: cluster 2026-03-10T07:33:22.644831+0000 mgr.y (mgr.24407) 327 : cluster [DBG] pgmap v528: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 861 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-10T07:33:24.241 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:24 vm00 bash[28005]: audit 2026-03-10T07:33:23.181034+0000 mgr.y (mgr.24407) 328 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:33:24.241 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:24 vm00 bash[28005]: audit 2026-03-10T07:33:23.181034+0000 mgr.y (mgr.24407) 328 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:33:24.241 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:24 vm00 bash[28005]: audit 2026-03-10T07:33:23.215272+0000 mon.a (mon.0) 2563 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:33:24.241 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:24 vm00 bash[28005]: audit 2026-03-10T07:33:23.215272+0000 mon.a (mon.0) 2563 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:33:24.241 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:24 vm00 bash[28005]: cluster 2026-03-10T07:33:23.219920+0000 mon.a (mon.0) 2564 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in 2026-03-10T07:33:24.241 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:24 vm00 bash[28005]: cluster 2026-03-10T07:33:23.219920+0000 mon.a (mon.0) 2564 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in 2026-03-10T07:33:24.241 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:24 vm00 bash[28005]: audit 2026-03-10T07:33:23.284682+0000 mon.b (mon.1) 409 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:24.241 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:24 vm00 bash[28005]: audit 2026-03-10T07:33:23.284682+0000 mon.b (mon.1) 409 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:24.241 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:24 vm00 bash[28005]: audit 2026-03-10T07:33:23.285689+0000 mon.b (mon.1) 410 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-61"}]: dispatch 2026-03-10T07:33:24.241 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:24 vm00 bash[28005]: audit 2026-03-10T07:33:23.285689+0000 mon.b (mon.1) 410 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-61"}]: dispatch 2026-03-10T07:33:24.241 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:24 vm00 bash[28005]: audit 2026-03-10T07:33:23.286825+0000 mon.a (mon.0) 2565 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:24.241 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:24 vm00 bash[20701]: cluster 2026-03-10T07:33:22.644831+0000 mgr.y (mgr.24407) 327 : cluster [DBG] pgmap v528: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 861 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-10T07:33:24.241 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:24 vm00 bash[20701]: cluster 2026-03-10T07:33:22.644831+0000 mgr.y (mgr.24407) 327 : cluster [DBG] pgmap v528: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 861 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-10T07:33:24.241 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:24 vm00 bash[20701]: audit 2026-03-10T07:33:23.181034+0000 mgr.y (mgr.24407) 328 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:33:24.241 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:24 vm00 bash[20701]: audit 2026-03-10T07:33:23.181034+0000 mgr.y (mgr.24407) 328 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:33:24.241 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:24 vm00 bash[20701]: audit 2026-03-10T07:33:23.215272+0000 mon.a (mon.0) 2563 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:33:24.241 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:24 vm00 bash[20701]: audit 2026-03-10T07:33:23.215272+0000 mon.a (mon.0) 2563 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:33:24.242 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:24 vm00 bash[20701]: cluster 2026-03-10T07:33:23.219920+0000 mon.a (mon.0) 2564 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in 2026-03-10T07:33:24.242 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:24 vm00 bash[20701]: cluster 2026-03-10T07:33:23.219920+0000 mon.a (mon.0) 2564 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in 2026-03-10T07:33:24.242 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:24 vm00 bash[20701]: audit 2026-03-10T07:33:23.284682+0000 mon.b (mon.1) 409 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:24.242 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:24 vm00 bash[20701]: audit 2026-03-10T07:33:23.284682+0000 mon.b (mon.1) 409 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:24.242 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:24 vm00 bash[20701]: audit 2026-03-10T07:33:23.285689+0000 mon.b (mon.1) 410 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-61"}]: dispatch 2026-03-10T07:33:24.242 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:24 vm00 bash[20701]: audit 2026-03-10T07:33:23.285689+0000 mon.b (mon.1) 410 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-61"}]: dispatch 2026-03-10T07:33:24.242 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:24 vm00 bash[20701]: audit 2026-03-10T07:33:23.286825+0000 mon.a (mon.0) 2565 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:24.242 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:24 vm00 bash[20701]: audit 2026-03-10T07:33:23.286825+0000 mon.a (mon.0) 2565 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:33:24.242 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:24 vm00 bash[20701]: audit 2026-03-10T07:33:23.287690+0000 mon.a (mon.0) 2566 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-61"}]: dispatch 2026-03-10T07:33:24.242 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:24 vm00 bash[20701]: audit 2026-03-10T07:33:23.287690+0000 mon.a (mon.0) 2566 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-61"}]: dispatch
2026-03-10T07:33:24.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:24 vm03 bash[23382]: cluster 2026-03-10T07:33:22.644831+0000 mgr.y (mgr.24407) 327 : cluster [DBG] pgmap v528: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 861 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s
2026-03-10T07:33:24.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:24 vm03 bash[23382]: audit 2026-03-10T07:33:23.181034+0000 mgr.y (mgr.24407) 328 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:33:24.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:24 vm03 bash[23382]: audit 2026-03-10T07:33:23.215272+0000 mon.a (mon.0) 2563 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T07:33:24.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:24 vm03 bash[23382]: cluster 2026-03-10T07:33:23.219920+0000 mon.a (mon.0) 2564 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in
2026-03-10T07:33:24.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:24 vm03 bash[23382]: audit 2026-03-10T07:33:23.284682+0000 mon.b (mon.1) 409 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:33:24.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:24 vm03 bash[23382]: audit 2026-03-10T07:33:23.285689+0000 mon.b (mon.1) 410 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-61"}]: dispatch
2026-03-10T07:33:24.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:24 vm03 bash[23382]: audit 2026-03-10T07:33:23.286825+0000 mon.a (mon.0) 2565 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:33:24.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:24 vm03 bash[23382]: audit 2026-03-10T07:33:23.287690+0000 mon.a (mon.0) 2566 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-61"}]: dispatch
2026-03-10T07:33:24.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:24 vm00 bash[28005]: audit 2026-03-10T07:33:23.286825+0000 mon.a (mon.0) 2565 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:33:24.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:24 vm00 bash[28005]: audit 2026-03-10T07:33:23.287690+0000 mon.a (mon.0) 2566 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-61"}]: dispatch
2026-03-10T07:33:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:25 vm00 bash[28005]: cluster 2026-03-10T07:33:24.264252+0000 mon.a (mon.0) 2567 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in
2026-03-10T07:33:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:25 vm00 bash[28005]: audit 2026-03-10T07:33:24.477361+0000 mon.c (mon.2) 294 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:33:25.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:25 vm00 bash[20701]: cluster 2026-03-10T07:33:24.264252+0000 mon.a (mon.0) 2567 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in
2026-03-10T07:33:25.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:25 vm00 bash[20701]: audit 2026-03-10T07:33:24.477361+0000 mon.c (mon.2) 294 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:33:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:25 vm03 bash[23382]: cluster 2026-03-10T07:33:24.264252+0000 mon.a (mon.0) 2567 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in
2026-03-10T07:33:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:25 vm03 bash[23382]: audit 2026-03-10T07:33:24.477361+0000 mon.c (mon.2) 294 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:33:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:26 vm00 bash[28005]: cluster 2026-03-10T07:33:24.645289+0000 mgr.y (mgr.24407) 329 : cluster [DBG] pgmap v531: 260 pgs: 260 active+clean; 8.3 MiB data, 883 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:33:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:26 vm00 bash[28005]: cluster 2026-03-10T07:33:25.285291+0000 mon.a (mon.0) 2568 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in
2026-03-10T07:33:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:26 vm00 bash[28005]: audit 2026-03-10T07:33:25.286281+0000 mon.b (mon.1) 411 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-63","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:33:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:26 vm00 bash[28005]: audit 2026-03-10T07:33:25.288279+0000 mon.a (mon.0) 2569 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-63","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:33:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:26 vm00 bash[28005]: cluster 2026-03-10T07:33:26.162385+0000 mon.a (mon.0) 2570 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:33:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:26 vm00 bash[20701]: cluster 2026-03-10T07:33:24.645289+0000 mgr.y (mgr.24407) 329 : cluster [DBG] pgmap v531: 260 pgs: 260 active+clean; 8.3 MiB data, 883 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:33:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:26 vm00 bash[20701]: cluster 2026-03-10T07:33:25.285291+0000 mon.a (mon.0) 2568 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in
2026-03-10T07:33:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:26 vm00 bash[20701]: audit 2026-03-10T07:33:25.286281+0000 mon.b (mon.1) 411 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-63","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:33:26.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:26 vm00 bash[20701]: audit 2026-03-10T07:33:25.288279+0000 mon.a (mon.0) 2569 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-63","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:33:26.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:26 vm00 bash[20701]: cluster 2026-03-10T07:33:26.162385+0000 mon.a (mon.0) 2570 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:33:26.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:26 vm03 bash[23382]: cluster 2026-03-10T07:33:24.645289+0000 mgr.y (mgr.24407) 329 : cluster [DBG] pgmap v531: 260 pgs: 260 active+clean; 8.3 MiB data, 883 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:33:26.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:26 vm03 bash[23382]: cluster 2026-03-10T07:33:25.285291+0000 mon.a (mon.0) 2568 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in
2026-03-10T07:33:26.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:26 vm03 bash[23382]: audit 2026-03-10T07:33:25.286281+0000 mon.b (mon.1) 411 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-63","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:33:26.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:26 vm03 bash[23382]: audit 2026-03-10T07:33:25.288279+0000 mon.a (mon.0) 2569 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-63","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:33:26.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:26 vm03 bash[23382]: cluster 2026-03-10T07:33:26.162385+0000 mon.a (mon.0) 2570 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:33:27.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:27 vm00 bash[28005]: audit 2026-03-10T07:33:26.277014+0000 mon.a (mon.0) 2571 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-63","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:33:27.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:27 vm00 bash[28005]: audit 2026-03-10T07:33:26.281265+0000 mon.b (mon.1) 412 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:33:27.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:27 vm00 bash[28005]: cluster 2026-03-10T07:33:26.282620+0000 mon.a (mon.0) 2572 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in
2026-03-10T07:33:27.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:27 vm00 bash[28005]: audit 2026-03-10T07:33:26.283685+0000 mon.b (mon.1) 413 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:33:27.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:27 vm00 bash[28005]: audit 2026-03-10T07:33:26.295235+0000 mon.a (mon.0) 2573 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:33:27.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:27 vm00 bash[28005]: audit 2026-03-10T07:33:27.280772+0000 mon.a (mon.0) 2574 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T07:33:27.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:27 vm00 bash[28005]: cluster 2026-03-10T07:33:27.289277+0000 mon.a (mon.0) 2575 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in
2026-03-10T07:33:27.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:27 vm00 bash[20701]: audit 2026-03-10T07:33:26.277014+0000 mon.a (mon.0) 2571 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-63","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:33:27.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:27 vm00 bash[20701]: audit 2026-03-10T07:33:26.281265+0000 mon.b (mon.1) 412 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:33:27.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:27 vm00 bash[20701]: cluster 2026-03-10T07:33:26.282620+0000 mon.a (mon.0) 2572 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in
2026-03-10T07:33:27.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:27 vm00 bash[20701]: audit 2026-03-10T07:33:26.283685+0000 mon.b (mon.1) 413 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:33:27.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:27 vm00 bash[20701]: audit 2026-03-10T07:33:26.295235+0000 mon.a (mon.0) 2573 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:33:27.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:27 vm00 bash[20701]: audit 2026-03-10T07:33:27.280772+0000 mon.a (mon.0) 2574 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T07:33:27.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:27 vm00 bash[20701]: cluster 2026-03-10T07:33:27.289277+0000 mon.a (mon.0) 2575 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in
2026-03-10T07:33:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:27 vm03 bash[23382]: audit 2026-03-10T07:33:26.277014+0000 mon.a (mon.0) 2571 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-63","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:33:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:27 vm03 bash[23382]: audit 2026-03-10T07:33:26.281265+0000 mon.b (mon.1) 412 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:33:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:27 vm03 bash[23382]: cluster 2026-03-10T07:33:26.282620+0000 mon.a (mon.0) 2572 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in
2026-03-10T07:33:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:27 vm03 bash[23382]: audit 2026-03-10T07:33:26.283685+0000 mon.b (mon.1) 413 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:33:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:27 vm03 bash[23382]: audit 2026-03-10T07:33:26.295235+0000 mon.a (mon.0) 2573 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:33:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:27 vm03 bash[23382]: audit 2026-03-10T07:33:27.280772+0000 mon.a (mon.0) 2574 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T07:33:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:27 vm03 bash[23382]: cluster 2026-03-10T07:33:27.289277+0000 mon.a (mon.0) 2575 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in
2026-03-10T07:33:28.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:28 vm00 bash[28005]: cluster 2026-03-10T07:33:26.645696+0000 mgr.y (mgr.24407) 330 : cluster [DBG] pgmap v534: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 901 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T07:33:28.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:28 vm00 bash[20701]: cluster 2026-03-10T07:33:26.645696+0000 mgr.y (mgr.24407) 330 : cluster [DBG] pgmap v534: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 901 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T07:33:28.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:28 vm03 bash[23382]: cluster 2026-03-10T07:33:26.645696+0000 mgr.y (mgr.24407) 330 : cluster [DBG] pgmap v534: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 901 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T07:33:29.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:29 vm03 bash[23382]: cluster 2026-03-10T07:33:28.365602+0000 mon.a (mon.0) 2576 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in
2026-03-10T07:33:29.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:29 vm03 bash[23382]: cluster 2026-03-10T07:33:28.646090+0000 mgr.y (mgr.24407) 331 : cluster [DBG] pgmap v537: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 901 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T07:33:29.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:29 vm00 bash[28005]: cluster 2026-03-10T07:33:28.365602+0000 mon.a (mon.0) 2576 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in
2026-03-10T07:33:29.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:29 vm00 bash[28005]: cluster 2026-03-10T07:33:28.646090+0000 mgr.y (mgr.24407) 331 : cluster [DBG] pgmap v537: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 901 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T07:33:29.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:29 vm00 bash[20701]: cluster 2026-03-10T07:33:28.365602+0000 mon.a (mon.0) 2576 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in
2026-03-10T07:33:29.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:29 vm00 bash[20701]: cluster 2026-03-10T07:33:28.646090+0000 mgr.y (mgr.24407) 331 : cluster [DBG] pgmap v537: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 901 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T07:33:30.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:30 vm03 bash[23382]: cluster 2026-03-10T07:33:29.414369+0000 mon.a (mon.0) 2577 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in
2026-03-10T07:33:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:30 vm00 bash[28005]: cluster 2026-03-10T07:33:29.414369+0000 mon.a (mon.0) 2577 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in
2026-03-10T07:33:30.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:30 vm00 bash[20701]: cluster 2026-03-10T07:33:29.414369+0000 mon.a (mon.0) 2577 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in
2026-03-10T07:33:31.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:33:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:33:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:33:31.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:31 vm03 bash[23382]: cluster 2026-03-10T07:33:30.397918+0000 mon.a (mon.0) 2578 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in
2026-03-10T07:33:31.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:31 vm03 bash[23382]: cluster 2026-03-10T07:33:30.646471+0000 mgr.y (mgr.24407) 332 : cluster [DBG] pgmap v540: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 2.2 KiB/s wr, 5 op/s
2026-03-10T07:33:31.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:31 vm00 bash[28005]: cluster 2026-03-10T07:33:30.397918+0000 mon.a (mon.0) 2578 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in
2026-03-10T07:33:31.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:31 vm00 bash[28005]: cluster 2026-03-10T07:33:30.646471+0000 mgr.y (mgr.24407) 332 : cluster [DBG] pgmap v540: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 2.2 KiB/s wr, 5 op/s
2026-03-10T07:33:31.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:31 vm00 bash[20701]: cluster 2026-03-10T07:33:30.397918+0000 mon.a (mon.0) 2578 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in
2026-03-10T07:33:31.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:31 vm00 bash[20701]: cluster 2026-03-10T07:33:30.646471+0000 mgr.y (mgr.24407) 332 : cluster [DBG] pgmap v540: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 2.2 KiB/s wr, 5 op/s
2026-03-10T07:33:32.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:32 vm03 bash[23382]: cluster 2026-03-10T07:33:31.407367+0000 mon.a (mon.0) 2579 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in
2026-03-10T07:33:32.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:32 vm00 bash[28005]: cluster 2026-03-10T07:33:31.407367+0000 mon.a (mon.0) 2579 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in
2026-03-10T07:33:32.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:32 vm00 bash[20701]: cluster 2026-03-10T07:33:31.407367+0000 mon.a (mon.0) 2579 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in
2026-03-10T07:33:33.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:33:33 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:33:33.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:33 vm03 bash[23382]: cluster 2026-03-10T07:33:32.646800+0000 mgr.y (mgr.24407) 333 : cluster [DBG] pgmap v542: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 2.1 KiB/s wr, 5 op/s
2026-03-10T07:33:33.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:33 vm03 bash[23382]: audit 2026-03-10T07:33:33.188370+0000 mgr.y (mgr.24407) 334 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:33:33.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:33 vm00 bash[28005]: cluster 2026-03-10T07:33:32.646800+0000 mgr.y (mgr.24407) 333 : cluster [DBG] pgmap v542: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 2.1 KiB/s wr, 5 op/s
2026-03-10T07:33:33.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:33 vm00 bash[28005]: audit 2026-03-10T07:33:33.188370+0000 mgr.y (mgr.24407) 334 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:33:33.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:33 vm00 bash[20701]: cluster 2026-03-10T07:33:32.646800+0000 mgr.y (mgr.24407) 333 : cluster [DBG] pgmap v542: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 2.1 KiB/s wr, 5 op/s
2026-03-10T07:33:33.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:33 vm00 bash[20701]: audit 2026-03-10T07:33:33.188370+0000 mgr.y (mgr.24407) 334 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:33:36.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:35 vm03 bash[23382]: cluster 2026-03-10T07:33:34.647268+0000 mgr.y (mgr.24407) 335 : cluster [DBG] pgmap v543: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1.5 KiB/s wr, 3 op/s
2026-03-10T07:33:36.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:35 vm00 bash[28005]: cluster 2026-03-10T07:33:34.647268+0000 mgr.y (mgr.24407) 335 : cluster [DBG] pgmap v543: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1.5 KiB/s wr, 3 op/s
2026-03-10T07:33:36.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:35 vm00 bash[20701]: cluster 2026-03-10T07:33:34.647268+0000 mgr.y (mgr.24407) 335 : cluster [DBG] pgmap v543: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1.5 KiB/s wr, 3 op/s
2026-03-10T07:33:38.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:37 vm03 bash[23382]: cluster 2026-03-10T07:33:36.648117+0000 mgr.y (mgr.24407) 336 : cluster [DBG] pgmap v544: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.5 KiB/s wr, 5 op/s
2026-03-10T07:33:38.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:37 vm00 bash[28005]: cluster 2026-03-10T07:33:36.648117+0000 mgr.y (mgr.24407) 336 : cluster [DBG] pgmap v544: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.5 KiB/s wr, 5 op/s
2026-03-10T07:33:38.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:37 vm00 bash[20701]: cluster 2026-03-10T07:33:36.648117+0000 mgr.y (mgr.24407) 336 : cluster [DBG] pgmap v544: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.5 KiB/s wr, 5 op/s
2026-03-10T07:33:40.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:39 vm03 bash[23382]: cluster 2026-03-10T07:33:38.648574+0000 mgr.y (mgr.24407) 337 : cluster [DBG] pgmap v545: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 248 B/s wr, 1 op/s
2026-03-10T07:33:40.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:39 vm03 bash[23382]: audit 2026-03-10T07:33:39.484654+0000 mon.c (mon.2) 295 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:33:40.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:39 vm00 bash[28005]: cluster 2026-03-10T07:33:38.648574+0000 mgr.y (mgr.24407) 337 : cluster [DBG] pgmap v545: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 248 B/s wr, 1 op/s
2026-03-10T07:33:40.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:39 vm00 bash[28005]: audit 2026-03-10T07:33:39.484654+0000 mon.c (mon.2) 295 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:33:40.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:39 vm00 bash[20701]: cluster 2026-03-10T07:33:38.648574+0000 mgr.y (mgr.24407) 337 : cluster [DBG] pgmap v545: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 248 B/s wr, 1 op/s
2026-03-10T07:33:40.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:39 vm00 bash[20701]: audit 2026-03-10T07:33:39.484654+0000 mon.c (mon.2) 295 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:33:41.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:33:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:33:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:33:42.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:42 vm03 bash[23382]: cluster 2026-03-10T07:33:40.649358+0000 mgr.y (mgr.24407) 338 : cluster [DBG] pgmap v546: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 1 op/s
2026-03-10T07:33:42.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:42 vm03 bash[23382]: cluster 2026-03-10T07:33:41.172550+0000 mon.a (mon.0) 2580 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in
2026-03-10T07:33:42.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:42 vm00 bash[20701]: cluster 2026-03-10T07:33:40.649358+0000 mgr.y (mgr.24407) 338 : cluster [DBG] pgmap v546: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 1 op/s
2026-03-10T07:33:42.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:42 vm00 bash[20701]: cluster 2026-03-10T07:33:41.172550+0000 mon.a (mon.0) 2580 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in
2026-03-10T07:33:42.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:42 vm00 bash[28005]: cluster 2026-03-10T07:33:40.649358+0000 mgr.y (mgr.24407) 338 : cluster [DBG] pgmap v546: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 1 op/s
2026-03-10T07:33:42.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:42 vm00 bash[28005]: cluster 2026-03-10T07:33:41.172550+0000 mon.a (mon.0) 2580 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in
2026-03-10T07:33:43.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:43 vm03 bash[23382]: cluster 2026-03-10T07:33:42.353534+0000 mon.a (mon.0) 2581 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in
2026-03-10T07:33:43.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:43 vm03 bash[23382]: cluster 2026-03-10T07:33:42.649720+0000 mgr.y (mgr.24407) 339 : cluster [DBG] pgmap v549: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s
2026-03-10T07:33:43.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:43 vm03 bash[23382]: audit 2026-03-10T07:33:43.196319+0000 mgr.y (mgr.24407) 340 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:33:43.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:33:43 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:33:43.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:43 vm00 bash[28005]: cluster 2026-03-10T07:33:42.353534+0000 mon.a (mon.0) 2581 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in
2026-03-10T07:33:43.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:43 vm00 bash[28005]: cluster 2026-03-10T07:33:42.649720+0000 mgr.y (mgr.24407) 339 : cluster [DBG] pgmap v549: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s
2026-03-10T07:33:43.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:43 vm00 bash[28005]: audit 2026-03-10T07:33:43.196319+0000 mgr.y (mgr.24407) 340 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:33:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:43 vm00 bash[20701]: cluster 2026-03-10T07:33:42.353534+0000 mon.a (mon.0) 2581 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in
2026-03-10T07:33:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:43 vm00 bash[20701]: cluster 2026-03-10T07:33:42.649720+0000 mgr.y (mgr.24407) 339 : cluster [DBG] pgmap v549: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s
2026-03-10T07:33:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:43 vm00 bash[20701]: audit 2026-03-10T07:33:43.196319+0000 mgr.y (mgr.24407) 340 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:33:46.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:45 vm03 bash[23382]: cluster 2026-03-10T07:33:44.650336+0000 mgr.y (mgr.24407) 341 : cluster [DBG] pgmap v550: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T07:33:46.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:45 vm00 bash[28005]: cluster 2026-03-10T07:33:44.650336+0000 mgr.y (mgr.24407) 341 : cluster [DBG] pgmap v550: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T07:33:46.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:45 vm00 bash[20701]: cluster 2026-03-10T07:33:44.650336+0000 mgr.y (mgr.24407) 341 : cluster [DBG] pgmap v550: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T07:33:48.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:47 vm00 bash[28005]: cluster 2026-03-10T07:33:46.651201+0000 mgr.y (mgr.24407) 342 : cluster [DBG] pgmap v551: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:33:48.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:47 vm00 bash[20701]: cluster 2026-03-10T07:33:46.651201+0000 mgr.y (mgr.24407) 342 : cluster [DBG] pgmap v551: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:33:48.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:47 vm03 bash[23382]: cluster 2026-03-10T07:33:46.651201+0000 mgr.y (mgr.24407) 342 : cluster [DBG] pgmap v551: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:33:50.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:49 vm00 bash[28005]: cluster 2026-03-10T07:33:48.651557+0000 mgr.y (mgr.24407) 343 : cluster [DBG] pgmap v552: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:33:50.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:49 vm00 bash[20701]: cluster 2026-03-10T07:33:48.651557+0000 mgr.y (mgr.24407) 343 : cluster [DBG] pgmap v552: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:33:50.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:49 vm03 bash[23382]: cluster 2026-03-10T07:33:48.651557+0000 mgr.y (mgr.24407) 343 : cluster [DBG] pgmap v552: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:33:51.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:33:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:33:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:33:52.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:52 vm03 bash[23382]: cluster 2026-03-10T07:33:50.652448+0000 mgr.y (mgr.24407) 344 : cluster [DBG] pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:33:52.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:52 vm03 bash[23382]: cluster 2026-03-10T07:33:51.185919+0000 mon.a (mon.0) 2582 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in
2026-03-10T07:33:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:52 vm00 bash[28005]: cluster 2026-03-10T07:33:50.652448+0000 mgr.y (mgr.24407) 344 : cluster [DBG] pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:33:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:52 vm00 bash[28005]: cluster 2026-03-10T07:33:51.185919+0000 mon.a (mon.0) 2582 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in
2026-03-10T07:33:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:52 vm00 bash[20701]: cluster 2026-03-10T07:33:50.652448+0000 mgr.y (mgr.24407) 344 : cluster [DBG] pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:33:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:52 vm00 bash[20701]: cluster 2026-03-10T07:33:51.185919+0000 mon.a (mon.0) 2582 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in
2026-03-10T07:33:53.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:33:53 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:33:54.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:54 vm03 bash[23382]: cluster 2026-03-10T07:33:52.652821+0000 mgr.y (mgr.24407) 345 : cluster [DBG] pgmap v555: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:33:54.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:54 vm03 bash[23382]: audit 2026-03-10T07:33:53.201076+0000 mgr.y (mgr.24407) 346 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:33:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:54 vm03 bash[23382]: cluster 2026-03-10T07:33:53.281444+0000 mon.a (mon.0) 2583 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in
2026-03-10T07:33:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:54 vm00 bash[28005]: cluster 2026-03-10T07:33:52.652821+0000 mgr.y (mgr.24407) 345 : cluster [DBG] pgmap v555: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:33:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:54 vm00 bash[28005]: audit 2026-03-10T07:33:53.201076+0000 mgr.y (mgr.24407) 346 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:33:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:54 vm00 bash[28005]: cluster 2026-03-10T07:33:53.281444+0000 mon.a (mon.0) 2583 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in
2026-03-10T07:33:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:54 vm00 bash[20701]: cluster 2026-03-10T07:33:52.652821+0000 mgr.y (mgr.24407) 345 : cluster [DBG] pgmap v555: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB
avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:33:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:54 vm00 bash[20701]: audit 2026-03-10T07:33:53.201076+0000 mgr.y (mgr.24407) 346 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:33:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:54 vm00 bash[20701]: audit 2026-03-10T07:33:53.201076+0000 mgr.y (mgr.24407) 346 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:33:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:54 vm00 bash[20701]: cluster 2026-03-10T07:33:53.281444+0000 mon.a (mon.0) 2583 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-10T07:33:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:54 vm00 bash[20701]: cluster 2026-03-10T07:33:53.281444+0000 mon.a (mon.0) 2583 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-10T07:33:55.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:55 vm00 bash[28005]: audit 2026-03-10T07:33:54.493210+0000 mon.c (mon.2) 296 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:33:55.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:55 vm00 bash[28005]: audit 2026-03-10T07:33:54.493210+0000 mon.c (mon.2) 296 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:33:55.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:55 vm00 bash[20701]: audit 2026-03-10T07:33:54.493210+0000 mon.c (mon.2) 296 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:33:55.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:55 vm00 bash[20701]: audit 2026-03-10T07:33:54.493210+0000 mon.c (mon.2) 296 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:33:55.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:55 vm03 bash[23382]: audit 2026-03-10T07:33:54.493210+0000 mon.c (mon.2) 296 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:33:55.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:55 vm03 bash[23382]: audit 2026-03-10T07:33:54.493210+0000 mon.c (mon.2) 296 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:33:56.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:56 vm00 bash[28005]: cluster 2026-03-10T07:33:54.653415+0000 mgr.y (mgr.24407) 347 : cluster [DBG] pgmap v557: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:33:56.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:56 vm00 bash[28005]: cluster 2026-03-10T07:33:54.653415+0000 mgr.y (mgr.24407) 347 : cluster [DBG] pgmap v557: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:33:56.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:56 vm00 bash[20701]: cluster 2026-03-10T07:33:54.653415+0000 mgr.y (mgr.24407) 347 : cluster [DBG] pgmap v557: 292 pgs: 292 
active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:33:56.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:56 vm00 bash[20701]: cluster 2026-03-10T07:33:54.653415+0000 mgr.y (mgr.24407) 347 : cluster [DBG] pgmap v557: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:33:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:56 vm03 bash[23382]: cluster 2026-03-10T07:33:54.653415+0000 mgr.y (mgr.24407) 347 : cluster [DBG] pgmap v557: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:33:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:56 vm03 bash[23382]: cluster 2026-03-10T07:33:54.653415+0000 mgr.y (mgr.24407) 347 : cluster [DBG] pgmap v557: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:33:58.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:58 vm00 bash[28005]: cluster 2026-03-10T07:33:56.654074+0000 mgr.y (mgr.24407) 348 : cluster [DBG] pgmap v558: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-10T07:33:58.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:58 vm00 bash[28005]: cluster 2026-03-10T07:33:56.654074+0000 mgr.y (mgr.24407) 348 : cluster [DBG] pgmap v558: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-10T07:33:58.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:58 vm00 bash[20701]: cluster 2026-03-10T07:33:56.654074+0000 mgr.y (mgr.24407) 348 : cluster [DBG] pgmap v558: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-10T07:33:58.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:58 vm00 bash[20701]: cluster 2026-03-10T07:33:56.654074+0000 mgr.y (mgr.24407) 348 : cluster [DBG] pgmap v558: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-10T07:33:58.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:58 vm03 bash[23382]: cluster 2026-03-10T07:33:56.654074+0000 mgr.y (mgr.24407) 348 : cluster [DBG] pgmap v558: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-10T07:33:58.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:58 vm03 bash[23382]: cluster 2026-03-10T07:33:56.654074+0000 mgr.y (mgr.24407) 348 : cluster [DBG] pgmap v558: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-10T07:34:00.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:59 vm00 bash[28005]: cluster 2026-03-10T07:33:58.654489+0000 mgr.y (mgr.24407) 349 : cluster [DBG] pgmap v559: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:34:00.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:33:59 vm00 bash[28005]: cluster 2026-03-10T07:33:58.654489+0000 mgr.y (mgr.24407) 349 : cluster [DBG] pgmap v559: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:34:00.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:59 vm00 bash[20701]: cluster 2026-03-10T07:33:58.654489+0000 mgr.y (mgr.24407) 349 : cluster [DBG] pgmap v559: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:34:00.129 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:33:59 vm00 bash[20701]: cluster 2026-03-10T07:33:58.654489+0000 mgr.y (mgr.24407) 349 : cluster [DBG] pgmap v559: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:34:00.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:59 vm03 bash[23382]: cluster 2026-03-10T07:33:58.654489+0000 mgr.y (mgr.24407) 349 : cluster [DBG] pgmap v559: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:34:00.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:33:59 vm03 bash[23382]: cluster 2026-03-10T07:33:58.654489+0000 mgr.y (mgr.24407) 349 : cluster [DBG] pgmap v559: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:34:01.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:34:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:34:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:34:02.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:02 vm03 bash[23382]: cluster 2026-03-10T07:34:00.655579+0000 mgr.y (mgr.24407) 350 : cluster [DBG] pgmap v560: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:34:02.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:02 vm03 bash[23382]: cluster 2026-03-10T07:34:00.655579+0000 mgr.y (mgr.24407) 350 : cluster [DBG] pgmap v560: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:34:02.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:02 vm03 bash[23382]: cluster 2026-03-10T07:34:01.180206+0000 mon.a (mon.0) 2584 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-10T07:34:02.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:02 vm03 bash[23382]: cluster 2026-03-10T07:34:01.180206+0000 mon.a (mon.0) 2584 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-10T07:34:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:02 vm00 bash[28005]: cluster 2026-03-10T07:34:00.655579+0000 mgr.y (mgr.24407) 350 : cluster [DBG] pgmap v560: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:34:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:02 vm00 bash[28005]: cluster 2026-03-10T07:34:00.655579+0000 mgr.y (mgr.24407) 350 : cluster [DBG] pgmap v560: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:34:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:02 vm00 bash[28005]: cluster 2026-03-10T07:34:01.180206+0000 mon.a (mon.0) 2584 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-10T07:34:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:02 vm00 bash[28005]: cluster 2026-03-10T07:34:01.180206+0000 mon.a (mon.0) 2584 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-10T07:34:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:02 vm00 bash[20701]: cluster 2026-03-10T07:34:00.655579+0000 mgr.y (mgr.24407) 350 : cluster [DBG] pgmap v560: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:34:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:02 vm00 bash[20701]: cluster 2026-03-10T07:34:00.655579+0000 mgr.y (mgr.24407) 350 : cluster [DBG] pgmap v560: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 
GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:34:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:02 vm00 bash[20701]: cluster 2026-03-10T07:34:01.180206+0000 mon.a (mon.0) 2584 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-10T07:34:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:02 vm00 bash[20701]: cluster 2026-03-10T07:34:01.180206+0000 mon.a (mon.0) 2584 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-10T07:34:03.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:34:03 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:34:04.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:04 vm03 bash[23382]: cluster 2026-03-10T07:34:02.655960+0000 mgr.y (mgr.24407) 351 : cluster [DBG] pgmap v562: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:34:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:04 vm03 bash[23382]: cluster 2026-03-10T07:34:02.655960+0000 mgr.y (mgr.24407) 351 : cluster [DBG] pgmap v562: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:34:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:04 vm03 bash[23382]: audit 2026-03-10T07:34:03.208319+0000 mgr.y (mgr.24407) 352 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:04 vm03 bash[23382]: audit 2026-03-10T07:34:03.208319+0000 mgr.y (mgr.24407) 352 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:04 vm03 bash[23382]: audit 2026-03-10T07:34:03.293461+0000 mon.b (mon.1) 414 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:04 vm03 bash[23382]: audit 2026-03-10T07:34:03.293461+0000 mon.b (mon.1) 414 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:04 vm03 bash[23382]: audit 2026-03-10T07:34:03.294597+0000 mon.b (mon.1) 415 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-63"}]: dispatch 2026-03-10T07:34:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:04 vm03 bash[23382]: audit 2026-03-10T07:34:03.294597+0000 mon.b (mon.1) 415 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-63"}]: dispatch 2026-03-10T07:34:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:04 vm03 bash[23382]: audit 2026-03-10T07:34:03.295427+0000 mon.a (mon.0) 2585 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:04 vm03 bash[23382]: audit 2026-03-10T07:34:03.295427+0000 mon.a (mon.0) 2585 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:04 vm03 bash[23382]: audit 2026-03-10T07:34:03.296354+0000 mon.a (mon.0) 2586 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-63"}]: dispatch 2026-03-10T07:34:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:04 vm03 bash[23382]: audit 2026-03-10T07:34:03.296354+0000 mon.a (mon.0) 2586 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-63"}]: dispatch 2026-03-10T07:34:04.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:04 vm00 bash[28005]: cluster 2026-03-10T07:34:02.655960+0000 mgr.y (mgr.24407) 351 : cluster [DBG] pgmap v562: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:34:04.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:04 vm00 bash[28005]: cluster 2026-03-10T07:34:02.655960+0000 mgr.y (mgr.24407) 351 : cluster [DBG] pgmap v562: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:34:04.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:04 vm00 bash[28005]: audit 2026-03-10T07:34:03.208319+0000 mgr.y (mgr.24407) 352 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:04.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:04 vm00 bash[28005]: audit 2026-03-10T07:34:03.208319+0000 mgr.y (mgr.24407) 352 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:04.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:04 vm00 bash[28005]: audit 2026-03-10T07:34:03.293461+0000 mon.b (mon.1) 414 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:04.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:04 vm00 bash[28005]: audit 2026-03-10T07:34:03.293461+0000 mon.b (mon.1) 414 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:04.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:04 vm00 bash[28005]: audit 2026-03-10T07:34:03.294597+0000 mon.b (mon.1) 415 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-63"}]: dispatch 2026-03-10T07:34:04.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:04 vm00 bash[28005]: audit 2026-03-10T07:34:03.294597+0000 mon.b (mon.1) 415 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-63"}]: dispatch 2026-03-10T07:34:04.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:04 vm00 bash[28005]: audit 2026-03-10T07:34:03.295427+0000 mon.a (mon.0) 2585 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:04.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:04 vm00 bash[28005]: audit 2026-03-10T07:34:03.295427+0000 mon.a (mon.0) 2585 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:04.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:04 vm00 bash[28005]: audit 2026-03-10T07:34:03.296354+0000 mon.a (mon.0) 2586 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-63"}]: dispatch 2026-03-10T07:34:04.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:04 vm00 bash[28005]: audit 2026-03-10T07:34:03.296354+0000 mon.a (mon.0) 2586 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-63"}]: dispatch 2026-03-10T07:34:04.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:04 vm00 bash[20701]: cluster 2026-03-10T07:34:02.655960+0000 mgr.y (mgr.24407) 351 : cluster [DBG] pgmap v562: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:34:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:04 vm00 bash[20701]: cluster 2026-03-10T07:34:02.655960+0000 mgr.y (mgr.24407) 351 : cluster [DBG] pgmap v562: 292 pgs: 292 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:34:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:04 vm00 bash[20701]: audit 2026-03-10T07:34:03.208319+0000 mgr.y (mgr.24407) 352 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:04 vm00 bash[20701]: audit 2026-03-10T07:34:03.208319+0000 mgr.y (mgr.24407) 352 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:04 vm00 bash[20701]: audit 2026-03-10T07:34:03.293461+0000 mon.b (mon.1) 414 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:04 vm00 bash[20701]: audit 2026-03-10T07:34:03.293461+0000 mon.b (mon.1) 414 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:04 vm00 bash[20701]: audit 2026-03-10T07:34:03.294597+0000 mon.b (mon.1) 415 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-63"}]: dispatch 2026-03-10T07:34:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:04 vm00 bash[20701]: audit 2026-03-10T07:34:03.294597+0000 mon.b (mon.1) 415 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-63"}]: dispatch 2026-03-10T07:34:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:04 vm00 bash[20701]: audit 2026-03-10T07:34:03.295427+0000 mon.a (mon.0) 2585 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:04 vm00 bash[20701]: audit 2026-03-10T07:34:03.295427+0000 mon.a (mon.0) 2585 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:04 vm00 bash[20701]: audit 2026-03-10T07:34:03.296354+0000 mon.a (mon.0) 2586 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-63"}]: dispatch 2026-03-10T07:34:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:04 vm00 bash[20701]: audit 2026-03-10T07:34:03.296354+0000 mon.a (mon.0) 2586 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-63"}]: dispatch 2026-03-10T07:34:05.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:05 vm03 bash[23382]: cluster 2026-03-10T07:34:04.202720+0000 mon.a (mon.0) 2587 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-10T07:34:05.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:05 vm03 bash[23382]: cluster 2026-03-10T07:34:04.202720+0000 mon.a (mon.0) 2587 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-10T07:34:05.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:05 vm03 bash[23382]: audit 2026-03-10T07:34:05.196167+0000 mon.b (mon.1) 416 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:05.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:05 vm03 bash[23382]: audit 2026-03-10T07:34:05.196167+0000 mon.b (mon.1) 416 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:05.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:05 vm03 bash[23382]: cluster 2026-03-10T07:34:05.200785+0000 mon.a (mon.0) 2588 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-10T07:34:05.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:05 vm03 bash[23382]: cluster 2026-03-10T07:34:05.200785+0000 mon.a (mon.0) 2588 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-10T07:34:05.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:05 vm03 bash[23382]: audit 2026-03-10T07:34:05.201423+0000 mon.a (mon.0) 2589 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:05.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:05 vm03 bash[23382]: audit 2026-03-10T07:34:05.201423+0000 mon.a (mon.0) 2589 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:05 vm00 bash[28005]: cluster 2026-03-10T07:34:04.202720+0000 mon.a (mon.0) 2587 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-10T07:34:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:05 vm00 bash[28005]: cluster 2026-03-10T07:34:04.202720+0000 mon.a (mon.0) 2587 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-10T07:34:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:05 vm00 bash[28005]: audit 2026-03-10T07:34:05.196167+0000 mon.b (mon.1) 416 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:05 vm00 bash[28005]: audit 2026-03-10T07:34:05.196167+0000 mon.b (mon.1) 416 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:05 vm00 bash[28005]: cluster 2026-03-10T07:34:05.200785+0000 mon.a (mon.0) 2588 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-10T07:34:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:05 vm00 bash[28005]: cluster 2026-03-10T07:34:05.200785+0000 mon.a (mon.0) 2588 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-10T07:34:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:05 vm00 bash[28005]: audit 2026-03-10T07:34:05.201423+0000 mon.a (mon.0) 2589 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:05 vm00 bash[28005]: audit 2026-03-10T07:34:05.201423+0000 mon.a (mon.0) 2589 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:05 vm00 bash[20701]: cluster 2026-03-10T07:34:04.202720+0000 mon.a (mon.0) 2587 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-10T07:34:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:05 vm00 bash[20701]: cluster 2026-03-10T07:34:04.202720+0000 mon.a (mon.0) 2587 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-10T07:34:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:05 vm00 bash[20701]: audit 2026-03-10T07:34:05.196167+0000 mon.b (mon.1) 416 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:05 vm00 bash[20701]: audit 2026-03-10T07:34:05.196167+0000 mon.b (mon.1) 416 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:05 vm00 bash[20701]: cluster 2026-03-10T07:34:05.200785+0000 mon.a (mon.0) 2588 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-10T07:34:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:05 vm00 bash[20701]: cluster 2026-03-10T07:34:05.200785+0000 mon.a (mon.0) 2588 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-10T07:34:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:05 vm00 bash[20701]: audit 2026-03-10T07:34:05.201423+0000 mon.a (mon.0) 2589 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:05 vm00 bash[20701]: audit 2026-03-10T07:34:05.201423+0000 mon.a (mon.0) 2589 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:06.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:06 vm03 bash[23382]: cluster 2026-03-10T07:34:04.656537+0000 mgr.y (mgr.24407) 353 : cluster [DBG] pgmap v564: 260 pgs: 260 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:06 vm03 bash[23382]: cluster 2026-03-10T07:34:04.656537+0000 mgr.y (mgr.24407) 353 : cluster [DBG] pgmap v564: 260 pgs: 260 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:06 vm03 bash[23382]: audit 2026-03-10T07:34:05.780337+0000 mon.c (mon.2) 297 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:34:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:06 vm03 bash[23382]: audit 2026-03-10T07:34:05.780337+0000 mon.c (mon.2) 297 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:34:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:06 vm03 bash[23382]: audit 2026-03-10T07:34:06.107320+0000 mon.c (mon.2) 298 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:34:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:06 vm03 bash[23382]: audit 2026-03-10T07:34:06.107320+0000 mon.c (mon.2) 298 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:34:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:06 vm03 bash[23382]: audit 2026-03-10T07:34:06.108213+0000 mon.c (mon.2) 299 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:34:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:06 vm03 bash[23382]: audit 2026-03-10T07:34:06.108213+0000 mon.c (mon.2) 299 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:34:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:06 vm03 bash[23382]: audit 2026-03-10T07:34:06.114253+0000 mon.a (mon.0) 2590 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:34:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:06 vm03 bash[23382]: audit 2026-03-10T07:34:06.114253+0000 mon.a (mon.0) 2590 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:34:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:06 vm03 bash[23382]: audit 2026-03-10T07:34:06.193227+0000 mon.a (mon.0) 2591 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:34:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:06 vm03 bash[23382]: audit 2026-03-10T07:34:06.193227+0000 mon.a (mon.0) 2591 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:34:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:06 vm03 bash[23382]: audit 2026-03-10T07:34:06.207254+0000 mon.b (mon.1) 417 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:34:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:06 vm03 bash[23382]: audit 2026-03-10T07:34:06.207254+0000 mon.b (mon.1) 417 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:34:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:06 vm03 bash[23382]: cluster 2026-03-10T07:34:06.207324+0000 mon.a (mon.0) 2592 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-10T07:34:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:06 vm03 bash[23382]: cluster 2026-03-10T07:34:06.207324+0000 mon.a (mon.0) 2592 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-10T07:34:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:06 vm03 bash[23382]: audit 2026-03-10T07:34:06.212821+0000 mon.b (mon.1) 418 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:34:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:06 vm03 bash[23382]: audit 2026-03-10T07:34:06.212821+0000 mon.b (mon.1) 418 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:34:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:06 vm03 bash[23382]: audit 2026-03-10T07:34:06.214454+0000 mon.a (mon.0) 2593 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:34:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:06 vm03 bash[23382]: audit 2026-03-10T07:34:06.214454+0000 mon.a (mon.0) 2593 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:34:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:06 vm00 bash[28005]: cluster 2026-03-10T07:34:04.656537+0000 mgr.y (mgr.24407) 353 : cluster [DBG] pgmap v564: 260 pgs: 260 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:06 vm00 bash[28005]: cluster 2026-03-10T07:34:04.656537+0000 mgr.y (mgr.24407) 353 : cluster [DBG] pgmap v564: 260 pgs: 260 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:06 vm00 bash[28005]: audit 2026-03-10T07:34:05.780337+0000 mon.c (mon.2) 297 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:34:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:06 vm00 bash[28005]: audit 2026-03-10T07:34:05.780337+0000 mon.c (mon.2) 297 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:34:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:06 vm00 bash[28005]: audit 2026-03-10T07:34:06.107320+0000 mon.c (mon.2) 298 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:34:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:06 vm00 bash[28005]: audit 2026-03-10T07:34:06.107320+0000 mon.c (mon.2) 298 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:34:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:06 vm00 bash[28005]: audit 2026-03-10T07:34:06.108213+0000 mon.c (mon.2) 299 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:34:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:06 vm00 bash[28005]: audit 2026-03-10T07:34:06.108213+0000 mon.c (mon.2) 299 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:06 vm00 bash[28005]: audit 2026-03-10T07:34:06.114253+0000 mon.a (mon.0) 2590 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:06 vm00 bash[28005]: audit 2026-03-10T07:34:06.114253+0000 mon.a (mon.0) 2590 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:06 vm00 bash[28005]: audit 2026-03-10T07:34:06.193227+0000 mon.a (mon.0) 2591 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:06 vm00 bash[28005]: audit 2026-03-10T07:34:06.193227+0000 mon.a (mon.0) 2591 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:06 vm00 bash[28005]: audit 2026-03-10T07:34:06.207254+0000 mon.b (mon.1) 417 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:06 vm00 bash[28005]: audit 2026-03-10T07:34:06.207254+0000 mon.b (mon.1) 417 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:06 vm00 bash[28005]: cluster 2026-03-10T07:34:06.207324+0000 mon.a (mon.0) 2592 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:06 vm00 bash[28005]: cluster 2026-03-10T07:34:06.207324+0000 mon.a (mon.0) 2592 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:06 vm00 bash[28005]: audit 2026-03-10T07:34:06.212821+0000 mon.b (mon.1) 418 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:06 vm00 bash[28005]: audit 2026-03-10T07:34:06.212821+0000 mon.b (mon.1) 418 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:06 vm00 bash[28005]: audit 2026-03-10T07:34:06.214454+0000 mon.a (mon.0) 2593 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:06 vm00 bash[28005]: audit 2026-03-10T07:34:06.214454+0000 mon.a (mon.0) 2593 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:06 vm00 bash[20701]: cluster 2026-03-10T07:34:04.656537+0000 mgr.y (mgr.24407) 353 : cluster [DBG] pgmap v564: 260 pgs: 260 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:06 vm00 bash[20701]: cluster 2026-03-10T07:34:04.656537+0000 mgr.y (mgr.24407) 353 : cluster [DBG] pgmap v564: 260 pgs: 260 active+clean; 8.3 MiB data, 920 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:06 vm00 bash[20701]: audit 2026-03-10T07:34:05.780337+0000 mon.c (mon.2) 297 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:06 vm00 bash[20701]: audit 2026-03-10T07:34:05.780337+0000 mon.c (mon.2) 297 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:06 vm00 bash[20701]: audit 2026-03-10T07:34:06.107320+0000 mon.c (mon.2) 298 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:06 vm00 bash[20701]: audit 2026-03-10T07:34:06.107320+0000 mon.c (mon.2) 298 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:06 vm00 bash[20701]: audit 2026-03-10T07:34:06.108213+0000 mon.c (mon.2) 299 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:06 vm00 bash[20701]: audit 2026-03-10T07:34:06.108213+0000 mon.c (mon.2) 299 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:06 vm00 bash[20701]: audit 2026-03-10T07:34:06.114253+0000 mon.a (mon.0) 2590 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:06 vm00 bash[20701]: audit 2026-03-10T07:34:06.114253+0000 mon.a (mon.0) 2590 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:06 vm00 bash[20701]: audit 2026-03-10T07:34:06.193227+0000 mon.a (mon.0) 2591 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:06 vm00 bash[20701]: audit 2026-03-10T07:34:06.193227+0000 mon.a (mon.0) 2591 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:06 vm00 bash[20701]: audit 2026-03-10T07:34:06.207254+0000 mon.b (mon.1) 417 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:06 vm00 bash[20701]: audit 2026-03-10T07:34:06.207254+0000 mon.b (mon.1) 417 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:06 vm00 bash[20701]: cluster 2026-03-10T07:34:06.207324+0000 mon.a (mon.0) 2592 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:06 vm00 bash[20701]: cluster 2026-03-10T07:34:06.207324+0000 mon.a (mon.0) 2592 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:06 vm00 bash[20701]: audit 2026-03-10T07:34:06.212821+0000 mon.b (mon.1) 418 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:06 vm00 bash[20701]: audit 2026-03-10T07:34:06.212821+0000 mon.b (mon.1) 418 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:06 vm00 bash[20701]: audit 2026-03-10T07:34:06.214454+0000 mon.a (mon.0) 2593 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:34:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:06 vm00 bash[20701]: audit 2026-03-10T07:34:06.214454+0000 mon.a (mon.0) 2593 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:34:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:08 vm03 bash[23382]: cluster 2026-03-10T07:34:06.656941+0000 mgr.y (mgr.24407) 354 : cluster [DBG] pgmap v567: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:08 vm03 bash[23382]: cluster 2026-03-10T07:34:06.656941+0000 mgr.y (mgr.24407) 354 : cluster [DBG] pgmap v567: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:08 vm03 bash[23382]: audit 2026-03-10T07:34:07.197226+0000 mon.a (mon.0) 2594 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:34:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:08 vm03 bash[23382]: audit 2026-03-10T07:34:07.197226+0000 mon.a (mon.0) 2594 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:34:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:08 vm03 bash[23382]: cluster 2026-03-10T07:34:07.200407+0000 mon.a (mon.0) 2595 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-10T07:34:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:08 vm03 bash[23382]: cluster 2026-03-10T07:34:07.200407+0000 mon.a (mon.0) 2595 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-10T07:34:08.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:08 vm00 bash[28005]: cluster 2026-03-10T07:34:06.656941+0000 mgr.y (mgr.24407) 354 : cluster [DBG] pgmap v567: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:08.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:08 vm00 bash[28005]: cluster 2026-03-10T07:34:06.656941+0000 mgr.y (mgr.24407) 354 : cluster [DBG] pgmap v567: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:08.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:08 vm00 bash[28005]: audit 2026-03-10T07:34:07.197226+0000 mon.a (mon.0) 2594 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:34:08.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:08 vm00 bash[28005]: audit 2026-03-10T07:34:07.197226+0000 mon.a (mon.0) 2594 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:34:08.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:08 vm00 bash[28005]: cluster 2026-03-10T07:34:07.200407+0000 mon.a (mon.0) 2595 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-10T07:34:08.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:08 vm00 bash[28005]: cluster 2026-03-10T07:34:07.200407+0000 mon.a (mon.0) 2595 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-10T07:34:08.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:08 vm00 bash[20701]: cluster 2026-03-10T07:34:06.656941+0000 mgr.y (mgr.24407) 354 : cluster [DBG] pgmap v567: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:08.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:08 vm00 bash[20701]: cluster 2026-03-10T07:34:06.656941+0000 mgr.y (mgr.24407) 354 : cluster [DBG] pgmap v567: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:08.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:08 vm00 bash[20701]: audit 2026-03-10T07:34:07.197226+0000 mon.a (mon.0) 2594 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:34:08.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:08 vm00 bash[20701]: audit 2026-03-10T07:34:07.197226+0000 mon.a (mon.0) 2594 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:34:08.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:08 vm00 bash[20701]: cluster 2026-03-10T07:34:07.200407+0000 mon.a (mon.0) 2595 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-10T07:34:08.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:08 vm00 bash[20701]: cluster 2026-03-10T07:34:07.200407+0000 mon.a (mon.0) 2595 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-10T07:34:09.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:09 vm00 bash[28005]: cluster 2026-03-10T07:34:08.237038+0000 mon.a (mon.0) 2596 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-10T07:34:09.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:09 vm00 bash[28005]: cluster 2026-03-10T07:34:08.237038+0000 mon.a (mon.0) 2596 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-10T07:34:09.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:09 vm00 bash[20701]: cluster 2026-03-10T07:34:08.237038+0000 mon.a (mon.0) 2596 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-10T07:34:09.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:09 vm00 bash[20701]: cluster 2026-03-10T07:34:08.237038+0000 mon.a (mon.0) 2596 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-10T07:34:10.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:09 vm03 bash[23382]: cluster 2026-03-10T07:34:08.237038+0000 mon.a (mon.0) 2596 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-10T07:34:10.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:09 vm03 bash[23382]: cluster 2026-03-10T07:34:08.237038+0000 mon.a (mon.0) 2596 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-10T07:34:10.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:10 vm00 bash[28005]: cluster 2026-03-10T07:34:08.657318+0000 mgr.y (mgr.24407) 355 : cluster [DBG] pgmap v570: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:34:10.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:10 vm00 bash[28005]: cluster 2026-03-10T07:34:08.657318+0000 mgr.y (mgr.24407) 355 : cluster [DBG] pgmap v570: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:34:10.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:10 vm00 bash[28005]: audit 2026-03-10T07:34:09.499598+0000 mon.c (mon.2) 300 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:34:10.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:10 vm00 bash[28005]: audit 2026-03-10T07:34:09.499598+0000 mon.c (mon.2) 300 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:34:10.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:10 vm00 bash[28005]: cluster 2026-03-10T07:34:09.556054+0000 mon.a (mon.0) 2597 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-10T07:34:10.879 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:10 vm00 bash[28005]: cluster 2026-03-10T07:34:09.556054+0000 mon.a (mon.0) 2597 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-10T07:34:10.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:10 vm00 bash[20701]: cluster 2026-03-10T07:34:08.657318+0000 mgr.y (mgr.24407) 355 : cluster [DBG] pgmap v570: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:34:10.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:10 vm00 bash[20701]: cluster 2026-03-10T07:34:08.657318+0000 mgr.y (mgr.24407) 355 : cluster [DBG] pgmap v570: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:34:10.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:10 vm00 bash[20701]: audit 2026-03-10T07:34:09.499598+0000 mon.c (mon.2) 300 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:34:10.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:10 vm00 bash[20701]: audit 2026-03-10T07:34:09.499598+0000 mon.c (mon.2) 300 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:34:10.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:10 vm00 bash[20701]: cluster 2026-03-10T07:34:09.556054+0000 mon.a (mon.0) 2597 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-10T07:34:10.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:10 vm00 bash[20701]: cluster 2026-03-10T07:34:09.556054+0000 mon.a (mon.0) 2597 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-10T07:34:11.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:10 vm03 bash[23382]: cluster 2026-03-10T07:34:08.657318+0000 mgr.y (mgr.24407) 355 : cluster [DBG] pgmap v570: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:34:11.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:10 vm03 bash[23382]: cluster 2026-03-10T07:34:08.657318+0000 mgr.y (mgr.24407) 355 : cluster [DBG] pgmap v570: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:34:11.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:10 vm03 bash[23382]: audit 2026-03-10T07:34:09.499598+0000 mon.c (mon.2) 300 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:34:11.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:10 vm03 bash[23382]: audit 2026-03-10T07:34:09.499598+0000 mon.c (mon.2) 300 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:34:11.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:10 vm03 bash[23382]: cluster 2026-03-10T07:34:09.556054+0000 mon.a (mon.0) 2597 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-10T07:34:11.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:10 vm03 bash[23382]: cluster 2026-03-10T07:34:09.556054+0000 mon.a (mon.0) 2597 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-10T07:34:11.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:34:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:34:11] "GET /metrics HTTP/1.1" 503 1621 "" 
"Prometheus/2.51.0" 2026-03-10T07:34:11.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:11 vm00 bash[28005]: cluster 2026-03-10T07:34:10.545748+0000 mon.a (mon.0) 2598 : cluster [DBG] osdmap e397: 8 total, 8 up, 8 in 2026-03-10T07:34:11.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:11 vm00 bash[28005]: cluster 2026-03-10T07:34:10.545748+0000 mon.a (mon.0) 2598 : cluster [DBG] osdmap e397: 8 total, 8 up, 8 in 2026-03-10T07:34:11.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:11 vm00 bash[28005]: cluster 2026-03-10T07:34:10.657766+0000 mgr.y (mgr.24407) 356 : cluster [DBG] pgmap v573: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 2.5 KiB/s wr, 7 op/s 2026-03-10T07:34:11.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:11 vm00 bash[28005]: cluster 2026-03-10T07:34:10.657766+0000 mgr.y (mgr.24407) 356 : cluster [DBG] pgmap v573: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 2.5 KiB/s wr, 7 op/s 2026-03-10T07:34:11.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:11 vm00 bash[20701]: cluster 2026-03-10T07:34:10.545748+0000 mon.a (mon.0) 2598 : cluster [DBG] osdmap e397: 8 total, 8 up, 8 in 2026-03-10T07:34:11.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:11 vm00 bash[20701]: cluster 2026-03-10T07:34:10.545748+0000 mon.a (mon.0) 2598 : cluster [DBG] osdmap e397: 8 total, 8 up, 8 in 2026-03-10T07:34:11.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:11 vm00 bash[20701]: cluster 2026-03-10T07:34:10.657766+0000 mgr.y (mgr.24407) 356 : cluster [DBG] pgmap v573: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 2.5 KiB/s wr, 7 op/s 2026-03-10T07:34:11.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:11 vm00 bash[20701]: cluster 2026-03-10T07:34:10.657766+0000 mgr.y (mgr.24407) 356 : cluster [DBG] pgmap v573: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 2.5 KiB/s wr, 7 op/s 2026-03-10T07:34:12.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:11 vm03 bash[23382]: cluster 2026-03-10T07:34:10.545748+0000 mon.a (mon.0) 2598 : cluster [DBG] osdmap e397: 8 total, 8 up, 8 in 2026-03-10T07:34:12.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:11 vm03 bash[23382]: cluster 2026-03-10T07:34:10.545748+0000 mon.a (mon.0) 2598 : cluster [DBG] osdmap e397: 8 total, 8 up, 8 in 2026-03-10T07:34:12.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:11 vm03 bash[23382]: cluster 2026-03-10T07:34:10.657766+0000 mgr.y (mgr.24407) 356 : cluster [DBG] pgmap v573: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 2.5 KiB/s wr, 7 op/s 2026-03-10T07:34:12.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:11 vm03 bash[23382]: cluster 2026-03-10T07:34:10.657766+0000 mgr.y (mgr.24407) 356 : cluster [DBG] pgmap v573: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 2.5 KiB/s wr, 7 op/s 2026-03-10T07:34:13.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:34:13 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:34:14.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:13 vm03 bash[23382]: cluster 2026-03-10T07:34:12.658142+0000 mgr.y (mgr.24407) 357 : cluster [DBG] pgmap v574: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.8 KiB/s wr, 5 op/s 2026-03-10T07:34:14.013 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:13 vm03 bash[23382]: cluster 2026-03-10T07:34:12.658142+0000 mgr.y (mgr.24407) 357 : cluster [DBG] pgmap v574: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.8 KiB/s wr, 5 op/s 2026-03-10T07:34:14.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:13 vm03 bash[23382]: audit 2026-03-10T07:34:13.216171+0000 mgr.y (mgr.24407) 358 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:14.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:13 vm03 bash[23382]: audit 2026-03-10T07:34:13.216171+0000 mgr.y (mgr.24407) 358 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:14.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:13 vm00 bash[28005]: cluster 2026-03-10T07:34:12.658142+0000 mgr.y (mgr.24407) 357 : cluster [DBG] pgmap v574: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.8 KiB/s wr, 5 op/s 2026-03-10T07:34:14.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:13 vm00 bash[28005]: cluster 2026-03-10T07:34:12.658142+0000 mgr.y (mgr.24407) 357 : cluster [DBG] pgmap v574: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.8 KiB/s wr, 5 op/s 2026-03-10T07:34:14.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:13 vm00 bash[28005]: audit 2026-03-10T07:34:13.216171+0000 mgr.y (mgr.24407) 358 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:14.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:13 vm00 bash[28005]: audit 2026-03-10T07:34:13.216171+0000 mgr.y (mgr.24407) 358 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:14.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:13 vm00 bash[20701]: cluster 2026-03-10T07:34:12.658142+0000 mgr.y (mgr.24407) 357 : cluster [DBG] pgmap v574: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.8 KiB/s wr, 5 op/s 2026-03-10T07:34:14.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:13 vm00 bash[20701]: cluster 2026-03-10T07:34:12.658142+0000 mgr.y (mgr.24407) 357 : cluster [DBG] pgmap v574: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.8 KiB/s wr, 5 op/s 2026-03-10T07:34:14.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:13 vm00 bash[20701]: audit 2026-03-10T07:34:13.216171+0000 mgr.y (mgr.24407) 358 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:14.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:13 vm00 bash[20701]: audit 2026-03-10T07:34:13.216171+0000 mgr.y (mgr.24407) 358 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:16.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:15 vm03 bash[23382]: cluster 2026-03-10T07:34:14.658629+0000 mgr.y (mgr.24407) 359 : cluster [DBG] pgmap v575: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 1.6 KiB/s wr, 5 op/s 2026-03-10T07:34:16.263 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:15 vm03 bash[23382]: cluster 2026-03-10T07:34:14.658629+0000 mgr.y (mgr.24407) 359 : cluster [DBG] pgmap v575: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 1.6 KiB/s wr, 5 op/s 2026-03-10T07:34:16.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:15 vm00 bash[28005]: cluster 2026-03-10T07:34:14.658629+0000 mgr.y (mgr.24407) 359 : cluster [DBG] pgmap v575: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 1.6 KiB/s wr, 5 op/s 2026-03-10T07:34:16.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:15 vm00 bash[28005]: cluster 2026-03-10T07:34:14.658629+0000 mgr.y (mgr.24407) 359 : cluster [DBG] pgmap v575: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 1.6 KiB/s wr, 5 op/s 2026-03-10T07:34:16.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:15 vm00 bash[20701]: cluster 2026-03-10T07:34:14.658629+0000 mgr.y (mgr.24407) 359 : cluster [DBG] pgmap v575: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 1.6 KiB/s wr, 5 op/s 2026-03-10T07:34:16.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:15 vm00 bash[20701]: cluster 2026-03-10T07:34:14.658629+0000 mgr.y (mgr.24407) 359 : cluster [DBG] pgmap v575: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 1.6 KiB/s wr, 5 op/s 2026-03-10T07:34:18.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:18 vm03 bash[23382]: cluster 2026-03-10T07:34:16.659209+0000 mgr.y (mgr.24407) 360 : cluster [DBG] pgmap v576: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.2 KiB/s wr, 5 op/s 2026-03-10T07:34:18.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:18 vm03 bash[23382]: cluster 2026-03-10T07:34:16.659209+0000 mgr.y (mgr.24407) 360 : cluster [DBG] pgmap v576: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.2 KiB/s wr, 5 op/s 2026-03-10T07:34:18.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:18 vm00 bash[28005]: cluster 2026-03-10T07:34:16.659209+0000 mgr.y (mgr.24407) 360 : cluster [DBG] pgmap v576: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.2 KiB/s wr, 5 op/s 2026-03-10T07:34:18.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:18 vm00 bash[28005]: cluster 2026-03-10T07:34:16.659209+0000 mgr.y (mgr.24407) 360 : cluster [DBG] pgmap v576: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.2 KiB/s wr, 5 op/s 2026-03-10T07:34:18.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:18 vm00 bash[20701]: cluster 2026-03-10T07:34:16.659209+0000 mgr.y (mgr.24407) 360 : cluster [DBG] pgmap v576: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.2 KiB/s wr, 5 op/s 2026-03-10T07:34:18.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:18 vm00 bash[20701]: cluster 2026-03-10T07:34:16.659209+0000 mgr.y (mgr.24407) 360 : cluster [DBG] pgmap v576: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.2 KiB/s wr, 5 op/s 2026-03-10T07:34:20.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:20 vm00 bash[28005]: cluster 2026-03-10T07:34:18.659655+0000 mgr.y (mgr.24407) 361 : cluster [DBG] pgmap v577: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 
GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-10T07:34:20.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:20 vm00 bash[28005]: cluster 2026-03-10T07:34:18.659655+0000 mgr.y (mgr.24407) 361 : cluster [DBG] pgmap v577: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-10T07:34:20.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:20 vm00 bash[20701]: cluster 2026-03-10T07:34:18.659655+0000 mgr.y (mgr.24407) 361 : cluster [DBG] pgmap v577: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-10T07:34:20.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:20 vm00 bash[20701]: cluster 2026-03-10T07:34:18.659655+0000 mgr.y (mgr.24407) 361 : cluster [DBG] pgmap v577: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-10T07:34:20.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:20 vm03 bash[23382]: cluster 2026-03-10T07:34:18.659655+0000 mgr.y (mgr.24407) 361 : cluster [DBG] pgmap v577: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-10T07:34:20.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:20 vm03 bash[23382]: cluster 2026-03-10T07:34:18.659655+0000 mgr.y (mgr.24407) 361 : cluster [DBG] pgmap v577: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-10T07:34:21.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:34:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:34:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:34:21.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:21 vm00 bash[20701]: audit 2026-03-10T07:34:20.555425+0000 mon.b (mon.1) 419 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:21.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:21 vm00 bash[20701]: audit 2026-03-10T07:34:20.555425+0000 mon.b (mon.1) 419 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:21.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:21 vm00 bash[20701]: audit 2026-03-10T07:34:20.556440+0000 mon.b (mon.1) 420 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-65"}]: dispatch 2026-03-10T07:34:21.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:21 vm00 bash[20701]: audit 2026-03-10T07:34:20.556440+0000 mon.b (mon.1) 420 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-65"}]: dispatch 2026-03-10T07:34:21.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:21 vm00 bash[20701]: audit 2026-03-10T07:34:20.557214+0000 mon.a (mon.0) 2599 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:21.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:21 vm00 bash[20701]: audit 2026-03-10T07:34:20.557214+0000 mon.a (mon.0) 2599 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:21.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:21 vm00 bash[20701]: audit 2026-03-10T07:34:20.558040+0000 mon.a (mon.0) 2600 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-65"}]: dispatch 2026-03-10T07:34:21.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:21 vm00 bash[20701]: audit 2026-03-10T07:34:20.558040+0000 mon.a (mon.0) 2600 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-65"}]: dispatch 2026-03-10T07:34:21.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:21 vm00 bash[28005]: audit 2026-03-10T07:34:20.555425+0000 mon.b (mon.1) 419 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:21.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:21 vm00 bash[28005]: audit 2026-03-10T07:34:20.555425+0000 mon.b (mon.1) 419 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:21.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:21 vm00 bash[28005]: audit 2026-03-10T07:34:20.556440+0000 mon.b (mon.1) 420 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-65"}]: dispatch 2026-03-10T07:34:21.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:21 vm00 bash[28005]: audit 2026-03-10T07:34:20.556440+0000 mon.b (mon.1) 420 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-65"}]: dispatch 2026-03-10T07:34:21.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:21 vm00 bash[28005]: audit 2026-03-10T07:34:20.557214+0000 mon.a (mon.0) 2599 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:21.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:21 vm00 bash[28005]: audit 2026-03-10T07:34:20.557214+0000 mon.a (mon.0) 2599 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:21.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:21 vm00 bash[28005]: audit 2026-03-10T07:34:20.558040+0000 mon.a (mon.0) 2600 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-65"}]: dispatch 2026-03-10T07:34:21.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:21 vm00 bash[28005]: audit 2026-03-10T07:34:20.558040+0000 mon.a (mon.0) 2600 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-65"}]: dispatch 2026-03-10T07:34:21.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:21 vm03 bash[23382]: audit 2026-03-10T07:34:20.555425+0000 mon.b (mon.1) 419 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:21.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:21 vm03 bash[23382]: audit 2026-03-10T07:34:20.555425+0000 mon.b (mon.1) 419 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:21.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:21 vm03 bash[23382]: audit 2026-03-10T07:34:20.556440+0000 mon.b (mon.1) 420 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-65"}]: dispatch 2026-03-10T07:34:21.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:21 vm03 bash[23382]: audit 2026-03-10T07:34:20.556440+0000 mon.b (mon.1) 420 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-65"}]: dispatch 2026-03-10T07:34:21.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:21 vm03 bash[23382]: audit 2026-03-10T07:34:20.557214+0000 mon.a (mon.0) 2599 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:21.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:21 vm03 bash[23382]: audit 2026-03-10T07:34:20.557214+0000 mon.a (mon.0) 2599 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:21.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:21 vm03 bash[23382]: audit 2026-03-10T07:34:20.558040+0000 mon.a (mon.0) 2600 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-65"}]: dispatch 2026-03-10T07:34:21.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:21 vm03 bash[23382]: audit 2026-03-10T07:34:20.558040+0000 mon.a (mon.0) 2600 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-65"}]: dispatch 2026-03-10T07:34:22.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:22 vm00 bash[28005]: cluster 2026-03-10T07:34:20.660443+0000 mgr.y (mgr.24407) 362 : cluster [DBG] pgmap v578: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:34:22.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:22 vm00 bash[28005]: cluster 2026-03-10T07:34:20.660443+0000 mgr.y (mgr.24407) 362 : cluster [DBG] pgmap v578: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:34:22.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:22 vm00 bash[28005]: cluster 2026-03-10T07:34:21.098075+0000 mon.a (mon.0) 2601 : cluster [DBG] osdmap e398: 8 total, 8 up, 8 in 2026-03-10T07:34:22.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:22 vm00 bash[28005]: cluster 2026-03-10T07:34:21.098075+0000 mon.a (mon.0) 2601 : cluster [DBG] osdmap e398: 8 total, 8 up, 8 in 2026-03-10T07:34:22.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:22 vm00 bash[28005]: cluster 2026-03-10T07:34:21.184742+0000 mon.a (mon.0) 2602 : cluster [DBG] osdmap e399: 8 total, 8 up, 8 in 2026-03-10T07:34:22.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:22 vm00 bash[28005]: cluster 2026-03-10T07:34:21.184742+0000 mon.a (mon.0) 2602 : cluster [DBG] osdmap e399: 8 total, 8 up, 8 in 2026-03-10T07:34:22.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:22 vm00 bash[28005]: audit 2026-03-10T07:34:21.215561+0000 mon.b (mon.1) 421 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:22.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:22 vm00 bash[28005]: audit 2026-03-10T07:34:21.215561+0000 mon.b (mon.1) 421 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:22.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:22 vm00 bash[28005]: audit 2026-03-10T07:34:21.217264+0000 mon.a (mon.0) 2603 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:22.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:22 vm00 bash[28005]: audit 2026-03-10T07:34:21.217264+0000 mon.a (mon.0) 2603 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:22.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:22 vm00 bash[20701]: cluster 2026-03-10T07:34:20.660443+0000 mgr.y (mgr.24407) 362 : cluster [DBG] pgmap v578: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:34:22.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:22 vm00 bash[20701]: cluster 2026-03-10T07:34:20.660443+0000 mgr.y (mgr.24407) 362 : cluster [DBG] pgmap v578: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:34:22.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:22 vm00 bash[20701]: cluster 2026-03-10T07:34:21.098075+0000 mon.a (mon.0) 2601 : cluster [DBG] osdmap e398: 8 total, 8 up, 8 in 2026-03-10T07:34:22.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:22 vm00 bash[20701]: cluster 2026-03-10T07:34:21.098075+0000 mon.a (mon.0) 2601 : cluster [DBG] osdmap e398: 8 total, 8 up, 8 in 2026-03-10T07:34:22.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:22 vm00 bash[20701]: cluster 2026-03-10T07:34:21.184742+0000 mon.a (mon.0) 2602 : cluster [DBG] osdmap e399: 8 total, 8 up, 8 in 2026-03-10T07:34:22.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:22 vm00 bash[20701]: cluster 2026-03-10T07:34:21.184742+0000 mon.a (mon.0) 2602 : cluster [DBG] osdmap e399: 8 total, 8 up, 8 in 2026-03-10T07:34:22.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:22 vm00 bash[20701]: audit 2026-03-10T07:34:21.215561+0000 mon.b (mon.1) 421 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:22.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:22 vm00 bash[20701]: audit 2026-03-10T07:34:21.215561+0000 mon.b (mon.1) 421 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:22.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:22 vm00 bash[20701]: audit 2026-03-10T07:34:21.217264+0000 mon.a (mon.0) 2603 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:22.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:22 vm00 bash[20701]: audit 2026-03-10T07:34:21.217264+0000 mon.a (mon.0) 2603 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:22 vm03 bash[23382]: cluster 2026-03-10T07:34:20.660443+0000 mgr.y (mgr.24407) 362 : cluster [DBG] pgmap v578: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:34:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:22 vm03 bash[23382]: cluster 2026-03-10T07:34:20.660443+0000 mgr.y (mgr.24407) 362 : cluster [DBG] pgmap v578: 292 pgs: 292 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:34:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:22 vm03 bash[23382]: cluster 2026-03-10T07:34:21.098075+0000 mon.a (mon.0) 2601 : cluster [DBG] osdmap e398: 8 total, 8 up, 8 in 2026-03-10T07:34:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:22 vm03 bash[23382]: cluster 2026-03-10T07:34:21.098075+0000 mon.a (mon.0) 2601 : cluster [DBG] osdmap e398: 8 total, 8 up, 8 in 2026-03-10T07:34:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:22 vm03 bash[23382]: cluster 2026-03-10T07:34:21.184742+0000 mon.a (mon.0) 2602 : cluster [DBG] osdmap e399: 8 total, 8 up, 8 in 2026-03-10T07:34:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:22 vm03 bash[23382]: cluster 2026-03-10T07:34:21.184742+0000 mon.a (mon.0) 2602 : cluster [DBG] osdmap e399: 8 total, 8 up, 8 in 2026-03-10T07:34:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:22 vm03 bash[23382]: audit 2026-03-10T07:34:21.215561+0000 mon.b (mon.1) 421 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:22 vm03 bash[23382]: audit 2026-03-10T07:34:21.215561+0000 mon.b (mon.1) 421 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:22 vm03 bash[23382]: audit 2026-03-10T07:34:21.217264+0000 mon.a (mon.0) 2603 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:22 vm03 bash[23382]: audit 2026-03-10T07:34:21.217264+0000 mon.a (mon.0) 2603 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:23.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:34:23 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:34:23.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:23 vm03 bash[23382]: audit 2026-03-10T07:34:22.181966+0000 mon.a (mon.0) 2604 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:34:23.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:23 vm03 bash[23382]: audit 2026-03-10T07:34:22.181966+0000 mon.a (mon.0) 2604 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:34:23.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:23 vm03 bash[23382]: cluster 2026-03-10T07:34:22.187514+0000 mon.a (mon.0) 2605 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in 2026-03-10T07:34:23.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:23 vm03 bash[23382]: cluster 2026-03-10T07:34:22.187514+0000 mon.a (mon.0) 2605 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in 2026-03-10T07:34:23.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:23 vm03 bash[23382]: audit 2026-03-10T07:34:22.189145+0000 mon.b (mon.1) 422 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:34:23.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:23 vm03 bash[23382]: audit 2026-03-10T07:34:22.189145+0000 mon.b (mon.1) 422 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:34:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:23 vm00 bash[28005]: audit 2026-03-10T07:34:22.181966+0000 mon.a (mon.0) 2604 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:34:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:23 vm00 bash[28005]: audit 2026-03-10T07:34:22.181966+0000 mon.a (mon.0) 2604 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:34:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:23 vm00 bash[28005]: cluster 2026-03-10T07:34:22.187514+0000 mon.a (mon.0) 2605 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in 2026-03-10T07:34:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:23 vm00 bash[28005]: cluster 2026-03-10T07:34:22.187514+0000 mon.a (mon.0) 2605 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in 2026-03-10T07:34:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:23 vm00 bash[28005]: audit 2026-03-10T07:34:22.189145+0000 mon.b (mon.1) 422 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:34:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:23 vm00 bash[28005]: audit 2026-03-10T07:34:22.189145+0000 mon.b (mon.1) 422 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:34:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:23 vm00 bash[20701]: audit 2026-03-10T07:34:22.181966+0000 mon.a (mon.0) 2604 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:34:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:23 vm00 bash[20701]: audit 2026-03-10T07:34:22.181966+0000 mon.a (mon.0) 2604 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:34:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:23 vm00 bash[20701]: cluster 2026-03-10T07:34:22.187514+0000 mon.a (mon.0) 2605 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in 2026-03-10T07:34:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:23 vm00 bash[20701]: cluster 2026-03-10T07:34:22.187514+0000 mon.a (mon.0) 2605 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in 2026-03-10T07:34:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:23 vm00 bash[20701]: audit 2026-03-10T07:34:22.189145+0000 mon.b (mon.1) 422 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:34:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:23 vm00 bash[20701]: audit 2026-03-10T07:34:22.189145+0000 mon.b (mon.1) 422 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:34:24.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:24 vm00 bash[28005]: cluster 2026-03-10T07:34:22.660795+0000 mgr.y (mgr.24407) 363 : cluster [DBG] pgmap v582: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:24.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:24 vm00 bash[28005]: cluster 2026-03-10T07:34:22.660795+0000 mgr.y (mgr.24407) 363 : cluster [DBG] pgmap v582: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:24.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:24 vm00 bash[28005]: audit 2026-03-10T07:34:23.226285+0000 mgr.y (mgr.24407) 364 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:24 vm00 bash[28005]: audit 2026-03-10T07:34:23.226285+0000 mgr.y (mgr.24407) 364 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:24 vm00 bash[28005]: cluster 2026-03-10T07:34:23.283034+0000 mon.a (mon.0) 2606 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:24 vm00 bash[28005]: cluster 2026-03-10T07:34:23.283034+0000 mon.a (mon.0) 2606 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:24 vm00 bash[28005]: audit 2026-03-10T07:34:23.332269+0000 mon.b (mon.1) 423 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:24 vm00 bash[28005]: audit 2026-03-10T07:34:23.332269+0000 mon.b (mon.1) 423 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:24 vm00 bash[28005]: audit 2026-03-10T07:34:23.333611+0000 mon.b (mon.1) 424 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-67"}]: dispatch 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:24 vm00 bash[28005]: audit 2026-03-10T07:34:23.333611+0000 mon.b (mon.1) 424 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-67"}]: dispatch 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:24 vm00 bash[28005]: audit 2026-03-10T07:34:23.334224+0000 mon.a (mon.0) 2607 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:24 vm00 bash[28005]: audit 2026-03-10T07:34:23.334224+0000 mon.a (mon.0) 2607 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:24 vm00 bash[28005]: audit 2026-03-10T07:34:23.335258+0000 mon.a (mon.0) 2608 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-67"}]: dispatch 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:24 vm00 bash[28005]: audit 2026-03-10T07:34:23.335258+0000 mon.a (mon.0) 2608 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-67"}]: dispatch 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:24 vm00 bash[20701]: cluster 2026-03-10T07:34:22.660795+0000 mgr.y (mgr.24407) 363 : cluster [DBG] pgmap v582: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:24 vm00 bash[20701]: cluster 2026-03-10T07:34:22.660795+0000 mgr.y (mgr.24407) 363 : cluster [DBG] pgmap v582: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:24 vm00 bash[20701]: audit 2026-03-10T07:34:23.226285+0000 mgr.y (mgr.24407) 364 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:24 vm00 bash[20701]: audit 2026-03-10T07:34:23.226285+0000 mgr.y (mgr.24407) 364 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:24 vm00 bash[20701]: cluster 2026-03-10T07:34:23.283034+0000 mon.a (mon.0) 2606 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:24 vm00 bash[20701]: cluster 2026-03-10T07:34:23.283034+0000 mon.a (mon.0) 2606 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:24 vm00 bash[20701]: audit 2026-03-10T07:34:23.332269+0000 mon.b (mon.1) 423 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:24 vm00 bash[20701]: audit 2026-03-10T07:34:23.332269+0000 mon.b (mon.1) 423 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:24 vm00 bash[20701]: audit 2026-03-10T07:34:23.333611+0000 mon.b (mon.1) 424 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-67"}]: dispatch 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:24 vm00 bash[20701]: audit 2026-03-10T07:34:23.333611+0000 mon.b (mon.1) 424 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-67"}]: dispatch 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:24 vm00 bash[20701]: audit 2026-03-10T07:34:23.334224+0000 mon.a (mon.0) 2607 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:24 vm00 bash[20701]: audit 2026-03-10T07:34:23.334224+0000 mon.a (mon.0) 2607 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:24 vm00 bash[20701]: audit 2026-03-10T07:34:23.335258+0000 mon.a (mon.0) 2608 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-67"}]: dispatch 2026-03-10T07:34:24.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:24 vm00 bash[20701]: audit 2026-03-10T07:34:23.335258+0000 mon.a (mon.0) 2608 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-67"}]: dispatch 2026-03-10T07:34:24.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:24 vm03 bash[23382]: cluster 2026-03-10T07:34:22.660795+0000 mgr.y (mgr.24407) 363 : cluster [DBG] pgmap v582: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:24.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:24 vm03 bash[23382]: cluster 2026-03-10T07:34:22.660795+0000 mgr.y (mgr.24407) 363 : cluster [DBG] pgmap v582: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:24.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:24 vm03 bash[23382]: audit 2026-03-10T07:34:23.226285+0000 mgr.y (mgr.24407) 364 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:24.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:24 vm03 bash[23382]: audit 2026-03-10T07:34:23.226285+0000 mgr.y (mgr.24407) 364 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:24.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:24 vm03 bash[23382]: cluster 2026-03-10T07:34:23.283034+0000 mon.a (mon.0) 2606 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-10T07:34:24.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:24 vm03 bash[23382]: cluster 2026-03-10T07:34:23.283034+0000 mon.a (mon.0) 2606 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-10T07:34:24.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:24 vm03 bash[23382]: audit 2026-03-10T07:34:23.332269+0000 mon.b (mon.1) 423 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:24 vm03 bash[23382]: audit 2026-03-10T07:34:23.332269+0000 mon.b (mon.1) 423 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:24 vm03 bash[23382]: audit 2026-03-10T07:34:23.333611+0000 mon.b (mon.1) 424 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-67"}]: dispatch 2026-03-10T07:34:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:24 vm03 bash[23382]: audit 2026-03-10T07:34:23.333611+0000 mon.b (mon.1) 424 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-67"}]: dispatch 2026-03-10T07:34:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:24 vm03 bash[23382]: audit 2026-03-10T07:34:23.334224+0000 mon.a (mon.0) 2607 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:24 vm03 bash[23382]: audit 2026-03-10T07:34:23.334224+0000 mon.a (mon.0) 2607 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:24 vm03 bash[23382]: audit 2026-03-10T07:34:23.335258+0000 mon.a (mon.0) 2608 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-67"}]: dispatch 2026-03-10T07:34:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:24 vm03 bash[23382]: audit 2026-03-10T07:34:23.335258+0000 mon.a (mon.0) 2608 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-67"}]: dispatch 2026-03-10T07:34:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:25 vm00 bash[28005]: cluster 2026-03-10T07:34:24.288992+0000 mon.a (mon.0) 2609 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-10T07:34:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:25 vm00 bash[28005]: cluster 2026-03-10T07:34:24.288992+0000 mon.a (mon.0) 2609 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-10T07:34:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:25 vm00 bash[28005]: audit 2026-03-10T07:34:24.506334+0000 mon.c (mon.2) 301 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:34:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:25 vm00 bash[28005]: audit 2026-03-10T07:34:24.506334+0000 mon.c (mon.2) 301 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:34:25.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:25 vm00 bash[20701]: cluster 2026-03-10T07:34:24.288992+0000 mon.a (mon.0) 2609 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-10T07:34:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:25 vm00 bash[20701]: cluster 2026-03-10T07:34:24.288992+0000 mon.a (mon.0) 2609 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-10T07:34:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:25 vm00 bash[20701]: audit 2026-03-10T07:34:24.506334+0000 mon.c (mon.2) 301 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:34:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:25 vm00 bash[20701]: audit 2026-03-10T07:34:24.506334+0000 mon.c (mon.2) 301 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:34:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:25 vm03 bash[23382]: cluster 
2026-03-10T07:34:24.288992+0000 mon.a (mon.0) 2609 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-10T07:34:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:25 vm03 bash[23382]: cluster 2026-03-10T07:34:24.288992+0000 mon.a (mon.0) 2609 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-10T07:34:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:25 vm03 bash[23382]: audit 2026-03-10T07:34:24.506334+0000 mon.c (mon.2) 301 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:34:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:25 vm03 bash[23382]: audit 2026-03-10T07:34:24.506334+0000 mon.c (mon.2) 301 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:34:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:26 vm00 bash[28005]: cluster 2026-03-10T07:34:24.661348+0000 mgr.y (mgr.24407) 365 : cluster [DBG] pgmap v585: 260 pgs: 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:26 vm00 bash[28005]: cluster 2026-03-10T07:34:24.661348+0000 mgr.y (mgr.24407) 365 : cluster [DBG] pgmap v585: 260 pgs: 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:26 vm00 bash[28005]: cluster 2026-03-10T07:34:25.452138+0000 mon.a (mon.0) 2610 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-10T07:34:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:26 vm00 bash[28005]: cluster 2026-03-10T07:34:25.452138+0000 mon.a (mon.0) 2610 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-10T07:34:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:26 vm00 bash[28005]: audit 2026-03-10T07:34:25.491236+0000 mon.b (mon.1) 425 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:26 vm00 bash[28005]: audit 2026-03-10T07:34:25.491236+0000 mon.b (mon.1) 425 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:26 vm00 bash[28005]: audit 2026-03-10T07:34:25.492996+0000 mon.a (mon.0) 2611 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:26 vm00 bash[28005]: audit 2026-03-10T07:34:25.492996+0000 mon.a (mon.0) 2611 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:26 vm00 bash[20701]: cluster 2026-03-10T07:34:24.661348+0000 mgr.y (mgr.24407) 365 : cluster [DBG] pgmap v585: 260 pgs: 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:26 vm00 bash[20701]: cluster 2026-03-10T07:34:24.661348+0000 mgr.y (mgr.24407) 365 : cluster [DBG] pgmap v585: 260 pgs: 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:26 vm00 bash[20701]: cluster 2026-03-10T07:34:25.452138+0000 mon.a (mon.0) 2610 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-10T07:34:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:26 vm00 bash[20701]: cluster 2026-03-10T07:34:25.452138+0000 mon.a (mon.0) 2610 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-10T07:34:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:26 vm00 bash[20701]: audit 2026-03-10T07:34:25.491236+0000 mon.b (mon.1) 425 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:26 vm00 bash[20701]: audit 2026-03-10T07:34:25.491236+0000 mon.b (mon.1) 425 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:26 vm00 bash[20701]: audit 2026-03-10T07:34:25.492996+0000 mon.a (mon.0) 2611 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:26.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:26 vm00 bash[20701]: audit 2026-03-10T07:34:25.492996+0000 mon.a (mon.0) 2611 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:26.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:26 vm03 bash[23382]: cluster 2026-03-10T07:34:24.661348+0000 mgr.y (mgr.24407) 365 : cluster [DBG] pgmap v585: 260 pgs: 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:26.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:26 vm03 bash[23382]: cluster 2026-03-10T07:34:24.661348+0000 mgr.y (mgr.24407) 365 : cluster [DBG] pgmap v585: 260 pgs: 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:34:26.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:26 vm03 bash[23382]: cluster 2026-03-10T07:34:25.452138+0000 mon.a (mon.0) 2610 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-10T07:34:26.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:26 vm03 bash[23382]: cluster 2026-03-10T07:34:25.452138+0000 mon.a (mon.0) 2610 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-10T07:34:26.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:26 vm03 bash[23382]: audit 2026-03-10T07:34:25.491236+0000 mon.b (mon.1) 425 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:26.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:26 vm03 bash[23382]: audit 2026-03-10T07:34:25.491236+0000 mon.b (mon.1) 425 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:26.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:26 vm03 bash[23382]: audit 2026-03-10T07:34:25.492996+0000 mon.a (mon.0) 2611 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:26.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:26 vm03 bash[23382]: audit 2026-03-10T07:34:25.492996+0000 mon.a (mon.0) 2611 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:34:27.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:27 vm00 bash[28005]: audit 2026-03-10T07:34:26.313836+0000 mon.a (mon.0) 2612 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-69","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:34:27.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:27 vm00 bash[28005]: audit 2026-03-10T07:34:26.313836+0000 mon.a (mon.0) 2612 : audit [INF] from='client.? 
2026-03-10T07:34:27.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:27 vm00 bash[28005]: cluster 2026-03-10T07:34:26.321709+0000 mon.a (mon.0) 2613 : cluster [DBG] osdmap e404: 8 total, 8 up, 8 in
2026-03-10T07:34:27.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:27 vm00 bash[28005]: audit 2026-03-10T07:34:26.349088+0000 mon.b (mon.1) 426 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:34:27.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:27 vm00 bash[28005]: audit 2026-03-10T07:34:26.373984+0000 mon.b (mon.1) 427 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:34:27.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:27 vm00 bash[28005]: audit 2026-03-10T07:34:26.374829+0000 mon.b (mon.1) 428 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-69"}]: dispatch
2026-03-10T07:34:27.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:27 vm00 bash[28005]: audit 2026-03-10T07:34:26.375582+0000 mon.a (mon.0) 2614 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:34:27.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:27 vm00 bash[28005]: audit 2026-03-10T07:34:26.376378+0000 mon.a (mon.0) 2615 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-69"}]: dispatch
2026-03-10T07:34:27.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:27 vm00 bash[28005]: cluster 2026-03-10T07:34:27.321970+0000 mon.a (mon.0) 2616 : cluster [DBG] osdmap e405: 8 total, 8 up, 8 in
2026-03-10T07:34:27.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:27 vm00 bash[20701]: audit 2026-03-10T07:34:26.313836+0000 mon.a (mon.0) 2612 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-69","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:34:27.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:27 vm00 bash[20701]: cluster 2026-03-10T07:34:26.321709+0000 mon.a (mon.0) 2613 : cluster [DBG] osdmap e404: 8 total, 8 up, 8 in
2026-03-10T07:34:27.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:27 vm00 bash[20701]: audit 2026-03-10T07:34:26.349088+0000 mon.b (mon.1) 426 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:34:27.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:27 vm00 bash[20701]: audit 2026-03-10T07:34:26.373984+0000 mon.b (mon.1) 427 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:34:27.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:27 vm00 bash[20701]: audit 2026-03-10T07:34:26.374829+0000 mon.b (mon.1) 428 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-69"}]: dispatch
2026-03-10T07:34:27.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:27 vm00 bash[20701]: audit 2026-03-10T07:34:26.375582+0000 mon.a (mon.0) 2614 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:34:27.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:27 vm00 bash[20701]: audit 2026-03-10T07:34:26.376378+0000 mon.a (mon.0) 2615 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-69"}]: dispatch
2026-03-10T07:34:27.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:27 vm00 bash[20701]: cluster 2026-03-10T07:34:27.321970+0000 mon.a (mon.0) 2616 : cluster [DBG] osdmap e405: 8 total, 8 up, 8 in
2026-03-10T07:34:27.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:27 vm03 bash[23382]: audit 2026-03-10T07:34:26.313836+0000 mon.a (mon.0) 2612 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-69","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:34:27.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:27 vm03 bash[23382]: cluster 2026-03-10T07:34:26.321709+0000 mon.a (mon.0) 2613 : cluster [DBG] osdmap e404: 8 total, 8 up, 8 in
2026-03-10T07:34:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:27 vm03 bash[23382]: audit 2026-03-10T07:34:26.349088+0000 mon.b (mon.1) 426 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:34:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:27 vm03 bash[23382]: audit 2026-03-10T07:34:26.373984+0000 mon.b (mon.1) 427 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:34:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:27 vm03 bash[23382]: audit 2026-03-10T07:34:26.374829+0000 mon.b (mon.1) 428 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-69"}]: dispatch
2026-03-10T07:34:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:27 vm03 bash[23382]: audit 2026-03-10T07:34:26.375582+0000 mon.a (mon.0) 2614 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:34:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:27 vm03 bash[23382]: audit 2026-03-10T07:34:26.376378+0000 mon.a (mon.0) 2615 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-69"}]: dispatch
2026-03-10T07:34:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:27 vm03 bash[23382]: cluster 2026-03-10T07:34:27.321970+0000 mon.a (mon.0) 2616 : cluster [DBG] osdmap e405: 8 total, 8 up, 8 in
2026-03-10T07:34:28.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:28 vm00 bash[28005]: cluster 2026-03-10T07:34:26.661707+0000 mgr.y (mgr.24407) 366 : cluster [DBG] pgmap v588: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T07:34:28.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:28 vm00 bash[28005]: audit 2026-03-10T07:34:28.342356+0000 mon.b (mon.1) 429 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-71","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:34:28.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:28 vm00 bash[28005]: cluster 2026-03-10T07:34:28.342761+0000 mon.a (mon.0) 2617 : cluster [DBG] osdmap e406: 8 total, 8 up, 8 in
2026-03-10T07:34:28.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:28 vm00 bash[28005]: audit 2026-03-10T07:34:28.345946+0000 mon.a (mon.0) 2618 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-71","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:34:28.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:28 vm00 bash[20701]: cluster 2026-03-10T07:34:26.661707+0000 mgr.y (mgr.24407) 366 : cluster [DBG] pgmap v588: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T07:34:28.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:28 vm00 bash[20701]: audit 2026-03-10T07:34:28.342356+0000 mon.b (mon.1) 429 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-71","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:34:28.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:28 vm00 bash[20701]: cluster 2026-03-10T07:34:28.342761+0000 mon.a (mon.0) 2617 : cluster [DBG] osdmap e406: 8 total, 8 up, 8 in
2026-03-10T07:34:28.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:28 vm00 bash[20701]: audit 2026-03-10T07:34:28.345946+0000 mon.a (mon.0) 2618 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-71","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:34:28.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:28 vm03 bash[23382]: cluster 2026-03-10T07:34:26.661707+0000 mgr.y (mgr.24407) 366 : cluster [DBG] pgmap v588: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T07:34:28.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:28 vm03 bash[23382]: audit 2026-03-10T07:34:28.342356+0000 mon.b (mon.1) 429 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-71","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:34:28.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:28 vm03 bash[23382]: cluster 2026-03-10T07:34:28.342761+0000 mon.a (mon.0) 2617 : cluster [DBG] osdmap e406: 8 total, 8 up, 8 in
2026-03-10T07:34:28.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:28 vm03 bash[23382]: audit 2026-03-10T07:34:28.345946+0000 mon.a (mon.0) 2618 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-71","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:34:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:30 vm00 bash[28005]: cluster 2026-03-10T07:34:28.662191+0000 mgr.y (mgr.24407) 367 : cluster [DBG] pgmap v591: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T07:34:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:30 vm00 bash[28005]: audit 2026-03-10T07:34:29.326167+0000 mon.a (mon.0) 2619 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-71","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:34:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:30 vm00 bash[28005]: audit 2026-03-10T07:34:29.340546+0000 mon.b (mon.1) 430 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:34:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:30 vm00 bash[28005]: cluster 2026-03-10T07:34:29.343186+0000 mon.a (mon.0) 2620 : cluster [DBG] osdmap e407: 8 total, 8 up, 8 in
2026-03-10T07:34:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:30 vm00 bash[28005]: audit 2026-03-10T07:34:29.350491+0000 mon.b (mon.1) 431 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:34:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:30 vm00 bash[28005]: audit 2026-03-10T07:34:29.356387+0000 mon.a (mon.0) 2621 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:34:30.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:30 vm00 bash[28005]: cluster 2026-03-10T07:34:29.356956+0000 mon.a (mon.0) 2622 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:34:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:30 vm00 bash[20701]: cluster 2026-03-10T07:34:28.662191+0000 mgr.y (mgr.24407) 367 : cluster [DBG] pgmap v591: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T07:34:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:30 vm00 bash[20701]: audit 2026-03-10T07:34:29.326167+0000 mon.a (mon.0) 2619 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-71","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:34:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:30 vm00 bash[20701]: audit 2026-03-10T07:34:29.340546+0000 mon.b (mon.1) 430 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:34:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:30 vm00 bash[20701]: cluster 2026-03-10T07:34:29.343186+0000 mon.a (mon.0) 2620 : cluster [DBG] osdmap e407: 8 total, 8 up, 8 in
2026-03-10T07:34:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:30 vm00 bash[20701]: audit 2026-03-10T07:34:29.350491+0000 mon.b (mon.1) 431 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:34:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:30 vm00 bash[20701]: audit 2026-03-10T07:34:29.356387+0000 mon.a (mon.0) 2621 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:34:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:30 vm00 bash[20701]: cluster 2026-03-10T07:34:29.356956+0000 mon.a (mon.0) 2622 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:34:30.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:30 vm03 bash[23382]: cluster 2026-03-10T07:34:28.662191+0000 mgr.y (mgr.24407) 367 : cluster [DBG] pgmap v591: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T07:34:30.769 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:30 vm03 bash[23382]: audit 2026-03-10T07:34:29.326167+0000 mon.a (mon.0) 2619 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-71","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:34:30.769 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:30 vm03 bash[23382]: audit 2026-03-10T07:34:29.340546+0000 mon.b (mon.1) 430 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:34:30.769 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:30 vm03 bash[23382]: cluster 2026-03-10T07:34:29.343186+0000 mon.a (mon.0) 2620 : cluster [DBG] osdmap e407: 8 total, 8 up, 8 in
2026-03-10T07:34:30.769 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:30 vm03 bash[23382]: audit 2026-03-10T07:34:29.350491+0000 mon.b (mon.1) 431 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:34:30.769 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:30 vm03 bash[23382]: audit 2026-03-10T07:34:29.356387+0000 mon.a (mon.0) 2621 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:34:30.769 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:30 vm03 bash[23382]: cluster 2026-03-10T07:34:29.356956+0000 mon.a (mon.0) 2622 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:34:31.358 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:34:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:34:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:34:31.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:31 vm00 bash[28005]: audit 2026-03-10T07:34:30.329612+0000 mon.a (mon.0) 2623 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T07:34:31.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:31 vm00 bash[28005]: cluster 2026-03-10T07:34:30.340438+0000 mon.a (mon.0) 2624 : cluster [DBG] osdmap e408: 8 total, 8 up, 8 in
2026-03-10T07:34:31.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:31 vm00 bash[28005]: cluster 2026-03-10T07:34:30.662669+0000 mgr.y (mgr.24407) 368 : cluster [DBG] pgmap v594: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T07:34:31.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:31 vm00 bash[20701]: audit 2026-03-10T07:34:30.329612+0000 mon.a (mon.0) 2623 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T07:34:31.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:31 vm00 bash[20701]: cluster 2026-03-10T07:34:30.340438+0000 mon.a (mon.0) 2624 : cluster [DBG] osdmap e408: 8 total, 8 up, 8 in
2026-03-10T07:34:31.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:31 vm00 bash[20701]: cluster 2026-03-10T07:34:30.662669+0000 mgr.y (mgr.24407) 368 : cluster [DBG] pgmap v594: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T07:34:31.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:31 vm03 bash[23382]: audit 2026-03-10T07:34:30.329612+0000 mon.a (mon.0) 2623 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T07:34:31.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:31 vm03 bash[23382]: cluster 2026-03-10T07:34:30.340438+0000 mon.a (mon.0) 2624 : cluster [DBG] osdmap e408: 8 total, 8 up, 8 in
2026-03-10T07:34:31.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:31 vm03 bash[23382]: cluster 2026-03-10T07:34:30.662669+0000 mgr.y (mgr.24407) 368 : cluster [DBG] pgmap v594: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T07:34:32.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:32 vm00 bash[28005]: cluster 2026-03-10T07:34:31.354397+0000 mon.a (mon.0) 2625 : cluster [DBG] osdmap e409: 8 total, 8 up, 8 in
2026-03-10T07:34:32.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:32 vm00 bash[20701]: cluster 2026-03-10T07:34:31.354397+0000 mon.a (mon.0) 2625 : cluster [DBG] osdmap e409: 8 total, 8 up, 8 in
2026-03-10T07:34:32.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:32 vm03 bash[23382]: cluster 2026-03-10T07:34:31.354397+0000 mon.a (mon.0) 2625 : cluster [DBG] osdmap e409: 8 total, 8 up, 8 in
2026-03-10T07:34:33.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:34:33 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:34:33.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:33 vm03 bash[23382]: cluster 2026-03-10T07:34:32.663061+0000 mgr.y (mgr.24407) 369 : cluster [DBG] pgmap v596: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 707 B/s wr, 2 op/s
2026-03-10T07:34:33.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:33 vm03 bash[23382]: audit 2026-03-10T07:34:33.231431+0000 mgr.y (mgr.24407) 370 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:34:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:33 vm00 bash[28005]: cluster 2026-03-10T07:34:32.663061+0000 mgr.y (mgr.24407) 369 : cluster [DBG] pgmap v596: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 707 B/s wr, 2 op/s
2026-03-10T07:34:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:33 vm00 bash[28005]: audit 2026-03-10T07:34:33.231431+0000 mgr.y (mgr.24407) 370 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:34:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:33 vm00 bash[20701]: cluster 2026-03-10T07:34:32.663061+0000 mgr.y (mgr.24407) 369 : cluster [DBG] pgmap v596: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 707 B/s wr, 2 op/s
2026-03-10T07:34:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:33 vm00 bash[20701]: audit 2026-03-10T07:34:33.231431+0000 mgr.y (mgr.24407) 370 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:34:36.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:35 vm03 bash[23382]: cluster 2026-03-10T07:34:34.663719+0000 mgr.y (mgr.24407) 371 : cluster [DBG] pgmap v597: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 3 op/s
2026-03-10T07:34:36.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:35 vm00 bash[28005]: cluster 2026-03-10T07:34:34.663719+0000 mgr.y (mgr.24407) 371 : cluster [DBG] pgmap v597: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 3 op/s
2026-03-10T07:34:36.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:35 vm00 bash[20701]: cluster 2026-03-10T07:34:34.663719+0000 mgr.y (mgr.24407) 371 : cluster [DBG] pgmap v597: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 3 op/s
2026-03-10T07:34:37.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:36 vm03 bash[23382]: cluster 2026-03-10T07:34:36.178133+0000 mon.a (mon.0) 2626 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:34:37.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:36 vm00 bash[28005]: cluster 2026-03-10T07:34:36.178133+0000 mon.a (mon.0) 2626 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:34:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:36 vm00 bash[20701]: cluster 2026-03-10T07:34:36.178133+0000 mon.a (mon.0) 2626 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:34:38.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:37 vm03 bash[23382]: cluster 2026-03-10T07:34:36.664429+0000 mgr.y (mgr.24407) 372 : cluster [DBG] pgmap v598: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 2.2 KiB/s wr, 5 op/s
2026-03-10T07:34:38.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:37 vm00 bash[20701]: cluster 2026-03-10T07:34:36.664429+0000 mgr.y (mgr.24407) 372 : cluster [DBG] pgmap v598: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 2.2 KiB/s wr, 5 op/s
2026-03-10T07:34:38.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:37 vm00 bash[28005]: cluster 2026-03-10T07:34:36.664429+0000 mgr.y (mgr.24407) 372 : cluster [DBG] pgmap v598: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 2.2 KiB/s wr, 5 op/s
2026-03-10T07:34:40.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:39 vm03 bash[23382]: cluster 2026-03-10T07:34:38.664888+0000 mgr.y (mgr.24407) 373 : cluster [DBG] pgmap v599: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 737 B/s rd, 1.6 KiB/s wr, 3 op/s
2026-03-10T07:34:40.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:39 vm03 bash[23382]: audit 2026-03-10T07:34:39.513930+0000 mon.c (mon.2) 302 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:34:40.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:39 vm00 bash[28005]: cluster 2026-03-10T07:34:38.664888+0000 mgr.y (mgr.24407) 373 : cluster [DBG] pgmap v599: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 737 B/s rd, 1.6 KiB/s wr, 3 op/s
2026-03-10T07:34:40.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:39 vm00 bash[28005]: audit 2026-03-10T07:34:39.513930+0000 mon.c (mon.2) 302 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:34:40.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:39 vm00 bash[20701]: cluster 2026-03-10T07:34:38.664888+0000 mgr.y (mgr.24407) 373 : cluster [DBG] pgmap v599: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 737 B/s rd, 1.6 KiB/s wr, 3 op/s
2026-03-10T07:34:40.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:39 vm00 bash[20701]: audit 2026-03-10T07:34:39.513930+0000 mon.c (mon.2) 302 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:34:41.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:34:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:34:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:34:42.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:41 vm03 bash[23382]: cluster 2026-03-10T07:34:40.665728+0000 mgr.y (mgr.24407) 374 : cluster [DBG] pgmap v600: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s
2026-03-10T07:34:42.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:41 vm00 bash[28005]: cluster 2026-03-10T07:34:40.665728+0000 mgr.y (mgr.24407) 374 : cluster [DBG] pgmap v600: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s
2026-03-10T07:34:42.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:41 vm00 bash[20701]: cluster 2026-03-10T07:34:40.665728+0000 mgr.y (mgr.24407) 374 : cluster [DBG] pgmap v600: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s
2026-03-10T07:34:43.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:34:43 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:34:44.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:43 vm03 bash[23382]: cluster 2026-03-10T07:34:42.666091+0000 mgr.y (mgr.24407) 375 : cluster [DBG] pgmap v601: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 995 B/s rd, 1.1 KiB/s wr, 3 op/s
2026-03-10T07:34:44.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:43 vm03 bash[23382]: audit 2026-03-10T07:34:43.239980+0000 mgr.y (mgr.24407) 376 : audit
[DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:44.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:43 vm03 bash[23382]: audit 2026-03-10T07:34:43.239980+0000 mgr.y (mgr.24407) 376 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:43 vm00 bash[28005]: cluster 2026-03-10T07:34:42.666091+0000 mgr.y (mgr.24407) 375 : cluster [DBG] pgmap v601: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 995 B/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-10T07:34:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:43 vm00 bash[28005]: cluster 2026-03-10T07:34:42.666091+0000 mgr.y (mgr.24407) 375 : cluster [DBG] pgmap v601: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 995 B/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-10T07:34:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:43 vm00 bash[28005]: audit 2026-03-10T07:34:43.239980+0000 mgr.y (mgr.24407) 376 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:43 vm00 bash[28005]: audit 2026-03-10T07:34:43.239980+0000 mgr.y (mgr.24407) 376 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:43 vm00 bash[20701]: cluster 2026-03-10T07:34:42.666091+0000 mgr.y (mgr.24407) 375 : cluster [DBG] pgmap v601: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 995 B/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-10T07:34:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:43 vm00 bash[20701]: cluster 2026-03-10T07:34:42.666091+0000 mgr.y (mgr.24407) 375 : cluster [DBG] pgmap v601: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 995 B/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-10T07:34:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:43 vm00 bash[20701]: audit 2026-03-10T07:34:43.239980+0000 mgr.y (mgr.24407) 376 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:43 vm00 bash[20701]: audit 2026-03-10T07:34:43.239980+0000 mgr.y (mgr.24407) 376 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:34:46.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:46 vm00 bash[28005]: cluster 2026-03-10T07:34:44.666816+0000 mgr.y (mgr.24407) 377 : cluster [DBG] pgmap v602: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-10T07:34:46.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:46 vm00 bash[28005]: cluster 2026-03-10T07:34:44.666816+0000 mgr.y (mgr.24407) 377 : cluster [DBG] pgmap v602: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-10T07:34:46.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:46 vm00 bash[20701]: cluster 2026-03-10T07:34:44.666816+0000 mgr.y (mgr.24407) 377 : cluster [DBG] pgmap v602: 292 pgs: 292 
active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-10T07:34:46.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:46 vm00 bash[20701]: cluster 2026-03-10T07:34:44.666816+0000 mgr.y (mgr.24407) 377 : cluster [DBG] pgmap v602: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-10T07:34:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:46 vm03 bash[23382]: cluster 2026-03-10T07:34:44.666816+0000 mgr.y (mgr.24407) 377 : cluster [DBG] pgmap v602: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-10T07:34:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:46 vm03 bash[23382]: cluster 2026-03-10T07:34:44.666816+0000 mgr.y (mgr.24407) 377 : cluster [DBG] pgmap v602: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-10T07:34:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:48 vm00 bash[28005]: cluster 2026-03-10T07:34:46.667407+0000 mgr.y (mgr.24407) 378 : cluster [DBG] pgmap v603: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 3 op/s 2026-03-10T07:34:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:48 vm00 bash[28005]: cluster 2026-03-10T07:34:46.667407+0000 mgr.y (mgr.24407) 378 : cluster [DBG] pgmap v603: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 3 op/s 2026-03-10T07:34:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:48 vm00 bash[20701]: cluster 2026-03-10T07:34:46.667407+0000 mgr.y (mgr.24407) 378 : cluster [DBG] pgmap v603: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 3 op/s 2026-03-10T07:34:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:48 vm00 bash[20701]: cluster 2026-03-10T07:34:46.667407+0000 mgr.y (mgr.24407) 378 : cluster [DBG] pgmap v603: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 3 op/s 2026-03-10T07:34:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:48 vm03 bash[23382]: cluster 2026-03-10T07:34:46.667407+0000 mgr.y (mgr.24407) 378 : cluster [DBG] pgmap v603: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 3 op/s 2026-03-10T07:34:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:48 vm03 bash[23382]: cluster 2026-03-10T07:34:46.667407+0000 mgr.y (mgr.24407) 378 : cluster [DBG] pgmap v603: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 3 op/s 2026-03-10T07:34:50.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:50 vm00 bash[28005]: cluster 2026-03-10T07:34:48.667737+0000 mgr.y (mgr.24407) 379 : cluster [DBG] pgmap v604: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:34:50.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:50 vm00 bash[28005]: cluster 2026-03-10T07:34:48.667737+0000 mgr.y (mgr.24407) 379 : cluster [DBG] pgmap v604: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:34:50.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:50 vm00 bash[20701]: cluster 2026-03-10T07:34:48.667737+0000 mgr.y 
(mgr.24407) 379 : cluster [DBG] pgmap v604: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:34:50.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:34:50 vm00 bash[20701]: cluster 2026-03-10T07:34:48.667737+0000 mgr.y (mgr.24407) 379 : cluster [DBG] pgmap v604: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:34:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:50 vm03 bash[23382]: cluster 2026-03-10T07:34:48.667737+0000 mgr.y (mgr.24407) 379 : cluster [DBG] pgmap v604: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:34:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:50 vm03 bash[23382]: cluster 2026-03-10T07:34:48.667737+0000 mgr.y (mgr.24407) 379 : cluster [DBG] pgmap v604: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:34:51.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:34:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:34:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:34:52.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:52 vm03 bash[23382]: cluster 2026-03-10T07:34:50.668507+0000 mgr.y (mgr.24407) 380 : cluster [DBG] pgmap v605: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:34:52.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:52 vm03 bash[23382]: cluster 2026-03-10T07:34:50.668507+0000 mgr.y (mgr.24407) 380 : cluster [DBG] pgmap v605: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:34:52.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:52 vm03 bash[23382]: audit 2026-03-10T07:34:51.414967+0000 mon.b (mon.1) 432 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:52 vm03 bash[23382]: audit 2026-03-10T07:34:51.414967+0000 mon.b (mon.1) 432 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:52 vm03 bash[23382]: audit 2026-03-10T07:34:51.416020+0000 mon.b (mon.1) 433 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-71"}]: dispatch 2026-03-10T07:34:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:52 vm03 bash[23382]: audit 2026-03-10T07:34:51.416020+0000 mon.b (mon.1) 433 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-71"}]: dispatch 2026-03-10T07:34:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:52 vm03 bash[23382]: audit 2026-03-10T07:34:51.416683+0000 mon.a (mon.0) 2627 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:52 vm03 bash[23382]: audit 2026-03-10T07:34:51.416683+0000 mon.a (mon.0) 2627 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:52 vm03 bash[23382]: audit 2026-03-10T07:34:51.417472+0000 mon.a (mon.0) 2628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-71"}]: dispatch 2026-03-10T07:34:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:52 vm03 bash[23382]: audit 2026-03-10T07:34:51.417472+0000 mon.a (mon.0) 2628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-71"}]: dispatch 2026-03-10T07:34:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:52 vm00 bash[28005]: cluster 2026-03-10T07:34:50.668507+0000 mgr.y (mgr.24407) 380 : cluster [DBG] pgmap v605: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:34:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:52 vm00 bash[28005]: cluster 2026-03-10T07:34:50.668507+0000 mgr.y (mgr.24407) 380 : cluster [DBG] pgmap v605: 292 pgs: 292 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:34:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:52 vm00 bash[28005]: audit 2026-03-10T07:34:51.414967+0000 mon.b (mon.1) 432 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:52 vm00 bash[28005]: audit 2026-03-10T07:34:51.414967+0000 mon.b (mon.1) 432 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:52 vm00 bash[28005]: audit 2026-03-10T07:34:51.416020+0000 mon.b (mon.1) 433 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-71"}]: dispatch 2026-03-10T07:34:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:52 vm00 bash[28005]: audit 2026-03-10T07:34:51.416020+0000 mon.b (mon.1) 433 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-71"}]: dispatch 2026-03-10T07:34:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:52 vm00 bash[28005]: audit 2026-03-10T07:34:51.416683+0000 mon.a (mon.0) 2627 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:34:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:34:52 vm00 bash[28005]: audit 2026-03-10T07:34:51.416683+0000 mon.a (mon.0) 2627 : audit [INF] from='client.? 
2026-03-10T07:34:53.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:34:53 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:34:53.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:53 vm03 bash[23382]: cluster 2026-03-10T07:34:52.186151+0000 mon.a (mon.0) 2629 : cluster [DBG] osdmap e410: 8 total, 8 up, 8 in
2026-03-10T07:34:54.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:54 vm03 bash[23382]: cluster 2026-03-10T07:34:52.668820+0000 mgr.y (mgr.24407) 381 : cluster [DBG] pgmap v607: 260 pgs: 260 active+clean; 8.3 MiB data, 945 MiB used, 159 GiB / 160 GiB avail; 921 B/s rd, 0 op/s
2026-03-10T07:34:54.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:54 vm03 bash[23382]: audit 2026-03-10T07:34:53.203305+0000 mon.b (mon.1) 434 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-73","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:34:54.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:54 vm03 bash[23382]: cluster 2026-03-10T07:34:53.203873+0000 mon.a (mon.0) 2630 : cluster [DBG] osdmap e411: 8 total, 8 up, 8 in
2026-03-10T07:34:54.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:54 vm03 bash[23382]: audit 2026-03-10T07:34:53.208394+0000 mon.a (mon.0) 2631 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-73","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:34:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:54 vm03 bash[23382]: audit 2026-03-10T07:34:53.248485+0000 mgr.y (mgr.24407) 382 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:34:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:54 vm03 bash[23382]: audit 2026-03-10T07:34:54.190366+0000 mon.a (mon.0) 2632 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-73","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:34:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:54 vm03 bash[23382]: audit 2026-03-10T07:34:54.194254+0000 mon.b (mon.1) 435 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:34:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:54 vm03 bash[23382]: cluster 2026-03-10T07:34:54.206279+0000 mon.a (mon.0) 2633 : cluster [DBG] osdmap e412: 8 total, 8 up, 8 in
2026-03-10T07:34:55.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:55 vm03 bash[23382]: audit 2026-03-10T07:34:54.520455+0000 mon.c (mon.2) 303 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:34:55.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:55 vm03 bash[23382]: cluster 2026-03-10T07:34:55.197391+0000 mon.a (mon.0) 2634 : cluster [DBG] osdmap e413: 8 total, 8 up, 8 in
2026-03-10T07:34:56.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:56 vm03 bash[23382]: cluster 2026-03-10T07:34:54.669401+0000 mgr.y (mgr.24407) 383 : cluster [DBG] pgmap v610: 292 pgs: 15 unknown, 277 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:34:56.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:56 vm03 bash[23382]: cluster 2026-03-10T07:34:56.207337+0000 mon.a (mon.0) 2635 : cluster [DBG] osdmap e414: 8 total, 8 up, 8 in
2026-03-10T07:34:58.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:34:58 vm03 bash[23382]: cluster 2026-03-10T07:34:56.669768+0000 mgr.y (mgr.24407) 384 : cluster [DBG] pgmap v613: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s
2026-03-10T07:35:00.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:00 vm03 bash[23382]: cluster 2026-03-10T07:34:58.670096+0000 mgr.y (mgr.24407) 385 : cluster [DBG] pgmap v614: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 934 B/s rd, 934 B/s wr, 1 op/s
2026-03-10T07:35:01.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:35:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:35:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:35:02.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:02 vm03 bash[23382]: cluster 2026-03-10T07:35:00.670850+0000 mgr.y (mgr.24407) 386 : cluster [DBG] pgmap v615: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.1 KiB/s wr, 4 op/s
2026-03-10T07:35:03.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:35:03 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:35:04.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:04 vm03 bash[23382]: cluster 2026-03-10T07:35:02.671190+0000 mgr.y (mgr.24407) 387 : cluster [DBG] pgmap v616: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 383 B/s wr, 3 op/s
2026-03-10T07:35:04.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:04 vm03 bash[23382]: audit 2026-03-10T07:35:03.257702+0000 mgr.y (mgr.24407) 388 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:35:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:06 vm00 bash[28005]: cluster 2026-03-10T07:35:04.671676+0000 mgr.y (mgr.24407) 389 : cluster [DBG] pgmap v617: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 216 B/s wr, 2 op/s
2026-03-10T07:35:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:06 vm00 bash[28005]: audit 2026-03-10T07:35:06.160072+0000 mon.c (mon.2) 304 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:35:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:06 vm00 bash[28005]: audit 2026-03-10T07:35:06.258651+0000 mon.b (mon.1) 436 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:35:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:06 vm00 bash[28005]: audit 2026-03-10T07:35:06.259384+0000 mon.b (mon.1) 437 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-73"}]: dispatch
2026-03-10T07:35:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:06 vm00 bash[28005]: audit 2026-03-10T07:35:06.260084+0000 mon.a (mon.0) 2636 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:35:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:06 vm00 bash[28005]: audit 2026-03-10T07:35:06.260727+0000 mon.a (mon.0) 2637 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-73"}]: dispatch
2026-03-10T07:35:06.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:06 vm03 bash[23382]: audit 2026-03-10T07:35:06.260727+0000 mon.a (mon.0) 2637 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-73"}]: dispatch 2026-03-10T07:35:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:07 vm00 bash[20701]: audit 2026-03-10T07:35:06.514380+0000 mon.c (mon.2) 305 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:35:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:07 vm00 bash[20701]: audit 2026-03-10T07:35:06.514380+0000 mon.c (mon.2) 305 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:35:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:07 vm00 bash[20701]: audit 2026-03-10T07:35:06.515572+0000 mon.c (mon.2) 306 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:35:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:07 vm00 bash[20701]: audit 2026-03-10T07:35:06.515572+0000 mon.c (mon.2) 306 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:35:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:07 vm00 bash[20701]: audit 2026-03-10T07:35:06.521897+0000 mon.a (mon.0) 2638 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:35:07.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:07 vm00 bash[20701]: audit 2026-03-10T07:35:06.521897+0000 mon.a (mon.0) 2638 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:35:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:07 vm00 bash[28005]: audit 2026-03-10T07:35:06.514380+0000 mon.c (mon.2) 305 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:35:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:07 vm00 bash[28005]: audit 2026-03-10T07:35:06.514380+0000 mon.c (mon.2) 305 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:35:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:07 vm00 bash[28005]: audit 2026-03-10T07:35:06.515572+0000 mon.c (mon.2) 306 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:35:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:07 vm00 bash[28005]: audit 2026-03-10T07:35:06.515572+0000 mon.c (mon.2) 306 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:35:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:07 vm00 bash[28005]: audit 2026-03-10T07:35:06.521897+0000 mon.a (mon.0) 2638 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:35:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:07 vm00 bash[28005]: audit 2026-03-10T07:35:06.521897+0000 mon.a (mon.0) 2638 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:35:07.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:07 vm03 bash[23382]: audit 2026-03-10T07:35:06.514380+0000 mon.c (mon.2) 305 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
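[editor's note] The interleaved journalctl lines above are mon audit entries: every command a client or daemon sends to the monitors is logged with the caller's entity and the command encoded as a JSON array, first as "dispatch" and, for write operations, later as "finished". A minimal parsing sketch follows, derived only from the line shape visible in this log; the regex, field names, and sample are illustrative, not a teuthology or Ceph API:

    import json
    import re

    # Pattern for the mon audit entries seen in this log (an assumption
    # based on the visible line shape, not a published format spec).
    AUDIT_RE = re.compile(
        r"audit (?P<stamp>\S+) (?P<source>\S+) \((?P<rank>[^)]+)\) (?P<seq>\d+) : "
        r"audit \[(?P<level>\w+)\] from='(?P<from>[^']*)' entity='(?P<entity>[^']*)' "
        r"cmd='?(?P<cmd>\[.*\])'?: (?P<result>\w+)"
    )

    def parse_audit(line: str):
        """Return a dict for one audit entry, or None if the line doesn't match."""
        m = AUDIT_RE.search(line)
        if not m:
            return None
        rec = m.groupdict()
        rec["cmd"] = json.loads(rec["cmd"])  # the cmd payload is a JSON array
        return rec

    sample = ("audit 2026-03-10T07:35:06.258651+0000 mon.b (mon.1) 436 : "
              "audit [INF] from='client.? 192.168.123.100:0/17629877' "
              "entity='client.admin' "
              'cmd=[{"prefix": "osd tier remove-overlay", '
              '"pool": "test-rados-api-vm00-59782-6"}]: dispatch')
    print(parse_audit(sample)["cmd"][0]["prefix"])  # -> osd tier remove-overlay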
2026-03-10T07:35:07.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:07 vm03 bash[23382]: audit 2026-03-10T07:35:06.515572+0000 mon.c (mon.2) 306 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:35:07.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:07 vm03 bash[23382]: audit 2026-03-10T07:35:06.521897+0000 mon.a (mon.0) 2638 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:35:08.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:08 vm00 bash[28005]: cluster 2026-03-10T07:35:06.672375+0000 mgr.y (mgr.24407) 390 : cluster [DBG] pgmap v618: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 195 B/s wr, 2 op/s
2026-03-10T07:35:08.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:08 vm00 bash[28005]: cluster 2026-03-10T07:35:07.291855+0000 mon.a (mon.0) 2639 : cluster [DBG] osdmap e415: 8 total, 8 up, 8 in
2026-03-10T07:35:08.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:08 vm00 bash[28005]: cluster 2026-03-10T07:35:08.286804+0000 mon.a (mon.0) 2640 : cluster [DBG] osdmap e416: 8 total, 8 up, 8 in
2026-03-10T07:35:08.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:08 vm00 bash[28005]: audit 2026-03-10T07:35:08.294015+0000 mon.b (mon.1) 438 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-75","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:35:08.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:08 vm00 bash[28005]: audit 2026-03-10T07:35:08.296826+0000 mon.a (mon.0) 2641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-75","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:35:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:08 vm00 bash[20701]: cluster 2026-03-10T07:35:06.672375+0000 mgr.y (mgr.24407) 390 : cluster [DBG] pgmap v618: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 195 B/s wr, 2 op/s
2026-03-10T07:35:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:08 vm00 bash[20701]: cluster 2026-03-10T07:35:07.291855+0000 mon.a (mon.0) 2639 : cluster [DBG] osdmap e415: 8 total, 8 up, 8 in
2026-03-10T07:35:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:08 vm00 bash[20701]: cluster 2026-03-10T07:35:08.286804+0000 mon.a (mon.0) 2640 : cluster [DBG] osdmap e416: 8 total, 8 up, 8 in
2026-03-10T07:35:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:08 vm00 bash[20701]: audit 2026-03-10T07:35:08.294015+0000 mon.b (mon.1) 438 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-75","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:35:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:08 vm00 bash[20701]: audit 2026-03-10T07:35:08.296826+0000 mon.a (mon.0) 2641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-75","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:35:08.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:08 vm03 bash[23382]: cluster 2026-03-10T07:35:06.672375+0000 mgr.y (mgr.24407) 390 : cluster [DBG] pgmap v618: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 195 B/s wr, 2 op/s
2026-03-10T07:35:08.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:08 vm03 bash[23382]: cluster 2026-03-10T07:35:07.291855+0000 mon.a (mon.0) 2639 : cluster [DBG] osdmap e415: 8 total, 8 up, 8 in
2026-03-10T07:35:08.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:08 vm03 bash[23382]: cluster 2026-03-10T07:35:08.286804+0000 mon.a (mon.0) 2640 : cluster [DBG] osdmap e416: 8 total, 8 up, 8 in
2026-03-10T07:35:08.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:08 vm03 bash[23382]: audit 2026-03-10T07:35:08.294015+0000 mon.b (mon.1) 438 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-75","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:35:08.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:08 vm03 bash[23382]: audit 2026-03-10T07:35:08.296826+0000 mon.a (mon.0) 2641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-75","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:35:10.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:10 vm00 bash[28005]: cluster 2026-03-10T07:35:08.672747+0000 mgr.y (mgr.24407) 391 : cluster [DBG] pgmap v621: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:35:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:10 vm00 bash[28005]: audit 2026-03-10T07:35:09.285367+0000 mon.a (mon.0) 2642 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-75","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:35:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:10 vm00 bash[28005]: cluster 2026-03-10T07:35:09.292354+0000 mon.a (mon.0) 2643 : cluster [DBG] osdmap e417: 8 total, 8 up, 8 in
2026-03-10T07:35:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:10 vm00 bash[28005]: audit 2026-03-10T07:35:09.301531+0000 mon.b (mon.1) 439 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:35:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:10 vm00 bash[28005]: cluster 2026-03-10T07:35:09.307971+0000 mon.a (mon.0) 2644 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:35:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:10 vm00 bash[28005]: audit 2026-03-10T07:35:09.527918+0000 mon.c (mon.2) 307 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:35:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:10 vm00 bash[20701]: cluster 2026-03-10T07:35:08.672747+0000 mgr.y (mgr.24407) 391 : cluster [DBG] pgmap v621: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:35:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:10 vm00 bash[20701]: audit 2026-03-10T07:35:09.285367+0000 mon.a (mon.0) 2642 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-75","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:35:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:10 vm00 bash[20701]: cluster 2026-03-10T07:35:09.292354+0000 mon.a (mon.0) 2643 : cluster [DBG] osdmap e417: 8 total, 8 up, 8 in
2026-03-10T07:35:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:10 vm00 bash[20701]: audit 2026-03-10T07:35:09.301531+0000 mon.b (mon.1) 439 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:35:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:10 vm00 bash[20701]: cluster 2026-03-10T07:35:09.307971+0000 mon.a (mon.0) 2644 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:35:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:10 vm00 bash[20701]: audit 2026-03-10T07:35:09.527918+0000 mon.c (mon.2) 307 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:35:10.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:10 vm03 bash[23382]: cluster 2026-03-10T07:35:08.672747+0000 mgr.y (mgr.24407) 391 : cluster [DBG] pgmap v621: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:35:10.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:10 vm03 bash[23382]: audit 2026-03-10T07:35:09.285367+0000 mon.a (mon.0) 2642 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-75","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:35:10.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:10 vm03 bash[23382]: cluster 2026-03-10T07:35:09.292354+0000 mon.a (mon.0) 2643 : cluster [DBG] osdmap e417: 8 total, 8 up, 8 in
2026-03-10T07:35:10.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:10 vm03 bash[23382]: audit 2026-03-10T07:35:09.301531+0000 mon.b (mon.1) 439 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:35:10.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:10 vm03 bash[23382]: cluster 2026-03-10T07:35:09.307971+0000 mon.a (mon.0) 2644 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:35:10.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:10 vm03 bash[23382]: audit 2026-03-10T07:35:09.527918+0000 mon.c (mon.2) 307 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:35:11.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:11 vm00 bash[28005]: cluster 2026-03-10T07:35:10.292869+0000 mon.a (mon.0) 2645 : cluster [DBG] osdmap e418: 8 total, 8 up, 8 in
2026-03-10T07:35:11.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:35:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:35:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:35:11.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:11 vm00 bash[20701]: cluster 2026-03-10T07:35:10.292869+0000 mon.a (mon.0) 2645 : cluster [DBG] osdmap e418: 8 total, 8 up, 8 in
2026-03-10T07:35:11.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:11 vm03 bash[23382]: cluster 2026-03-10T07:35:10.292869+0000 mon.a (mon.0) 2645 : cluster [DBG] osdmap e418: 8 total, 8 up, 8 in
2026-03-10T07:35:12.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:12 vm00 bash[28005]: cluster 2026-03-10T07:35:10.673140+0000 mgr.y (mgr.24407) 392 : cluster [DBG] pgmap v624: 292 pgs: 292 active+clean; 8.3 MiB data, 968 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:35:12.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:12 vm00 bash[28005]: cluster 2026-03-10T07:35:11.323673+0000 mon.a (mon.0) 2646 : cluster [DBG] osdmap e419: 8 total, 8 up, 8 in
2026-03-10T07:35:12.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:12 vm00 bash[28005]: audit 2026-03-10T07:35:11.375748+0000 mon.b (mon.1) 440 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:35:12.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:12 vm00 bash[28005]: audit 2026-03-10T07:35:11.376594+0000 mon.b (mon.1) 441 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-75"}]: dispatch
2026-03-10T07:35:12.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:12 vm00 bash[28005]: audit 2026-03-10T07:35:11.377301+0000 mon.a (mon.0) 2647 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:35:12.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:12 vm00 bash[28005]: audit 2026-03-10T07:35:11.378116+0000 mon.a (mon.0) 2648 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-75"}]: dispatch
2026-03-10T07:35:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:12 vm00 bash[20701]: cluster 2026-03-10T07:35:10.673140+0000 mgr.y (mgr.24407) 392 : cluster [DBG] pgmap v624: 292 pgs: 292 active+clean; 8.3 MiB data, 968 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:35:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:12 vm00 bash[20701]: cluster 2026-03-10T07:35:11.323673+0000 mon.a (mon.0) 2646 : cluster [DBG] osdmap e419: 8 total, 8 up, 8 in
2026-03-10T07:35:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:12 vm00 bash[20701]: audit 2026-03-10T07:35:11.375748+0000 mon.b (mon.1) 440 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:35:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:12 vm00 bash[20701]: audit 2026-03-10T07:35:11.376594+0000 mon.b (mon.1) 441 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-75"}]: dispatch
2026-03-10T07:35:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:12 vm00 bash[20701]: audit 2026-03-10T07:35:11.377301+0000 mon.a (mon.0) 2647 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:35:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:12 vm00 bash[20701]: audit 2026-03-10T07:35:11.378116+0000 mon.a (mon.0) 2648 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-75"}]: dispatch
2026-03-10T07:35:12.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:12 vm03 bash[23382]: cluster 2026-03-10T07:35:10.673140+0000 mgr.y (mgr.24407) 392 : cluster [DBG] pgmap v624: 292 pgs: 292 active+clean; 8.3 MiB data, 968 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:35:12.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:12 vm03 bash[23382]: cluster 2026-03-10T07:35:11.323673+0000 mon.a (mon.0) 2646 : cluster [DBG] osdmap e419: 8 total, 8 up, 8 in
2026-03-10T07:35:12.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:12 vm03 bash[23382]: audit 2026-03-10T07:35:11.375748+0000 mon.b (mon.1) 440 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:35:12.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:12 vm03 bash[23382]: audit 2026-03-10T07:35:11.376594+0000 mon.b (mon.1) 441 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-75"}]: dispatch
2026-03-10T07:35:12.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:12 vm03 bash[23382]: audit 2026-03-10T07:35:11.377301+0000 mon.a (mon.0) 2647 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:35:12.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:12 vm03 bash[23382]: audit 2026-03-10T07:35:11.378116+0000 mon.a (mon.0) 2648 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-75"}]: dispatch
2026-03-10T07:35:13.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:13 vm03 bash[23382]: cluster 2026-03-10T07:35:12.331677+0000 mon.a (mon.0) 2649 : cluster [DBG] osdmap e420: 8 total, 8 up, 8 in
2026-03-10T07:35:13.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:35:13 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:35:13.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:13 vm00 bash[28005]: cluster 2026-03-10T07:35:12.331677+0000 mon.a (mon.0) 2649 : cluster [DBG] osdmap e420: 8 total, 8 up, 8 in
2026-03-10T07:35:13.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:13 vm00 bash[20701]: cluster 2026-03-10T07:35:12.331677+0000 mon.a (mon.0) 2649 : cluster [DBG] osdmap e420: 8 total, 8 up, 8 in
2026-03-10T07:35:14.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:14 vm03 bash[23382]: cluster 2026-03-10T07:35:12.673595+0000 mgr.y (mgr.24407) 393 : cluster [DBG] pgmap v627: 260 pgs: 260 active+clean; 8.3 MiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:35:14.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:14 vm03 bash[23382]: audit 2026-03-10T07:35:13.268222+0000 mgr.y (mgr.24407) 394 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:35:14.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:14 vm03 bash[23382]: cluster 2026-03-10T07:35:13.340470+0000 mon.a (mon.0) 2650 : cluster [DBG] osdmap e421: 8 total, 8 up, 8 in
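[editor's note] The audit trail in this stretch is the rados_api_tests workunit cycling cache-tier setup and teardown against throwaway pools: each pass tags the tier pool with an application (clearing its POOL_APP_NOT_ENABLED entry, which is why the health warning's pool count keeps changing), then drops the overlay and detaches the tier, bumping the osdmap epoch each time. A rough reconstruction of one such pass as plain CLI calls, sketched against one pair of the test pools named above; this is illustrative only, not the test's actual C++ code:

    import subprocess

    # Throwaway test pools from this run (copied from the audit entries above).
    BASE = "test-rados-api-vm00-59782-6"
    TIER = "test-rados-api-vm00-59782-75"

    def ceph(*args: str) -> None:
        """Run one ceph CLI command, echoing it first."""
        cmd = ["ceph", *args]
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Tag the tier pool so POOL_APP_NOT_ENABLED clears for it ...
    ceph("osd", "pool", "application", "enable", TIER, "rados",
         "--yes-i-really-mean-it")
    # ... then detach it from the base pool: drop the overlay first,
    # then remove the tier relationship itself.
    ceph("osd", "tier", "remove-overlay", BASE)
    ceph("osd", "tier", "remove", BASE, TIER)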
2026-03-10T07:35:14.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:14 vm03 bash[23382]: audit 2026-03-10T07:35:13.356223+0000 mon.b (mon.1) 442 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-77","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:35:14.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:14 vm03 bash[23382]: audit 2026-03-10T07:35:13.357653+0000 mon.a (mon.0) 2651 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-77","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:35:14.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:14 vm00 bash[28005]: cluster 2026-03-10T07:35:12.673595+0000 mgr.y (mgr.24407) 393 : cluster [DBG] pgmap v627: 260 pgs: 260 active+clean; 8.3 MiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:35:14.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:14 vm00 bash[28005]: audit 2026-03-10T07:35:13.268222+0000 mgr.y (mgr.24407) 394 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:35:14.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:14 vm00 bash[28005]: cluster 2026-03-10T07:35:13.340470+0000 mon.a (mon.0) 2650 : cluster [DBG] osdmap e421: 8 total, 8 up, 8 in
2026-03-10T07:35:14.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:14 vm00 bash[28005]: audit 2026-03-10T07:35:13.356223+0000 mon.b (mon.1) 442 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-77","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:35:14.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:14 vm00 bash[28005]: audit 2026-03-10T07:35:13.357653+0000 mon.a (mon.0) 2651 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-77","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:35:14.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:14 vm00 bash[20701]: cluster 2026-03-10T07:35:12.673595+0000 mgr.y (mgr.24407) 393 : cluster [DBG] pgmap v627: 260 pgs: 260 active+clean; 8.3 MiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:35:14.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:14 vm00 bash[20701]: audit 2026-03-10T07:35:13.268222+0000 mgr.y (mgr.24407) 394 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:35:14.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:14 vm00 bash[20701]: cluster 2026-03-10T07:35:13.340470+0000 mon.a (mon.0) 2650 : cluster [DBG] osdmap e421: 8 total, 8 up, 8 in
2026-03-10T07:35:14.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:14 vm00 bash[20701]: audit 2026-03-10T07:35:13.356223+0000 mon.b (mon.1) 442 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-77","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:35:14.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:14 vm00 bash[20701]: audit 2026-03-10T07:35:13.357653+0000 mon.a (mon.0) 2651 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-77","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:35:15.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:15 vm00 bash[28005]: audit 2026-03-10T07:35:14.501883+0000 mon.a (mon.0) 2652 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-77","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:35:15.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:15 vm00 bash[28005]: audit 2026-03-10T07:35:14.521450+0000 mon.b (mon.1) 443 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:35:15.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:15 vm00 bash[28005]: cluster 2026-03-10T07:35:14.526197+0000 mon.a (mon.0) 2653 : cluster [DBG] osdmap e422: 8 total, 8 up, 8 in
2026-03-10T07:35:15.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:15 vm00 bash[28005]: cluster 2026-03-10T07:35:14.674482+0000 mgr.y (mgr.24407) 395 : cluster [DBG] pgmap v630: 292 pgs: 14 unknown, 278 active+clean; 8.3 MiB data, 968 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:35:15.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:15 vm00 bash[20701]: audit 2026-03-10T07:35:14.501883+0000 mon.a (mon.0) 2652 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-77","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:35:15.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:15 vm00 bash[20701]: audit 2026-03-10T07:35:14.521450+0000 mon.b (mon.1) 443 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:35:15.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:15 vm00 bash[20701]: cluster 2026-03-10T07:35:14.526197+0000 mon.a (mon.0) 2653 : cluster [DBG] osdmap e422: 8 total, 8 up, 8 in
2026-03-10T07:35:15.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:15 vm00 bash[20701]: cluster 2026-03-10T07:35:14.674482+0000 mgr.y (mgr.24407) 395 : cluster [DBG] pgmap v630: 292 pgs: 14 unknown, 278 active+clean; 8.3 MiB data, 968 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:35:16.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:15 vm03 bash[23382]: audit 2026-03-10T07:35:14.501883+0000 mon.a (mon.0) 2652 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-77","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:35:16.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:15 vm03 bash[23382]: audit 2026-03-10T07:35:14.521450+0000 mon.b (mon.1) 443 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:35:16.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:15 vm03 bash[23382]: cluster 2026-03-10T07:35:14.526197+0000 mon.a (mon.0) 2653 : cluster [DBG] osdmap e422: 8 total, 8 up, 8 in
2026-03-10T07:35:16.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:15 vm03 bash[23382]: cluster 2026-03-10T07:35:14.674482+0000 mgr.y (mgr.24407) 395 : cluster [DBG] pgmap v630: 292 pgs: 14 unknown, 278 active+clean; 8.3 MiB data, 968 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:35:16.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:16 vm00 bash[28005]: cluster 2026-03-10T07:35:15.519548+0000 mon.a (mon.0) 2654 : cluster [DBG] osdmap e423: 8 total, 8 up, 8 in
2026-03-10T07:35:16.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:16 vm00 bash[28005]: audit 2026-03-10T07:35:15.564856+0000 mon.b (mon.1) 444 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:35:16.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:16 vm00 bash[28005]: audit 2026-03-10T07:35:15.565989+0000 mon.b (mon.1) 445 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-77"}]: dispatch
2026-03-10T07:35:16.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:16 vm00 bash[28005]: audit 2026-03-10T07:35:15.566450+0000 mon.a (mon.0) 2655 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:35:16.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:16 vm00 bash[28005]: audit 2026-03-10T07:35:15.567391+0000 mon.a (mon.0) 2656 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-77"}]: dispatch
2026-03-10T07:35:16.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:16 vm00 bash[28005]: cluster 2026-03-10T07:35:16.271162+0000 mon.a (mon.0) 2657 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:35:16.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:16 vm00 bash[20701]: cluster 2026-03-10T07:35:15.519548+0000 mon.a (mon.0) 2654 : cluster [DBG] osdmap e423: 8 total, 8 up, 8 in
2026-03-10T07:35:16.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:16 vm00 bash[20701]: audit 2026-03-10T07:35:15.564856+0000 mon.b (mon.1) 444 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:35:16.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:16 vm00 bash[20701]: audit 2026-03-10T07:35:15.565989+0000 mon.b (mon.1) 445 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-77"}]: dispatch
2026-03-10T07:35:16.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:16 vm00 bash[20701]: audit 2026-03-10T07:35:15.566450+0000 mon.a (mon.0) 2655 : audit [INF] from='client.?
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:16.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:16 vm00 bash[20701]: audit 2026-03-10T07:35:15.566450+0000 mon.a (mon.0) 2655 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:16.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:16 vm00 bash[20701]: audit 2026-03-10T07:35:15.567391+0000 mon.a (mon.0) 2656 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-77"}]: dispatch 2026-03-10T07:35:16.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:16 vm00 bash[20701]: audit 2026-03-10T07:35:15.567391+0000 mon.a (mon.0) 2656 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-77"}]: dispatch 2026-03-10T07:35:16.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:16 vm00 bash[20701]: cluster 2026-03-10T07:35:16.271162+0000 mon.a (mon.0) 2657 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:35:16.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:16 vm00 bash[20701]: cluster 2026-03-10T07:35:16.271162+0000 mon.a (mon.0) 2657 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:35:17.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:16 vm03 bash[23382]: cluster 2026-03-10T07:35:15.519548+0000 mon.a (mon.0) 2654 : cluster [DBG] osdmap e423: 8 total, 8 up, 8 in 2026-03-10T07:35:17.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:16 vm03 bash[23382]: cluster 2026-03-10T07:35:15.519548+0000 mon.a (mon.0) 2654 : cluster [DBG] osdmap e423: 8 total, 8 up, 8 in 2026-03-10T07:35:17.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:16 vm03 bash[23382]: audit 2026-03-10T07:35:15.564856+0000 mon.b (mon.1) 444 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:17.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:16 vm03 bash[23382]: audit 2026-03-10T07:35:15.564856+0000 mon.b (mon.1) 444 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:17.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:16 vm03 bash[23382]: audit 2026-03-10T07:35:15.565989+0000 mon.b (mon.1) 445 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-77"}]: dispatch 2026-03-10T07:35:17.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:16 vm03 bash[23382]: audit 2026-03-10T07:35:15.565989+0000 mon.b (mon.1) 445 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-77"}]: dispatch 2026-03-10T07:35:17.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:16 vm03 bash[23382]: audit 2026-03-10T07:35:15.566450+0000 mon.a (mon.0) 2655 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:17.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:16 vm03 bash[23382]: audit 2026-03-10T07:35:15.566450+0000 mon.a (mon.0) 2655 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:17.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:16 vm03 bash[23382]: audit 2026-03-10T07:35:15.567391+0000 mon.a (mon.0) 2656 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-77"}]: dispatch 2026-03-10T07:35:17.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:16 vm03 bash[23382]: audit 2026-03-10T07:35:15.567391+0000 mon.a (mon.0) 2656 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-77"}]: dispatch 2026-03-10T07:35:17.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:16 vm03 bash[23382]: cluster 2026-03-10T07:35:16.271162+0000 mon.a (mon.0) 2657 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:35:17.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:16 vm03 bash[23382]: cluster 2026-03-10T07:35:16.271162+0000 mon.a (mon.0) 2657 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:35:17.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:17 vm00 bash[28005]: cluster 2026-03-10T07:35:16.534171+0000 mon.a (mon.0) 2658 : cluster [DBG] osdmap e424: 8 total, 8 up, 8 in 2026-03-10T07:35:17.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:17 vm00 bash[28005]: cluster 2026-03-10T07:35:16.534171+0000 mon.a (mon.0) 2658 : cluster [DBG] osdmap e424: 8 total, 8 up, 8 in 2026-03-10T07:35:17.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:17 vm00 bash[28005]: cluster 2026-03-10T07:35:16.674860+0000 mgr.y (mgr.24407) 396 : cluster [DBG] pgmap v633: 260 pgs: 260 active+clean; 8.3 MiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:35:17.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:17 vm00 bash[28005]: cluster 2026-03-10T07:35:16.674860+0000 mgr.y (mgr.24407) 396 : cluster [DBG] pgmap v633: 260 pgs: 260 active+clean; 8.3 MiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:35:17.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:17 vm00 bash[20701]: cluster 2026-03-10T07:35:16.534171+0000 mon.a (mon.0) 2658 : cluster [DBG] osdmap e424: 8 total, 8 up, 8 in 2026-03-10T07:35:17.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:17 vm00 bash[20701]: cluster 2026-03-10T07:35:16.534171+0000 mon.a (mon.0) 2658 : cluster [DBG] osdmap e424: 8 total, 8 up, 8 in 2026-03-10T07:35:17.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:17 vm00 bash[20701]: cluster 2026-03-10T07:35:16.674860+0000 mgr.y (mgr.24407) 396 : 
cluster [DBG] pgmap v633: 260 pgs: 260 active+clean; 8.3 MiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:35:17.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:17 vm00 bash[20701]: cluster 2026-03-10T07:35:16.674860+0000 mgr.y (mgr.24407) 396 : cluster [DBG] pgmap v633: 260 pgs: 260 active+clean; 8.3 MiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:35:18.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:17 vm03 bash[23382]: cluster 2026-03-10T07:35:16.534171+0000 mon.a (mon.0) 2658 : cluster [DBG] osdmap e424: 8 total, 8 up, 8 in 2026-03-10T07:35:18.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:17 vm03 bash[23382]: cluster 2026-03-10T07:35:16.534171+0000 mon.a (mon.0) 2658 : cluster [DBG] osdmap e424: 8 total, 8 up, 8 in 2026-03-10T07:35:18.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:17 vm03 bash[23382]: cluster 2026-03-10T07:35:16.674860+0000 mgr.y (mgr.24407) 396 : cluster [DBG] pgmap v633: 260 pgs: 260 active+clean; 8.3 MiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:35:18.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:17 vm03 bash[23382]: cluster 2026-03-10T07:35:16.674860+0000 mgr.y (mgr.24407) 396 : cluster [DBG] pgmap v633: 260 pgs: 260 active+clean; 8.3 MiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:35:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:18 vm03 bash[23382]: cluster 2026-03-10T07:35:17.558104+0000 mon.a (mon.0) 2659 : cluster [DBG] osdmap e425: 8 total, 8 up, 8 in 2026-03-10T07:35:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:18 vm03 bash[23382]: cluster 2026-03-10T07:35:17.558104+0000 mon.a (mon.0) 2659 : cluster [DBG] osdmap e425: 8 total, 8 up, 8 in 2026-03-10T07:35:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:18 vm03 bash[23382]: audit 2026-03-10T07:35:17.561117+0000 mon.b (mon.1) 446 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:18 vm03 bash[23382]: audit 2026-03-10T07:35:17.561117+0000 mon.b (mon.1) 446 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:18 vm03 bash[23382]: audit 2026-03-10T07:35:17.567567+0000 mon.a (mon.0) 2660 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:18 vm03 bash[23382]: audit 2026-03-10T07:35:17.567567+0000 mon.a (mon.0) 2660 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:19.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:18 vm00 bash[28005]: cluster 2026-03-10T07:35:17.558104+0000 mon.a (mon.0) 2659 : cluster [DBG] osdmap e425: 8 total, 8 up, 8 in 2026-03-10T07:35:19.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:18 vm00 bash[28005]: cluster 2026-03-10T07:35:17.558104+0000 mon.a (mon.0) 2659 : cluster [DBG] osdmap e425: 8 total, 8 up, 8 in 2026-03-10T07:35:19.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:18 vm00 bash[28005]: audit 2026-03-10T07:35:17.561117+0000 mon.b (mon.1) 446 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:19.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:18 vm00 bash[28005]: audit 2026-03-10T07:35:17.561117+0000 mon.b (mon.1) 446 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:19.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:18 vm00 bash[28005]: audit 2026-03-10T07:35:17.567567+0000 mon.a (mon.0) 2660 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:19.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:18 vm00 bash[28005]: audit 2026-03-10T07:35:17.567567+0000 mon.a (mon.0) 2660 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:19.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:18 vm00 bash[20701]: cluster 2026-03-10T07:35:17.558104+0000 mon.a (mon.0) 2659 : cluster [DBG] osdmap e425: 8 total, 8 up, 8 in 2026-03-10T07:35:19.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:18 vm00 bash[20701]: cluster 2026-03-10T07:35:17.558104+0000 mon.a (mon.0) 2659 : cluster [DBG] osdmap e425: 8 total, 8 up, 8 in 2026-03-10T07:35:19.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:18 vm00 bash[20701]: audit 2026-03-10T07:35:17.561117+0000 mon.b (mon.1) 446 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:19.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:18 vm00 bash[20701]: audit 2026-03-10T07:35:17.561117+0000 mon.b (mon.1) 446 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:19.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:18 vm00 bash[20701]: audit 2026-03-10T07:35:17.567567+0000 mon.a (mon.0) 2660 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:19.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:18 vm00 bash[20701]: audit 2026-03-10T07:35:17.567567+0000 mon.a (mon.0) 2660 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:19 vm03 bash[23382]: cluster 2026-03-10T07:35:18.675193+0000 mgr.y (mgr.24407) 397 : cluster [DBG] pgmap v635: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 736 B/s wr, 2 op/s 2026-03-10T07:35:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:19 vm03 bash[23382]: cluster 2026-03-10T07:35:18.675193+0000 mgr.y (mgr.24407) 397 : cluster [DBG] pgmap v635: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 736 B/s wr, 2 op/s 2026-03-10T07:35:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:19 vm03 bash[23382]: audit 2026-03-10T07:35:18.739870+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:35:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:19 vm03 bash[23382]: audit 2026-03-10T07:35:18.739870+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:35:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:19 vm03 bash[23382]: audit 2026-03-10T07:35:18.807988+0000 mon.b (mon.1) 447 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:35:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:19 vm03 bash[23382]: audit 2026-03-10T07:35:18.807988+0000 mon.b (mon.1) 447 : audit [DBG] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:35:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:19 vm03 bash[23382]: cluster 2026-03-10T07:35:18.809639+0000 mon.a (mon.0) 2662 : cluster [DBG] osdmap e426: 8 total, 8 up, 8 in 2026-03-10T07:35:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:19 vm03 bash[23382]: cluster 2026-03-10T07:35:18.809639+0000 mon.a (mon.0) 2662 : cluster [DBG] osdmap e426: 8 total, 8 up, 8 in 2026-03-10T07:35:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:19 vm03 bash[23382]: cluster 2026-03-10T07:35:19.750031+0000 mon.a (mon.0) 2663 : cluster [DBG] osdmap e427: 8 total, 8 up, 8 in 2026-03-10T07:35:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:19 vm03 bash[23382]: cluster 2026-03-10T07:35:19.750031+0000 mon.a (mon.0) 2663 : cluster [DBG] osdmap e427: 8 total, 8 up, 8 in 2026-03-10T07:35:20.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:19 vm00 bash[28005]: cluster 2026-03-10T07:35:18.675193+0000 mgr.y (mgr.24407) 397 : cluster [DBG] pgmap v635: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 736 B/s wr, 2 op/s 2026-03-10T07:35:20.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:19 vm00 bash[28005]: cluster 2026-03-10T07:35:18.675193+0000 mgr.y (mgr.24407) 397 : cluster [DBG] pgmap v635: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 736 B/s wr, 2 op/s 2026-03-10T07:35:20.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:19 vm00 bash[28005]: audit 2026-03-10T07:35:18.739870+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:35:20.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:19 vm00 bash[28005]: audit 2026-03-10T07:35:18.739870+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:35:20.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:19 vm00 bash[28005]: audit 2026-03-10T07:35:18.807988+0000 mon.b (mon.1) 447 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:35:20.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:19 vm00 bash[28005]: audit 2026-03-10T07:35:18.807988+0000 mon.b (mon.1) 447 : audit [DBG] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:35:20.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:19 vm00 bash[28005]: cluster 2026-03-10T07:35:18.809639+0000 mon.a (mon.0) 2662 : cluster [DBG] osdmap e426: 8 total, 8 up, 8 in 2026-03-10T07:35:20.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:19 vm00 bash[28005]: cluster 2026-03-10T07:35:18.809639+0000 mon.a (mon.0) 2662 : cluster [DBG] osdmap e426: 8 total, 8 up, 8 in 2026-03-10T07:35:20.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:19 vm00 bash[28005]: cluster 2026-03-10T07:35:19.750031+0000 mon.a (mon.0) 2663 : cluster [DBG] osdmap e427: 8 total, 8 up, 8 in 2026-03-10T07:35:20.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:19 vm00 bash[28005]: cluster 2026-03-10T07:35:19.750031+0000 mon.a (mon.0) 2663 : cluster [DBG] osdmap e427: 8 total, 8 up, 8 in 2026-03-10T07:35:20.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:19 vm00 bash[20701]: cluster 2026-03-10T07:35:18.675193+0000 mgr.y (mgr.24407) 397 : cluster [DBG] pgmap v635: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 736 B/s wr, 2 op/s 2026-03-10T07:35:20.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:19 vm00 bash[20701]: cluster 2026-03-10T07:35:18.675193+0000 mgr.y (mgr.24407) 397 : cluster [DBG] pgmap v635: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 968 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 736 B/s wr, 2 op/s 2026-03-10T07:35:20.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:19 vm00 bash[20701]: audit 2026-03-10T07:35:18.739870+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:35:20.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:19 vm00 bash[20701]: audit 2026-03-10T07:35:18.739870+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:35:20.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:19 vm00 bash[20701]: audit 2026-03-10T07:35:18.807988+0000 mon.b (mon.1) 447 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:35:20.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:19 vm00 bash[20701]: audit 2026-03-10T07:35:18.807988+0000 mon.b (mon.1) 447 : audit [DBG] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:35:20.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:19 vm00 bash[20701]: cluster 2026-03-10T07:35:18.809639+0000 mon.a (mon.0) 2662 : cluster [DBG] osdmap e426: 8 total, 8 up, 8 in 2026-03-10T07:35:20.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:19 vm00 bash[20701]: cluster 2026-03-10T07:35:18.809639+0000 mon.a (mon.0) 2662 : cluster [DBG] osdmap e426: 8 total, 8 up, 8 in 2026-03-10T07:35:20.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:19 vm00 bash[20701]: cluster 2026-03-10T07:35:19.750031+0000 mon.a (mon.0) 2663 : cluster [DBG] osdmap e427: 8 total, 8 up, 8 in 2026-03-10T07:35:20.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:19 vm00 bash[20701]: cluster 2026-03-10T07:35:19.750031+0000 mon.a (mon.0) 2663 : cluster [DBG] osdmap e427: 8 total, 8 up, 8 in 2026-03-10T07:35:21.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:35:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:35:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:35:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:21 vm00 bash[28005]: cluster 2026-03-10T07:35:20.675559+0000 mgr.y (mgr.24407) 398 : cluster [DBG] pgmap v638: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 988 B/s wr, 2 op/s 2026-03-10T07:35:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:21 vm00 bash[28005]: cluster 2026-03-10T07:35:20.675559+0000 mgr.y (mgr.24407) 398 : cluster [DBG] pgmap v638: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 988 B/s wr, 2 op/s 2026-03-10T07:35:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:21 vm00 bash[28005]: cluster 2026-03-10T07:35:20.775954+0000 mon.a (mon.0) 2664 : cluster [DBG] osdmap e428: 8 total, 8 up, 8 in 2026-03-10T07:35:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:21 vm00 bash[28005]: cluster 2026-03-10T07:35:20.775954+0000 mon.a (mon.0) 2664 : cluster [DBG] osdmap e428: 8 total, 8 up, 8 in 2026-03-10T07:35:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:21 vm00 bash[28005]: audit 2026-03-10T07:35:20.803613+0000 mon.b (mon.1) 448 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.7"}]: dispatch 2026-03-10T07:35:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:21 vm00 bash[28005]: audit 2026-03-10T07:35:20.803613+0000 mon.b (mon.1) 448 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.7"}]: dispatch 2026-03-10T07:35:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:21 vm00 bash[28005]: audit 2026-03-10T07:35:20.804983+0000 mgr.y (mgr.24407) 399 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.7"}]: dispatch 2026-03-10T07:35:22.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:21 vm00 bash[28005]: audit 2026-03-10T07:35:20.804983+0000 mgr.y (mgr.24407) 399 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "297.7"}]: dispatch 2026-03-10T07:35:22.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:21 vm00 bash[20701]: cluster 2026-03-10T07:35:20.675559+0000 mgr.y (mgr.24407) 398 : cluster [DBG] pgmap v638: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 988 B/s wr, 2 op/s 2026-03-10T07:35:22.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:21 vm00 bash[20701]: cluster 2026-03-10T07:35:20.675559+0000 mgr.y (mgr.24407) 398 : cluster [DBG] pgmap v638: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 988 B/s wr, 2 op/s 2026-03-10T07:35:22.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:21 vm00 bash[20701]: cluster 2026-03-10T07:35:20.775954+0000 mon.a (mon.0) 2664 : cluster [DBG] osdmap e428: 8 total, 8 up, 8 in 2026-03-10T07:35:22.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:21 vm00 bash[20701]: cluster 2026-03-10T07:35:20.775954+0000 mon.a (mon.0) 2664 : cluster [DBG] osdmap e428: 8 total, 8 up, 8 in 2026-03-10T07:35:22.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:21 vm00 bash[20701]: audit 2026-03-10T07:35:20.803613+0000 mon.b (mon.1) 448 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.7"}]: dispatch 2026-03-10T07:35:22.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:21 vm00 bash[20701]: audit 2026-03-10T07:35:20.803613+0000 mon.b (mon.1) 448 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.7"}]: dispatch 2026-03-10T07:35:22.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:21 vm00 bash[20701]: audit 2026-03-10T07:35:20.804983+0000 mgr.y (mgr.24407) 399 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.7"}]: dispatch 2026-03-10T07:35:22.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:21 vm00 bash[20701]: audit 2026-03-10T07:35:20.804983+0000 mgr.y (mgr.24407) 399 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.7"}]: dispatch 2026-03-10T07:35:22.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:21 vm03 bash[23382]: cluster 2026-03-10T07:35:20.675559+0000 mgr.y (mgr.24407) 398 : cluster [DBG] pgmap v638: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 988 B/s wr, 2 op/s 2026-03-10T07:35:22.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:21 vm03 bash[23382]: cluster 2026-03-10T07:35:20.675559+0000 mgr.y (mgr.24407) 398 : cluster [DBG] pgmap v638: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 988 B/s wr, 2 op/s 2026-03-10T07:35:22.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:21 vm03 bash[23382]: cluster 2026-03-10T07:35:20.775954+0000 mon.a (mon.0) 2664 : cluster [DBG] osdmap e428: 8 total, 8 up, 8 in 2026-03-10T07:35:22.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:21 vm03 bash[23382]: cluster 2026-03-10T07:35:20.775954+0000 mon.a (mon.0) 2664 : cluster [DBG] osdmap e428: 8 total, 8 up, 8 in 2026-03-10T07:35:22.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:21 vm03 bash[23382]: audit 2026-03-10T07:35:20.803613+0000 mon.b (mon.1) 448 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.7"}]: dispatch 2026-03-10T07:35:22.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:21 vm03 bash[23382]: audit 2026-03-10T07:35:20.803613+0000 mon.b (mon.1) 448 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.7"}]: dispatch 2026-03-10T07:35:22.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:21 vm03 bash[23382]: audit 2026-03-10T07:35:20.804983+0000 mgr.y (mgr.24407) 399 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.7"}]: dispatch 2026-03-10T07:35:22.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:21 vm03 bash[23382]: audit 2026-03-10T07:35:20.804983+0000 mgr.y (mgr.24407) 399 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.7"}]: dispatch 2026-03-10T07:35:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:22 vm00 bash[28005]: cluster 2026-03-10T07:35:21.263551+0000 osd.3 (osd.3) 13 : cluster [DBG] 297.7 deep-scrub starts 2026-03-10T07:35:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:22 vm00 bash[28005]: cluster 2026-03-10T07:35:21.263551+0000 osd.3 (osd.3) 13 : cluster [DBG] 297.7 deep-scrub starts 2026-03-10T07:35:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:22 vm00 bash[28005]: cluster 2026-03-10T07:35:21.264815+0000 osd.3 (osd.3) 14 : cluster [DBG] 297.7 deep-scrub ok 2026-03-10T07:35:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:22 vm00 bash[28005]: cluster 2026-03-10T07:35:21.264815+0000 osd.3 (osd.3) 14 : cluster [DBG] 297.7 deep-scrub ok 2026-03-10T07:35:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:22 vm00 bash[20701]: cluster 2026-03-10T07:35:21.263551+0000 osd.3 (osd.3) 13 : cluster [DBG] 297.7 deep-scrub starts 2026-03-10T07:35:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:22 vm00 bash[20701]: cluster 2026-03-10T07:35:21.263551+0000 osd.3 (osd.3) 13 : cluster [DBG] 297.7 deep-scrub starts 2026-03-10T07:35:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:22 vm00 bash[20701]: cluster 2026-03-10T07:35:21.264815+0000 osd.3 (osd.3) 14 : cluster [DBG] 297.7 deep-scrub ok 2026-03-10T07:35:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:22 vm00 bash[20701]: cluster 2026-03-10T07:35:21.264815+0000 osd.3 (osd.3) 14 : cluster [DBG] 297.7 deep-scrub ok 2026-03-10T07:35:23.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:22 vm03 bash[23382]: cluster 2026-03-10T07:35:21.263551+0000 osd.3 (osd.3) 13 : cluster [DBG] 297.7 deep-scrub starts 2026-03-10T07:35:23.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:22 vm03 bash[23382]: cluster 2026-03-10T07:35:21.263551+0000 osd.3 (osd.3) 13 : cluster [DBG] 297.7 deep-scrub starts 2026-03-10T07:35:23.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:22 vm03 bash[23382]: cluster 2026-03-10T07:35:21.264815+0000 osd.3 (osd.3) 14 : cluster [DBG] 297.7 deep-scrub ok 2026-03-10T07:35:23.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:22 vm03 bash[23382]: cluster 2026-03-10T07:35:21.264815+0000 osd.3 (osd.3) 14 : cluster [DBG] 297.7 deep-scrub ok 2026-03-10T07:35:23.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:35:23 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:35:24.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:24 vm00 bash[28005]: cluster 2026-03-10T07:35:22.675949+0000 mgr.y (mgr.24407) 400 : cluster [DBG] 
pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 799 B/s wr, 2 op/s 2026-03-10T07:35:24.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:24 vm00 bash[28005]: cluster 2026-03-10T07:35:22.675949+0000 mgr.y (mgr.24407) 400 : cluster [DBG] pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 799 B/s wr, 2 op/s 2026-03-10T07:35:24.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:24 vm00 bash[28005]: audit 2026-03-10T07:35:23.270022+0000 mgr.y (mgr.24407) 401 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:35:24.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:24 vm00 bash[28005]: audit 2026-03-10T07:35:23.270022+0000 mgr.y (mgr.24407) 401 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:35:24.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:23 vm00 bash[20701]: cluster 2026-03-10T07:35:22.675949+0000 mgr.y (mgr.24407) 400 : cluster [DBG] pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 799 B/s wr, 2 op/s 2026-03-10T07:35:24.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:23 vm00 bash[20701]: cluster 2026-03-10T07:35:22.675949+0000 mgr.y (mgr.24407) 400 : cluster [DBG] pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 799 B/s wr, 2 op/s 2026-03-10T07:35:24.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:23 vm00 bash[20701]: audit 2026-03-10T07:35:23.270022+0000 mgr.y (mgr.24407) 401 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:35:24.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:23 vm00 bash[20701]: audit 2026-03-10T07:35:23.270022+0000 mgr.y (mgr.24407) 401 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:35:24.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:24 vm03 bash[23382]: cluster 2026-03-10T07:35:22.675949+0000 mgr.y (mgr.24407) 400 : cluster [DBG] pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 799 B/s wr, 2 op/s 2026-03-10T07:35:24.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:24 vm03 bash[23382]: cluster 2026-03-10T07:35:22.675949+0000 mgr.y (mgr.24407) 400 : cluster [DBG] pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 799 B/s wr, 2 op/s 2026-03-10T07:35:24.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:24 vm03 bash[23382]: audit 2026-03-10T07:35:23.270022+0000 mgr.y (mgr.24407) 401 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:35:24.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:24 vm03 bash[23382]: audit 2026-03-10T07:35:23.270022+0000 mgr.y (mgr.24407) 401 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:35:25.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:24 vm03 bash[23382]: audit 2026-03-10T07:35:24.534558+0000 mon.c (mon.2) 308 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' 
entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:35:25.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:24 vm03 bash[23382]: audit 2026-03-10T07:35:24.534558+0000 mon.c (mon.2) 308 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:35:25.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:24 vm00 bash[28005]: audit 2026-03-10T07:35:24.534558+0000 mon.c (mon.2) 308 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:35:25.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:24 vm00 bash[28005]: audit 2026-03-10T07:35:24.534558+0000 mon.c (mon.2) 308 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:35:25.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:24 vm00 bash[20701]: audit 2026-03-10T07:35:24.534558+0000 mon.c (mon.2) 308 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:35:25.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:24 vm00 bash[20701]: audit 2026-03-10T07:35:24.534558+0000 mon.c (mon.2) 308 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:35:26.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:26 vm00 bash[28005]: cluster 2026-03-10T07:35:24.676523+0000 mgr.y (mgr.24407) 402 : cluster [DBG] pgmap v641: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 682 B/s wr, 2 op/s 2026-03-10T07:35:26.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:26 vm00 bash[28005]: cluster 2026-03-10T07:35:24.676523+0000 mgr.y (mgr.24407) 402 : cluster [DBG] pgmap v641: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 682 B/s wr, 2 op/s 2026-03-10T07:35:26.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:26 vm00 bash[20701]: cluster 2026-03-10T07:35:24.676523+0000 mgr.y (mgr.24407) 402 : cluster [DBG] pgmap v641: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 682 B/s wr, 2 op/s 2026-03-10T07:35:26.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:26 vm00 bash[20701]: cluster 2026-03-10T07:35:24.676523+0000 mgr.y (mgr.24407) 402 : cluster [DBG] pgmap v641: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 682 B/s wr, 2 op/s 2026-03-10T07:35:27.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:26 vm03 bash[23382]: cluster 2026-03-10T07:35:24.676523+0000 mgr.y (mgr.24407) 402 : cluster [DBG] pgmap v641: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 682 B/s wr, 2 op/s 2026-03-10T07:35:27.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:26 vm03 bash[23382]: cluster 2026-03-10T07:35:24.676523+0000 mgr.y (mgr.24407) 402 : cluster [DBG] pgmap v641: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 682 B/s wr, 2 op/s 2026-03-10T07:35:27.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:27 vm03 bash[23382]: cluster 2026-03-10T07:35:26.677126+0000 mgr.y (mgr.24407) 403 : cluster [DBG] pgmap v642: 292 pgs: 292 active+clean; 8.3 
MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 791 B/s wr, 3 op/s 2026-03-10T07:35:27.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:27 vm03 bash[23382]: cluster 2026-03-10T07:35:26.677126+0000 mgr.y (mgr.24407) 403 : cluster [DBG] pgmap v642: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 791 B/s wr, 3 op/s 2026-03-10T07:35:27.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:27 vm00 bash[28005]: cluster 2026-03-10T07:35:26.677126+0000 mgr.y (mgr.24407) 403 : cluster [DBG] pgmap v642: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 791 B/s wr, 3 op/s 2026-03-10T07:35:27.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:27 vm00 bash[28005]: cluster 2026-03-10T07:35:26.677126+0000 mgr.y (mgr.24407) 403 : cluster [DBG] pgmap v642: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 791 B/s wr, 3 op/s 2026-03-10T07:35:27.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:27 vm00 bash[20701]: cluster 2026-03-10T07:35:26.677126+0000 mgr.y (mgr.24407) 403 : cluster [DBG] pgmap v642: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 791 B/s wr, 3 op/s 2026-03-10T07:35:27.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:27 vm00 bash[20701]: cluster 2026-03-10T07:35:26.677126+0000 mgr.y (mgr.24407) 403 : cluster [DBG] pgmap v642: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 791 B/s wr, 3 op/s 2026-03-10T07:35:30.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:29 vm00 bash[28005]: cluster 2026-03-10T07:35:28.677648+0000 mgr.y (mgr.24407) 404 : cluster [DBG] pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 573 B/s rd, 229 B/s wr, 1 op/s 2026-03-10T07:35:30.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:29 vm00 bash[28005]: cluster 2026-03-10T07:35:28.677648+0000 mgr.y (mgr.24407) 404 : cluster [DBG] pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 573 B/s rd, 229 B/s wr, 1 op/s 2026-03-10T07:35:30.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:29 vm00 bash[20701]: cluster 2026-03-10T07:35:28.677648+0000 mgr.y (mgr.24407) 404 : cluster [DBG] pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 573 B/s rd, 229 B/s wr, 1 op/s 2026-03-10T07:35:30.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:29 vm00 bash[20701]: cluster 2026-03-10T07:35:28.677648+0000 mgr.y (mgr.24407) 404 : cluster [DBG] pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 573 B/s rd, 229 B/s wr, 1 op/s 2026-03-10T07:35:30.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:29 vm03 bash[23382]: cluster 2026-03-10T07:35:28.677648+0000 mgr.y (mgr.24407) 404 : cluster [DBG] pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 573 B/s rd, 229 B/s wr, 1 op/s 2026-03-10T07:35:30.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:29 vm03 bash[23382]: cluster 2026-03-10T07:35:28.677648+0000 mgr.y (mgr.24407) 404 : cluster [DBG] pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 573 B/s rd, 229 B/s wr, 1 op/s 2026-03-10T07:35:31.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:35:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:35:31] "GET /metrics HTTP/1.1" 503 1621 "" 
"Prometheus/2.51.0" 2026-03-10T07:35:32.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:31 vm00 bash[28005]: cluster 2026-03-10T07:35:30.678585+0000 mgr.y (mgr.24407) 405 : cluster [DBG] pgmap v644: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T07:35:32.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:31 vm00 bash[28005]: cluster 2026-03-10T07:35:30.678585+0000 mgr.y (mgr.24407) 405 : cluster [DBG] pgmap v644: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T07:35:32.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:31 vm00 bash[20701]: cluster 2026-03-10T07:35:30.678585+0000 mgr.y (mgr.24407) 405 : cluster [DBG] pgmap v644: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T07:35:32.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:31 vm00 bash[20701]: cluster 2026-03-10T07:35:30.678585+0000 mgr.y (mgr.24407) 405 : cluster [DBG] pgmap v644: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T07:35:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:31 vm03 bash[23382]: cluster 2026-03-10T07:35:30.678585+0000 mgr.y (mgr.24407) 405 : cluster [DBG] pgmap v644: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T07:35:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:31 vm03 bash[23382]: cluster 2026-03-10T07:35:30.678585+0000 mgr.y (mgr.24407) 405 : cluster [DBG] pgmap v644: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T07:35:33.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:35:33 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:35:34.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:33 vm00 bash[28005]: cluster 2026-03-10T07:35:32.678931+0000 mgr.y (mgr.24407) 406 : cluster [DBG] pgmap v645: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 859 B/s rd, 171 B/s wr, 1 op/s 2026-03-10T07:35:34.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:33 vm00 bash[28005]: cluster 2026-03-10T07:35:32.678931+0000 mgr.y (mgr.24407) 406 : cluster [DBG] pgmap v645: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 859 B/s rd, 171 B/s wr, 1 op/s 2026-03-10T07:35:34.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:33 vm00 bash[28005]: audit 2026-03-10T07:35:33.277214+0000 mgr.y (mgr.24407) 407 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:35:34.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:33 vm00 bash[28005]: audit 2026-03-10T07:35:33.277214+0000 mgr.y (mgr.24407) 407 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:35:34.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:33 vm00 bash[20701]: cluster 2026-03-10T07:35:32.678931+0000 mgr.y (mgr.24407) 406 : cluster [DBG] pgmap v645: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 859 B/s rd, 171 B/s wr, 1 op/s 2026-03-10T07:35:34.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:33 vm00 bash[20701]: cluster 
2026-03-10T07:35:32.678931+0000 mgr.y (mgr.24407) 406 : cluster [DBG] pgmap v645: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 859 B/s rd, 171 B/s wr, 1 op/s
2026-03-10T07:35:34.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:33 vm00 bash[20701]: audit 2026-03-10T07:35:33.277214+0000 mgr.y (mgr.24407) 407 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:35:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:33 vm03 bash[23382]: cluster 2026-03-10T07:35:32.678931+0000 mgr.y (mgr.24407) 406 : cluster [DBG] pgmap v645: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 859 B/s rd, 171 B/s wr, 1 op/s
2026-03-10T07:35:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:33 vm03 bash[23382]: audit 2026-03-10T07:35:33.277214+0000 mgr.y (mgr.24407) 407 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:35:36.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:36 vm00 bash[28005]: cluster 2026-03-10T07:35:34.679589+0000 mgr.y (mgr.24407) 408 : cluster [DBG] pgmap v646: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s
2026-03-10T07:35:36.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:36 vm00 bash[20701]: cluster 2026-03-10T07:35:34.679589+0000 mgr.y (mgr.24407) 408 : cluster [DBG] pgmap v646: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s
2026-03-10T07:35:36.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:36 vm03 bash[23382]: cluster 2026-03-10T07:35:34.679589+0000 mgr.y (mgr.24407) 408 : cluster [DBG] pgmap v646: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s
2026-03-10T07:35:38.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:38 vm00 bash[28005]: cluster 2026-03-10T07:35:36.680258+0000 mgr.y (mgr.24407) 409 : cluster [DBG] pgmap v647: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s
2026-03-10T07:35:38.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:38 vm00 bash[20701]: cluster 2026-03-10T07:35:36.680258+0000 mgr.y (mgr.24407) 409 : cluster [DBG] pgmap v647: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s
2026-03-10T07:35:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:38 vm03 bash[23382]: cluster 2026-03-10T07:35:36.680258+0000 mgr.y (mgr.24407) 409 : cluster [DBG] pgmap v647: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s
2026-03-10T07:35:40.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:40 vm00 bash[28005]: cluster 2026-03-10T07:35:38.680760+0000 mgr.y (mgr.24407) 410 : cluster [DBG] pgmap v648: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:35:40.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:40 vm00 bash[28005]: audit 2026-03-10T07:35:39.552533+0000 mon.c (mon.2) 309 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:35:40.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:40 vm00 bash[20701]: cluster 2026-03-10T07:35:38.680760+0000 mgr.y (mgr.24407) 410 : cluster [DBG] pgmap v648: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:35:40.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:40 vm00 bash[20701]: audit 2026-03-10T07:35:39.552533+0000 mon.c (mon.2) 309 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:35:40.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:40 vm03 bash[23382]: cluster 2026-03-10T07:35:38.680760+0000 mgr.y (mgr.24407) 410 : cluster [DBG] pgmap v648: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:35:40.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:40 vm03 bash[23382]: audit 2026-03-10T07:35:39.552533+0000 mon.c (mon.2) 309 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:35:41.078 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 07:35:40 vm00 bash[49191]: debug 2026-03-10T07:35:40.810+0000 7ff9e6451640 -1 snap_mapper.add_oid found existing snaps mapped on 297:e60a330c:test-rados-api-vm00-59782-80::foo:2, removing
2026-03-10T07:35:41.078 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 07:35:40 vm00 bash[42909]: debug 2026-03-10T07:35:40.810+0000 7f4fe95f4640 -1 snap_mapper.add_oid found existing snaps mapped on 297:e60a330c:test-rados-api-vm00-59782-80::foo:2, removing
2026-03-10T07:35:41.140 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 07:35:40 vm03 bash[26632]: debug 2026-03-10T07:35:40.810+0000 7fad47697640 -1 snap_mapper.add_oid found existing snaps mapped on 297:e60a330c:test-rados-api-vm00-59782-80::foo:2, removing
2026-03-10T07:35:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:41 vm00 bash[28005]: audit 2026-03-10T07:35:40.814932+0000 mon.b (mon.1) 449 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:35:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:41 vm00 bash[28005]: audit 2026-03-10T07:35:40.815997+0000 mon.b (mon.1) 450 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-79"}]: dispatch
2026-03-10T07:35:41.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:41 vm00 bash[28005]: audit 2026-03-10T07:35:40.816513+0000 mon.a (mon.0) 2665 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:35:41.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:41 vm00 bash[28005]: audit 2026-03-10T07:35:40.817394+0000 mon.a (mon.0) 2666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-79"}]: dispatch
2026-03-10T07:35:41.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:35:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:35:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:35:41.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:41 vm00 bash[20701]: audit 2026-03-10T07:35:40.814932+0000 mon.b (mon.1) 449 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:35:41.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:41 vm00 bash[20701]: audit 2026-03-10T07:35:40.815997+0000 mon.b (mon.1) 450 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-79"}]: dispatch
2026-03-10T07:35:41.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:41 vm00 bash[20701]: audit 2026-03-10T07:35:40.816513+0000 mon.a (mon.0) 2665 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:35:41.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:41 vm00 bash[20701]: audit 2026-03-10T07:35:40.817394+0000 mon.a (mon.0) 2666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-79"}]: dispatch
2026-03-10T07:35:41.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:41 vm03 bash[23382]: audit 2026-03-10T07:35:40.814932+0000 mon.b (mon.1) 449 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:35:41.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:41 vm03 bash[23382]: audit 2026-03-10T07:35:40.815997+0000 mon.b (mon.1) 450 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-79"}]: dispatch
2026-03-10T07:35:41.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:41 vm03 bash[23382]: audit 2026-03-10T07:35:40.816513+0000 mon.a (mon.0) 2665 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:35:41.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:41 vm03 bash[23382]: audit 2026-03-10T07:35:40.817394+0000 mon.a (mon.0) 2666 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-79"}]: dispatch
2026-03-10T07:35:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:42 vm03 bash[23382]: cluster 2026-03-10T07:35:40.681596+0000 mgr.y (mgr.24407) 411 : cluster [DBG] pgmap v649: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:35:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:42 vm03 bash[23382]: cluster 2026-03-10T07:35:41.164028+0000 mon.a (mon.0) 2667 : cluster [DBG] osdmap e429: 8 total, 8 up, 8 in
2026-03-10T07:35:42.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:42 vm00 bash[28005]: cluster 2026-03-10T07:35:40.681596+0000 mgr.y (mgr.24407) 411 : cluster [DBG] pgmap v649: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:35:42.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:42 vm00 bash[28005]: cluster 2026-03-10T07:35:41.164028+0000 mon.a (mon.0) 2667 : cluster [DBG] osdmap e429: 8 total, 8 up, 8 in
2026-03-10T07:35:42.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:42 vm00 bash[20701]: cluster 2026-03-10T07:35:40.681596+0000 mgr.y (mgr.24407) 411 : cluster [DBG] pgmap v649: 292 pgs: 292 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:35:42.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:42 vm00 bash[20701]: cluster 2026-03-10T07:35:41.164028+0000 mon.a (mon.0) 2667 : cluster [DBG] osdmap e429: 8 total, 8 up, 8 in
2026-03-10T07:35:43.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:35:43 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:35:43.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:43 vm03 bash[23382]: cluster 2026-03-10T07:35:42.172337+0000 mon.a (mon.0) 2668 : cluster [DBG] osdmap e430: 8 total, 8 up, 8 in
2026-03-10T07:35:43.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:43 vm03 bash[23382]: audit 2026-03-10T07:35:42.175369+0000 mon.b (mon.1) 451 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-81","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:35:43.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:43 vm03 bash[23382]: audit 2026-03-10T07:35:42.177432+0000 mon.a (mon.0) 2669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-81","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:35:43.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:43 vm00 bash[20701]: cluster 2026-03-10T07:35:42.172337+0000 mon.a (mon.0) 2668 : cluster [DBG] osdmap e430: 8 total, 8 up, 8 in
2026-03-10T07:35:43.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:43 vm00 bash[20701]: audit 2026-03-10T07:35:42.175369+0000 mon.b (mon.1) 451 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-81","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:35:43.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:43 vm00 bash[20701]: audit 2026-03-10T07:35:42.177432+0000 mon.a (mon.0) 2669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-81","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:35:43.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:43 vm00 bash[28005]: cluster 2026-03-10T07:35:42.172337+0000 mon.a (mon.0) 2668 : cluster [DBG] osdmap e430: 8 total, 8 up, 8 in
2026-03-10T07:35:43.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:43 vm00 bash[28005]: audit 2026-03-10T07:35:42.175369+0000 mon.b (mon.1) 451 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-81","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:35:43.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:43 vm00 bash[28005]: audit 2026-03-10T07:35:42.177432+0000 mon.a (mon.0) 2669 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-81","app": "rados","yes_i_really_mean_it": true}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: cluster 2026-03-10T07:35:42.682058+0000 mgr.y (mgr.24407) 412 : cluster [DBG] pgmap v652: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: cluster 2026-03-10T07:35:42.682058+0000 mgr.y (mgr.24407) 412 : cluster [DBG] pgmap v652: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: cluster 2026-03-10T07:35:43.157816+0000 mon.a (mon.0) 2670 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: cluster 2026-03-10T07:35:43.157816+0000 mon.a (mon.0) 2670 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: audit 2026-03-10T07:35:43.159578+0000 mon.a (mon.0) 2671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-81","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: audit 2026-03-10T07:35:43.159578+0000 mon.a (mon.0) 2671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-81","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: cluster 2026-03-10T07:35:43.175664+0000 mon.a (mon.0) 2672 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: cluster 2026-03-10T07:35:43.175664+0000 mon.a (mon.0) 2672 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: audit 2026-03-10T07:35:43.176559+0000 mon.b (mon.1) 452 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: audit 2026-03-10T07:35:43.176559+0000 mon.b (mon.1) 452 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: audit 2026-03-10T07:35:43.190666+0000 mon.b (mon.1) 453 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: audit 2026-03-10T07:35:43.190666+0000 mon.b (mon.1) 453 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: audit 2026-03-10T07:35:43.208863+0000 mon.a (mon.0) 2673 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: audit 2026-03-10T07:35:43.208863+0000 mon.a (mon.0) 2673 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: audit 2026-03-10T07:35:43.288095+0000 mgr.y (mgr.24407) 413 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: audit 2026-03-10T07:35:43.288095+0000 mgr.y (mgr.24407) 413 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: audit 2026-03-10T07:35:44.163042+0000 mon.a (mon.0) 2674 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: audit 2026-03-10T07:35:44.163042+0000 mon.a (mon.0) 2674 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: audit 2026-03-10T07:35:44.171559+0000 mon.b (mon.1) 454 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: audit 2026-03-10T07:35:44.171559+0000 mon.b (mon.1) 454 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: cluster 2026-03-10T07:35:44.177721+0000 mon.a (mon.0) 2675 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: cluster 2026-03-10T07:35:44.177721+0000 mon.a (mon.0) 2675 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: audit 2026-03-10T07:35:44.178297+0000 mon.a (mon.0) 2676 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:44.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:44 vm03 bash[23382]: audit 2026-03-10T07:35:44.178297+0000 mon.a (mon.0) 2676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: cluster 2026-03-10T07:35:42.682058+0000 mgr.y (mgr.24407) 412 : cluster [DBG] pgmap v652: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:35:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: cluster 2026-03-10T07:35:42.682058+0000 mgr.y (mgr.24407) 412 : cluster [DBG] pgmap v652: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:35:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: cluster 2026-03-10T07:35:43.157816+0000 mon.a (mon.0) 2670 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:35:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: cluster 2026-03-10T07:35:43.157816+0000 mon.a (mon.0) 2670 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:35:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: audit 2026-03-10T07:35:43.159578+0000 mon.a (mon.0) 2671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-81","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:35:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: audit 2026-03-10T07:35:43.159578+0000 mon.a (mon.0) 2671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-81","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:35:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: cluster 2026-03-10T07:35:43.175664+0000 mon.a (mon.0) 2672 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-10T07:35:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: cluster 2026-03-10T07:35:43.175664+0000 mon.a (mon.0) 2672 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-10T07:35:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: audit 2026-03-10T07:35:43.176559+0000 mon.b (mon.1) 452 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:35:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: audit 2026-03-10T07:35:43.176559+0000 mon.b (mon.1) 452 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:35:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: audit 2026-03-10T07:35:43.190666+0000 mon.b (mon.1) 453 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: audit 2026-03-10T07:35:43.190666+0000 mon.b (mon.1) 453 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: audit 2026-03-10T07:35:43.208863+0000 mon.a (mon.0) 2673 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: audit 2026-03-10T07:35:43.208863+0000 mon.a (mon.0) 2673 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: audit 2026-03-10T07:35:43.288095+0000 mgr.y (mgr.24407) 413 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: audit 2026-03-10T07:35:43.288095+0000 mgr.y (mgr.24407) 413 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: audit 2026-03-10T07:35:44.163042+0000 mon.a (mon.0) 2674 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: audit 2026-03-10T07:35:44.163042+0000 mon.a (mon.0) 2674 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: audit 2026-03-10T07:35:44.171559+0000 mon.b (mon.1) 454 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: audit 2026-03-10T07:35:44.171559+0000 mon.b (mon.1) 454 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: cluster 2026-03-10T07:35:44.177721+0000 mon.a (mon.0) 2675 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: cluster 2026-03-10T07:35:44.177721+0000 mon.a (mon.0) 2675 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: audit 2026-03-10T07:35:44.178297+0000 mon.a (mon.0) 2676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:44 vm00 bash[28005]: audit 2026-03-10T07:35:44.178297+0000 mon.a (mon.0) 2676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: cluster 2026-03-10T07:35:42.682058+0000 mgr.y (mgr.24407) 412 : cluster [DBG] pgmap v652: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: cluster 2026-03-10T07:35:42.682058+0000 mgr.y (mgr.24407) 412 : cluster [DBG] pgmap v652: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: cluster 2026-03-10T07:35:43.157816+0000 mon.a (mon.0) 2670 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: cluster 2026-03-10T07:35:43.157816+0000 mon.a (mon.0) 2670 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: audit 2026-03-10T07:35:43.159578+0000 mon.a (mon.0) 2671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-81","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: audit 2026-03-10T07:35:43.159578+0000 mon.a (mon.0) 2671 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-81","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: cluster 2026-03-10T07:35:43.175664+0000 mon.a (mon.0) 2672 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: cluster 2026-03-10T07:35:43.175664+0000 mon.a (mon.0) 2672 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: audit 2026-03-10T07:35:43.176559+0000 mon.b (mon.1) 452 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: audit 2026-03-10T07:35:43.176559+0000 mon.b (mon.1) 452 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: audit 2026-03-10T07:35:43.190666+0000 mon.b (mon.1) 453 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: audit 2026-03-10T07:35:43.190666+0000 mon.b (mon.1) 453 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: audit 2026-03-10T07:35:43.208863+0000 mon.a (mon.0) 2673 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: audit 2026-03-10T07:35:43.208863+0000 mon.a (mon.0) 2673 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: audit 2026-03-10T07:35:43.288095+0000 mgr.y (mgr.24407) 413 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: audit 2026-03-10T07:35:43.288095+0000 mgr.y (mgr.24407) 413 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: audit 2026-03-10T07:35:44.163042+0000 mon.a (mon.0) 2674 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: audit 2026-03-10T07:35:44.163042+0000 mon.a (mon.0) 2674 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: audit 2026-03-10T07:35:44.171559+0000 mon.b (mon.1) 454 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: audit 2026-03-10T07:35:44.171559+0000 mon.b (mon.1) 454 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: cluster 2026-03-10T07:35:44.177721+0000 mon.a (mon.0) 2675 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: cluster 2026-03-10T07:35:44.177721+0000 mon.a (mon.0) 2675 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: audit 2026-03-10T07:35:44.178297+0000 mon.a (mon.0) 2676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:44 vm00 bash[20701]: audit 2026-03-10T07:35:44.178297+0000 mon.a (mon.0) 2676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:46 vm00 bash[28005]: cluster 2026-03-10T07:35:44.682538+0000 mgr.y (mgr.24407) 414 : cluster [DBG] pgmap v655: 292 pgs: 16 unknown, 276 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T07:35:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:46 vm00 bash[28005]: cluster 2026-03-10T07:35:44.682538+0000 mgr.y (mgr.24407) 414 : cluster [DBG] pgmap v655: 292 pgs: 16 unknown, 276 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T07:35:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:46 vm00 bash[28005]: audit 2026-03-10T07:35:45.252416+0000 mon.a (mon.0) 2677 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:35:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:46 vm00 bash[28005]: audit 2026-03-10T07:35:45.252416+0000 mon.a (mon.0) 2677 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:35:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:46 vm00 bash[28005]: cluster 2026-03-10T07:35:45.256386+0000 mon.a (mon.0) 2678 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-10T07:35:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:46 vm00 bash[28005]: cluster 2026-03-10T07:35:45.256386+0000 mon.a (mon.0) 2678 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-10T07:35:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:46 vm00 bash[28005]: audit 2026-03-10T07:35:45.261968+0000 mon.b (mon.1) 455 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:46 vm00 bash[28005]: audit 2026-03-10T07:35:45.261968+0000 mon.b (mon.1) 455 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:46 vm00 bash[28005]: audit 2026-03-10T07:35:45.271790+0000 mon.a (mon.0) 2679 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:46 vm00 bash[28005]: audit 2026-03-10T07:35:45.271790+0000 mon.a (mon.0) 2679 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:46.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:46 vm00 bash[20701]: cluster 2026-03-10T07:35:44.682538+0000 mgr.y (mgr.24407) 414 : cluster [DBG] pgmap v655: 292 pgs: 16 unknown, 276 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T07:35:46.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:46 vm00 bash[20701]: cluster 2026-03-10T07:35:44.682538+0000 mgr.y (mgr.24407) 414 : cluster [DBG] pgmap v655: 292 pgs: 16 unknown, 276 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T07:35:46.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:46 vm00 bash[20701]: audit 2026-03-10T07:35:45.252416+0000 mon.a (mon.0) 2677 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:35:46.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:46 vm00 bash[20701]: audit 2026-03-10T07:35:45.252416+0000 mon.a (mon.0) 2677 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:35:46.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:46 vm00 bash[20701]: cluster 2026-03-10T07:35:45.256386+0000 mon.a (mon.0) 2678 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-10T07:35:46.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:46 vm00 bash[20701]: cluster 2026-03-10T07:35:45.256386+0000 mon.a (mon.0) 2678 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-10T07:35:46.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:46 vm00 bash[20701]: audit 2026-03-10T07:35:45.261968+0000 mon.b (mon.1) 455 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:46.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:46 vm00 bash[20701]: audit 2026-03-10T07:35:45.261968+0000 mon.b (mon.1) 455 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:46.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:46 vm00 bash[20701]: audit 2026-03-10T07:35:45.271790+0000 mon.a (mon.0) 2679 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:46.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:46 vm00 bash[20701]: audit 2026-03-10T07:35:45.271790+0000 mon.a (mon.0) 2679 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:46.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:46 vm03 bash[23382]: cluster 2026-03-10T07:35:44.682538+0000 mgr.y (mgr.24407) 414 : cluster [DBG] pgmap v655: 292 pgs: 16 unknown, 276 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T07:35:46.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:46 vm03 bash[23382]: cluster 2026-03-10T07:35:44.682538+0000 mgr.y (mgr.24407) 414 : cluster [DBG] pgmap v655: 292 pgs: 16 unknown, 276 active+clean; 8.3 MiB data, 969 MiB used, 159 GiB / 160 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T07:35:46.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:46 vm03 bash[23382]: audit 2026-03-10T07:35:45.252416+0000 mon.a (mon.0) 2677 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:35:46.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:46 vm03 bash[23382]: audit 2026-03-10T07:35:45.252416+0000 mon.a (mon.0) 2677 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:35:46.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:46 vm03 bash[23382]: cluster 2026-03-10T07:35:45.256386+0000 mon.a (mon.0) 2678 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-10T07:35:46.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:46 vm03 bash[23382]: cluster 2026-03-10T07:35:45.256386+0000 mon.a (mon.0) 2678 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-10T07:35:46.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:46 vm03 bash[23382]: audit 2026-03-10T07:35:45.261968+0000 mon.b (mon.1) 455 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:46.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:46 vm03 bash[23382]: audit 2026-03-10T07:35:45.261968+0000 mon.b (mon.1) 455 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:46.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:46 vm03 bash[23382]: audit 2026-03-10T07:35:45.271790+0000 mon.a (mon.0) 2679 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:46.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:46 vm03 bash[23382]: audit 2026-03-10T07:35:45.271790+0000 mon.a (mon.0) 2679 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:47.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:47 vm00 bash[28005]: audit 2026-03-10T07:35:46.369967+0000 mon.a (mon.0) 2680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T07:35:47.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:47 vm00 bash[28005]: audit 2026-03-10T07:35:46.369967+0000 mon.a (mon.0) 2680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T07:35:47.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:47 vm00 bash[28005]: cluster 2026-03-10T07:35:46.373524+0000 mon.a (mon.0) 2681 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in 2026-03-10T07:35:47.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:47 vm00 bash[28005]: cluster 2026-03-10T07:35:46.373524+0000 mon.a (mon.0) 2681 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in 2026-03-10T07:35:47.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:47 vm00 bash[28005]: audit 2026-03-10T07:35:46.374935+0000 mon.b (mon.1) 456 : audit [INF] from='client.? 
2026-03-10T07:35:47.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:47 vm00 bash[28005]: audit 2026-03-10T07:35:46.376497+0000 mon.a (mon.0) 2682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T07:35:47.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:47 vm00 bash[28005]: cluster 2026-03-10T07:35:46.682860+0000 mgr.y (mgr.24407) 415 : cluster [DBG] pgmap v658: 292 pgs: 292 active+clean; 8.3 MiB data, 970 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 0 B/s wr, 22 op/s
2026-03-10T07:35:47.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:47 vm00 bash[20701]: audit 2026-03-10T07:35:46.369967+0000 mon.a (mon.0) 2680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T07:35:47.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:47 vm00 bash[20701]: cluster 2026-03-10T07:35:46.373524+0000 mon.a (mon.0) 2681 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in
2026-03-10T07:35:47.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:47 vm00 bash[20701]: audit 2026-03-10T07:35:46.374935+0000 mon.b (mon.1) 456 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T07:35:47.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:47 vm00 bash[20701]: audit 2026-03-10T07:35:46.376497+0000 mon.a (mon.0) 2682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T07:35:47.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:47 vm00 bash[20701]: cluster 2026-03-10T07:35:46.682860+0000 mgr.y (mgr.24407) 415 : cluster [DBG] pgmap v658: 292 pgs: 292 active+clean; 8.3 MiB data, 970 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 0 B/s wr, 22 op/s
2026-03-10T07:35:47.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:47 vm03 bash[23382]: audit 2026-03-10T07:35:46.369967+0000 mon.a (mon.0) 2680 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T07:35:47.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:47 vm03 bash[23382]: cluster 2026-03-10T07:35:46.373524+0000 mon.a (mon.0) 2681 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in
2026-03-10T07:35:47.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:47 vm03 bash[23382]: audit 2026-03-10T07:35:46.374935+0000 mon.b (mon.1) 456 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T07:35:47.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:47 vm03 bash[23382]: audit 2026-03-10T07:35:46.376497+0000 mon.a (mon.0) 2682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T07:35:47.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:47 vm03 bash[23382]: cluster 2026-03-10T07:35:46.682860+0000 mgr.y (mgr.24407) 415 : cluster [DBG] pgmap v658: 292 pgs: 292 active+clean; 8.3 MiB data, 970 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 0 B/s wr, 22 op/s
2026-03-10T07:35:48.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:48 vm03 bash[23382]: audit 2026-03-10T07:35:47.379125+0000 mon.a (mon.0) 2683 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T07:35:48.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:48 vm03 bash[23382]: cluster 2026-03-10T07:35:47.397891+0000 mon.a (mon.0) 2684 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in
2026-03-10T07:35:48.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:48 vm03 bash[23382]: audit 2026-03-10T07:35:47.427872+0000 mon.b (mon.1) 457 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch
2026-03-10T07:35:48.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:48 vm03 bash[23382]: audit 2026-03-10T07:35:47.429443+0000 mon.a (mon.0) 2685 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch
2026-03-10T07:35:48.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:48 vm00 bash[28005]: audit 2026-03-10T07:35:47.379125+0000 mon.a (mon.0) 2683 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T07:35:48.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:48 vm00 bash[28005]: cluster 2026-03-10T07:35:47.397891+0000 mon.a (mon.0) 2684 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in
2026-03-10T07:35:48.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:48 vm00 bash[28005]: audit 2026-03-10T07:35:47.427872+0000 mon.b (mon.1) 457 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch
2026-03-10T07:35:48.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:48 vm00 bash[28005]: audit 2026-03-10T07:35:47.429443+0000 mon.a (mon.0) 2685 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch
2026-03-10T07:35:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:48 vm00 bash[20701]: audit 2026-03-10T07:35:47.379125+0000 mon.a (mon.0) 2683 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T07:35:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:48 vm00 bash[20701]: audit 2026-03-10T07:35:47.379125+0000 mon.a (mon.0) 2683 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T07:35:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:48 vm00 bash[20701]: cluster 2026-03-10T07:35:47.397891+0000 mon.a (mon.0) 2684 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in 2026-03-10T07:35:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:48 vm00 bash[20701]: cluster 2026-03-10T07:35:47.397891+0000 mon.a (mon.0) 2684 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in 2026-03-10T07:35:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:48 vm00 bash[20701]: audit 2026-03-10T07:35:47.427872+0000 mon.b (mon.1) 457 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-10T07:35:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:48 vm00 bash[20701]: audit 2026-03-10T07:35:47.427872+0000 mon.b (mon.1) 457 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-10T07:35:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:48 vm00 bash[20701]: audit 2026-03-10T07:35:47.429443+0000 mon.a (mon.0) 2685 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-10T07:35:48.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:48 vm00 bash[20701]: audit 2026-03-10T07:35:47.429443+0000 mon.a (mon.0) 2685 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-10T07:35:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:49 vm03 bash[23382]: audit 2026-03-10T07:35:48.385272+0000 mon.a (mon.0) 2686 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-10T07:35:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:49 vm03 bash[23382]: audit 2026-03-10T07:35:48.385272+0000 mon.a (mon.0) 2686 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-10T07:35:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:49 vm03 bash[23382]: cluster 2026-03-10T07:35:48.390831+0000 mon.a (mon.0) 2687 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-10T07:35:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:49 vm03 bash[23382]: cluster 2026-03-10T07:35:48.390831+0000 mon.a (mon.0) 2687 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-10T07:35:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:49 vm03 bash[23382]: audit 2026-03-10T07:35:48.435138+0000 mon.b (mon.1) 458 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-10T07:35:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:49 vm03 bash[23382]: audit 2026-03-10T07:35:48.435138+0000 mon.b (mon.1) 458 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-10T07:35:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:49 vm03 bash[23382]: audit 2026-03-10T07:35:48.437439+0000 mon.a (mon.0) 2688 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-10T07:35:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:49 vm03 bash[23382]: audit 2026-03-10T07:35:48.437439+0000 mon.a (mon.0) 2688 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-10T07:35:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:49 vm03 bash[23382]: cluster 2026-03-10T07:35:48.683204+0000 mgr.y (mgr.24407) 416 : cluster [DBG] pgmap v661: 292 pgs: 292 active+clean; 8.3 MiB data, 970 MiB used, 159 GiB / 160 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 9 op/s 2026-03-10T07:35:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:49 vm03 bash[23382]: cluster 2026-03-10T07:35:48.683204+0000 mgr.y (mgr.24407) 416 : cluster [DBG] pgmap v661: 292 pgs: 292 active+clean; 8.3 MiB data, 970 MiB used, 159 GiB / 160 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 9 op/s 2026-03-10T07:35:49.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:49 vm00 bash[28005]: audit 2026-03-10T07:35:48.385272+0000 mon.a (mon.0) 2686 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-10T07:35:49.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:49 vm00 bash[28005]: audit 2026-03-10T07:35:48.385272+0000 mon.a (mon.0) 2686 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-10T07:35:49.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:49 vm00 bash[28005]: cluster 2026-03-10T07:35:48.390831+0000 mon.a (mon.0) 2687 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-10T07:35:49.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:49 vm00 bash[28005]: cluster 2026-03-10T07:35:48.390831+0000 mon.a (mon.0) 2687 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-10T07:35:49.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:49 vm00 bash[28005]: audit 2026-03-10T07:35:48.435138+0000 mon.b (mon.1) 458 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-10T07:35:49.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:49 vm00 bash[28005]: audit 2026-03-10T07:35:48.435138+0000 mon.b (mon.1) 458 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-10T07:35:49.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:49 vm00 bash[28005]: audit 2026-03-10T07:35:48.437439+0000 mon.a (mon.0) 2688 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-10T07:35:49.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:49 vm00 bash[28005]: audit 2026-03-10T07:35:48.437439+0000 mon.a (mon.0) 2688 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-10T07:35:49.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:49 vm00 bash[28005]: cluster 2026-03-10T07:35:48.683204+0000 mgr.y (mgr.24407) 416 : cluster [DBG] pgmap v661: 292 pgs: 292 active+clean; 8.3 MiB data, 970 MiB used, 159 GiB / 160 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 9 op/s 2026-03-10T07:35:49.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:49 vm00 bash[28005]: cluster 2026-03-10T07:35:48.683204+0000 mgr.y (mgr.24407) 416 : cluster [DBG] pgmap v661: 292 pgs: 292 active+clean; 8.3 MiB data, 970 MiB used, 159 GiB / 160 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 9 op/s 2026-03-10T07:35:49.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:49 vm00 bash[20701]: audit 2026-03-10T07:35:48.385272+0000 mon.a (mon.0) 2686 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-10T07:35:49.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:49 vm00 bash[20701]: audit 2026-03-10T07:35:48.385272+0000 mon.a (mon.0) 2686 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-10T07:35:49.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:49 vm00 bash[20701]: cluster 2026-03-10T07:35:48.390831+0000 mon.a (mon.0) 2687 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-10T07:35:49.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:49 vm00 bash[20701]: cluster 2026-03-10T07:35:48.390831+0000 mon.a (mon.0) 2687 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-10T07:35:49.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:49 vm00 bash[20701]: audit 2026-03-10T07:35:48.435138+0000 mon.b (mon.1) 458 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-10T07:35:49.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:49 vm00 bash[20701]: audit 2026-03-10T07:35:48.435138+0000 mon.b (mon.1) 458 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-10T07:35:49.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:49 vm00 bash[20701]: audit 2026-03-10T07:35:48.437439+0000 mon.a (mon.0) 2688 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-10T07:35:49.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:49 vm00 bash[20701]: audit 2026-03-10T07:35:48.437439+0000 mon.a (mon.0) 2688 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-10T07:35:49.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:49 vm00 bash[20701]: cluster 2026-03-10T07:35:48.683204+0000 mgr.y (mgr.24407) 416 : cluster [DBG] pgmap v661: 292 pgs: 292 active+clean; 8.3 MiB data, 970 MiB used, 159 GiB / 160 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 9 op/s 2026-03-10T07:35:49.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:49 vm00 bash[20701]: cluster 2026-03-10T07:35:48.683204+0000 mgr.y (mgr.24407) 416 : cluster [DBG] pgmap v661: 292 pgs: 292 active+clean; 8.3 MiB data, 970 MiB used, 159 GiB / 160 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 9 op/s 2026-03-10T07:35:50.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:50 vm03 bash[23382]: audit 2026-03-10T07:35:49.409413+0000 mon.a (mon.0) 2689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-10T07:35:50.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:50 vm03 bash[23382]: audit 2026-03-10T07:35:49.409413+0000 mon.a (mon.0) 2689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-10T07:35:50.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:50 vm03 bash[23382]: cluster 2026-03-10T07:35:49.423026+0000 mon.a (mon.0) 2690 : cluster [DBG] osdmap e437: 8 total, 8 up, 8 in 2026-03-10T07:35:50.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:50 vm03 bash[23382]: cluster 2026-03-10T07:35:49.423026+0000 mon.a (mon.0) 2690 : cluster [DBG] osdmap e437: 8 total, 8 up, 8 in 2026-03-10T07:35:50.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:50 vm03 bash[23382]: audit 2026-03-10T07:35:49.438308+0000 mon.b (mon.1) 459 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:50.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:50 vm03 bash[23382]: audit 2026-03-10T07:35:49.438308+0000 mon.b (mon.1) 459 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:50.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:50 vm03 bash[23382]: audit 2026-03-10T07:35:49.439800+0000 mon.a (mon.0) 2691 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:50.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:50 vm03 bash[23382]: audit 2026-03-10T07:35:49.439800+0000 mon.a (mon.0) 2691 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:50.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:50 vm00 bash[28005]: audit 2026-03-10T07:35:49.409413+0000 mon.a (mon.0) 2689 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-10T07:35:50.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:50 vm00 bash[28005]: audit 2026-03-10T07:35:49.409413+0000 mon.a (mon.0) 2689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-10T07:35:50.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:50 vm00 bash[28005]: cluster 2026-03-10T07:35:49.423026+0000 mon.a (mon.0) 2690 : cluster [DBG] osdmap e437: 8 total, 8 up, 8 in 2026-03-10T07:35:50.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:50 vm00 bash[28005]: cluster 2026-03-10T07:35:49.423026+0000 mon.a (mon.0) 2690 : cluster [DBG] osdmap e437: 8 total, 8 up, 8 in 2026-03-10T07:35:50.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:50 vm00 bash[28005]: audit 2026-03-10T07:35:49.438308+0000 mon.b (mon.1) 459 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:50.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:50 vm00 bash[28005]: audit 2026-03-10T07:35:49.438308+0000 mon.b (mon.1) 459 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:50.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:50 vm00 bash[28005]: audit 2026-03-10T07:35:49.439800+0000 mon.a (mon.0) 2691 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:50.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:50 vm00 bash[28005]: audit 2026-03-10T07:35:49.439800+0000 mon.a (mon.0) 2691 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:50.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:50 vm00 bash[20701]: audit 2026-03-10T07:35:49.409413+0000 mon.a (mon.0) 2689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-10T07:35:50.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:50 vm00 bash[20701]: audit 2026-03-10T07:35:49.409413+0000 mon.a (mon.0) 2689 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-10T07:35:50.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:50 vm00 bash[20701]: cluster 2026-03-10T07:35:49.423026+0000 mon.a (mon.0) 2690 : cluster [DBG] osdmap e437: 8 total, 8 up, 8 in 2026-03-10T07:35:50.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:50 vm00 bash[20701]: cluster 2026-03-10T07:35:49.423026+0000 mon.a (mon.0) 2690 : cluster [DBG] osdmap e437: 8 total, 8 up, 8 in 2026-03-10T07:35:50.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:50 vm00 bash[20701]: audit 2026-03-10T07:35:49.438308+0000 mon.b (mon.1) 459 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:50.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:50 vm00 bash[20701]: audit 2026-03-10T07:35:49.438308+0000 mon.b (mon.1) 459 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:50.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:50 vm00 bash[20701]: audit 2026-03-10T07:35:49.439800+0000 mon.a (mon.0) 2691 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:50.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:50 vm00 bash[20701]: audit 2026-03-10T07:35:49.439800+0000 mon.a (mon.0) 2691 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:51.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:35:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:35:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:35:51.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:51 vm03 bash[23382]: audit 2026-03-10T07:35:50.413382+0000 mon.a (mon.0) 2692 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T07:35:51.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:51 vm03 bash[23382]: audit 2026-03-10T07:35:50.413382+0000 mon.a (mon.0) 2692 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T07:35:51.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:51 vm03 bash[23382]: cluster 2026-03-10T07:35:50.422172+0000 mon.a (mon.0) 2693 : cluster [DBG] osdmap e438: 8 total, 8 up, 8 in 2026-03-10T07:35:51.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:51 vm03 bash[23382]: cluster 2026-03-10T07:35:50.422172+0000 mon.a (mon.0) 2693 : cluster [DBG] osdmap e438: 8 total, 8 up, 8 in 2026-03-10T07:35:51.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:51 vm03 bash[23382]: audit 2026-03-10T07:35:50.471365+0000 mon.b (mon.1) 460 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:51.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:51 vm03 bash[23382]: audit 2026-03-10T07:35:50.471365+0000 mon.b (mon.1) 460 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:51.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:51 vm03 bash[23382]: audit 2026-03-10T07:35:50.472524+0000 mon.b (mon.1) 461 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-81"}]: dispatch 2026-03-10T07:35:51.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:51 vm03 bash[23382]: audit 2026-03-10T07:35:50.472524+0000 mon.b (mon.1) 461 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-81"}]: dispatch 2026-03-10T07:35:51.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:51 vm03 bash[23382]: audit 2026-03-10T07:35:50.473064+0000 mon.a (mon.0) 2694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:51.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:51 vm03 bash[23382]: audit 2026-03-10T07:35:50.473064+0000 mon.a (mon.0) 2694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:51.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:51 vm03 bash[23382]: audit 2026-03-10T07:35:50.474092+0000 mon.a (mon.0) 2695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-81"}]: dispatch 2026-03-10T07:35:51.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:51 vm03 bash[23382]: audit 2026-03-10T07:35:50.474092+0000 mon.a (mon.0) 2695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-81"}]: dispatch 2026-03-10T07:35:51.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:51 vm03 bash[23382]: cluster 2026-03-10T07:35:50.683587+0000 mgr.y (mgr.24407) 417 : cluster [DBG] pgmap v664: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 103 KiB/s rd, 11 KiB/s wr, 188 op/s 2026-03-10T07:35:51.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:51 vm03 bash[23382]: cluster 2026-03-10T07:35:50.683587+0000 mgr.y (mgr.24407) 417 : cluster [DBG] pgmap v664: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 103 KiB/s rd, 11 KiB/s wr, 188 op/s 2026-03-10T07:35:51.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:51 vm03 bash[23382]: cluster 2026-03-10T07:35:51.382318+0000 mon.a (mon.0) 2696 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:35:51.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:51 vm03 bash[23382]: cluster 2026-03-10T07:35:51.382318+0000 mon.a (mon.0) 2696 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:35:51.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:51 vm00 bash[28005]: audit 2026-03-10T07:35:50.413382+0000 mon.a (mon.0) 2692 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T07:35:51.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:51 vm00 bash[28005]: audit 2026-03-10T07:35:50.413382+0000 mon.a (mon.0) 2692 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T07:35:51.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:51 vm00 bash[28005]: cluster 2026-03-10T07:35:50.422172+0000 mon.a (mon.0) 2693 : cluster [DBG] osdmap e438: 8 total, 8 up, 8 in 2026-03-10T07:35:51.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:51 vm00 bash[28005]: cluster 2026-03-10T07:35:50.422172+0000 mon.a (mon.0) 2693 : cluster [DBG] osdmap e438: 8 total, 8 up, 8 in 2026-03-10T07:35:51.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:51 vm00 bash[28005]: audit 2026-03-10T07:35:50.471365+0000 mon.b (mon.1) 460 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:51.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:51 vm00 bash[28005]: audit 2026-03-10T07:35:50.471365+0000 mon.b (mon.1) 460 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:51.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:51 vm00 bash[28005]: audit 2026-03-10T07:35:50.472524+0000 mon.b (mon.1) 461 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-81"}]: dispatch 2026-03-10T07:35:51.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:51 vm00 bash[28005]: audit 2026-03-10T07:35:50.472524+0000 mon.b (mon.1) 461 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-81"}]: dispatch 2026-03-10T07:35:51.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:51 vm00 bash[28005]: audit 2026-03-10T07:35:50.473064+0000 mon.a (mon.0) 2694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:51.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:51 vm00 bash[28005]: audit 2026-03-10T07:35:50.473064+0000 mon.a (mon.0) 2694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:51.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:51 vm00 bash[28005]: audit 2026-03-10T07:35:50.474092+0000 mon.a (mon.0) 2695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-81"}]: dispatch 2026-03-10T07:35:51.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:51 vm00 bash[28005]: audit 2026-03-10T07:35:50.474092+0000 mon.a (mon.0) 2695 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-81"}]: dispatch 2026-03-10T07:35:51.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:51 vm00 bash[28005]: cluster 2026-03-10T07:35:50.683587+0000 mgr.y (mgr.24407) 417 : cluster [DBG] pgmap v664: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 103 KiB/s rd, 11 KiB/s wr, 188 op/s 2026-03-10T07:35:51.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:51 vm00 bash[28005]: cluster 2026-03-10T07:35:50.683587+0000 mgr.y (mgr.24407) 417 : cluster [DBG] pgmap v664: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 103 KiB/s rd, 11 KiB/s wr, 188 op/s 2026-03-10T07:35:51.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:51 vm00 bash[28005]: cluster 2026-03-10T07:35:51.382318+0000 mon.a (mon.0) 2696 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:35:51.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:51 vm00 bash[28005]: cluster 2026-03-10T07:35:51.382318+0000 mon.a (mon.0) 2696 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:35:51.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:51 vm00 bash[20701]: audit 2026-03-10T07:35:50.413382+0000 mon.a (mon.0) 2692 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T07:35:51.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:51 vm00 bash[20701]: audit 2026-03-10T07:35:50.413382+0000 mon.a (mon.0) 2692 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T07:35:51.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:51 vm00 bash[20701]: cluster 2026-03-10T07:35:50.422172+0000 mon.a (mon.0) 2693 : cluster [DBG] osdmap e438: 8 total, 8 up, 8 in 2026-03-10T07:35:51.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:51 vm00 bash[20701]: cluster 2026-03-10T07:35:50.422172+0000 mon.a (mon.0) 2693 : cluster [DBG] osdmap e438: 8 total, 8 up, 8 in 2026-03-10T07:35:51.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:51 vm00 bash[20701]: audit 2026-03-10T07:35:50.471365+0000 mon.b (mon.1) 460 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:51.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:51 vm00 bash[20701]: audit 2026-03-10T07:35:50.471365+0000 mon.b (mon.1) 460 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:51.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:51 vm00 bash[20701]: audit 2026-03-10T07:35:50.472524+0000 mon.b (mon.1) 461 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-81"}]: dispatch 2026-03-10T07:35:51.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:51 vm00 bash[20701]: audit 2026-03-10T07:35:50.472524+0000 mon.b (mon.1) 461 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-81"}]: dispatch 2026-03-10T07:35:51.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:51 vm00 bash[20701]: audit 2026-03-10T07:35:50.473064+0000 mon.a (mon.0) 2694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:51.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:51 vm00 bash[20701]: audit 2026-03-10T07:35:50.473064+0000 mon.a (mon.0) 2694 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:51.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:51 vm00 bash[20701]: audit 2026-03-10T07:35:50.474092+0000 mon.a (mon.0) 2695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-81"}]: dispatch 2026-03-10T07:35:51.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:51 vm00 bash[20701]: audit 2026-03-10T07:35:50.474092+0000 mon.a (mon.0) 2695 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-81"}]: dispatch 2026-03-10T07:35:51.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:51 vm00 bash[20701]: cluster 2026-03-10T07:35:50.683587+0000 mgr.y (mgr.24407) 417 : cluster [DBG] pgmap v664: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 103 KiB/s rd, 11 KiB/s wr, 188 op/s 2026-03-10T07:35:51.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:51 vm00 bash[20701]: cluster 2026-03-10T07:35:50.683587+0000 mgr.y (mgr.24407) 417 : cluster [DBG] pgmap v664: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 103 KiB/s rd, 11 KiB/s wr, 188 op/s 2026-03-10T07:35:51.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:51 vm00 bash[20701]: cluster 2026-03-10T07:35:51.382318+0000 mon.a (mon.0) 2696 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:35:51.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:51 vm00 bash[20701]: cluster 2026-03-10T07:35:51.382318+0000 mon.a (mon.0) 2696 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:35:52.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:52 vm03 bash[23382]: cluster 2026-03-10T07:35:51.448724+0000 mon.a (mon.0) 2697 : cluster [DBG] osdmap e439: 8 total, 8 up, 8 in 2026-03-10T07:35:52.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:52 vm03 bash[23382]: cluster 2026-03-10T07:35:51.448724+0000 mon.a (mon.0) 2697 : cluster [DBG] osdmap e439: 8 total, 8 up, 8 in 2026-03-10T07:35:52.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:52 vm00 bash[28005]: cluster 2026-03-10T07:35:51.448724+0000 mon.a (mon.0) 2697 : cluster [DBG] osdmap e439: 8 total, 8 up, 8 in 2026-03-10T07:35:52.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:52 vm00 bash[28005]: cluster 2026-03-10T07:35:51.448724+0000 mon.a (mon.0) 2697 : cluster [DBG] osdmap e439: 8 total, 8 up, 8 in 2026-03-10T07:35:52.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:52 vm00 bash[20701]: cluster 2026-03-10T07:35:51.448724+0000 mon.a (mon.0) 2697 
: cluster [DBG] osdmap e439: 8 total, 8 up, 8 in 2026-03-10T07:35:52.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:52 vm00 bash[20701]: cluster 2026-03-10T07:35:51.448724+0000 mon.a (mon.0) 2697 : cluster [DBG] osdmap e439: 8 total, 8 up, 8 in 2026-03-10T07:35:53.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:35:53 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:35:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:53 vm03 bash[23382]: cluster 2026-03-10T07:35:52.448882+0000 mon.a (mon.0) 2698 : cluster [DBG] osdmap e440: 8 total, 8 up, 8 in 2026-03-10T07:35:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:53 vm03 bash[23382]: cluster 2026-03-10T07:35:52.448882+0000 mon.a (mon.0) 2698 : cluster [DBG] osdmap e440: 8 total, 8 up, 8 in 2026-03-10T07:35:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:53 vm03 bash[23382]: audit 2026-03-10T07:35:52.461799+0000 mon.b (mon.1) 462 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:53 vm03 bash[23382]: audit 2026-03-10T07:35:52.461799+0000 mon.b (mon.1) 462 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:53 vm03 bash[23382]: audit 2026-03-10T07:35:52.466805+0000 mon.a (mon.0) 2699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:53 vm03 bash[23382]: audit 2026-03-10T07:35:52.466805+0000 mon.a (mon.0) 2699 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:53 vm03 bash[23382]: cluster 2026-03-10T07:35:52.683939+0000 mgr.y (mgr.24407) 418 : cluster [DBG] pgmap v667: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 103 KiB/s rd, 9.0 KiB/s wr, 187 op/s 2026-03-10T07:35:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:53 vm03 bash[23382]: cluster 2026-03-10T07:35:52.683939+0000 mgr.y (mgr.24407) 418 : cluster [DBG] pgmap v667: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 103 KiB/s rd, 9.0 KiB/s wr, 187 op/s 2026-03-10T07:35:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:53 vm03 bash[23382]: audit 2026-03-10T07:35:53.298867+0000 mgr.y (mgr.24407) 419 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:35:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:53 vm03 bash[23382]: audit 2026-03-10T07:35:53.298867+0000 mgr.y (mgr.24407) 419 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:35:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:53 vm00 bash[28005]: cluster 2026-03-10T07:35:52.448882+0000 mon.a (mon.0) 2698 : cluster [DBG] osdmap e440: 8 total, 8 up, 8 in 2026-03-10T07:35:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:53 vm00 bash[28005]: cluster 2026-03-10T07:35:52.448882+0000 mon.a (mon.0) 2698 : cluster [DBG] osdmap e440: 8 total, 8 up, 8 in 2026-03-10T07:35:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:53 vm00 bash[28005]: audit 2026-03-10T07:35:52.461799+0000 mon.b (mon.1) 462 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:53 vm00 bash[28005]: audit 2026-03-10T07:35:52.461799+0000 mon.b (mon.1) 462 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:53 vm00 bash[28005]: audit 2026-03-10T07:35:52.466805+0000 mon.a (mon.0) 2699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:53 vm00 bash[28005]: audit 2026-03-10T07:35:52.466805+0000 mon.a (mon.0) 2699 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:53 vm00 bash[28005]: cluster 2026-03-10T07:35:52.683939+0000 mgr.y (mgr.24407) 418 : cluster [DBG] pgmap v667: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 103 KiB/s rd, 9.0 KiB/s wr, 187 op/s 2026-03-10T07:35:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:53 vm00 bash[28005]: cluster 2026-03-10T07:35:52.683939+0000 mgr.y (mgr.24407) 418 : cluster [DBG] pgmap v667: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 103 KiB/s rd, 9.0 KiB/s wr, 187 op/s 2026-03-10T07:35:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:53 vm00 bash[28005]: audit 2026-03-10T07:35:53.298867+0000 mgr.y (mgr.24407) 419 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:35:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:53 vm00 bash[28005]: audit 2026-03-10T07:35:53.298867+0000 mgr.y (mgr.24407) 419 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:35:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:53 vm00 bash[20701]: cluster 2026-03-10T07:35:52.448882+0000 mon.a (mon.0) 2698 : cluster [DBG] osdmap e440: 8 total, 8 up, 8 in 2026-03-10T07:35:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:53 vm00 bash[20701]: cluster 2026-03-10T07:35:52.448882+0000 mon.a (mon.0) 2698 : cluster [DBG] osdmap e440: 8 total, 8 up, 8 in 2026-03-10T07:35:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:53 vm00 bash[20701]: audit 2026-03-10T07:35:52.461799+0000 mon.b (mon.1) 462 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:53 vm00 bash[20701]: audit 2026-03-10T07:35:52.461799+0000 mon.b (mon.1) 462 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:53 vm00 bash[20701]: audit 2026-03-10T07:35:52.466805+0000 mon.a (mon.0) 2699 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:53 vm00 bash[20701]: audit 2026-03-10T07:35:52.466805+0000 mon.a (mon.0) 2699 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:35:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:53 vm00 bash[20701]: cluster 2026-03-10T07:35:52.683939+0000 mgr.y (mgr.24407) 418 : cluster [DBG] pgmap v667: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 103 KiB/s rd, 9.0 KiB/s wr, 187 op/s 2026-03-10T07:35:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:53 vm00 bash[20701]: cluster 2026-03-10T07:35:52.683939+0000 mgr.y (mgr.24407) 418 : cluster [DBG] pgmap v667: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 103 KiB/s rd, 9.0 KiB/s wr, 187 op/s 2026-03-10T07:35:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:53 vm00 bash[20701]: audit 2026-03-10T07:35:53.298867+0000 mgr.y (mgr.24407) 419 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:35:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:53 vm00 bash[20701]: audit 2026-03-10T07:35:53.298867+0000 mgr.y (mgr.24407) 419 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:35:54.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:54 vm03 bash[23382]: audit 2026-03-10T07:35:53.453623+0000 mon.a (mon.0) 2700 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:35:54.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:54 vm03 bash[23382]: audit 2026-03-10T07:35:53.453623+0000 mon.a (mon.0) 2700 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:35:54.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:54 vm03 bash[23382]: audit 2026-03-10T07:35:53.463511+0000 mon.b (mon.1) 463 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:35:54.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:54 vm03 bash[23382]: audit 2026-03-10T07:35:53.463511+0000 mon.b (mon.1) 463 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:35:54.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:54 vm03 bash[23382]: cluster 2026-03-10T07:35:53.463898+0000 mon.a (mon.0) 2701 : cluster [DBG] osdmap e441: 8 total, 8 up, 8 in 2026-03-10T07:35:54.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:54 vm03 bash[23382]: cluster 2026-03-10T07:35:53.463898+0000 mon.a (mon.0) 2701 : cluster [DBG] osdmap e441: 8 total, 8 up, 8 in 2026-03-10T07:35:54.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:54 vm03 bash[23382]: audit 2026-03-10T07:35:53.473563+0000 mon.b (mon.1) 464 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:54.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:54 vm03 bash[23382]: audit 2026-03-10T07:35:53.473563+0000 mon.b (mon.1) 464 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:54.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:54 vm03 bash[23382]: audit 2026-03-10T07:35:53.475026+0000 mon.a (mon.0) 2702 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:54.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:54 vm03 bash[23382]: audit 2026-03-10T07:35:53.475026+0000 mon.a (mon.0) 2702 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:54.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:54 vm00 bash[28005]: audit 2026-03-10T07:35:53.453623+0000 mon.a (mon.0) 2700 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:35:54.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:54 vm00 bash[28005]: audit 2026-03-10T07:35:53.453623+0000 mon.a (mon.0) 2700 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:35:54.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:54 vm00 bash[28005]: audit 2026-03-10T07:35:53.463511+0000 mon.b (mon.1) 463 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:35:54.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:54 vm00 bash[28005]: audit 2026-03-10T07:35:53.463511+0000 mon.b (mon.1) 463 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:35:54.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:54 vm00 bash[28005]: cluster 2026-03-10T07:35:53.463898+0000 mon.a (mon.0) 2701 : cluster [DBG] osdmap e441: 8 total, 8 up, 8 in 2026-03-10T07:35:54.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:54 vm00 bash[28005]: cluster 2026-03-10T07:35:53.463898+0000 mon.a (mon.0) 2701 : cluster [DBG] osdmap e441: 8 total, 8 up, 8 in 2026-03-10T07:35:54.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:54 vm00 bash[28005]: audit 2026-03-10T07:35:53.473563+0000 mon.b (mon.1) 464 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:54.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:54 vm00 bash[28005]: audit 2026-03-10T07:35:53.473563+0000 mon.b (mon.1) 464 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:54.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:54 vm00 bash[28005]: audit 2026-03-10T07:35:53.475026+0000 mon.a (mon.0) 2702 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:54.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:54 vm00 bash[28005]: audit 2026-03-10T07:35:53.475026+0000 mon.a (mon.0) 2702 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:54.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:54 vm00 bash[20701]: audit 2026-03-10T07:35:53.453623+0000 mon.a (mon.0) 2700 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:35:54.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:54 vm00 bash[20701]: audit 2026-03-10T07:35:53.453623+0000 mon.a (mon.0) 2700 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:35:54.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:54 vm00 bash[20701]: audit 2026-03-10T07:35:53.463511+0000 mon.b (mon.1) 463 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:35:54.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:54 vm00 bash[20701]: audit 2026-03-10T07:35:53.463511+0000 mon.b (mon.1) 463 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:35:54.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:54 vm00 bash[20701]: cluster 2026-03-10T07:35:53.463898+0000 mon.a (mon.0) 2701 : cluster [DBG] osdmap e441: 8 total, 8 up, 8 in 2026-03-10T07:35:54.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:54 vm00 bash[20701]: cluster 2026-03-10T07:35:53.463898+0000 mon.a (mon.0) 2701 : cluster [DBG] osdmap e441: 8 total, 8 up, 8 in 2026-03-10T07:35:54.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:54 vm00 bash[20701]: audit 2026-03-10T07:35:53.473563+0000 mon.b (mon.1) 464 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:54.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:54 vm00 bash[20701]: audit 2026-03-10T07:35:53.473563+0000 mon.b (mon.1) 464 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:54.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:54 vm00 bash[20701]: audit 2026-03-10T07:35:53.475026+0000 mon.a (mon.0) 2702 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:54.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:54 vm00 bash[20701]: audit 2026-03-10T07:35:53.475026+0000 mon.a (mon.0) 2702 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:35:55.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:55 vm03 bash[23382]: audit 2026-03-10T07:35:54.457622+0000 mon.a (mon.0) 2703 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:35:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:55 vm03 bash[23382]: audit 2026-03-10T07:35:54.457622+0000 mon.a (mon.0) 2703 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:35:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:55 vm03 bash[23382]: audit 2026-03-10T07:35:54.460915+0000 mon.b (mon.1) 465 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:55 vm03 bash[23382]: audit 2026-03-10T07:35:54.460915+0000 mon.b (mon.1) 465 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:55 vm03 bash[23382]: cluster 2026-03-10T07:35:54.462563+0000 mon.a (mon.0) 2704 : cluster [DBG] osdmap e442: 8 total, 8 up, 8 in 2026-03-10T07:35:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:55 vm03 bash[23382]: cluster 2026-03-10T07:35:54.462563+0000 mon.a (mon.0) 2704 : cluster [DBG] osdmap e442: 8 total, 8 up, 8 in 2026-03-10T07:35:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:55 vm03 bash[23382]: audit 2026-03-10T07:35:54.466614+0000 mon.a (mon.0) 2705 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:55 vm03 bash[23382]: audit 2026-03-10T07:35:54.466614+0000 mon.a (mon.0) 2705 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:55 vm03 bash[23382]: audit 2026-03-10T07:35:54.560797+0000 mon.c (mon.2) 310 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:35:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:55 vm03 bash[23382]: audit 2026-03-10T07:35:54.560797+0000 mon.c (mon.2) 310 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:35:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:55 vm03 bash[23382]: cluster 2026-03-10T07:35:54.684393+0000 mgr.y (mgr.24407) 420 : cluster [DBG] pgmap v670: 292 pgs: 15 unknown, 277 active+clean; 8.3 MiB data, 962 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-10T07:35:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:55 vm03 bash[23382]: cluster 2026-03-10T07:35:54.684393+0000 mgr.y (mgr.24407) 420 : cluster [DBG] pgmap v670: 292 pgs: 15 unknown, 277 active+clean; 8.3 MiB data, 962 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-10T07:35:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:55 vm03 bash[23382]: audit 2026-03-10T07:35:55.466882+0000 mon.a (mon.0) 2706 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:35:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:55 vm03 bash[23382]: audit 2026-03-10T07:35:55.466882+0000 mon.a (mon.0) 2706 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:35:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:55 vm03 bash[23382]: audit 2026-03-10T07:35:55.469496+0000 mon.b (mon.1) 466 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:55 vm03 bash[23382]: audit 2026-03-10T07:35:55.469496+0000 mon.b (mon.1) 466 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:55 vm03 bash[23382]: cluster 2026-03-10T07:35:55.471453+0000 mon.a (mon.0) 2707 : cluster [DBG] osdmap e443: 8 total, 8 up, 8 in 2026-03-10T07:35:55.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:55 vm03 bash[23382]: cluster 2026-03-10T07:35:55.471453+0000 mon.a (mon.0) 2707 : cluster [DBG] osdmap e443: 8 total, 8 up, 8 in 2026-03-10T07:35:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:55 vm00 bash[28005]: audit 2026-03-10T07:35:54.457622+0000 mon.a (mon.0) 2703 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:55 vm00 bash[28005]: audit 2026-03-10T07:35:54.457622+0000 mon.a (mon.0) 2703 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:55 vm00 bash[28005]: audit 2026-03-10T07:35:54.460915+0000 mon.b (mon.1) 465 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:55 vm00 bash[28005]: audit 2026-03-10T07:35:54.460915+0000 mon.b (mon.1) 465 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:55 vm00 bash[28005]: cluster 2026-03-10T07:35:54.462563+0000 mon.a (mon.0) 2704 : cluster [DBG] osdmap e442: 8 total, 8 up, 8 in 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:55 vm00 bash[28005]: cluster 2026-03-10T07:35:54.462563+0000 mon.a (mon.0) 2704 : cluster [DBG] osdmap e442: 8 total, 8 up, 8 in 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:55 vm00 bash[28005]: audit 2026-03-10T07:35:54.466614+0000 mon.a (mon.0) 2705 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:55 vm00 bash[28005]: audit 2026-03-10T07:35:54.466614+0000 mon.a (mon.0) 2705 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:55 vm00 bash[28005]: audit 2026-03-10T07:35:54.560797+0000 mon.c (mon.2) 310 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:55 vm00 bash[28005]: audit 2026-03-10T07:35:54.560797+0000 mon.c (mon.2) 310 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:55 vm00 bash[28005]: cluster 2026-03-10T07:35:54.684393+0000 mgr.y (mgr.24407) 420 : cluster [DBG] pgmap v670: 292 pgs: 15 unknown, 277 active+clean; 8.3 MiB data, 962 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:55 vm00 bash[28005]: cluster 2026-03-10T07:35:54.684393+0000 mgr.y (mgr.24407) 420 : cluster [DBG] pgmap v670: 292 pgs: 15 unknown, 277 active+clean; 8.3 MiB data, 962 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:55 vm00 bash[28005]: audit 2026-03-10T07:35:55.466882+0000 mon.a (mon.0) 2706 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:55 vm00 bash[28005]: audit 2026-03-10T07:35:55.466882+0000 mon.a (mon.0) 2706 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:55 vm00 bash[28005]: audit 2026-03-10T07:35:55.469496+0000 mon.b (mon.1) 466 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:55 vm00 bash[28005]: audit 2026-03-10T07:35:55.469496+0000 mon.b (mon.1) 466 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:55 vm00 bash[28005]: cluster 2026-03-10T07:35:55.471453+0000 mon.a (mon.0) 2707 : cluster [DBG] osdmap e443: 8 total, 8 up, 8 in 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:55 vm00 bash[28005]: cluster 2026-03-10T07:35:55.471453+0000 mon.a (mon.0) 2707 : cluster [DBG] osdmap e443: 8 total, 8 up, 8 in 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:55 vm00 bash[20701]: audit 2026-03-10T07:35:54.457622+0000 mon.a (mon.0) 2703 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:55 vm00 bash[20701]: audit 2026-03-10T07:35:54.457622+0000 mon.a (mon.0) 2703 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:55 vm00 bash[20701]: audit 2026-03-10T07:35:54.460915+0000 mon.b (mon.1) 465 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:55 vm00 bash[20701]: audit 2026-03-10T07:35:54.460915+0000 mon.b (mon.1) 465 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:55 vm00 bash[20701]: cluster 2026-03-10T07:35:54.462563+0000 mon.a (mon.0) 2704 : cluster [DBG] osdmap e442: 8 total, 8 up, 8 in 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:55 vm00 bash[20701]: cluster 2026-03-10T07:35:54.462563+0000 mon.a (mon.0) 2704 : cluster [DBG] osdmap e442: 8 total, 8 up, 8 in 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:55 vm00 bash[20701]: audit 2026-03-10T07:35:54.466614+0000 mon.a (mon.0) 2705 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:55 vm00 bash[20701]: audit 2026-03-10T07:35:54.466614+0000 mon.a (mon.0) 2705 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:55 vm00 bash[20701]: audit 2026-03-10T07:35:54.560797+0000 mon.c (mon.2) 310 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:55 vm00 bash[20701]: audit 2026-03-10T07:35:54.560797+0000 mon.c (mon.2) 310 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:55 vm00 bash[20701]: cluster 2026-03-10T07:35:54.684393+0000 mgr.y (mgr.24407) 420 : cluster [DBG] pgmap v670: 292 pgs: 15 unknown, 277 active+clean; 8.3 MiB data, 962 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:55 vm00 bash[20701]: cluster 2026-03-10T07:35:54.684393+0000 mgr.y (mgr.24407) 420 : cluster [DBG] pgmap v670: 292 pgs: 15 unknown, 277 active+clean; 8.3 MiB data, 962 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:55 vm00 bash[20701]: audit 2026-03-10T07:35:55.466882+0000 mon.a (mon.0) 2706 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:55 vm00 bash[20701]: audit 2026-03-10T07:35:55.466882+0000 mon.a (mon.0) 2706 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:55 vm00 bash[20701]: audit 2026-03-10T07:35:55.469496+0000 mon.b (mon.1) 466 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:55 vm00 bash[20701]: audit 2026-03-10T07:35:55.469496+0000 mon.b (mon.1) 466 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:55 vm00 bash[20701]: cluster 2026-03-10T07:35:55.471453+0000 mon.a (mon.0) 2707 : cluster [DBG] osdmap e443: 8 total, 8 up, 8 in 2026-03-10T07:35:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:55 vm00 bash[20701]: cluster 2026-03-10T07:35:55.471453+0000 mon.a (mon.0) 2707 : cluster [DBG] osdmap e443: 8 total, 8 up, 8 in 2026-03-10T07:35:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:56 vm03 bash[23382]: audit 2026-03-10T07:35:55.472132+0000 mon.a (mon.0) 2708 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:56 vm03 bash[23382]: audit 2026-03-10T07:35:55.472132+0000 mon.a (mon.0) 2708 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:56 vm03 bash[23382]: audit 2026-03-10T07:35:56.470992+0000 mon.a (mon.0) 2709 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T07:35:56.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:56 vm03 bash[23382]: audit 2026-03-10T07:35:56.470992+0000 mon.a (mon.0) 2709 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T07:35:56.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:56 vm03 bash[23382]: audit 2026-03-10T07:35:56.473529+0000 mon.b (mon.1) 467 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:56.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:56 vm03 bash[23382]: audit 2026-03-10T07:35:56.473529+0000 mon.b (mon.1) 467 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:56.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:56 vm03 bash[23382]: cluster 2026-03-10T07:35:56.479313+0000 mon.a (mon.0) 2710 : cluster [DBG] osdmap e444: 8 total, 8 up, 8 in 2026-03-10T07:35:56.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:56 vm03 bash[23382]: cluster 2026-03-10T07:35:56.479313+0000 mon.a (mon.0) 2710 : cluster [DBG] osdmap e444: 8 total, 8 up, 8 in 2026-03-10T07:35:56.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:56 vm03 bash[23382]: audit 2026-03-10T07:35:56.480009+0000 mon.a (mon.0) 2711 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:56.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:56 vm03 bash[23382]: audit 2026-03-10T07:35:56.480009+0000 mon.a (mon.0) 2711 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:56.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:56 vm00 bash[28005]: audit 2026-03-10T07:35:55.472132+0000 mon.a (mon.0) 2708 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:56.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:56 vm00 bash[28005]: audit 2026-03-10T07:35:55.472132+0000 mon.a (mon.0) 2708 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:56.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:56 vm00 bash[28005]: audit 2026-03-10T07:35:56.470992+0000 mon.a (mon.0) 2709 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T07:35:56.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:56 vm00 bash[28005]: audit 2026-03-10T07:35:56.470992+0000 mon.a (mon.0) 2709 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T07:35:56.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:56 vm00 bash[28005]: audit 2026-03-10T07:35:56.473529+0000 mon.b (mon.1) 467 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:56.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:56 vm00 bash[28005]: audit 2026-03-10T07:35:56.473529+0000 mon.b (mon.1) 467 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:56.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:56 vm00 bash[28005]: cluster 2026-03-10T07:35:56.479313+0000 mon.a (mon.0) 2710 : cluster [DBG] osdmap e444: 8 total, 8 up, 8 in 2026-03-10T07:35:56.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:56 vm00 bash[28005]: cluster 2026-03-10T07:35:56.479313+0000 mon.a (mon.0) 2710 : cluster [DBG] osdmap e444: 8 total, 8 up, 8 in 2026-03-10T07:35:56.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:56 vm00 bash[28005]: audit 2026-03-10T07:35:56.480009+0000 mon.a (mon.0) 2711 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:56.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:56 vm00 bash[28005]: audit 2026-03-10T07:35:56.480009+0000 mon.a (mon.0) 2711 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:56.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:56 vm00 bash[20701]: audit 2026-03-10T07:35:55.472132+0000 mon.a (mon.0) 2708 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:56.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:56 vm00 bash[20701]: audit 2026-03-10T07:35:55.472132+0000 mon.a (mon.0) 2708 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:35:56.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:56 vm00 bash[20701]: audit 2026-03-10T07:35:56.470992+0000 mon.a (mon.0) 2709 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T07:35:56.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:56 vm00 bash[20701]: audit 2026-03-10T07:35:56.470992+0000 mon.a (mon.0) 2709 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T07:35:56.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:56 vm00 bash[20701]: audit 2026-03-10T07:35:56.473529+0000 mon.b (mon.1) 467 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:56.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:56 vm00 bash[20701]: audit 2026-03-10T07:35:56.473529+0000 mon.b (mon.1) 467 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:56.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:56 vm00 bash[20701]: cluster 2026-03-10T07:35:56.479313+0000 mon.a (mon.0) 2710 : cluster [DBG] osdmap e444: 8 total, 8 up, 8 in 2026-03-10T07:35:56.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:56 vm00 bash[20701]: cluster 2026-03-10T07:35:56.479313+0000 mon.a (mon.0) 2710 : cluster [DBG] osdmap e444: 8 total, 8 up, 8 in 2026-03-10T07:35:56.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:56 vm00 bash[20701]: audit 2026-03-10T07:35:56.480009+0000 mon.a (mon.0) 2711 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:56.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:56 vm00 bash[20701]: audit 2026-03-10T07:35:56.480009+0000 mon.a (mon.0) 2711 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:35:57.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:57 vm03 bash[23382]: cluster 2026-03-10T07:35:56.684919+0000 mgr.y (mgr.24407) 421 : cluster [DBG] pgmap v673: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 3.2 KiB/s wr, 11 op/s 2026-03-10T07:35:57.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:57 vm03 bash[23382]: cluster 2026-03-10T07:35:56.684919+0000 mgr.y (mgr.24407) 421 : cluster [DBG] pgmap v673: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 3.2 KiB/s wr, 11 op/s 2026-03-10T07:35:57.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:57 vm03 bash[23382]: audit 2026-03-10T07:35:57.474684+0000 mon.a (mon.0) 2712 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T07:35:57.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:57 vm03 bash[23382]: audit 2026-03-10T07:35:57.474684+0000 mon.a (mon.0) 2712 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T07:35:57.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:57 vm03 bash[23382]: cluster 2026-03-10T07:35:57.477959+0000 mon.a (mon.0) 2713 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in 2026-03-10T07:35:57.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:57 vm03 bash[23382]: cluster 2026-03-10T07:35:57.477959+0000 mon.a (mon.0) 2713 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in 2026-03-10T07:35:57.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:57 vm00 bash[28005]: cluster 2026-03-10T07:35:56.684919+0000 mgr.y (mgr.24407) 421 : cluster [DBG] pgmap v673: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 3.2 KiB/s wr, 11 op/s 2026-03-10T07:35:57.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:57 vm00 bash[28005]: cluster 2026-03-10T07:35:56.684919+0000 mgr.y (mgr.24407) 421 : cluster [DBG] pgmap v673: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 3.2 KiB/s wr, 11 op/s 2026-03-10T07:35:57.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:57 vm00 bash[28005]: audit 2026-03-10T07:35:57.474684+0000 mon.a (mon.0) 2712 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T07:35:57.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:57 vm00 bash[28005]: audit 2026-03-10T07:35:57.474684+0000 mon.a (mon.0) 2712 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T07:35:57.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:57 vm00 bash[28005]: cluster 2026-03-10T07:35:57.477959+0000 mon.a (mon.0) 2713 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in 2026-03-10T07:35:57.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:57 vm00 bash[28005]: cluster 2026-03-10T07:35:57.477959+0000 mon.a (mon.0) 2713 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in 2026-03-10T07:35:57.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:57 vm00 bash[20701]: cluster 2026-03-10T07:35:56.684919+0000 mgr.y (mgr.24407) 421 : cluster [DBG] pgmap v673: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 3.2 KiB/s wr, 11 op/s 2026-03-10T07:35:57.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:57 vm00 bash[20701]: cluster 2026-03-10T07:35:56.684919+0000 mgr.y (mgr.24407) 421 : cluster [DBG] pgmap v673: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 3.2 KiB/s wr, 11 op/s 2026-03-10T07:35:57.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:57 vm00 bash[20701]: audit 2026-03-10T07:35:57.474684+0000 mon.a (mon.0) 2712 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T07:35:57.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:57 vm00 bash[20701]: audit 2026-03-10T07:35:57.474684+0000 mon.a (mon.0) 2712 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T07:35:57.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:57 vm00 bash[20701]: cluster 2026-03-10T07:35:57.477959+0000 mon.a (mon.0) 2713 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in 2026-03-10T07:35:57.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:57 vm00 bash[20701]: cluster 2026-03-10T07:35:57.477959+0000 mon.a (mon.0) 2713 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in 2026-03-10T07:35:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:59 vm00 bash[28005]: cluster 2026-03-10T07:35:58.510818+0000 mon.a (mon.0) 2714 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in 2026-03-10T07:35:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:59 vm00 bash[28005]: cluster 2026-03-10T07:35:58.510818+0000 mon.a (mon.0) 2714 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in 2026-03-10T07:35:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:59 vm00 bash[28005]: cluster 2026-03-10T07:35:58.685406+0000 mgr.y (mgr.24407) 422 : cluster [DBG] pgmap v676: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.5 KiB/s wr, 6 op/s 2026-03-10T07:35:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:35:59 vm00 bash[28005]: cluster 2026-03-10T07:35:58.685406+0000 mgr.y (mgr.24407) 422 : cluster [DBG] pgmap v676: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.5 KiB/s wr, 6 op/s 2026-03-10T07:35:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:59 vm00 bash[20701]: cluster 2026-03-10T07:35:58.510818+0000 mon.a (mon.0) 2714 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in 2026-03-10T07:35:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:59 vm00 bash[20701]: cluster 2026-03-10T07:35:58.510818+0000 mon.a (mon.0) 2714 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in 2026-03-10T07:35:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:59 vm00 bash[20701]: cluster 2026-03-10T07:35:58.685406+0000 mgr.y (mgr.24407) 422 : cluster [DBG] pgmap v676: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.5 KiB/s wr, 6 op/s 2026-03-10T07:35:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:35:59 vm00 bash[20701]: cluster 2026-03-10T07:35:58.685406+0000 mgr.y (mgr.24407) 422 : cluster [DBG] pgmap v676: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.5 KiB/s wr, 6 op/s 2026-03-10T07:36:00.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:59 vm03 bash[23382]: cluster 2026-03-10T07:35:58.510818+0000 mon.a (mon.0) 2714 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in 2026-03-10T07:36:00.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:59 vm03 bash[23382]: cluster 2026-03-10T07:35:58.510818+0000 mon.a (mon.0) 2714 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in 2026-03-10T07:36:00.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:59 vm03 bash[23382]: cluster 2026-03-10T07:35:58.685406+0000 mgr.y (mgr.24407) 422 : cluster [DBG] pgmap v676: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.5 KiB/s wr, 6 op/s 2026-03-10T07:36:00.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:35:59 vm03 bash[23382]: cluster 2026-03-10T07:35:58.685406+0000 mgr.y (mgr.24407) 422 : cluster [DBG] pgmap v676: 292 pgs: 292 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 
160 GiB avail; 2.5 KiB/s rd, 1.5 KiB/s wr, 6 op/s 2026-03-10T07:36:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:00 vm00 bash[28005]: cluster 2026-03-10T07:35:59.505131+0000 mon.a (mon.0) 2715 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-10T07:36:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:00 vm00 bash[28005]: cluster 2026-03-10T07:35:59.505131+0000 mon.a (mon.0) 2715 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-10T07:36:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:00 vm00 bash[28005]: audit 2026-03-10T07:35:59.550094+0000 mon.b (mon.1) 468 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:00 vm00 bash[28005]: audit 2026-03-10T07:35:59.550094+0000 mon.b (mon.1) 468 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:00 vm00 bash[28005]: audit 2026-03-10T07:35:59.550861+0000 mon.b (mon.1) 469 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-83"}]: dispatch 2026-03-10T07:36:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:00 vm00 bash[28005]: audit 2026-03-10T07:35:59.550861+0000 mon.b (mon.1) 469 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-83"}]: dispatch 2026-03-10T07:36:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:00 vm00 bash[28005]: audit 2026-03-10T07:35:59.551768+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:00 vm00 bash[28005]: audit 2026-03-10T07:35:59.551768+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:00 vm00 bash[28005]: audit 2026-03-10T07:35:59.552384+0000 mon.a (mon.0) 2717 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-83"}]: dispatch 2026-03-10T07:36:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:00 vm00 bash[28005]: audit 2026-03-10T07:35:59.552384+0000 mon.a (mon.0) 2717 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-83"}]: dispatch 2026-03-10T07:36:00.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:00 vm00 bash[20701]: cluster 2026-03-10T07:35:59.505131+0000 mon.a (mon.0) 2715 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-10T07:36:00.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:00 vm00 bash[20701]: cluster 2026-03-10T07:35:59.505131+0000 mon.a (mon.0) 2715 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-10T07:36:00.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:00 vm00 bash[20701]: audit 2026-03-10T07:35:59.550094+0000 mon.b (mon.1) 468 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:00.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:00 vm00 bash[20701]: audit 2026-03-10T07:35:59.550094+0000 mon.b (mon.1) 468 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:00.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:00 vm00 bash[20701]: audit 2026-03-10T07:35:59.550861+0000 mon.b (mon.1) 469 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-83"}]: dispatch 2026-03-10T07:36:00.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:00 vm00 bash[20701]: audit 2026-03-10T07:35:59.550861+0000 mon.b (mon.1) 469 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-83"}]: dispatch 2026-03-10T07:36:00.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:00 vm00 bash[20701]: audit 2026-03-10T07:35:59.551768+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:00.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:00 vm00 bash[20701]: audit 2026-03-10T07:35:59.551768+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:00.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:00 vm00 bash[20701]: audit 2026-03-10T07:35:59.552384+0000 mon.a (mon.0) 2717 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-83"}]: dispatch 2026-03-10T07:36:00.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:00 vm00 bash[20701]: audit 2026-03-10T07:35:59.552384+0000 mon.a (mon.0) 2717 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-83"}]: dispatch 2026-03-10T07:36:01.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:00 vm03 bash[23382]: cluster 2026-03-10T07:35:59.505131+0000 mon.a (mon.0) 2715 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-10T07:36:01.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:00 vm03 bash[23382]: cluster 2026-03-10T07:35:59.505131+0000 mon.a (mon.0) 2715 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-10T07:36:01.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:00 vm03 bash[23382]: audit 2026-03-10T07:35:59.550094+0000 mon.b (mon.1) 468 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:01.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:00 vm03 bash[23382]: audit 2026-03-10T07:35:59.550094+0000 mon.b (mon.1) 468 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:01.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:00 vm03 bash[23382]: audit 2026-03-10T07:35:59.550861+0000 mon.b (mon.1) 469 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-83"}]: dispatch 2026-03-10T07:36:01.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:00 vm03 bash[23382]: audit 2026-03-10T07:35:59.550861+0000 mon.b (mon.1) 469 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-83"}]: dispatch 2026-03-10T07:36:01.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:00 vm03 bash[23382]: audit 2026-03-10T07:35:59.551768+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:01.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:00 vm03 bash[23382]: audit 2026-03-10T07:35:59.551768+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:01.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:00 vm03 bash[23382]: audit 2026-03-10T07:35:59.552384+0000 mon.a (mon.0) 2717 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-83"}]: dispatch 2026-03-10T07:36:01.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:00 vm03 bash[23382]: audit 2026-03-10T07:35:59.552384+0000 mon.a (mon.0) 2717 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-83"}]: dispatch 2026-03-10T07:36:01.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:36:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:36:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:36:01.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:01 vm00 bash[28005]: cluster 2026-03-10T07:36:00.543342+0000 mon.a (mon.0) 2718 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-10T07:36:01.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:01 vm00 bash[28005]: cluster 2026-03-10T07:36:00.543342+0000 mon.a (mon.0) 2718 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-10T07:36:01.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:01 vm00 bash[28005]: cluster 2026-03-10T07:36:00.685836+0000 mgr.y (mgr.24407) 423 : cluster [DBG] pgmap v679: 260 pgs: 260 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-10T07:36:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:01 vm00 bash[28005]: cluster 2026-03-10T07:36:00.685836+0000 mgr.y (mgr.24407) 423 : cluster [DBG] pgmap v679: 260 pgs: 260 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-10T07:36:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:01 vm00 bash[28005]: cluster 2026-03-10T07:36:01.534405+0000 mon.a (mon.0) 2719 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-10T07:36:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:01 vm00 bash[28005]: cluster 2026-03-10T07:36:01.534405+0000 mon.a (mon.0) 2719 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-10T07:36:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:01 vm00 bash[28005]: audit 2026-03-10T07:36:01.535400+0000 mon.b (mon.1) 470 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:01 vm00 bash[28005]: audit 2026-03-10T07:36:01.535400+0000 mon.b (mon.1) 470 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:01 vm00 bash[28005]: audit 2026-03-10T07:36:01.537019+0000 mon.a (mon.0) 2720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:01 vm00 bash[28005]: audit 2026-03-10T07:36:01.537019+0000 mon.a (mon.0) 2720 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:01.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:01 vm00 bash[20701]: cluster 2026-03-10T07:36:00.543342+0000 mon.a (mon.0) 2718 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-10T07:36:01.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:01 vm00 bash[20701]: cluster 2026-03-10T07:36:00.543342+0000 mon.a (mon.0) 2718 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-10T07:36:01.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:01 vm00 bash[20701]: cluster 2026-03-10T07:36:00.685836+0000 mgr.y (mgr.24407) 423 : cluster [DBG] pgmap v679: 260 pgs: 260 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-10T07:36:01.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:01 vm00 bash[20701]: cluster 2026-03-10T07:36:00.685836+0000 mgr.y (mgr.24407) 423 : cluster [DBG] pgmap v679: 260 pgs: 260 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-10T07:36:01.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:01 vm00 bash[20701]: cluster 2026-03-10T07:36:01.534405+0000 mon.a (mon.0) 2719 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-10T07:36:01.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:01 vm00 bash[20701]: cluster 2026-03-10T07:36:01.534405+0000 mon.a (mon.0) 2719 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-10T07:36:01.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:01 vm00 bash[20701]: audit 2026-03-10T07:36:01.535400+0000 mon.b (mon.1) 470 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:01.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:01 vm00 bash[20701]: audit 2026-03-10T07:36:01.535400+0000 mon.b (mon.1) 470 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:01.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:01 vm00 bash[20701]: audit 2026-03-10T07:36:01.537019+0000 mon.a (mon.0) 2720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:01.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:01 vm00 bash[20701]: audit 2026-03-10T07:36:01.537019+0000 mon.a (mon.0) 2720 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:02.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:01 vm03 bash[23382]: cluster 2026-03-10T07:36:00.543342+0000 mon.a (mon.0) 2718 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-10T07:36:02.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:01 vm03 bash[23382]: cluster 2026-03-10T07:36:00.543342+0000 mon.a (mon.0) 2718 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-10T07:36:02.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:01 vm03 bash[23382]: cluster 2026-03-10T07:36:00.685836+0000 mgr.y (mgr.24407) 423 : cluster [DBG] pgmap v679: 260 pgs: 260 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-10T07:36:02.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:01 vm03 bash[23382]: cluster 2026-03-10T07:36:00.685836+0000 mgr.y (mgr.24407) 423 : cluster [DBG] pgmap v679: 260 pgs: 260 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-10T07:36:02.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:01 vm03 bash[23382]: cluster 2026-03-10T07:36:01.534405+0000 mon.a (mon.0) 2719 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-10T07:36:02.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:01 vm03 bash[23382]: cluster 2026-03-10T07:36:01.534405+0000 mon.a (mon.0) 2719 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-10T07:36:02.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:01 vm03 bash[23382]: audit 2026-03-10T07:36:01.535400+0000 mon.b (mon.1) 470 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:02.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:01 vm03 bash[23382]: audit 2026-03-10T07:36:01.535400+0000 mon.b (mon.1) 470 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:02.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:01 vm03 bash[23382]: audit 2026-03-10T07:36:01.537019+0000 mon.a (mon.0) 2720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:02.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:01 vm03 bash[23382]: audit 2026-03-10T07:36:01.537019+0000 mon.a (mon.0) 2720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:03.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:36:03 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:36:03.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:03 vm03 bash[23382]: audit 2026-03-10T07:36:02.534152+0000 mon.a (mon.0) 2721 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-85","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:36:03.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:03 vm03 bash[23382]: cluster 2026-03-10T07:36:02.539028+0000 mon.a (mon.0) 2722 : cluster [DBG] osdmap e450: 8 total, 8 up, 8 in
2026-03-10T07:36:03.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:03 vm03 bash[23382]: audit 2026-03-10T07:36:02.545268+0000 mon.b (mon.1) 471 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:36:03.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:03 vm03 bash[23382]: audit 2026-03-10T07:36:02.566992+0000 mon.b (mon.1) 472 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:36:03.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:03 vm03 bash[23382]: audit 2026-03-10T07:36:02.568589+0000 mon.a (mon.0) 2723 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:36:03.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:03 vm03 bash[23382]: cluster 2026-03-10T07:36:02.686216+0000 mgr.y (mgr.24407) 424 : cluster [DBG] pgmap v682: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s
2026-03-10T07:36:03.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:03 vm03 bash[23382]: audit 2026-03-10T07:36:03.308076+0000 mgr.y (mgr.24407) 425 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
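For readers tracing the test through the audit trail: the cmd payload of each audit record is the JSON form of a monitor command, and mutating commands are logged once at dispatch and again when the change commits ("finished"), while read-only commands ([DBG]) appear only at dispatch. As a reading aid only, the application-enable and osd-dump entries above correspond to shell commands along these lines (a sketch; the pool name is the test's generated name):

    # audit 2721: enable the 'rados' application on the test pool
    ceph osd pool application enable test-rados-api-vm00-59782-85 rados --yes-i-really-mean-it
    # audit 471: the client polling the osdmap as JSON
    ceph osd dump --format json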
2026-03-10T07:36:05.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:04 vm03 bash[23382]: audit 2026-03-10T07:36:03.542484+0000 mon.a (mon.0) 2724 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-85","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T07:36:05.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:04 vm03 bash[23382]: audit 2026-03-10T07:36:03.548150+0000 mon.b (mon.1) 473 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-85","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:36:05.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:04 vm03 bash[23382]: cluster 2026-03-10T07:36:03.562544+0000 mon.a (mon.0) 2725 : cluster [DBG] osdmap e451: 8 total, 8 up, 8 in
2026-03-10T07:36:05.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:04 vm03 bash[23382]: audit 2026-03-10T07:36:03.563128+0000 mon.a (mon.0) 2726 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-85","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:36:06.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:05 vm03 bash[23382]: audit 2026-03-10T07:36:04.548882+0000 mon.a (mon.0) 2727 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-85","var": "dedup_tier","val": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:36:06.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:05 vm03 bash[23382]: cluster 2026-03-10T07:36:04.551976+0000 mon.a (mon.0) 2728 : cluster [DBG] osdmap e452: 8 total, 8 up, 8 in
2026-03-10T07:36:06.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:05 vm03 bash[23382]: audit 2026-03-10T07:36:04.552050+0000 mon.b (mon.1) 474 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch
2026-03-10T07:36:06.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:05 vm03 bash[23382]: audit 2026-03-10T07:36:04.553941+0000 mon.a (mon.0) 2729 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch
2026-03-10T07:36:06.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:05 vm03 bash[23382]: cluster 2026-03-10T07:36:04.686793+0000 mgr.y (mgr.24407) 426 : cluster [DBG] pgmap v685: 292 pgs: 13 unknown, 279 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:36:07.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:06 vm03 bash[23382]: audit 2026-03-10T07:36:05.842082+0000 mon.a (mon.0) 2730 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T07:36:07.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:06 vm03 bash[23382]: audit 2026-03-10T07:36:05.844938+0000 mon.b (mon.1) 475 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T07:36:07.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:06 vm03 bash[23382]: cluster 2026-03-10T07:36:05.846454+0000 mon.a (mon.0) 2731 : cluster [DBG] osdmap e453: 8 total, 8 up, 8 in
2026-03-10T07:36:07.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:06 vm03 bash[23382]: audit 2026-03-10T07:36:05.849032+0000 mon.a (mon.0) 2732 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T07:36:07.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:06 vm03 bash[23382]: audit 2026-03-10T07:36:06.564333+0000 mon.c (mon.2) 311 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
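At this point the test has issued all four dedup settings against its pool. Collected in order, the equivalent CLI sequence would look roughly like this (a sketch; pool names are the test's generated ones, and the values are taken from the audit payloads above):

    POOL=test-rados-api-vm00-59782-85
    CHUNK_POOL=test-rados-api-vm00-59782-6
    ceph osd pool set "$POOL" fingerprint_algorithm sha1     # hash used to fingerprint chunks
    ceph osd pool set "$POOL" dedup_tier "$CHUNK_POOL"       # pool that stores deduplicated chunks
    ceph osd pool set "$POOL" dedup_chunk_algorithm fastcdc  # content-defined chunking
    ceph osd pool set "$POOL" dedup_cdc_chunk_size 1024      # target CDC chunk size in bytes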
2026-03-10T07:36:08.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:07 vm03 bash[23382]: cluster 2026-03-10T07:36:06.687164+0000 mgr.y (mgr.24407) 427 : cluster [DBG] pgmap v687: 292 pgs: 292 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:36:08.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:07 vm03 bash[23382]: audit 2026-03-10T07:36:06.880986+0000 mon.a (mon.0) 2733 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T07:36:08.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:07 vm03 bash[23382]: cluster 2026-03-10T07:36:06.890016+0000 mon.a (mon.0) 2734 : cluster [DBG] osdmap e454: 8 total, 8 up, 8 in
2026-03-10T07:36:08.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:07 vm03 bash[23382]: audit 2026-03-10T07:36:06.930753+0000 mon.c (mon.2) 312 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:36:08.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:07 vm03 bash[23382]: audit 2026-03-10T07:36:06.931858+0000 mon.c (mon.2) 313 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:36:08.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:07 vm03 bash[23382]: audit 2026-03-10T07:36:06.936394+0000 mon.a (mon.0) 2735 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:36:09.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:08 vm03 bash[23382]: cluster 2026-03-10T07:36:07.909361+0000 mon.a (mon.0) 2736 : cluster [DBG] osdmap e455: 8 total, 8 up, 8 in
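The mgr.y entries here ("config dump", "config generate-minimal-conf", "auth get client.admin") are not part of the test itself; they look like the cephadm mgr module's periodic refresh of the config and keyrings it distributes to managed hosts. The same information can be pulled by hand, for example:

    ceph config generate-minimal-conf   # minimal ceph.conf suitable for client hosts
    ceph auth get client.admin          # the admin keyring the mgr redistributes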
2026-03-10T07:36:10.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:09 vm03 bash[23382]: cluster 2026-03-10T07:36:08.687526+0000 mgr.y (mgr.24407) 428 : cluster [DBG] pgmap v690: 292 pgs: 292 active+clean; 8.3 MiB data, 964 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:36:10.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:09 vm03 bash[23382]: cluster 2026-03-10T07:36:08.930218+0000 mon.a (mon.0) 2737 : cluster [DBG] osdmap e456: 8 total, 8 up, 8 in
2026-03-10T07:36:10.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:09 vm03 bash[23382]: audit 2026-03-10T07:36:08.979354+0000 mon.b (mon.1) 476 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:36:10.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:09 vm03 bash[23382]: audit 2026-03-10T07:36:08.980105+0000 mon.b (mon.1) 477 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-85"}]: dispatch
2026-03-10T07:36:10.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:09 vm03 bash[23382]: audit 2026-03-10T07:36:08.981081+0000 mon.a (mon.0) 2738 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:36:10.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:09 vm03 bash[23382]: audit 2026-03-10T07:36:08.981617+0000 mon.a (mon.0) 2739 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-85"}]: dispatch
2026-03-10T07:36:10.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:09 vm03 bash[23382]: audit 2026-03-10T07:36:09.567661+0000 mon.c (mon.2) 314 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:36:10.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:09 vm03 bash[23382]: audit 2026-03-10T07:36:09.866860+0000 mon.c (mon.2) 315 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "300.17", "id": [5, 3]}]: dispatch
2026-03-10T07:36:10.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:09 vm03 bash[23382]: audit 2026-03-10T07:36:09.867292+0000 mon.a (mon.0) 2740 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "300.17", "id": [5, 3]}]: dispatch
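Two different actors appear in this stretch: the test client tearing down its cache tier, and mgr.y (presumably the balancer module) applying a pg-upmap exception. Expressed as CLI, approximately:

    # test teardown: drop the overlay, then detach the tier from its base pool
    ceph osd tier remove-overlay test-rados-api-vm00-59782-6
    ceph osd tier remove test-rados-api-vm00-59782-6 test-rados-api-vm00-59782-85
    # the mgr's upmap exception: remap pg 300.17, replacing osd.5 with osd.3
    ceph osd pg-upmap-items 300.17 5 3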
2026-03-10T07:36:11.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:10 vm03 bash[23382]: audit 2026-03-10T07:36:09.957174+0000 mon.a (mon.0) 2741 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "300.17", "id": [5, 3]}]': finished
2026-03-10T07:36:11.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:10 vm03 bash[23382]: cluster 2026-03-10T07:36:09.962482+0000 mon.a (mon.0) 2742 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in
2026-03-10T07:36:11.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:10 vm00 bash[28005]: audit 2026-03-10T07:36:09.957174+0000 mon.a (mon.0) 2741 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": 
"json", "pgid": "300.17", "id": [5, 3]}]': finished 2026-03-10T07:36:11.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:10 vm00 bash[28005]: cluster 2026-03-10T07:36:09.962482+0000 mon.a (mon.0) 2742 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-10T07:36:11.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:10 vm00 bash[28005]: cluster 2026-03-10T07:36:09.962482+0000 mon.a (mon.0) 2742 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-10T07:36:11.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:36:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:36:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:36:11.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:10 vm00 bash[20701]: audit 2026-03-10T07:36:09.957174+0000 mon.a (mon.0) 2741 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "300.17", "id": [5, 3]}]': finished 2026-03-10T07:36:11.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:10 vm00 bash[20701]: audit 2026-03-10T07:36:09.957174+0000 mon.a (mon.0) 2741 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "300.17", "id": [5, 3]}]': finished 2026-03-10T07:36:11.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:10 vm00 bash[20701]: cluster 2026-03-10T07:36:09.962482+0000 mon.a (mon.0) 2742 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-10T07:36:11.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:10 vm00 bash[20701]: cluster 2026-03-10T07:36:09.962482+0000 mon.a (mon.0) 2742 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: OK ] LibRadosTwoPoolsPP.ProxyRead (17392 ms) 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.CachePin 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.CachePin (22758 ms) 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.SetRedirectRead 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.SetRedirectRead (3003 ms) 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestPromoteRead 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestPromoteRead (3010 ms) 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRefRead 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRefRead (3174 ms) 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestUnset 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestUnset (3012 ms) 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestDedupRefRead 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T07:36:12.012 
INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestDedupRefRead (4084 ms) 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapRefcount 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapRefcount (39940 ms) 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapRefcount2 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapRefcount2 (16898 ms) 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestTestSnapCreate 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestTestSnapCreate (3189 ms) 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRedirectAfterPromote 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRedirectAfterPromote (3035 ms) 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestCheckRefcountWhenModification 2026-03-10T07:36:12.012 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestCheckRefcountWhenModification (24862 ms) 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapIncCount 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapIncCount (15104 ms) 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestEvict 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestEvict (5040 ms) 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestEvictPromote 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestEvictPromote (4204 ms) 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapSizeMismatch 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: waiting for scrubs... 
2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: done waiting 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapSizeMismatch (24628 ms) 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.DedupFlushRead 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.DedupFlushRead (10285 ms) 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestFlushSnap 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestFlushSnap (9095 ms) 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestFlushDupCount 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestFlushDupCount (9422 ms) 2026-03-10T07:36:12.013 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TierFlushDuringFlush 2026-03-10T07:36:12.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:11 vm03 bash[23382]: cluster 2026-03-10T07:36:10.687960+0000 mgr.y (mgr.24407) 429 : cluster [DBG] pgmap v693: 260 pgs: 260 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T07:36:12.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:12 vm03 bash[23382]: cluster 2026-03-10T07:36:10.687960+0000 mgr.y (mgr.24407) 429 : cluster [DBG] pgmap v693: 260 pgs: 260 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T07:36:12.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:12 vm03 bash[23382]: cluster 2026-03-10T07:36:10.988758+0000 mon.a (mon.0) 2743 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-10T07:36:12.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:12 vm03 bash[23382]: cluster 2026-03-10T07:36:10.988758+0000 mon.a (mon.0) 2743 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-10T07:36:12.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:12 vm03 bash[23382]: audit 2026-03-10T07:36:10.991730+0000 mon.b (mon.1) 478 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:12.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:12 vm03 bash[23382]: audit 2026-03-10T07:36:10.991730+0000 mon.b (mon.1) 478 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:12.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:12 vm03 bash[23382]: audit 2026-03-10T07:36:10.994099+0000 mon.a (mon.0) 2744 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:12.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:12 vm03 bash[23382]: audit 2026-03-10T07:36:10.994099+0000 mon.a (mon.0) 2744 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:12.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:12 vm00 bash[28005]: cluster 2026-03-10T07:36:10.687960+0000 mgr.y (mgr.24407) 429 : cluster [DBG] pgmap v693: 260 pgs: 260 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T07:36:12.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:12 vm00 bash[28005]: cluster 2026-03-10T07:36:10.687960+0000 mgr.y (mgr.24407) 429 : cluster [DBG] pgmap v693: 260 pgs: 260 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T07:36:12.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:12 vm00 bash[28005]: cluster 2026-03-10T07:36:10.988758+0000 mon.a (mon.0) 2743 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-10T07:36:12.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:12 vm00 bash[28005]: cluster 2026-03-10T07:36:10.988758+0000 mon.a (mon.0) 2743 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-10T07:36:12.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:12 vm00 bash[28005]: audit 2026-03-10T07:36:10.991730+0000 mon.b (mon.1) 478 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:12.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:12 vm00 bash[28005]: audit 2026-03-10T07:36:10.991730+0000 mon.b (mon.1) 478 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:12.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:12 vm00 bash[28005]: audit 2026-03-10T07:36:10.994099+0000 mon.a (mon.0) 2744 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:12.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:12 vm00 bash[28005]: audit 2026-03-10T07:36:10.994099+0000 mon.a (mon.0) 2744 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:12.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:11 vm00 bash[20701]: cluster 2026-03-10T07:36:10.687960+0000 mgr.y (mgr.24407) 429 : cluster [DBG] pgmap v693: 260 pgs: 260 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T07:36:12.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:12 vm00 bash[20701]: cluster 2026-03-10T07:36:10.687960+0000 mgr.y (mgr.24407) 429 : cluster [DBG] pgmap v693: 260 pgs: 260 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T07:36:12.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:12 vm00 bash[20701]: cluster 2026-03-10T07:36:10.988758+0000 mon.a (mon.0) 2743 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-10T07:36:12.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:12 vm00 bash[20701]: cluster 2026-03-10T07:36:10.988758+0000 mon.a (mon.0) 2743 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-10T07:36:12.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:12 vm00 bash[20701]: audit 2026-03-10T07:36:10.991730+0000 mon.b (mon.1) 478 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:12.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:12 vm00 bash[20701]: audit 2026-03-10T07:36:10.991730+0000 mon.b (mon.1) 478 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:12.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:12 vm00 bash[20701]: audit 2026-03-10T07:36:10.994099+0000 mon.a (mon.0) 2744 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:12.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:12 vm00 bash[20701]: audit 2026-03-10T07:36:10.994099+0000 mon.a (mon.0) 2744 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:13.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:13 vm03 bash[23382]: audit 2026-03-10T07:36:11.972338+0000 mon.a (mon.0) 2745 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:13.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:13 vm03 bash[23382]: audit 2026-03-10T07:36:11.972338+0000 mon.a (mon.0) 2745 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:13.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:13 vm03 bash[23382]: cluster 2026-03-10T07:36:11.982312+0000 mon.a (mon.0) 2746 : cluster [DBG] osdmap e459: 8 total, 8 up, 8 in 2026-03-10T07:36:13.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:13 vm03 bash[23382]: cluster 2026-03-10T07:36:11.982312+0000 mon.a (mon.0) 2746 : cluster [DBG] osdmap e459: 8 total, 8 up, 8 in 2026-03-10T07:36:13.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:13 vm03 bash[23382]: audit 2026-03-10T07:36:12.011635+0000 mon.b (mon.1) 479 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:13.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:13 vm03 bash[23382]: audit 2026-03-10T07:36:12.011635+0000 mon.b (mon.1) 479 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:13.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:13 vm00 bash[28005]: audit 2026-03-10T07:36:11.972338+0000 mon.a (mon.0) 2745 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:13.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:13 vm00 bash[28005]: audit 2026-03-10T07:36:11.972338+0000 mon.a (mon.0) 2745 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:13.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:13 vm00 bash[28005]: cluster 2026-03-10T07:36:11.982312+0000 mon.a (mon.0) 2746 : cluster [DBG] osdmap e459: 8 total, 8 up, 8 in 2026-03-10T07:36:13.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:13 vm00 bash[28005]: cluster 2026-03-10T07:36:11.982312+0000 mon.a (mon.0) 2746 : cluster [DBG] osdmap e459: 8 total, 8 up, 8 in 2026-03-10T07:36:13.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:13 vm00 bash[28005]: audit 2026-03-10T07:36:12.011635+0000 mon.b (mon.1) 479 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:13.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:13 vm00 bash[28005]: audit 2026-03-10T07:36:12.011635+0000 mon.b (mon.1) 479 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:13.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:13 vm00 bash[20701]: audit 2026-03-10T07:36:11.972338+0000 mon.a (mon.0) 2745 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:13.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:13 vm00 bash[20701]: audit 2026-03-10T07:36:11.972338+0000 mon.a (mon.0) 2745 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:13.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:13 vm00 bash[20701]: cluster 2026-03-10T07:36:11.982312+0000 mon.a (mon.0) 2746 : cluster [DBG] osdmap e459: 8 total, 8 up, 8 in 2026-03-10T07:36:13.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:13 vm00 bash[20701]: cluster 2026-03-10T07:36:11.982312+0000 mon.a (mon.0) 2746 : cluster [DBG] osdmap e459: 8 total, 8 up, 8 in 2026-03-10T07:36:13.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:13 vm00 bash[20701]: audit 2026-03-10T07:36:12.011635+0000 mon.b (mon.1) 479 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:13.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:13 vm00 bash[20701]: audit 2026-03-10T07:36:12.011635+0000 mon.b (mon.1) 479 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:13.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:36:13 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:36:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:14 vm00 bash[28005]: cluster 2026-03-10T07:36:12.688332+0000 mgr.y (mgr.24407) 430 : cluster [DBG] pgmap v696: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T07:36:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:14 vm00 bash[28005]: cluster 2026-03-10T07:36:12.688332+0000 mgr.y (mgr.24407) 430 : cluster [DBG] pgmap v696: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T07:36:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:14 vm00 bash[28005]: cluster 2026-03-10T07:36:12.986327+0000 mon.a (mon.0) 2747 : cluster [DBG] osdmap e460: 8 total, 8 up, 8 in 2026-03-10T07:36:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:14 vm00 bash[28005]: cluster 2026-03-10T07:36:12.986327+0000 mon.a (mon.0) 2747 : cluster [DBG] osdmap e460: 8 total, 8 up, 8 in 2026-03-10T07:36:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:14 vm00 bash[28005]: audit 2026-03-10T07:36:13.010733+0000 mon.b (mon.1) 480 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:36:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:14 vm00 bash[28005]: audit 2026-03-10T07:36:13.010733+0000 mon.b (mon.1) 480 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:36:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:14 vm00 bash[28005]: audit 2026-03-10T07:36:13.022293+0000 mon.a (mon.0) 2748 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:36:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:14 vm00 bash[28005]: audit 2026-03-10T07:36:13.022293+0000 mon.a (mon.0) 2748 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:36:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:14 vm00 bash[28005]: audit 2026-03-10T07:36:13.318985+0000 mgr.y (mgr.24407) 431 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:14 vm00 bash[28005]: audit 2026-03-10T07:36:13.318985+0000 mgr.y (mgr.24407) 431 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:14.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:14 vm00 bash[20701]: cluster 2026-03-10T07:36:12.688332+0000 mgr.y (mgr.24407) 430 : cluster [DBG] pgmap v696: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T07:36:14.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:14 vm00 bash[20701]: cluster 2026-03-10T07:36:12.688332+0000 mgr.y (mgr.24407) 430 : cluster [DBG] pgmap v696: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T07:36:14.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:14 vm00 bash[20701]: cluster 2026-03-10T07:36:12.986327+0000 mon.a (mon.0) 2747 : cluster [DBG] osdmap e460: 8 total, 8 up, 8 in 2026-03-10T07:36:14.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:14 vm00 bash[20701]: cluster 2026-03-10T07:36:12.986327+0000 mon.a (mon.0) 2747 : cluster [DBG] osdmap e460: 8 total, 8 up, 8 in 2026-03-10T07:36:14.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:14 vm00 bash[20701]: audit 2026-03-10T07:36:13.010733+0000 mon.b (mon.1) 480 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:36:14.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:14 vm00 bash[20701]: audit 2026-03-10T07:36:13.010733+0000 mon.b (mon.1) 480 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:36:14.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:14 vm00 bash[20701]: audit 2026-03-10T07:36:13.022293+0000 mon.a (mon.0) 2748 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:36:14.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:14 vm00 bash[20701]: audit 2026-03-10T07:36:13.022293+0000 mon.a (mon.0) 2748 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:36:14.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:14 vm00 bash[20701]: audit 2026-03-10T07:36:13.318985+0000 mgr.y (mgr.24407) 431 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:14.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:14 vm00 bash[20701]: audit 2026-03-10T07:36:13.318985+0000 mgr.y (mgr.24407) 431 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:14.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:14 vm03 bash[23382]: cluster 2026-03-10T07:36:12.688332+0000 mgr.y (mgr.24407) 430 : cluster [DBG] pgmap v696: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T07:36:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:14 vm03 bash[23382]: cluster 2026-03-10T07:36:12.688332+0000 mgr.y (mgr.24407) 430 : cluster [DBG] pgmap v696: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T07:36:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:14 vm03 bash[23382]: cluster 2026-03-10T07:36:12.986327+0000 mon.a (mon.0) 2747 : cluster [DBG] osdmap e460: 8 total, 8 up, 8 in 2026-03-10T07:36:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:14 vm03 bash[23382]: cluster 2026-03-10T07:36:12.986327+0000 mon.a (mon.0) 2747 : cluster [DBG] osdmap e460: 8 total, 8 up, 8 in 2026-03-10T07:36:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:14 vm03 bash[23382]: audit 2026-03-10T07:36:13.010733+0000 mon.b (mon.1) 480 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:36:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:14 vm03 bash[23382]: audit 2026-03-10T07:36:13.010733+0000 mon.b (mon.1) 480 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:36:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:14 vm03 bash[23382]: audit 2026-03-10T07:36:13.022293+0000 mon.a (mon.0) 2748 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:36:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:14 vm03 bash[23382]: audit 2026-03-10T07:36:13.022293+0000 mon.a (mon.0) 2748 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T07:36:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:14 vm03 bash[23382]: audit 2026-03-10T07:36:13.318985+0000 mgr.y (mgr.24407) 431 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:14 vm03 bash[23382]: audit 2026-03-10T07:36:13.318985+0000 mgr.y (mgr.24407) 431 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:15.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:15 vm00 bash[28005]: audit 2026-03-10T07:36:13.980400+0000 mon.a (mon.0) 2749 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:36:15.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:15 vm00 bash[28005]: audit 2026-03-10T07:36:13.980400+0000 mon.a (mon.0) 2749 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:36:15.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:15 vm00 bash[28005]: cluster 2026-03-10T07:36:13.987007+0000 mon.a (mon.0) 2750 : cluster [DBG] osdmap e461: 8 total, 8 up, 8 in 2026-03-10T07:36:15.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:15 vm00 bash[28005]: cluster 2026-03-10T07:36:13.987007+0000 mon.a (mon.0) 2750 : cluster [DBG] osdmap e461: 8 total, 8 up, 8 in 2026-03-10T07:36:15.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:15 vm00 bash[28005]: audit 2026-03-10T07:36:14.009131+0000 mon.b (mon.1) 481 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_tier","val": "test-rados-api-vm00-59782-89-test-flush"}]: dispatch 2026-03-10T07:36:15.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:15 vm00 bash[28005]: audit 2026-03-10T07:36:14.009131+0000 mon.b (mon.1) 481 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_tier","val": "test-rados-api-vm00-59782-89-test-flush"}]: dispatch 2026-03-10T07:36:15.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:15 vm00 bash[28005]: audit 2026-03-10T07:36:14.018044+0000 mon.a (mon.0) 2751 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_tier","val": "test-rados-api-vm00-59782-89-test-flush"}]: dispatch 2026-03-10T07:36:15.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:15 vm00 bash[28005]: audit 2026-03-10T07:36:14.018044+0000 mon.a (mon.0) 2751 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_tier","val": "test-rados-api-vm00-59782-89-test-flush"}]: dispatch 2026-03-10T07:36:15.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:15 vm00 bash[28005]: audit 2026-03-10T07:36:14.984398+0000 mon.a (mon.0) 2752 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_tier","val": "test-rados-api-vm00-59782-89-test-flush"}]': finished 2026-03-10T07:36:15.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:15 vm00 bash[28005]: audit 2026-03-10T07:36:14.984398+0000 mon.a (mon.0) 2752 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_tier","val": "test-rados-api-vm00-59782-89-test-flush"}]': finished 2026-03-10T07:36:15.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:15 vm00 bash[28005]: cluster 2026-03-10T07:36:14.990609+0000 mon.a (mon.0) 2753 : cluster [DBG] osdmap e462: 8 total, 8 up, 8 in 2026-03-10T07:36:15.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:15 vm00 bash[28005]: cluster 2026-03-10T07:36:14.990609+0000 mon.a (mon.0) 2753 : cluster [DBG] osdmap e462: 8 total, 8 up, 8 in 2026-03-10T07:36:15.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:15 vm00 bash[28005]: audit 2026-03-10T07:36:14.992782+0000 mon.b (mon.1) 482 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:36:15.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:15 vm00 bash[28005]: audit 2026-03-10T07:36:14.992782+0000 mon.b (mon.1) 482 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:36:15.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:15 vm00 bash[28005]: audit 2026-03-10T07:36:14.994491+0000 mon.a (mon.0) 2754 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:36:15.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:15 vm00 bash[28005]: audit 2026-03-10T07:36:14.994491+0000 mon.a (mon.0) 2754 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:36:15.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:15 vm00 bash[20701]: audit 2026-03-10T07:36:13.980400+0000 mon.a (mon.0) 2749 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:36:15.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:15 vm00 bash[20701]: audit 2026-03-10T07:36:13.980400+0000 mon.a (mon.0) 2749 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:36:15.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:15 vm00 bash[20701]: cluster 2026-03-10T07:36:13.987007+0000 mon.a (mon.0) 2750 : cluster [DBG] osdmap e461: 8 total, 8 up, 8 in 2026-03-10T07:36:15.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:15 vm00 bash[20701]: cluster 2026-03-10T07:36:13.987007+0000 mon.a (mon.0) 2750 : cluster [DBG] osdmap e461: 8 total, 8 up, 8 in 2026-03-10T07:36:15.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:15 vm00 bash[20701]: audit 2026-03-10T07:36:14.009131+0000 mon.b (mon.1) 481 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_tier","val": "test-rados-api-vm00-59782-89-test-flush"}]: dispatch 2026-03-10T07:36:15.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:15 vm00 bash[20701]: audit 2026-03-10T07:36:14.009131+0000 mon.b (mon.1) 481 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_tier","val": "test-rados-api-vm00-59782-89-test-flush"}]: dispatch 2026-03-10T07:36:15.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:15 vm00 bash[20701]: audit 2026-03-10T07:36:14.018044+0000 mon.a (mon.0) 2751 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_tier","val": "test-rados-api-vm00-59782-89-test-flush"}]: dispatch 2026-03-10T07:36:15.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:15 vm00 bash[20701]: audit 2026-03-10T07:36:14.018044+0000 mon.a (mon.0) 2751 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_tier","val": "test-rados-api-vm00-59782-89-test-flush"}]: dispatch 2026-03-10T07:36:15.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:15 vm00 bash[20701]: audit 2026-03-10T07:36:14.984398+0000 mon.a (mon.0) 2752 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_tier","val": "test-rados-api-vm00-59782-89-test-flush"}]': finished 2026-03-10T07:36:15.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:15 vm00 bash[20701]: audit 2026-03-10T07:36:14.984398+0000 mon.a (mon.0) 2752 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_tier","val": "test-rados-api-vm00-59782-89-test-flush"}]': finished 2026-03-10T07:36:15.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:15 vm00 bash[20701]: cluster 2026-03-10T07:36:14.990609+0000 mon.a (mon.0) 2753 : cluster [DBG] osdmap e462: 8 total, 8 up, 8 in 2026-03-10T07:36:15.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:15 vm00 bash[20701]: cluster 2026-03-10T07:36:14.990609+0000 mon.a (mon.0) 2753 : cluster [DBG] osdmap e462: 8 total, 8 up, 8 in 2026-03-10T07:36:15.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:15 vm00 bash[20701]: audit 2026-03-10T07:36:14.992782+0000 mon.b (mon.1) 482 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:36:15.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:15 vm00 bash[20701]: audit 2026-03-10T07:36:14.992782+0000 mon.b (mon.1) 482 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:36:15.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:15 vm00 bash[20701]: audit 2026-03-10T07:36:14.994491+0000 mon.a (mon.0) 2754 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:36:15.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:15 vm00 bash[20701]: audit 2026-03-10T07:36:14.994491+0000 mon.a (mon.0) 2754 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:36:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:15 vm03 bash[23382]: audit 2026-03-10T07:36:13.980400+0000 mon.a (mon.0) 2749 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:36:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:15 vm03 bash[23382]: audit 2026-03-10T07:36:13.980400+0000 mon.a (mon.0) 2749 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:36:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:15 vm03 bash[23382]: cluster 2026-03-10T07:36:13.987007+0000 mon.a (mon.0) 2750 : cluster [DBG] osdmap e461: 8 total, 8 up, 8 in 2026-03-10T07:36:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:15 vm03 bash[23382]: cluster 2026-03-10T07:36:13.987007+0000 mon.a (mon.0) 2750 : cluster [DBG] osdmap e461: 8 total, 8 up, 8 in 2026-03-10T07:36:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:15 vm03 bash[23382]: audit 2026-03-10T07:36:14.009131+0000 mon.b (mon.1) 481 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_tier","val": "test-rados-api-vm00-59782-89-test-flush"}]: dispatch 2026-03-10T07:36:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:15 vm03 bash[23382]: audit 2026-03-10T07:36:14.009131+0000 mon.b (mon.1) 481 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_tier","val": "test-rados-api-vm00-59782-89-test-flush"}]: dispatch 2026-03-10T07:36:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:15 vm03 bash[23382]: audit 2026-03-10T07:36:14.018044+0000 mon.a (mon.0) 2751 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_tier","val": "test-rados-api-vm00-59782-89-test-flush"}]: dispatch 2026-03-10T07:36:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:15 vm03 bash[23382]: audit 2026-03-10T07:36:14.018044+0000 mon.a (mon.0) 2751 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_tier","val": "test-rados-api-vm00-59782-89-test-flush"}]: dispatch 2026-03-10T07:36:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:15 vm03 bash[23382]: audit 2026-03-10T07:36:14.984398+0000 mon.a (mon.0) 2752 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_tier","val": "test-rados-api-vm00-59782-89-test-flush"}]': finished 2026-03-10T07:36:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:15 vm03 bash[23382]: audit 2026-03-10T07:36:14.984398+0000 mon.a (mon.0) 2752 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_tier","val": "test-rados-api-vm00-59782-89-test-flush"}]': finished 2026-03-10T07:36:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:15 vm03 bash[23382]: cluster 2026-03-10T07:36:14.990609+0000 mon.a (mon.0) 2753 : cluster [DBG] osdmap e462: 8 total, 8 up, 8 in 2026-03-10T07:36:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:15 vm03 bash[23382]: cluster 2026-03-10T07:36:14.990609+0000 mon.a (mon.0) 2753 : cluster [DBG] osdmap e462: 8 total, 8 up, 8 in 2026-03-10T07:36:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:15 vm03 bash[23382]: audit 2026-03-10T07:36:14.992782+0000 mon.b (mon.1) 482 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:36:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:15 vm03 bash[23382]: audit 2026-03-10T07:36:14.992782+0000 mon.b (mon.1) 482 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:36:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:15 vm03 bash[23382]: audit 2026-03-10T07:36:14.994491+0000 mon.a (mon.0) 2754 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:36:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:15 vm03 bash[23382]: audit 2026-03-10T07:36:14.994491+0000 mon.a (mon.0) 2754 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:36:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:16 vm03 bash[23382]: cluster 2026-03-10T07:36:14.689024+0000 mgr.y (mgr.24407) 432 : cluster [DBG] pgmap v699: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:36:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:16 vm03 bash[23382]: cluster 2026-03-10T07:36:14.689024+0000 mgr.y (mgr.24407) 432 : cluster [DBG] pgmap v699: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:36:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:16 vm03 bash[23382]: cluster 2026-03-10T07:36:15.019389+0000 mon.a (mon.0) 2755 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:36:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:16 vm03 bash[23382]: cluster 2026-03-10T07:36:15.019389+0000 mon.a (mon.0) 2755 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:36:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:16 vm00 bash[28005]: cluster 2026-03-10T07:36:14.689024+0000 mgr.y (mgr.24407) 432 : cluster [DBG] pgmap v699: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:36:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:16 vm00 bash[28005]: cluster 2026-03-10T07:36:14.689024+0000 mgr.y (mgr.24407) 432 : cluster [DBG] pgmap v699: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:36:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:16 vm00 bash[28005]: cluster 2026-03-10T07:36:15.019389+0000 mon.a (mon.0) 2755 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:36:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:16 vm00 bash[28005]: cluster 2026-03-10T07:36:15.019389+0000 mon.a (mon.0) 2755 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:36:16.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:16 vm00 bash[20701]: cluster 2026-03-10T07:36:14.689024+0000 mgr.y (mgr.24407) 432 : cluster [DBG] pgmap v699: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:36:16.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:16 vm00 bash[20701]: cluster 2026-03-10T07:36:14.689024+0000 mgr.y (mgr.24407) 432 : cluster [DBG] pgmap v699: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:36:16.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:16 vm00 bash[20701]: cluster 2026-03-10T07:36:15.019389+0000 mon.a (mon.0) 2755 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:36:16.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:16 vm00 bash[20701]: cluster 2026-03-10T07:36:15.019389+0000 mon.a (mon.0) 2755 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:36:17.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:17 vm03 bash[23382]: audit 2026-03-10T07:36:16.131988+0000 mon.a (mon.0) 2756 : audit [INF] 
from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T07:36:17.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:17 vm03 bash[23382]: audit 2026-03-10T07:36:16.138863+0000 mon.b (mon.1) 483 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T07:36:17.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:17 vm03 bash[23382]: cluster 2026-03-10T07:36:16.142418+0000 mon.a (mon.0) 2757 : cluster [DBG] osdmap e463: 8 total, 8 up, 8 in
2026-03-10T07:36:17.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:17 vm03 bash[23382]: audit 2026-03-10T07:36:16.144655+0000 mon.a (mon.0) 2758 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T07:36:17.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:17 vm03 bash[23382]: audit 2026-03-10T07:36:17.135302+0000 mon.a (mon.0) 2759 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T07:36:17.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:17 vm03 bash[23382]: cluster 2026-03-10T07:36:17.138468+0000 mon.a (mon.0) 2760 : cluster [DBG] osdmap e464: 8 total, 8 up, 8 in
2026-03-10T07:36:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:17 vm00 bash[28005]: audit 2026-03-10T07:36:16.131988+0000 mon.a (mon.0) 2756 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T07:36:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:17 vm00 bash[28005]: audit 2026-03-10T07:36:16.138863+0000 mon.b (mon.1) 483 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T07:36:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:17 vm00 bash[28005]: cluster 2026-03-10T07:36:16.142418+0000 mon.a (mon.0) 2757 : cluster [DBG] osdmap e463: 8 total, 8 up, 8 in
2026-03-10T07:36:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:17 vm00 bash[28005]: audit 2026-03-10T07:36:16.144655+0000 mon.a (mon.0) 2758 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T07:36:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:17 vm00 bash[28005]: audit 2026-03-10T07:36:17.135302+0000 mon.a (mon.0) 2759 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T07:36:17.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:17 vm00 bash[28005]: cluster 2026-03-10T07:36:17.138468+0000 mon.a (mon.0) 2760 : cluster [DBG] osdmap e464: 8 total, 8 up, 8 in
2026-03-10T07:36:17.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:17 vm00 bash[20701]: audit 2026-03-10T07:36:16.131988+0000 mon.a (mon.0) 2756 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T07:36:17.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:17 vm00 bash[20701]: audit 2026-03-10T07:36:16.138863+0000 mon.b (mon.1) 483 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T07:36:17.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:17 vm00 bash[20701]: cluster 2026-03-10T07:36:16.142418+0000 mon.a (mon.0) 2757 : cluster [DBG] osdmap e463: 8 total, 8 up, 8 in
2026-03-10T07:36:17.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:17 vm00 bash[20701]: audit 2026-03-10T07:36:16.144655+0000 mon.a (mon.0) 2758 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T07:36:17.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:17 vm00 bash[20701]: audit 2026-03-10T07:36:17.135302+0000 mon.a (mon.0) 2759 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T07:36:17.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:17 vm00 bash[20701]: cluster 2026-03-10T07:36:17.138468+0000 mon.a (mon.0) 2760 : cluster [DBG] osdmap e464: 8 total, 8 up, 8 in
2026-03-10T07:36:18.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:18 vm03 bash[23382]: cluster 2026-03-10T07:36:16.689446+0000 mgr.y (mgr.24407) 433 : cluster [DBG] pgmap v702: 324 pgs: 324 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:36:18.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:18 vm00 bash[28005]: cluster 2026-03-10T07:36:16.689446+0000 mgr.y (mgr.24407) 433 : cluster [DBG] pgmap v702: 324 pgs: 324 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:36:18.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:18 vm00 bash[20701]: cluster 2026-03-10T07:36:16.689446+0000 mgr.y (mgr.24407) 433 : cluster [DBG] pgmap v702: 324 pgs: 324 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:36:19.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:19 vm03 bash[23382]: cluster 2026-03-10T07:36:18.171084+0000 mon.a (mon.0) 2761 : cluster [DBG] osdmap e465: 8 total, 8 up, 8 in
2026-03-10T07:36:19.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:19 vm03 bash[23382]: audit 2026-03-10T07:36:18.233938+0000 mon.b (mon.1) 484 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:36:19.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:19 vm03 bash[23382]: audit 2026-03-10T07:36:18.235040+0000 mon.b (mon.1) 485 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-87"}]: dispatch
2026-03-10T07:36:19.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:19 vm03 bash[23382]: audit 2026-03-10T07:36:18.235763+0000 mon.a (mon.0) 2762 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:36:19.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:19 vm03 bash[23382]: audit 2026-03-10T07:36:18.236709+0000 mon.a (mon.0) 2763 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-87"}]: dispatch
2026-03-10T07:36:19.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:19 vm00 bash[28005]: cluster 2026-03-10T07:36:18.171084+0000 mon.a (mon.0) 2761 : cluster [DBG] osdmap e465: 8 total, 8 up, 8 in
2026-03-10T07:36:19.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:19 vm00 bash[28005]: audit 2026-03-10T07:36:18.233938+0000 mon.b (mon.1) 484 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:36:19.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:19 vm00 bash[28005]: audit 2026-03-10T07:36:18.235040+0000 mon.b (mon.1) 485 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-87"}]: dispatch
2026-03-10T07:36:19.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:19 vm00 bash[28005]: audit 2026-03-10T07:36:18.235763+0000 mon.a (mon.0) 2762 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:36:19.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:19 vm00 bash[28005]: audit 2026-03-10T07:36:18.236709+0000 mon.a (mon.0) 2763 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-87"}]: dispatch
2026-03-10T07:36:19.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:19 vm00 bash[20701]: cluster 2026-03-10T07:36:18.171084+0000 mon.a (mon.0) 2761 : cluster [DBG] osdmap e465: 8 total, 8 up, 8 in
2026-03-10T07:36:19.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:19 vm00 bash[20701]: audit 2026-03-10T07:36:18.233938+0000 mon.b (mon.1) 484 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:36:19.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:19 vm00 bash[20701]: audit 2026-03-10T07:36:18.235040+0000 mon.b (mon.1) 485 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-87"}]: dispatch
2026-03-10T07:36:19.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:19 vm00 bash[20701]: audit 2026-03-10T07:36:18.235763+0000 mon.a (mon.0) 2762 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:36:19.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:19 vm00 bash[20701]: audit 2026-03-10T07:36:18.236709+0000 mon.a (mon.0) 2763 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-87"}]: dispatch
2026-03-10T07:36:20.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:20 vm03 bash[23382]: cluster 2026-03-10T07:36:18.689855+0000 mgr.y (mgr.24407) 434 : cluster [DBG] pgmap v705: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:36:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:20 vm03 bash[23382]: cluster 2026-03-10T07:36:19.221932+0000 mon.a (mon.0) 2764 : cluster [DBG] osdmap e466: 8 total, 8 up, 8 in
2026-03-10T07:36:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:20 vm00 bash[28005]: cluster 2026-03-10T07:36:18.689855+0000 mgr.y (mgr.24407) 434 : cluster [DBG] pgmap v705: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:36:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:20 vm00 bash[28005]: cluster 2026-03-10T07:36:19.221932+0000 mon.a (mon.0) 2764 : cluster [DBG] osdmap e466: 8 total, 8 up, 8 in
2026-03-10T07:36:20.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:20 vm00 bash[20701]: cluster 2026-03-10T07:36:18.689855+0000 mgr.y (mgr.24407) 434 : cluster [DBG] pgmap v705: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:36:20.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:20 vm00 bash[20701]: cluster 2026-03-10T07:36:19.221932+0000 mon.a (mon.0) 2764 : cluster [DBG] osdmap e466: 8 total, 8 up, 8 in
2026-03-10T07:36:21.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:21 vm00 bash[28005]: cluster 2026-03-10T07:36:20.201986+0000 mon.a (mon.0) 2765 : cluster [DBG] osdmap e467: 8 total, 8 up, 8 in
2026-03-10T07:36:21.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:21 vm00 bash[28005]: audit 2026-03-10T07:36:20.206033+0000 mon.b (mon.1) 486 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-90","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:36:21.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:21 vm00 bash[28005]: audit 2026-03-10T07:36:20.208438+0000 mon.a (mon.0) 2766 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-90","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:36:21.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:21 vm00 bash[28005]: audit 2026-03-10T07:36:21.202399+0000 mon.a (mon.0) 2767 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-90","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:36:21.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:21 vm00 bash[28005]: cluster 2026-03-10T07:36:21.208271+0000 mon.a (mon.0) 2768 : cluster [DBG] osdmap e468: 8 total, 8 up, 8 in
2026-03-10T07:36:21.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:36:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:36:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:36:21.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:21 vm00 bash[20701]: cluster 2026-03-10T07:36:20.201986+0000 mon.a (mon.0) 2765 : cluster [DBG] osdmap e467: 8 total, 8 up, 8 in
2026-03-10T07:36:21.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:21 vm00 bash[20701]: audit 2026-03-10T07:36:20.206033+0000 mon.b (mon.1) 486 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-90","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:36:21.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:21 vm00 bash[20701]: audit 2026-03-10T07:36:20.208438+0000 mon.a (mon.0) 2766 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-90","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:36:21.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:21 vm00 bash[20701]: audit 2026-03-10T07:36:21.202399+0000 mon.a (mon.0) 2767 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-90","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:36:21.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:21 vm00 bash[20701]: cluster 2026-03-10T07:36:21.208271+0000 mon.a (mon.0) 2768 : cluster [DBG] osdmap e468: 8 total, 8 up, 8 in
2026-03-10T07:36:21.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:21 vm03 bash[23382]: cluster 2026-03-10T07:36:20.201986+0000 mon.a (mon.0) 2765 : cluster [DBG] osdmap e467: 8 total, 8 up, 8 in
2026-03-10T07:36:21.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:21 vm03 bash[23382]: audit 2026-03-10T07:36:20.206033+0000 mon.b (mon.1) 486 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-90","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:36:21.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:21 vm03 bash[23382]: audit 2026-03-10T07:36:20.208438+0000 mon.a (mon.0) 2766 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-90","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:36:21.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:21 vm03 bash[23382]: audit 2026-03-10T07:36:21.202399+0000 mon.a (mon.0) 2767 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-90","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:36:21.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:21 vm03 bash[23382]: cluster 2026-03-10T07:36:21.208271+0000 mon.a (mon.0) 2768 : cluster [DBG] osdmap e468: 8 total, 8 up, 8 in
2026-03-10T07:36:22.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:22 vm03 bash[23382]: cluster 2026-03-10T07:36:20.690189+0000 mgr.y (mgr.24407) 435 : cluster [DBG] pgmap v708: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s
2026-03-10T07:36:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:22 vm03 bash[23382]: cluster 2026-03-10T07:36:21.217405+0000 mon.a (mon.0) 2769 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:36:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:22 vm03 bash[23382]: audit 2026-03-10T07:36:21.222174+0000 mon.b (mon.1) 487 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:36:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:22 vm03 bash[23382]: audit 2026-03-10T07:36:21.266481+0000 mon.b (mon.1) 488 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:36:22.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:22 vm03 bash[23382]: audit 2026-03-10T07:36:21.270698+0000 mon.a (mon.0) 2770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:36:22.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:22 vm00 bash[28005]: cluster 2026-03-10T07:36:20.690189+0000 mgr.y (mgr.24407) 435 : cluster [DBG] pgmap v708: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s
2026-03-10T07:36:22.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:22 vm00 bash[28005]: cluster 2026-03-10T07:36:21.217405+0000 mon.a (mon.0) 2769 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:36:22.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:22 vm00 bash[28005]: audit 2026-03-10T07:36:21.222174+0000 mon.b (mon.1) 487 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:36:22.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:22 vm00 bash[28005]: audit 2026-03-10T07:36:21.266481+0000 mon.b (mon.1) 488 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:36:22.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:22 vm00 bash[28005]: audit 2026-03-10T07:36:21.270698+0000 mon.a (mon.0) 2770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:36:22.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:22 vm00 bash[20701]: cluster 2026-03-10T07:36:20.690189+0000 mgr.y (mgr.24407) 435 : cluster [DBG] pgmap v708: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s
2026-03-10T07:36:22.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:22 vm00 bash[20701]: cluster 2026-03-10T07:36:21.217405+0000 mon.a (mon.0) 2769 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:36:22.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:22 vm00 bash[20701]: audit 2026-03-10T07:36:21.222174+0000 mon.b (mon.1) 487 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:36:22.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:22 vm00 bash[20701]: audit 2026-03-10T07:36:21.266481+0000 mon.b (mon.1) 488 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:36:22.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:22 vm00 bash[20701]: audit 2026-03-10T07:36:21.270698+0000 mon.a (mon.0) 2770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:36:23.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:36:23 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:36:23.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:23 vm03 bash[23382]: audit 2026-03-10T07:36:22.230006+0000 mon.a (mon.0) 2771 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T07:36:23.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:23 vm03 bash[23382]: cluster 2026-03-10T07:36:22.233519+0000 mon.a (mon.0) 2772 : cluster [DBG] osdmap e469: 8 total, 8 up, 8 in
2026-03-10T07:36:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:23 vm00 bash[28005]: audit 2026-03-10T07:36:22.230006+0000 mon.a (mon.0) 2771 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T07:36:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:23 vm00 bash[28005]: cluster 2026-03-10T07:36:22.233519+0000 mon.a (mon.0) 2772 : cluster [DBG] osdmap e469: 8 total, 8 up, 8 in
2026-03-10T07:36:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:23 vm00 bash[20701]: audit 2026-03-10T07:36:22.230006+0000 mon.a (mon.0) 2771 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-6","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T07:36:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:23 vm00 bash[20701]: cluster 2026-03-10T07:36:22.233519+0000 mon.a (mon.0) 2772 : cluster [DBG] osdmap e469: 8 total, 8 up, 8 in
2026-03-10T07:36:24.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:24 vm03 bash[23382]: cluster 2026-03-10T07:36:22.690562+0000 mgr.y (mgr.24407) 436 : cluster [DBG] pgmap v711: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s
2026-03-10T07:36:24.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:24 vm03 bash[23382]: cluster 2026-03-10T07:36:23.257575+0000 mon.a (mon.0) 2773 : cluster [DBG] osdmap e470: 8 total, 8 up, 8 in
2026-03-10T07:36:24.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:24 vm03 bash[23382]: audit 2026-03-10T07:36:23.329175+0000 mgr.y (mgr.24407) 437 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:36:24.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:24 vm00 bash[28005]: cluster 2026-03-10T07:36:22.690562+0000 mgr.y (mgr.24407) 436 : cluster [DBG] pgmap v711: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s
2026-03-10T07:36:24.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:24 vm00 bash[28005]: cluster 2026-03-10T07:36:23.257575+0000 mon.a (mon.0) 2773 : cluster [DBG] osdmap e470: 8 total, 8 up, 8 in
2026-03-10T07:36:24.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:24 vm00 bash[28005]: audit 2026-03-10T07:36:23.329175+0000 mgr.y (mgr.24407) 437 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:36:24.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:24 vm00 bash[20701]: cluster 2026-03-10T07:36:22.690562+0000 mgr.y (mgr.24407) 436 : cluster [DBG] pgmap v711: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s
2026-03-10T07:36:24.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:24 vm00 bash[20701]: cluster 2026-03-10T07:36:23.257575+0000 mon.a (mon.0) 2773 : cluster [DBG] osdmap e470: 8 total, 8 up, 8 in
2026-03-10T07:36:24.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:24 vm00 bash[20701]: audit 2026-03-10T07:36:23.329175+0000 mgr.y (mgr.24407) 437 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:36:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:25 vm00 bash[28005]: cluster 2026-03-10T07:36:24.272142+0000 mon.a (mon.0) 2774 : cluster [DBG] osdmap e471: 8 total, 8 up, 8 in
2026-03-10T07:36:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:25 vm00 bash[28005]: audit 2026-03-10T07:36:24.315966+0000 mon.b (mon.1) 489 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:36:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:25 vm00 bash[28005]: audit 2026-03-10T07:36:24.317019+0000 mon.b (mon.1) 490 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-90"}]: dispatch
2026-03-10T07:36:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:25 vm00 bash[28005]: audit 2026-03-10T07:36:24.317860+0000 mon.a (mon.0) 2775 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:36:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:25 vm00 bash[28005]: audit 2026-03-10T07:36:24.318717+0000 mon.a (mon.0) 2776 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-90"}]: dispatch
2026-03-10T07:36:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:25 vm00 bash[28005]: audit 2026-03-10T07:36:24.574875+0000 mon.c (mon.2) 316 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:36:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:25 vm00 bash[20701]: cluster 2026-03-10T07:36:24.272142+0000 mon.a (mon.0) 2774 : cluster [DBG] osdmap e471: 8 total, 8 up, 8 in
2026-03-10T07:36:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:25 vm00 bash[20701]: audit 2026-03-10T07:36:24.315966+0000 mon.b (mon.1) 489 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:36:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:25 vm00 bash[20701]: audit 2026-03-10T07:36:24.317019+0000 mon.b (mon.1) 490 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-90"}]: dispatch
2026-03-10T07:36:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:25 vm00 bash[20701]: audit 2026-03-10T07:36:24.317860+0000 mon.a (mon.0) 2775 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:36:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:25 vm00 bash[20701]: audit 2026-03-10T07:36:24.318717+0000 mon.a (mon.0) 2776 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-90"}]: dispatch
2026-03-10T07:36:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:25 vm00 bash[20701]: audit 2026-03-10T07:36:24.574875+0000 mon.c (mon.2) 316 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:36:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:25 vm03 bash[23382]: cluster 2026-03-10T07:36:24.272142+0000 mon.a (mon.0) 2774 : cluster [DBG] osdmap e471: 8 total, 8 up, 8 in
2026-03-10T07:36:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:25 vm03 bash[23382]: audit 2026-03-10T07:36:24.315966+0000 mon.b (mon.1) 489 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:36:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:25 vm03 bash[23382]: audit 2026-03-10T07:36:24.317019+0000 mon.b (mon.1) 490 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-90"}]: dispatch
2026-03-10T07:36:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:25 vm03 bash[23382]: audit 2026-03-10T07:36:24.317860+0000 mon.a (mon.0) 2775 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:36:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:25 vm03 bash[23382]: audit 2026-03-10T07:36:24.318717+0000 mon.a (mon.0) 2776 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-90"}]: dispatch
2026-03-10T07:36:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:25 vm03 bash[23382]: audit 2026-03-10T07:36:24.574875+0000 mon.c (mon.2) 316 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:36:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:26 vm00 bash[28005]: cluster 2026-03-10T07:36:24.691167+0000 mgr.y (mgr.24407) 438 : cluster [DBG] pgmap v714: 292 pgs: 15 unknown, 277 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 1023 B/s wr, 3 op/s
2026-03-10T07:36:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:26 vm00 bash[28005]: cluster 2026-03-10T07:36:25.275340+0000 mon.a (mon.0) 2777 : cluster [DBG] osdmap e472: 8 total, 8 up, 8 in
2026-03-10T07:36:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:26 vm00 bash[20701]: cluster 2026-03-10T07:36:24.691167+0000 mgr.y (mgr.24407) 438 : cluster [DBG] pgmap v714: 292 pgs: 15 unknown, 277 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 1023 B/s wr, 3 op/s
2026-03-10T07:36:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:26 vm00 bash[20701]: cluster 2026-03-10T07:36:25.275340+0000 mon.a (mon.0) 2777 : cluster [DBG] osdmap e472: 8 total, 8 up, 8 in
2026-03-10T07:36:26.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:26 vm03 bash[23382]: cluster 2026-03-10T07:36:24.691167+0000 mgr.y (mgr.24407) 438 : cluster [DBG] pgmap v714: 292 pgs: 15 unknown, 277 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 1023 B/s wr, 3 op/s
2026-03-10T07:36:26.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:26 vm03 bash[23382]: cluster 2026-03-10T07:36:25.275340+0000 mon.a (mon.0) 2777 : cluster [DBG] osdmap e472: 8 total, 8 up, 8 in
2026-03-10T07:36:27.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:27 vm03 bash[23382]: cluster 2026-03-10T07:36:26.283546+0000 mon.a (mon.0) 2778 : cluster [DBG] osdmap e473: 8 total, 8 up, 8 in
2026-03-10T07:36:27.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:27 vm03 bash[23382]: audit 2026-03-10T07:36:26.297548+0000 mon.b (mon.1) 491 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-92","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:36:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:27 vm03 bash[23382]: audit 2026-03-10T07:36:26.299971+0000 mon.a (mon.0) 2779 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-92","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:36:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:27 vm03 bash[23382]: cluster 2026-03-10T07:36:26.387052+0000 mon.a (mon.0) 2780 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:36:27.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:27 vm00 bash[28005]: cluster 2026-03-10T07:36:26.283546+0000 mon.a (mon.0) 2778 : cluster [DBG] osdmap e473: 8 total, 8 up, 8 in
2026-03-10T07:36:27.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:27 vm00 bash[28005]: audit 2026-03-10T07:36:26.297548+0000 mon.b (mon.1) 491 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-92","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:36:27.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:27 vm00 bash[28005]: audit 2026-03-10T07:36:26.299971+0000 mon.a (mon.0) 2779 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-92","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:36:27.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:27 vm00 bash[28005]: cluster 2026-03-10T07:36:26.387052+0000 mon.a (mon.0) 2780 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:36:27.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:27 vm00 bash[20701]: cluster 2026-03-10T07:36:26.283546+0000 mon.a (mon.0) 2778 : cluster [DBG] osdmap e473: 8 total, 8 up, 8 in
2026-03-10T07:36:27.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:27 vm00 bash[20701]: audit 2026-03-10T07:36:26.297548+0000 mon.b (mon.1) 491 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-92","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:36:27.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:27 vm00 bash[20701]: audit 2026-03-10T07:36:26.299971+0000 mon.a (mon.0) 2779 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-92","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:36:27.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:27 vm00 bash[20701]: audit 2026-03-10T07:36:26.299971+0000 mon.a (mon.0) 2779 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:27.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:27 vm00 bash[20701]: cluster 2026-03-10T07:36:26.387052+0000 mon.a (mon.0) 2780 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:36:27.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:27 vm00 bash[20701]: cluster 2026-03-10T07:36:26.387052+0000 mon.a (mon.0) 2780 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:36:28.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:28 vm03 bash[23382]: cluster 2026-03-10T07:36:26.691573+0000 mgr.y (mgr.24407) 439 : cluster [DBG] pgmap v717: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-10T07:36:28.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:28 vm03 bash[23382]: cluster 2026-03-10T07:36:26.691573+0000 mgr.y (mgr.24407) 439 : cluster [DBG] pgmap v717: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-10T07:36:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:28 vm03 bash[23382]: audit 2026-03-10T07:36:27.391568+0000 mon.a (mon.0) 2781 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:28 vm03 bash[23382]: audit 2026-03-10T07:36:27.391568+0000 mon.a (mon.0) 2781 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:28 vm03 bash[23382]: audit 2026-03-10T07:36:27.398567+0000 mon.b (mon.1) 492 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:28 vm03 bash[23382]: audit 2026-03-10T07:36:27.398567+0000 mon.b (mon.1) 492 : audit [DBG] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:28 vm03 bash[23382]: cluster 2026-03-10T07:36:27.402188+0000 mon.a (mon.0) 2782 : cluster [DBG] osdmap e474: 8 total, 8 up, 8 in 2026-03-10T07:36:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:28 vm03 bash[23382]: cluster 2026-03-10T07:36:27.402188+0000 mon.a (mon.0) 2782 : cluster [DBG] osdmap e474: 8 total, 8 up, 8 in 2026-03-10T07:36:28.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:28 vm00 bash[28005]: cluster 2026-03-10T07:36:26.691573+0000 mgr.y (mgr.24407) 439 : cluster [DBG] pgmap v717: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-10T07:36:28.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:28 vm00 bash[28005]: cluster 2026-03-10T07:36:26.691573+0000 mgr.y (mgr.24407) 439 : cluster [DBG] pgmap v717: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-10T07:36:28.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:28 vm00 bash[28005]: audit 2026-03-10T07:36:27.391568+0000 mon.a (mon.0) 2781 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:28.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:28 vm00 bash[28005]: audit 2026-03-10T07:36:27.391568+0000 mon.a (mon.0) 2781 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:28.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:28 vm00 bash[28005]: audit 2026-03-10T07:36:27.398567+0000 mon.b (mon.1) 492 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:28.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:28 vm00 bash[28005]: audit 2026-03-10T07:36:27.398567+0000 mon.b (mon.1) 492 : audit [DBG] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:28.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:28 vm00 bash[28005]: cluster 2026-03-10T07:36:27.402188+0000 mon.a (mon.0) 2782 : cluster [DBG] osdmap e474: 8 total, 8 up, 8 in 2026-03-10T07:36:28.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:28 vm00 bash[28005]: cluster 2026-03-10T07:36:27.402188+0000 mon.a (mon.0) 2782 : cluster [DBG] osdmap e474: 8 total, 8 up, 8 in 2026-03-10T07:36:28.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:28 vm00 bash[20701]: cluster 2026-03-10T07:36:26.691573+0000 mgr.y (mgr.24407) 439 : cluster [DBG] pgmap v717: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-10T07:36:28.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:28 vm00 bash[20701]: cluster 2026-03-10T07:36:26.691573+0000 mgr.y (mgr.24407) 439 : cluster [DBG] pgmap v717: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-10T07:36:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:28 vm00 bash[20701]: audit 2026-03-10T07:36:27.391568+0000 mon.a (mon.0) 2781 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:28 vm00 bash[20701]: audit 2026-03-10T07:36:27.391568+0000 mon.a (mon.0) 2781 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:28 vm00 bash[20701]: audit 2026-03-10T07:36:27.398567+0000 mon.b (mon.1) 492 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:28 vm00 bash[20701]: audit 2026-03-10T07:36:27.398567+0000 mon.b (mon.1) 492 : audit [DBG] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:28 vm00 bash[20701]: cluster 2026-03-10T07:36:27.402188+0000 mon.a (mon.0) 2782 : cluster [DBG] osdmap e474: 8 total, 8 up, 8 in 2026-03-10T07:36:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:28 vm00 bash[20701]: cluster 2026-03-10T07:36:27.402188+0000 mon.a (mon.0) 2782 : cluster [DBG] osdmap e474: 8 total, 8 up, 8 in 2026-03-10T07:36:29.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:29 vm03 bash[23382]: cluster 2026-03-10T07:36:28.441931+0000 mon.a (mon.0) 2783 : cluster [DBG] osdmap e475: 8 total, 8 up, 8 in 2026-03-10T07:36:29.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:29 vm03 bash[23382]: cluster 2026-03-10T07:36:28.441931+0000 mon.a (mon.0) 2783 : cluster [DBG] osdmap e475: 8 total, 8 up, 8 in 2026-03-10T07:36:29.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:29 vm03 bash[23382]: cluster 2026-03-10T07:36:28.692000+0000 mgr.y (mgr.24407) 440 : cluster [DBG] pgmap v720: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T07:36:29.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:29 vm03 bash[23382]: cluster 2026-03-10T07:36:28.692000+0000 mgr.y (mgr.24407) 440 : cluster [DBG] pgmap v720: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T07:36:29.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:29 vm00 bash[28005]: cluster 2026-03-10T07:36:28.441931+0000 mon.a (mon.0) 2783 : cluster [DBG] osdmap e475: 8 total, 8 up, 8 in 2026-03-10T07:36:29.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:29 vm00 bash[28005]: cluster 2026-03-10T07:36:28.441931+0000 mon.a (mon.0) 2783 : cluster [DBG] osdmap e475: 8 total, 8 up, 8 in 2026-03-10T07:36:29.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:29 vm00 bash[28005]: cluster 2026-03-10T07:36:28.692000+0000 mgr.y (mgr.24407) 440 : cluster [DBG] pgmap v720: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T07:36:29.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:29 vm00 bash[28005]: cluster 2026-03-10T07:36:28.692000+0000 mgr.y (mgr.24407) 440 : cluster [DBG] pgmap v720: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T07:36:29.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:29 vm00 bash[20701]: cluster 2026-03-10T07:36:28.441931+0000 mon.a (mon.0) 2783 : cluster [DBG] osdmap e475: 8 total, 8 up, 8 in 2026-03-10T07:36:29.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:29 vm00 bash[20701]: cluster 2026-03-10T07:36:28.441931+0000 mon.a (mon.0) 2783 : cluster [DBG] osdmap e475: 8 total, 8 up, 8 in 2026-03-10T07:36:29.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:29 vm00 bash[20701]: cluster 2026-03-10T07:36:28.692000+0000 mgr.y (mgr.24407) 440 : cluster [DBG] pgmap v720: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T07:36:29.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:29 vm00 bash[20701]: cluster 2026-03-10T07:36:28.692000+0000 mgr.y (mgr.24407) 440 : cluster [DBG] pgmap v720: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 
2026-03-10T07:36:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:30 vm03 bash[23382]: cluster 2026-03-10T07:36:29.449492+0000 mon.a (mon.0) 2784 : cluster [DBG] osdmap e476: 8 total, 8 up, 8 in 2026-03-10T07:36:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:30 vm03 bash[23382]: cluster 2026-03-10T07:36:29.449492+0000 mon.a (mon.0) 2784 : cluster [DBG] osdmap e476: 8 total, 8 up, 8 in 2026-03-10T07:36:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:30 vm03 bash[23382]: audit 2026-03-10T07:36:29.494225+0000 mon.b (mon.1) 493 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:30 vm03 bash[23382]: audit 2026-03-10T07:36:29.494225+0000 mon.b (mon.1) 493 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:30 vm03 bash[23382]: audit 2026-03-10T07:36:29.495011+0000 mon.b (mon.1) 494 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-92"}]: dispatch 2026-03-10T07:36:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:30 vm03 bash[23382]: audit 2026-03-10T07:36:29.495011+0000 mon.b (mon.1) 494 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-92"}]: dispatch 2026-03-10T07:36:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:30 vm03 bash[23382]: audit 2026-03-10T07:36:29.496077+0000 mon.a (mon.0) 2785 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:30 vm03 bash[23382]: audit 2026-03-10T07:36:29.496077+0000 mon.a (mon.0) 2785 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:30 vm03 bash[23382]: audit 2026-03-10T07:36:29.496682+0000 mon.a (mon.0) 2786 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-92"}]: dispatch 2026-03-10T07:36:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:30 vm03 bash[23382]: audit 2026-03-10T07:36:29.496682+0000 mon.a (mon.0) 2786 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-92"}]: dispatch 2026-03-10T07:36:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:30 vm00 bash[28005]: cluster 2026-03-10T07:36:29.449492+0000 mon.a (mon.0) 2784 : cluster [DBG] osdmap e476: 8 total, 8 up, 8 in 2026-03-10T07:36:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:30 vm00 bash[28005]: cluster 2026-03-10T07:36:29.449492+0000 mon.a (mon.0) 2784 : cluster [DBG] osdmap e476: 8 total, 8 up, 8 in 2026-03-10T07:36:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:30 vm00 bash[28005]: audit 2026-03-10T07:36:29.494225+0000 mon.b (mon.1) 493 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:30 vm00 bash[28005]: audit 2026-03-10T07:36:29.494225+0000 mon.b (mon.1) 493 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:30 vm00 bash[28005]: audit 2026-03-10T07:36:29.495011+0000 mon.b (mon.1) 494 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-92"}]: dispatch 2026-03-10T07:36:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:30 vm00 bash[28005]: audit 2026-03-10T07:36:29.495011+0000 mon.b (mon.1) 494 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-92"}]: dispatch 2026-03-10T07:36:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:30 vm00 bash[28005]: audit 2026-03-10T07:36:29.496077+0000 mon.a (mon.0) 2785 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:30 vm00 bash[28005]: audit 2026-03-10T07:36:29.496077+0000 mon.a (mon.0) 2785 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:30 vm00 bash[28005]: audit 2026-03-10T07:36:29.496682+0000 mon.a (mon.0) 2786 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-92"}]: dispatch 2026-03-10T07:36:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:30 vm00 bash[28005]: audit 2026-03-10T07:36:29.496682+0000 mon.a (mon.0) 2786 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-92"}]: dispatch 2026-03-10T07:36:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:30 vm00 bash[20701]: cluster 2026-03-10T07:36:29.449492+0000 mon.a (mon.0) 2784 : cluster [DBG] osdmap e476: 8 total, 8 up, 8 in 2026-03-10T07:36:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:30 vm00 bash[20701]: cluster 2026-03-10T07:36:29.449492+0000 mon.a (mon.0) 2784 : cluster [DBG] osdmap e476: 8 total, 8 up, 8 in 2026-03-10T07:36:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:30 vm00 bash[20701]: audit 2026-03-10T07:36:29.494225+0000 mon.b (mon.1) 493 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:30 vm00 bash[20701]: audit 2026-03-10T07:36:29.494225+0000 mon.b (mon.1) 493 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:30 vm00 bash[20701]: audit 2026-03-10T07:36:29.495011+0000 mon.b (mon.1) 494 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-92"}]: dispatch 2026-03-10T07:36:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:30 vm00 bash[20701]: audit 2026-03-10T07:36:29.495011+0000 mon.b (mon.1) 494 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-92"}]: dispatch 2026-03-10T07:36:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:30 vm00 bash[20701]: audit 2026-03-10T07:36:29.496077+0000 mon.a (mon.0) 2785 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:30 vm00 bash[20701]: audit 2026-03-10T07:36:29.496077+0000 mon.a (mon.0) 2785 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:30 vm00 bash[20701]: audit 2026-03-10T07:36:29.496682+0000 mon.a (mon.0) 2786 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-92"}]: dispatch 2026-03-10T07:36:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:30 vm00 bash[20701]: audit 2026-03-10T07:36:29.496682+0000 mon.a (mon.0) 2786 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-92"}]: dispatch 2026-03-10T07:36:31.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:36:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:36:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:36:31.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:31 vm03 bash[23382]: cluster 2026-03-10T07:36:30.458006+0000 mon.a (mon.0) 2787 : cluster [DBG] osdmap e477: 8 total, 8 up, 8 in 2026-03-10T07:36:31.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:31 vm03 bash[23382]: cluster 2026-03-10T07:36:30.458006+0000 mon.a (mon.0) 2787 : cluster [DBG] osdmap e477: 8 total, 8 up, 8 in 2026-03-10T07:36:31.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:31 vm03 bash[23382]: cluster 2026-03-10T07:36:30.693765+0000 mgr.y (mgr.24407) 441 : cluster [DBG] pgmap v723: 260 pgs: 260 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:36:31.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:31 vm03 bash[23382]: cluster 2026-03-10T07:36:30.693765+0000 mgr.y (mgr.24407) 441 : cluster [DBG] pgmap v723: 260 pgs: 260 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:36:31.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:31 vm00 bash[28005]: cluster 2026-03-10T07:36:30.458006+0000 mon.a (mon.0) 2787 : cluster [DBG] osdmap e477: 8 total, 8 up, 8 in 2026-03-10T07:36:31.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:31 vm00 bash[28005]: cluster 2026-03-10T07:36:30.458006+0000 mon.a (mon.0) 2787 : cluster [DBG] osdmap e477: 8 total, 8 up, 8 in 2026-03-10T07:36:31.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:31 vm00 bash[28005]: cluster 2026-03-10T07:36:30.693765+0000 mgr.y (mgr.24407) 441 : cluster [DBG] pgmap v723: 260 pgs: 260 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:36:31.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:31 vm00 bash[28005]: cluster 2026-03-10T07:36:30.693765+0000 mgr.y (mgr.24407) 441 : cluster [DBG] pgmap v723: 260 pgs: 260 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:36:31.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:31 vm00 bash[20701]: cluster 2026-03-10T07:36:30.458006+0000 mon.a (mon.0) 2787 : cluster [DBG] osdmap e477: 8 total, 8 up, 8 in 2026-03-10T07:36:31.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:31 vm00 bash[20701]: cluster 2026-03-10T07:36:30.458006+0000 mon.a (mon.0) 2787 : cluster [DBG] osdmap e477: 8 total, 8 up, 8 in 2026-03-10T07:36:31.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:31 vm00 bash[20701]: cluster 2026-03-10T07:36:30.693765+0000 mgr.y (mgr.24407) 441 : cluster [DBG] pgmap v723: 260 pgs: 260 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:36:31.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:31 vm00 bash[20701]: cluster 2026-03-10T07:36:30.693765+0000 mgr.y (mgr.24407) 441 : cluster [DBG] pgmap v723: 260 pgs: 260 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:36:32.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:32 vm03 bash[23382]: cluster 2026-03-10T07:36:31.473814+0000 mon.a (mon.0) 2788 : cluster 
[DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-10T07:36:32.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:32 vm03 bash[23382]: cluster 2026-03-10T07:36:31.473814+0000 mon.a (mon.0) 2788 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-10T07:36:32.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:32 vm03 bash[23382]: audit 2026-03-10T07:36:31.490521+0000 mon.b (mon.1) 495 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:32.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:32 vm03 bash[23382]: audit 2026-03-10T07:36:31.490521+0000 mon.b (mon.1) 495 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:32.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:32 vm03 bash[23382]: audit 2026-03-10T07:36:31.492260+0000 mon.a (mon.0) 2789 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:32.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:32 vm03 bash[23382]: audit 2026-03-10T07:36:31.492260+0000 mon.a (mon.0) 2789 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:32.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:32 vm00 bash[28005]: cluster 2026-03-10T07:36:31.473814+0000 mon.a (mon.0) 2788 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-10T07:36:32.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:32 vm00 bash[28005]: cluster 2026-03-10T07:36:31.473814+0000 mon.a (mon.0) 2788 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-10T07:36:32.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:32 vm00 bash[28005]: audit 2026-03-10T07:36:31.490521+0000 mon.b (mon.1) 495 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:32.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:32 vm00 bash[28005]: audit 2026-03-10T07:36:31.490521+0000 mon.b (mon.1) 495 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:32.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:32 vm00 bash[28005]: audit 2026-03-10T07:36:31.492260+0000 mon.a (mon.0) 2789 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:32.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:32 vm00 bash[28005]: audit 2026-03-10T07:36:31.492260+0000 mon.a (mon.0) 2789 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:32.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:32 vm00 bash[20701]: cluster 2026-03-10T07:36:31.473814+0000 mon.a (mon.0) 2788 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-10T07:36:32.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:32 vm00 bash[20701]: cluster 2026-03-10T07:36:31.473814+0000 mon.a (mon.0) 2788 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-10T07:36:32.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:32 vm00 bash[20701]: audit 2026-03-10T07:36:31.490521+0000 mon.b (mon.1) 495 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:32.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:32 vm00 bash[20701]: audit 2026-03-10T07:36:31.490521+0000 mon.b (mon.1) 495 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:32.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:32 vm00 bash[20701]: audit 2026-03-10T07:36:31.492260+0000 mon.a (mon.0) 2789 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:32.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:32 vm00 bash[20701]: audit 2026-03-10T07:36:31.492260+0000 mon.a (mon.0) 2789 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:33.514 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:36:33 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:36:33.514 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:36:33 vm03 bash[51371]: logger=cleanup t=2026-03-10T07:36:33.109031317Z level=info msg="Completed cleanup jobs" duration=2.365351ms 2026-03-10T07:36:33.514 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:36:33 vm03 bash[51371]: logger=plugins.update.checker t=2026-03-10T07:36:33.258465699Z level=info msg="Update check succeeded" duration=56.785205ms 2026-03-10T07:36:34.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:34 vm00 bash[20701]: audit 2026-03-10T07:36:32.476838+0000 mon.a (mon.0) 2790 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:34.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:34 vm00 bash[20701]: audit 2026-03-10T07:36:32.476838+0000 mon.a (mon.0) 2790 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:34.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:34 vm00 bash[20701]: cluster 2026-03-10T07:36:32.481546+0000 mon.a (mon.0) 2791 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-10T07:36:34.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:34 vm00 bash[20701]: cluster 2026-03-10T07:36:32.481546+0000 mon.a (mon.0) 2791 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-10T07:36:34.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:34 vm00 bash[20701]: audit 2026-03-10T07:36:32.492681+0000 mon.b (mon.1) 496 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:34.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:34 vm00 bash[20701]: audit 2026-03-10T07:36:32.492681+0000 mon.b (mon.1) 496 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:34.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:34 vm00 bash[20701]: cluster 2026-03-10T07:36:32.694166+0000 mgr.y (mgr.24407) 442 : cluster [DBG] pgmap v726: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:36:34.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:34 vm00 bash[20701]: cluster 2026-03-10T07:36:32.694166+0000 mgr.y (mgr.24407) 442 : cluster [DBG] pgmap v726: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:36:34.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:34 vm00 bash[20701]: audit 2026-03-10T07:36:33.339883+0000 mgr.y (mgr.24407) 443 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:34.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:34 vm00 bash[20701]: audit 2026-03-10T07:36:33.339883+0000 mgr.y (mgr.24407) 443 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:34.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:34 vm00 bash[28005]: audit 2026-03-10T07:36:32.476838+0000 mon.a (mon.0) 2790 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:34.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:34 vm00 bash[28005]: audit 2026-03-10T07:36:32.476838+0000 mon.a (mon.0) 2790 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:34.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:34 vm00 bash[28005]: cluster 2026-03-10T07:36:32.481546+0000 mon.a (mon.0) 2791 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-10T07:36:34.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:34 vm00 bash[28005]: cluster 2026-03-10T07:36:32.481546+0000 mon.a (mon.0) 2791 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-10T07:36:34.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:34 vm00 bash[28005]: audit 2026-03-10T07:36:32.492681+0000 mon.b (mon.1) 496 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:34.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:34 vm00 bash[28005]: audit 2026-03-10T07:36:32.492681+0000 mon.b (mon.1) 496 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:34.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:34 vm00 bash[28005]: cluster 2026-03-10T07:36:32.694166+0000 mgr.y (mgr.24407) 442 : cluster [DBG] pgmap v726: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:36:34.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:34 vm00 bash[28005]: cluster 2026-03-10T07:36:32.694166+0000 mgr.y (mgr.24407) 442 : cluster [DBG] pgmap v726: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:36:34.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:34 vm00 bash[28005]: audit 2026-03-10T07:36:33.339883+0000 mgr.y (mgr.24407) 443 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:34.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:34 vm00 bash[28005]: audit 2026-03-10T07:36:33.339883+0000 mgr.y (mgr.24407) 443 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:34.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:34 vm03 bash[23382]: audit 2026-03-10T07:36:32.476838+0000 mon.a (mon.0) 2790 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:34.769 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:34 vm03 bash[23382]: audit 2026-03-10T07:36:32.476838+0000 mon.a (mon.0) 2790 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:34.769 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:34 vm03 bash[23382]: cluster 2026-03-10T07:36:32.481546+0000 mon.a (mon.0) 2791 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-10T07:36:34.769 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:34 vm03 bash[23382]: cluster 2026-03-10T07:36:32.481546+0000 mon.a (mon.0) 2791 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-10T07:36:34.769 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:34 vm03 bash[23382]: audit 2026-03-10T07:36:32.492681+0000 mon.b (mon.1) 496 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:34.769 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:34 vm03 bash[23382]: audit 2026-03-10T07:36:32.492681+0000 mon.b (mon.1) 496 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:34.769 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:34 vm03 bash[23382]: cluster 2026-03-10T07:36:32.694166+0000 mgr.y (mgr.24407) 442 : cluster [DBG] pgmap v726: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:36:34.769 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:34 vm03 bash[23382]: cluster 2026-03-10T07:36:32.694166+0000 mgr.y (mgr.24407) 442 : cluster [DBG] pgmap v726: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T07:36:34.769 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:34 vm03 bash[23382]: audit 2026-03-10T07:36:33.339883+0000 mgr.y (mgr.24407) 443 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:34.769 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:34 vm03 bash[23382]: audit 2026-03-10T07:36:33.339883+0000 mgr.y (mgr.24407) 443 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:35.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:35 vm00 bash[20701]: cluster 2026-03-10T07:36:34.488775+0000 mon.a (mon.0) 2792 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-10T07:36:35.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:35 vm00 bash[20701]: cluster 2026-03-10T07:36:34.488775+0000 mon.a (mon.0) 2792 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-10T07:36:35.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:35 vm00 bash[28005]: cluster 2026-03-10T07:36:34.488775+0000 mon.a (mon.0) 2792 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-10T07:36:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:35 vm00 bash[28005]: cluster 2026-03-10T07:36:34.488775+0000 mon.a (mon.0) 2792 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-10T07:36:35.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:35 vm03 bash[23382]: cluster 2026-03-10T07:36:34.488775+0000 mon.a (mon.0) 2792 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-10T07:36:35.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:35 vm03 bash[23382]: cluster 2026-03-10T07:36:34.488775+0000 mon.a (mon.0) 2792 : cluster [DBG] osdmap 
e480: 8 total, 8 up, 8 in 2026-03-10T07:36:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:36 vm00 bash[20701]: cluster 2026-03-10T07:36:34.694833+0000 mgr.y (mgr.24407) 444 : cluster [DBG] pgmap v728: 292 pgs: 18 unknown, 274 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s wr, 1 op/s 2026-03-10T07:36:36.646 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:36 vm00 bash[20701]: cluster 2026-03-10T07:36:34.694833+0000 mgr.y (mgr.24407) 444 : cluster [DBG] pgmap v728: 292 pgs: 18 unknown, 274 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s wr, 1 op/s 2026-03-10T07:36:36.646 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:36 vm00 bash[20701]: cluster 2026-03-10T07:36:35.132845+0000 mon.a (mon.0) 2793 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-10T07:36:36.646 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:36 vm00 bash[20701]: cluster 2026-03-10T07:36:35.132845+0000 mon.a (mon.0) 2793 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-10T07:36:36.646 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:36 vm00 bash[28005]: cluster 2026-03-10T07:36:34.694833+0000 mgr.y (mgr.24407) 444 : cluster [DBG] pgmap v728: 292 pgs: 18 unknown, 274 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s wr, 1 op/s 2026-03-10T07:36:36.646 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:36 vm00 bash[28005]: cluster 2026-03-10T07:36:34.694833+0000 mgr.y (mgr.24407) 444 : cluster [DBG] pgmap v728: 292 pgs: 18 unknown, 274 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s wr, 1 op/s 2026-03-10T07:36:36.646 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:36 vm00 bash[28005]: cluster 2026-03-10T07:36:35.132845+0000 mon.a (mon.0) 2793 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-10T07:36:36.646 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:36 vm00 bash[28005]: cluster 2026-03-10T07:36:35.132845+0000 mon.a (mon.0) 2793 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-10T07:36:36.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:36 vm03 bash[23382]: cluster 2026-03-10T07:36:34.694833+0000 mgr.y (mgr.24407) 444 : cluster [DBG] pgmap v728: 292 pgs: 18 unknown, 274 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s wr, 1 op/s 2026-03-10T07:36:36.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:36 vm03 bash[23382]: cluster 2026-03-10T07:36:34.694833+0000 mgr.y (mgr.24407) 444 : cluster [DBG] pgmap v728: 292 pgs: 18 unknown, 274 active+clean; 8.3 MiB data, 943 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s wr, 1 op/s 2026-03-10T07:36:36.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:36 vm03 bash[23382]: cluster 2026-03-10T07:36:35.132845+0000 mon.a (mon.0) 2793 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-10T07:36:36.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:36 vm03 bash[23382]: cluster 2026-03-10T07:36:35.132845+0000 mon.a (mon.0) 2793 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-10T07:36:37.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:37 vm00 bash[20701]: cluster 2026-03-10T07:36:36.695209+0000 mgr.y (mgr.24407) 445 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 979 B/s rd, 1.1 KiB/s wr, 2 op/s 2026-03-10T07:36:37.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:37 vm00 bash[20701]: cluster 2026-03-10T07:36:36.695209+0000 mgr.y (mgr.24407) 445 : cluster [DBG] pgmap v730: 292 pgs: 
292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 979 B/s rd, 1.1 KiB/s wr, 2 op/s 2026-03-10T07:36:37.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:37 vm00 bash[28005]: cluster 2026-03-10T07:36:36.695209+0000 mgr.y (mgr.24407) 445 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 979 B/s rd, 1.1 KiB/s wr, 2 op/s 2026-03-10T07:36:37.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:37 vm00 bash[28005]: cluster 2026-03-10T07:36:36.695209+0000 mgr.y (mgr.24407) 445 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 979 B/s rd, 1.1 KiB/s wr, 2 op/s 2026-03-10T07:36:38.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:37 vm03 bash[23382]: cluster 2026-03-10T07:36:36.695209+0000 mgr.y (mgr.24407) 445 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 979 B/s rd, 1.1 KiB/s wr, 2 op/s 2026-03-10T07:36:38.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:37 vm03 bash[23382]: cluster 2026-03-10T07:36:36.695209+0000 mgr.y (mgr.24407) 445 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 979 B/s rd, 1.1 KiB/s wr, 2 op/s 2026-03-10T07:36:40.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:39 vm03 bash[23382]: cluster 2026-03-10T07:36:38.695569+0000 mgr.y (mgr.24407) 446 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 823 B/s rd, 988 B/s wr, 1 op/s 2026-03-10T07:36:40.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:39 vm03 bash[23382]: cluster 2026-03-10T07:36:38.695569+0000 mgr.y (mgr.24407) 446 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 823 B/s rd, 988 B/s wr, 1 op/s 2026-03-10T07:36:40.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:39 vm03 bash[23382]: audit 2026-03-10T07:36:39.581811+0000 mon.c (mon.2) 317 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:36:40.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:39 vm03 bash[23382]: audit 2026-03-10T07:36:39.581811+0000 mon.c (mon.2) 317 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:36:40.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:39 vm00 bash[20701]: cluster 2026-03-10T07:36:38.695569+0000 mgr.y (mgr.24407) 446 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 823 B/s rd, 988 B/s wr, 1 op/s 2026-03-10T07:36:40.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:39 vm00 bash[20701]: cluster 2026-03-10T07:36:38.695569+0000 mgr.y (mgr.24407) 446 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 823 B/s rd, 988 B/s wr, 1 op/s 2026-03-10T07:36:40.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:39 vm00 bash[20701]: audit 2026-03-10T07:36:39.581811+0000 mon.c (mon.2) 317 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:36:40.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:39 vm00 bash[20701]: audit 2026-03-10T07:36:39.581811+0000 mon.c (mon.2) 317 : audit [DBG] from='mgr.24407 
192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:36:40.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:39 vm00 bash[28005]: cluster 2026-03-10T07:36:38.695569+0000 mgr.y (mgr.24407) 446 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 823 B/s rd, 988 B/s wr, 1 op/s 2026-03-10T07:36:40.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:39 vm00 bash[28005]: cluster 2026-03-10T07:36:38.695569+0000 mgr.y (mgr.24407) 446 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 944 MiB used, 159 GiB / 160 GiB avail; 823 B/s rd, 988 B/s wr, 1 op/s 2026-03-10T07:36:40.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:39 vm00 bash[28005]: audit 2026-03-10T07:36:39.581811+0000 mon.c (mon.2) 317 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:36:40.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:39 vm00 bash[28005]: audit 2026-03-10T07:36:39.581811+0000 mon.c (mon.2) 317 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:36:41.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:36:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:36:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:36:42.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:41 vm03 bash[23382]: cluster 2026-03-10T07:36:40.696430+0000 mgr.y (mgr.24407) 447 : cluster [DBG] pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-10T07:36:42.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:41 vm03 bash[23382]: cluster 2026-03-10T07:36:40.696430+0000 mgr.y (mgr.24407) 447 : cluster [DBG] pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-10T07:36:42.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:41 vm00 bash[28005]: cluster 2026-03-10T07:36:40.696430+0000 mgr.y (mgr.24407) 447 : cluster [DBG] pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-10T07:36:42.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:41 vm00 bash[28005]: cluster 2026-03-10T07:36:40.696430+0000 mgr.y (mgr.24407) 447 : cluster [DBG] pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-10T07:36:42.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:41 vm00 bash[20701]: cluster 2026-03-10T07:36:40.696430+0000 mgr.y (mgr.24407) 447 : cluster [DBG] pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-10T07:36:42.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:41 vm00 bash[20701]: cluster 2026-03-10T07:36:40.696430+0000 mgr.y (mgr.24407) 447 : cluster [DBG] pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-10T07:36:43.760 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:36:43 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:36:44.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:43 vm03 bash[23382]: cluster 
2026-03-10T07:36:42.696808+0000 mgr.y (mgr.24407) 448 : cluster [DBG] pgmap v733: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 980 B/s wr, 4 op/s 2026-03-10T07:36:44.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:43 vm03 bash[23382]: cluster 2026-03-10T07:36:42.696808+0000 mgr.y (mgr.24407) 448 : cluster [DBG] pgmap v733: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 980 B/s wr, 4 op/s 2026-03-10T07:36:44.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:43 vm03 bash[23382]: audit 2026-03-10T07:36:43.348636+0000 mgr.y (mgr.24407) 449 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:44.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:43 vm03 bash[23382]: audit 2026-03-10T07:36:43.348636+0000 mgr.y (mgr.24407) 449 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:44.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:43 vm00 bash[28005]: cluster 2026-03-10T07:36:42.696808+0000 mgr.y (mgr.24407) 448 : cluster [DBG] pgmap v733: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 980 B/s wr, 4 op/s 2026-03-10T07:36:44.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:43 vm00 bash[28005]: cluster 2026-03-10T07:36:42.696808+0000 mgr.y (mgr.24407) 448 : cluster [DBG] pgmap v733: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 980 B/s wr, 4 op/s 2026-03-10T07:36:44.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:43 vm00 bash[28005]: audit 2026-03-10T07:36:43.348636+0000 mgr.y (mgr.24407) 449 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:44.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:43 vm00 bash[28005]: audit 2026-03-10T07:36:43.348636+0000 mgr.y (mgr.24407) 449 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:44.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:43 vm00 bash[20701]: cluster 2026-03-10T07:36:42.696808+0000 mgr.y (mgr.24407) 448 : cluster [DBG] pgmap v733: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 980 B/s wr, 4 op/s 2026-03-10T07:36:44.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:43 vm00 bash[20701]: cluster 2026-03-10T07:36:42.696808+0000 mgr.y (mgr.24407) 448 : cluster [DBG] pgmap v733: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 980 B/s wr, 4 op/s 2026-03-10T07:36:44.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:43 vm00 bash[20701]: audit 2026-03-10T07:36:43.348636+0000 mgr.y (mgr.24407) 449 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:44.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:43 vm00 bash[20701]: audit 2026-03-10T07:36:43.348636+0000 mgr.y (mgr.24407) 449 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:46.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:45 vm00 bash[28005]: cluster 2026-03-10T07:36:44.697547+0000 mgr.y 
(mgr.24407) 450 : cluster [DBG] pgmap v734: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 307 B/s wr, 3 op/s 2026-03-10T07:36:46.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:45 vm00 bash[28005]: cluster 2026-03-10T07:36:44.697547+0000 mgr.y (mgr.24407) 450 : cluster [DBG] pgmap v734: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 307 B/s wr, 3 op/s 2026-03-10T07:36:46.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:45 vm00 bash[20701]: cluster 2026-03-10T07:36:44.697547+0000 mgr.y (mgr.24407) 450 : cluster [DBG] pgmap v734: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 307 B/s wr, 3 op/s 2026-03-10T07:36:46.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:45 vm00 bash[20701]: cluster 2026-03-10T07:36:44.697547+0000 mgr.y (mgr.24407) 450 : cluster [DBG] pgmap v734: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 307 B/s wr, 3 op/s 2026-03-10T07:36:46.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:45 vm03 bash[23382]: cluster 2026-03-10T07:36:44.697547+0000 mgr.y (mgr.24407) 450 : cluster [DBG] pgmap v734: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 307 B/s wr, 3 op/s 2026-03-10T07:36:46.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:45 vm03 bash[23382]: cluster 2026-03-10T07:36:44.697547+0000 mgr.y (mgr.24407) 450 : cluster [DBG] pgmap v734: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 307 B/s wr, 3 op/s 2026-03-10T07:36:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:47 vm00 bash[28005]: cluster 2026-03-10T07:36:45.861200+0000 mon.a (mon.0) 2794 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-10T07:36:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:47 vm00 bash[28005]: cluster 2026-03-10T07:36:45.861200+0000 mon.a (mon.0) 2794 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-10T07:36:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:47 vm00 bash[20701]: cluster 2026-03-10T07:36:45.861200+0000 mon.a (mon.0) 2794 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-10T07:36:47.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:47 vm00 bash[20701]: cluster 2026-03-10T07:36:45.861200+0000 mon.a (mon.0) 2794 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-10T07:36:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:47 vm03 bash[23382]: cluster 2026-03-10T07:36:45.861200+0000 mon.a (mon.0) 2794 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-10T07:36:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:47 vm03 bash[23382]: cluster 2026-03-10T07:36:45.861200+0000 mon.a (mon.0) 2794 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-10T07:36:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:48 vm00 bash[28005]: cluster 2026-03-10T07:36:46.697863+0000 mgr.y (mgr.24407) 451 : cluster [DBG] pgmap v736: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-10T07:36:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:48 vm00 bash[28005]: cluster 2026-03-10T07:36:46.697863+0000 mgr.y (mgr.24407) 451 : cluster [DBG] pgmap v736: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-10T07:36:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
10 07:36:48 vm00 bash[20701]: cluster 2026-03-10T07:36:46.697863+0000 mgr.y (mgr.24407) 451 : cluster [DBG] pgmap v736: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-10T07:36:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:48 vm00 bash[20701]: cluster 2026-03-10T07:36:46.697863+0000 mgr.y (mgr.24407) 451 : cluster [DBG] pgmap v736: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-10T07:36:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:48 vm03 bash[23382]: cluster 2026-03-10T07:36:46.697863+0000 mgr.y (mgr.24407) 451 : cluster [DBG] pgmap v736: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-10T07:36:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:48 vm03 bash[23382]: cluster 2026-03-10T07:36:46.697863+0000 mgr.y (mgr.24407) 451 : cluster [DBG] pgmap v736: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-10T07:36:50.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:50 vm00 bash[28005]: cluster 2026-03-10T07:36:48.698269+0000 mgr.y (mgr.24407) 452 : cluster [DBG] pgmap v737: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-10T07:36:50.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:50 vm00 bash[28005]: cluster 2026-03-10T07:36:48.698269+0000 mgr.y (mgr.24407) 452 : cluster [DBG] pgmap v737: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-10T07:36:50.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:50 vm00 bash[20701]: cluster 2026-03-10T07:36:48.698269+0000 mgr.y (mgr.24407) 452 : cluster [DBG] pgmap v737: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-10T07:36:50.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:50 vm00 bash[20701]: cluster 2026-03-10T07:36:48.698269+0000 mgr.y (mgr.24407) 452 : cluster [DBG] pgmap v737: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-10T07:36:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:50 vm03 bash[23382]: cluster 2026-03-10T07:36:48.698269+0000 mgr.y (mgr.24407) 452 : cluster [DBG] pgmap v737: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-10T07:36:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:50 vm03 bash[23382]: cluster 2026-03-10T07:36:48.698269+0000 mgr.y (mgr.24407) 452 : cluster [DBG] pgmap v737: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 2 op/s 2026-03-10T07:36:51.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:36:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:36:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:36:52.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:52 vm00 bash[28005]: cluster 2026-03-10T07:36:50.699114+0000 mgr.y (mgr.24407) 453 : cluster [DBG] pgmap v738: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:36:52.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:52 vm00 bash[28005]: cluster 
2026-03-10T07:36:50.699114+0000 mgr.y (mgr.24407) 453 : cluster [DBG] pgmap v738: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:36:52.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:52 vm00 bash[28005]: cluster 2026-03-10T07:36:51.482053+0000 mon.a (mon.0) 2795 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-10T07:36:52.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:52 vm00 bash[28005]: cluster 2026-03-10T07:36:51.482053+0000 mon.a (mon.0) 2795 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-10T07:36:52.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:52 vm00 bash[20701]: cluster 2026-03-10T07:36:50.699114+0000 mgr.y (mgr.24407) 453 : cluster [DBG] pgmap v738: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:36:52.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:52 vm00 bash[20701]: cluster 2026-03-10T07:36:50.699114+0000 mgr.y (mgr.24407) 453 : cluster [DBG] pgmap v738: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:36:52.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:52 vm00 bash[20701]: cluster 2026-03-10T07:36:51.482053+0000 mon.a (mon.0) 2795 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-10T07:36:52.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:52 vm00 bash[20701]: cluster 2026-03-10T07:36:51.482053+0000 mon.a (mon.0) 2795 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-10T07:36:52.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:52 vm03 bash[23382]: cluster 2026-03-10T07:36:50.699114+0000 mgr.y (mgr.24407) 453 : cluster [DBG] pgmap v738: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:36:52.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:52 vm03 bash[23382]: cluster 2026-03-10T07:36:50.699114+0000 mgr.y (mgr.24407) 453 : cluster [DBG] pgmap v738: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:36:52.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:52 vm03 bash[23382]: cluster 2026-03-10T07:36:51.482053+0000 mon.a (mon.0) 2795 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-10T07:36:52.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:52 vm03 bash[23382]: cluster 2026-03-10T07:36:51.482053+0000 mon.a (mon.0) 2795 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-10T07:36:53.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:36:53 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:36:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:54 vm00 bash[28005]: cluster 2026-03-10T07:36:52.699487+0000 mgr.y (mgr.24407) 454 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:36:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:54 vm00 bash[28005]: cluster 2026-03-10T07:36:52.699487+0000 mgr.y (mgr.24407) 454 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:36:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:54 vm00 bash[28005]: audit 2026-03-10T07:36:53.352863+0000 mgr.y (mgr.24407) 455 : audit [DBG] from='client.24373 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:54 vm00 bash[28005]: audit 2026-03-10T07:36:53.352863+0000 mgr.y (mgr.24407) 455 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:54.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:54 vm00 bash[20701]: cluster 2026-03-10T07:36:52.699487+0000 mgr.y (mgr.24407) 454 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:36:54.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:54 vm00 bash[20701]: cluster 2026-03-10T07:36:52.699487+0000 mgr.y (mgr.24407) 454 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:36:54.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:54 vm00 bash[20701]: audit 2026-03-10T07:36:53.352863+0000 mgr.y (mgr.24407) 455 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:54.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:54 vm00 bash[20701]: audit 2026-03-10T07:36:53.352863+0000 mgr.y (mgr.24407) 455 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:54.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:54 vm03 bash[23382]: cluster 2026-03-10T07:36:52.699487+0000 mgr.y (mgr.24407) 454 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:36:54.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:54 vm03 bash[23382]: cluster 2026-03-10T07:36:52.699487+0000 mgr.y (mgr.24407) 454 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:36:54.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:54 vm03 bash[23382]: audit 2026-03-10T07:36:53.352863+0000 mgr.y (mgr.24407) 455 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:54.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:54 vm03 bash[23382]: audit 2026-03-10T07:36:53.352863+0000 mgr.y (mgr.24407) 455 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:36:55.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:55 vm00 bash[28005]: audit 2026-03-10T07:36:54.588722+0000 mon.c (mon.2) 318 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:36:55.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:55 vm00 bash[28005]: audit 2026-03-10T07:36:54.588722+0000 mon.c (mon.2) 318 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:36:55.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:55 vm00 bash[20701]: audit 2026-03-10T07:36:54.588722+0000 mon.c (mon.2) 318 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd 
blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:36:55.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:55 vm00 bash[20701]: audit 2026-03-10T07:36:54.588722+0000 mon.c (mon.2) 318 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:36:55.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:55 vm03 bash[23382]: audit 2026-03-10T07:36:54.588722+0000 mon.c (mon.2) 318 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:36:55.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:55 vm03 bash[23382]: audit 2026-03-10T07:36:54.588722+0000 mon.c (mon.2) 318 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:36:56.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:56 vm00 bash[28005]: cluster 2026-03-10T07:36:54.700060+0000 mgr.y (mgr.24407) 456 : cluster [DBG] pgmap v741: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 811 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:36:56.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:56 vm00 bash[28005]: cluster 2026-03-10T07:36:54.700060+0000 mgr.y (mgr.24407) 456 : cluster [DBG] pgmap v741: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 811 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:36:56.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:56 vm00 bash[28005]: audit 2026-03-10T07:36:55.880680+0000 mon.b (mon.1) 497 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:56.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:56 vm00 bash[28005]: audit 2026-03-10T07:36:55.880680+0000 mon.b (mon.1) 497 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:56.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:56 vm00 bash[28005]: audit 2026-03-10T07:36:55.881313+0000 mon.b (mon.1) 498 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-94"}]: dispatch 2026-03-10T07:36:56.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:56 vm00 bash[28005]: audit 2026-03-10T07:36:55.881313+0000 mon.b (mon.1) 498 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-94"}]: dispatch 2026-03-10T07:36:56.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:56 vm00 bash[28005]: audit 2026-03-10T07:36:55.882198+0000 mon.a (mon.0) 2796 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:56.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:56 vm00 bash[28005]: audit 2026-03-10T07:36:55.882198+0000 mon.a (mon.0) 2796 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:56.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:56 vm00 bash[28005]: audit 2026-03-10T07:36:55.882663+0000 mon.a (mon.0) 2797 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-94"}]: dispatch 2026-03-10T07:36:56.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:56 vm00 bash[28005]: audit 2026-03-10T07:36:55.882663+0000 mon.a (mon.0) 2797 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-94"}]: dispatch 2026-03-10T07:36:56.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:56 vm00 bash[20701]: cluster 2026-03-10T07:36:54.700060+0000 mgr.y (mgr.24407) 456 : cluster [DBG] pgmap v741: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 811 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:36:56.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:56 vm00 bash[20701]: cluster 2026-03-10T07:36:54.700060+0000 mgr.y (mgr.24407) 456 : cluster [DBG] pgmap v741: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 811 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:36:56.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:56 vm00 bash[20701]: audit 2026-03-10T07:36:55.880680+0000 mon.b (mon.1) 497 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:56.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:56 vm00 bash[20701]: audit 2026-03-10T07:36:55.880680+0000 mon.b (mon.1) 497 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:56.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:56 vm00 bash[20701]: audit 2026-03-10T07:36:55.881313+0000 mon.b (mon.1) 498 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-94"}]: dispatch 2026-03-10T07:36:56.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:56 vm00 bash[20701]: audit 2026-03-10T07:36:55.881313+0000 mon.b (mon.1) 498 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-94"}]: dispatch 2026-03-10T07:36:56.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:56 vm00 bash[20701]: audit 2026-03-10T07:36:55.882198+0000 mon.a (mon.0) 2796 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:56.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:56 vm00 bash[20701]: audit 2026-03-10T07:36:55.882198+0000 mon.a (mon.0) 2796 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:56.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:56 vm00 bash[20701]: audit 2026-03-10T07:36:55.882663+0000 mon.a (mon.0) 2797 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-94"}]: dispatch 2026-03-10T07:36:56.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:56 vm00 bash[20701]: audit 2026-03-10T07:36:55.882663+0000 mon.a (mon.0) 2797 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-94"}]: dispatch 2026-03-10T07:36:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:56 vm03 bash[23382]: cluster 2026-03-10T07:36:54.700060+0000 mgr.y (mgr.24407) 456 : cluster [DBG] pgmap v741: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 811 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:36:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:56 vm03 bash[23382]: cluster 2026-03-10T07:36:54.700060+0000 mgr.y (mgr.24407) 456 : cluster [DBG] pgmap v741: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 811 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:36:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:56 vm03 bash[23382]: audit 2026-03-10T07:36:55.880680+0000 mon.b (mon.1) 497 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:56 vm03 bash[23382]: audit 2026-03-10T07:36:55.880680+0000 mon.b (mon.1) 497 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:56 vm03 bash[23382]: audit 2026-03-10T07:36:55.881313+0000 mon.b (mon.1) 498 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-94"}]: dispatch 2026-03-10T07:36:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:56 vm03 bash[23382]: audit 2026-03-10T07:36:55.881313+0000 mon.b (mon.1) 498 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-94"}]: dispatch 2026-03-10T07:36:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:56 vm03 bash[23382]: audit 2026-03-10T07:36:55.882198+0000 mon.a (mon.0) 2796 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:56 vm03 bash[23382]: audit 2026-03-10T07:36:55.882198+0000 mon.a (mon.0) 2796 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:36:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:56 vm03 bash[23382]: audit 2026-03-10T07:36:55.882663+0000 mon.a (mon.0) 2797 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-94"}]: dispatch 2026-03-10T07:36:56.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:56 vm03 bash[23382]: audit 2026-03-10T07:36:55.882663+0000 mon.a (mon.0) 2797 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-94"}]: dispatch 2026-03-10T07:36:57.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:57 vm00 bash[28005]: cluster 2026-03-10T07:36:56.372286+0000 mon.a (mon.0) 2798 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-10T07:36:57.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:57 vm00 bash[28005]: cluster 2026-03-10T07:36:56.372286+0000 mon.a (mon.0) 2798 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-10T07:36:57.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:57 vm00 bash[28005]: cluster 2026-03-10T07:36:56.700378+0000 mgr.y (mgr.24407) 457 : cluster [DBG] pgmap v743: 260 pgs: 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:36:57.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:57 vm00 bash[28005]: cluster 2026-03-10T07:36:56.700378+0000 mgr.y (mgr.24407) 457 : cluster [DBG] pgmap v743: 260 pgs: 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:36:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:57 vm00 bash[20701]: cluster 2026-03-10T07:36:56.372286+0000 mon.a (mon.0) 2798 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-10T07:36:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:57 vm00 bash[20701]: cluster 2026-03-10T07:36:56.372286+0000 mon.a (mon.0) 2798 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-10T07:36:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:57 vm00 bash[20701]: cluster 2026-03-10T07:36:56.700378+0000 mgr.y (mgr.24407) 457 : cluster [DBG] pgmap v743: 260 pgs: 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:36:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:57 vm00 bash[20701]: cluster 2026-03-10T07:36:56.700378+0000 mgr.y (mgr.24407) 457 : cluster [DBG] pgmap v743: 260 pgs: 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:36:57.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:57 vm03 bash[23382]: cluster 2026-03-10T07:36:56.372286+0000 mon.a (mon.0) 2798 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-10T07:36:57.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:57 vm03 bash[23382]: cluster 2026-03-10T07:36:56.372286+0000 mon.a (mon.0) 2798 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-10T07:36:57.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:57 vm03 bash[23382]: cluster 2026-03-10T07:36:56.700378+0000 mgr.y (mgr.24407) 457 : cluster [DBG] pgmap v743: 260 pgs: 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:36:57.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:57 vm03 bash[23382]: cluster 2026-03-10T07:36:56.700378+0000 mgr.y (mgr.24407) 457 : cluster [DBG] pgmap v743: 260 pgs: 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T07:36:58.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:58 vm03 bash[23382]: cluster 
2026-03-10T07:36:57.386205+0000 mon.a (mon.0) 2799 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-10T07:36:58.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:58 vm03 bash[23382]: cluster 2026-03-10T07:36:57.386205+0000 mon.a (mon.0) 2799 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-10T07:36:58.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:58 vm03 bash[23382]: audit 2026-03-10T07:36:57.387822+0000 mon.b (mon.1) 499 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:58.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:58 vm03 bash[23382]: audit 2026-03-10T07:36:57.387822+0000 mon.b (mon.1) 499 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:58.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:58 vm03 bash[23382]: audit 2026-03-10T07:36:57.390136+0000 mon.a (mon.0) 2800 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:58.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:58 vm03 bash[23382]: audit 2026-03-10T07:36:57.390136+0000 mon.a (mon.0) 2800 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:58.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:58 vm00 bash[28005]: cluster 2026-03-10T07:36:57.386205+0000 mon.a (mon.0) 2799 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-10T07:36:58.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:58 vm00 bash[28005]: cluster 2026-03-10T07:36:57.386205+0000 mon.a (mon.0) 2799 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-10T07:36:58.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:58 vm00 bash[28005]: audit 2026-03-10T07:36:57.387822+0000 mon.b (mon.1) 499 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:58.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:58 vm00 bash[28005]: audit 2026-03-10T07:36:57.387822+0000 mon.b (mon.1) 499 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:58.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:58 vm00 bash[28005]: audit 2026-03-10T07:36:57.390136+0000 mon.a (mon.0) 2800 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:58.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:58 vm00 bash[28005]: audit 2026-03-10T07:36:57.390136+0000 mon.a (mon.0) 2800 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:58.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:58 vm00 bash[20701]: cluster 2026-03-10T07:36:57.386205+0000 mon.a (mon.0) 2799 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-10T07:36:58.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:58 vm00 bash[20701]: cluster 2026-03-10T07:36:57.386205+0000 mon.a (mon.0) 2799 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-10T07:36:58.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:58 vm00 bash[20701]: audit 2026-03-10T07:36:57.387822+0000 mon.b (mon.1) 499 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:58.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:58 vm00 bash[20701]: audit 2026-03-10T07:36:57.387822+0000 mon.b (mon.1) 499 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:58.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:58 vm00 bash[20701]: audit 2026-03-10T07:36:57.390136+0000 mon.a (mon.0) 2800 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:58.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:58 vm00 bash[20701]: audit 2026-03-10T07:36:57.390136+0000 mon.a (mon.0) 2800 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:36:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:59 vm00 bash[28005]: audit 2026-03-10T07:36:58.451496+0000 mon.a (mon.0) 2801 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:59.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:59 vm00 bash[28005]: audit 2026-03-10T07:36:58.451496+0000 mon.a (mon.0) 2801 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:59.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:59 vm00 bash[28005]: audit 2026-03-10T07:36:58.458564+0000 mon.b (mon.1) 500 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:59.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:59 vm00 bash[28005]: audit 2026-03-10T07:36:58.458564+0000 mon.b (mon.1) 500 : audit [DBG] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:59.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:59 vm00 bash[28005]: cluster 2026-03-10T07:36:58.464082+0000 mon.a (mon.0) 2802 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-10T07:36:59.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:59 vm00 bash[28005]: cluster 2026-03-10T07:36:58.464082+0000 mon.a (mon.0) 2802 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-10T07:36:59.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:59 vm00 bash[28005]: cluster 2026-03-10T07:36:58.700775+0000 mgr.y (mgr.24407) 458 : cluster [DBG] pgmap v746: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:36:59.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:59 vm00 bash[28005]: cluster 2026-03-10T07:36:58.700775+0000 mgr.y (mgr.24407) 458 : cluster [DBG] pgmap v746: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:36:59.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:59 vm00 bash[28005]: cluster 2026-03-10T07:36:59.475131+0000 mon.a (mon.0) 2803 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-10T07:36:59.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:36:59 vm00 bash[28005]: cluster 2026-03-10T07:36:59.475131+0000 mon.a (mon.0) 2803 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-10T07:36:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:59 vm00 bash[20701]: audit 2026-03-10T07:36:58.451496+0000 mon.a (mon.0) 2801 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:59 vm00 bash[20701]: audit 2026-03-10T07:36:58.451496+0000 mon.a (mon.0) 2801 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:36:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:59 vm00 bash[20701]: audit 2026-03-10T07:36:58.458564+0000 mon.b (mon.1) 500 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:59 vm00 bash[20701]: audit 2026-03-10T07:36:58.458564+0000 mon.b (mon.1) 500 : audit [DBG] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:36:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:59 vm00 bash[20701]: cluster 2026-03-10T07:36:58.464082+0000 mon.a (mon.0) 2802 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-10T07:36:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:59 vm00 bash[20701]: cluster 2026-03-10T07:36:58.464082+0000 mon.a (mon.0) 2802 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-10T07:36:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:59 vm00 bash[20701]: cluster 2026-03-10T07:36:58.700775+0000 mgr.y (mgr.24407) 458 : cluster [DBG] pgmap v746: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:36:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:59 vm00 bash[20701]: cluster 2026-03-10T07:36:58.700775+0000 mgr.y (mgr.24407) 458 : cluster [DBG] pgmap v746: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:36:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:59 vm00 bash[20701]: cluster 2026-03-10T07:36:59.475131+0000 mon.a (mon.0) 2803 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-10T07:36:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:36:59 vm00 bash[20701]: cluster 2026-03-10T07:36:59.475131+0000 mon.a (mon.0) 2803 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-10T07:37:00.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:59 vm03 bash[23382]: audit 2026-03-10T07:36:58.451496+0000 mon.a (mon.0) 2801 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:00.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:59 vm03 bash[23382]: audit 2026-03-10T07:36:58.451496+0000 mon.a (mon.0) 2801 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:00.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:59 vm03 bash[23382]: audit 2026-03-10T07:36:58.458564+0000 mon.b (mon.1) 500 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:37:00.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:59 vm03 bash[23382]: audit 2026-03-10T07:36:58.458564+0000 mon.b (mon.1) 500 : audit [DBG] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T07:37:00.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:59 vm03 bash[23382]: cluster 2026-03-10T07:36:58.464082+0000 mon.a (mon.0) 2802 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-10T07:37:00.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:59 vm03 bash[23382]: cluster 2026-03-10T07:36:58.464082+0000 mon.a (mon.0) 2802 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-10T07:37:00.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:59 vm03 bash[23382]: cluster 2026-03-10T07:36:58.700775+0000 mgr.y (mgr.24407) 458 : cluster [DBG] pgmap v746: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:00.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:59 vm03 bash[23382]: cluster 2026-03-10T07:36:58.700775+0000 mgr.y (mgr.24407) 458 : cluster [DBG] pgmap v746: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:00.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:59 vm03 bash[23382]: cluster 2026-03-10T07:36:59.475131+0000 mon.a (mon.0) 2803 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-10T07:37:00.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:36:59 vm03 bash[23382]: cluster 2026-03-10T07:36:59.475131+0000 mon.a (mon.0) 2803 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-10T07:37:01.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:37:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:37:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:37:02.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:01 vm03 bash[23382]: cluster 2026-03-10T07:37:00.701588+0000 mgr.y (mgr.24407) 459 : cluster [DBG] pgmap v748: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.4 KiB/s wr, 3 op/s 2026-03-10T07:37:02.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:01 vm03 bash[23382]: cluster 2026-03-10T07:37:00.701588+0000 mgr.y (mgr.24407) 459 : cluster [DBG] pgmap v748: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.4 KiB/s wr, 3 op/s 2026-03-10T07:37:02.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:01 vm00 bash[28005]: cluster 2026-03-10T07:37:00.701588+0000 mgr.y (mgr.24407) 459 : cluster [DBG] pgmap v748: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.4 KiB/s wr, 3 op/s 2026-03-10T07:37:02.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:01 vm00 bash[28005]: cluster 2026-03-10T07:37:00.701588+0000 mgr.y (mgr.24407) 459 : cluster [DBG] pgmap v748: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.4 KiB/s wr, 3 op/s 2026-03-10T07:37:02.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:01 vm00 bash[20701]: cluster 2026-03-10T07:37:00.701588+0000 mgr.y (mgr.24407) 459 : cluster [DBG] pgmap v748: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.4 KiB/s wr, 3 op/s 2026-03-10T07:37:02.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:01 vm00 bash[20701]: cluster 2026-03-10T07:37:00.701588+0000 mgr.y (mgr.24407) 459 : cluster [DBG] pgmap v748: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.4 KiB/s wr, 3 op/s 2026-03-10T07:37:03.763 
INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:37:03 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:37:04.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:03 vm00 bash[28005]: cluster 2026-03-10T07:37:02.701905+0000 mgr.y (mgr.24407) 460 : cluster [DBG] pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:37:04.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:03 vm00 bash[28005]: cluster 2026-03-10T07:37:02.701905+0000 mgr.y (mgr.24407) 460 : cluster [DBG] pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:37:04.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:03 vm00 bash[20701]: cluster 2026-03-10T07:37:02.701905+0000 mgr.y (mgr.24407) 460 : cluster [DBG] pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:37:04.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:03 vm00 bash[20701]: cluster 2026-03-10T07:37:02.701905+0000 mgr.y (mgr.24407) 460 : cluster [DBG] pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:37:04.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:03 vm03 bash[23382]: cluster 2026-03-10T07:37:02.701905+0000 mgr.y (mgr.24407) 460 : cluster [DBG] pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:37:04.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:03 vm03 bash[23382]: cluster 2026-03-10T07:37:02.701905+0000 mgr.y (mgr.24407) 460 : cluster [DBG] pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:37:05.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:04 vm03 bash[23382]: audit 2026-03-10T07:37:03.360579+0000 mgr.y (mgr.24407) 461 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:05.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:04 vm03 bash[23382]: audit 2026-03-10T07:37:03.360579+0000 mgr.y (mgr.24407) 461 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:05.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:04 vm00 bash[28005]: audit 2026-03-10T07:37:03.360579+0000 mgr.y (mgr.24407) 461 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:05.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:04 vm00 bash[28005]: audit 2026-03-10T07:37:03.360579+0000 mgr.y (mgr.24407) 461 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:05.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:04 vm00 bash[20701]: audit 2026-03-10T07:37:03.360579+0000 mgr.y (mgr.24407) 461 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:05.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:04 vm00 bash[20701]: audit 2026-03-10T07:37:03.360579+0000 mgr.y (mgr.24407) 461 : audit [DBG] from='client.24373 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:06.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:05 vm03 bash[23382]: cluster 2026-03-10T07:37:04.702574+0000 mgr.y (mgr.24407) 462 : cluster [DBG] pgmap v750: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 698 B/s rd, 838 B/s wr, 2 op/s 2026-03-10T07:37:06.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:05 vm03 bash[23382]: cluster 2026-03-10T07:37:04.702574+0000 mgr.y (mgr.24407) 462 : cluster [DBG] pgmap v750: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 698 B/s rd, 838 B/s wr, 2 op/s 2026-03-10T07:37:06.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:05 vm00 bash[28005]: cluster 2026-03-10T07:37:04.702574+0000 mgr.y (mgr.24407) 462 : cluster [DBG] pgmap v750: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 698 B/s rd, 838 B/s wr, 2 op/s 2026-03-10T07:37:06.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:05 vm00 bash[28005]: cluster 2026-03-10T07:37:04.702574+0000 mgr.y (mgr.24407) 462 : cluster [DBG] pgmap v750: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 698 B/s rd, 838 B/s wr, 2 op/s 2026-03-10T07:37:06.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:05 vm00 bash[20701]: cluster 2026-03-10T07:37:04.702574+0000 mgr.y (mgr.24407) 462 : cluster [DBG] pgmap v750: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 698 B/s rd, 838 B/s wr, 2 op/s 2026-03-10T07:37:06.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:05 vm00 bash[20701]: cluster 2026-03-10T07:37:04.702574+0000 mgr.y (mgr.24407) 462 : cluster [DBG] pgmap v750: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 698 B/s rd, 838 B/s wr, 2 op/s 2026-03-10T07:37:08.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:08 vm00 bash[20701]: cluster 2026-03-10T07:37:06.703160+0000 mgr.y (mgr.24407) 463 : cluster [DBG] pgmap v751: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 745 B/s wr, 2 op/s 2026-03-10T07:37:08.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:08 vm00 bash[20701]: cluster 2026-03-10T07:37:06.703160+0000 mgr.y (mgr.24407) 463 : cluster [DBG] pgmap v751: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 745 B/s wr, 2 op/s 2026-03-10T07:37:08.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:08 vm00 bash[20701]: audit 2026-03-10T07:37:06.981718+0000 mon.c (mon.2) 319 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:37:08.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:08 vm00 bash[20701]: audit 2026-03-10T07:37:06.981718+0000 mon.c (mon.2) 319 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:37:08.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:08 vm00 bash[20701]: audit 2026-03-10T07:37:07.313273+0000 mon.c (mon.2) 320 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:37:08.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:08 vm00 bash[20701]: audit 2026-03-10T07:37:07.313273+0000 mon.c (mon.2) 320 : audit [INF] 
from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:37:08.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:08 vm00 bash[20701]: audit 2026-03-10T07:37:07.313762+0000 mon.a (mon.0) 2804 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:37:08.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:08 vm00 bash[20701]: audit 2026-03-10T07:37:07.333495+0000 mon.c (mon.2) 321 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:37:08.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:08 vm00 bash[20701]: audit 2026-03-10T07:37:07.333931+0000 mon.a (mon.0) 2805 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:37:08.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:08 vm00 bash[20701]: audit 2026-03-10T07:37:07.334824+0000 mon.c (mon.2) 322 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:37:08.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:08 vm00 bash[20701]: audit 2026-03-10T07:37:07.335496+0000 mon.c (mon.2) 323 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:37:08.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:08 vm00 bash[20701]: audit 2026-03-10T07:37:07.486335+0000 mon.a (mon.0) 2806 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:37:08.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:08 vm00 bash[28005]: cluster 2026-03-10T07:37:06.703160+0000 mgr.y (mgr.24407) 463 : cluster [DBG] pgmap v751: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 745 B/s wr, 2 op/s
2026-03-10T07:37:08.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:08 vm00 bash[28005]: audit 2026-03-10T07:37:06.981718+0000 mon.c (mon.2) 319 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:37:08.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:08 vm00 bash[28005]: audit 2026-03-10T07:37:07.313273+0000 mon.c (mon.2) 320 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:37:08.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:08 vm00 bash[28005]: audit 2026-03-10T07:37:07.313762+0000 mon.a (mon.0) 2804 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:37:08.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:08 vm00 bash[28005]: audit 2026-03-10T07:37:07.333495+0000 mon.c (mon.2) 321 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:37:08.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:08 vm00 bash[28005]: audit 2026-03-10T07:37:07.333931+0000 mon.a (mon.0) 2805 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:37:08.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:08 vm00 bash[28005]: audit 2026-03-10T07:37:07.334824+0000 mon.c (mon.2) 322 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:37:08.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:08 vm00 bash[28005]: audit 2026-03-10T07:37:07.335496+0000 mon.c (mon.2) 323 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:37:08.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:08 vm00 bash[28005]: audit 2026-03-10T07:37:07.486335+0000 mon.a (mon.0) 2806 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:37:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:08 vm03 bash[23382]: cluster 2026-03-10T07:37:06.703160+0000 mgr.y (mgr.24407) 463 : cluster [DBG] pgmap v751: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 745 B/s wr, 2 op/s
2026-03-10T07:37:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:08 vm03 bash[23382]: audit 2026-03-10T07:37:06.981718+0000 mon.c (mon.2) 319 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:37:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:08 vm03 bash[23382]: audit 2026-03-10T07:37:07.313273+0000 mon.c (mon.2) 320 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:37:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:08 vm03 bash[23382]: audit 2026-03-10T07:37:07.313762+0000 mon.a (mon.0) 2804 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:37:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:08 vm03 bash[23382]: audit 2026-03-10T07:37:07.333495+0000 mon.c (mon.2) 321 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:37:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:08 vm03 bash[23382]: audit 2026-03-10T07:37:07.333931+0000 mon.a (mon.0) 2805 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:37:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:08 vm03 bash[23382]: audit 2026-03-10T07:37:07.334824+0000 mon.c (mon.2) 322 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:37:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:08 vm03 bash[23382]: audit 2026-03-10T07:37:07.335496+0000 mon.c (mon.2) 323 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:37:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:08 vm03 bash[23382]: audit 2026-03-10T07:37:07.486335+0000 mon.a (mon.0) 2806 : audit [INF] from='mgr.24407 ' entity='mgr.y'
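The mgr.y audit trail above (mon.a 2804-2806, mon.c 319-323) looks like cephadm's periodic config reconciliation: the active mgr clears the per-host osd_memory_target overrides it manages for memory autotuning, then regenerates the minimal conf and fetches the admin keyring it distributes to hosts. The JSON payloads are the wire form of roughly this CLI sequence (a sketch reconstructed from the audit entries; only names already shown above are used):

    ceph config rm osd/host:vm03 osd_memory_target
    ceph config rm osd/host:vm00 osd_memory_target
    ceph config generate-minimal-conf
    ceph auth get client.admin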
2026-03-10T07:37:10.013 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 07:37:09 vm03 bash[44711]: debug 2026-03-10T07:37:09.614+0000 7ff4027f0640 -1 snap_mapper.add_oid found existing snaps mapped on 100:e887e4a4:test-rados-api-vm00-59782-97::foo:21, removing
2026-03-10T07:37:10.013 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 07:37:09 vm03 bash[38760]: debug 2026-03-10T07:37:09.614+0000 7f1e2561e640 -1 snap_mapper.add_oid found existing snaps mapped on 100:e887e4a4:test-rados-api-vm00-59782-97::foo:21, removing
2026-03-10T07:37:10.014 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 07:37:09 vm03 bash[32803]: debug 2026-03-10T07:37:09.614+0000 7f7126dcb640 -1 snap_mapper.add_oid found existing snaps mapped on 100:e887e4a4:test-rados-api-vm00-59782-97::foo:21, removing
2026-03-10T07:37:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:10 vm03 bash[23382]: cluster 2026-03-10T07:37:08.703438+0000 mgr.y (mgr.24407) 464 : cluster [DBG] pgmap v752: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 614 B/s wr, 2 op/s
2026-03-10T07:37:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:10 vm03 bash[23382]: audit 2026-03-10T07:37:09.595876+0000 mon.c (mon.2) 324 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:37:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:10 vm03 bash[23382]: audit 2026-03-10T07:37:09.628672+0000 mon.b (mon.1) 501 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:37:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:10 vm03 bash[23382]: audit 2026-03-10T07:37:09.629776+0000 mon.b (mon.1) 502 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-96"}]: dispatch
2026-03-10T07:37:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:10 vm03 bash[23382]: audit 2026-03-10T07:37:09.630264+0000 mon.a (mon.0) 2807 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:37:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:10 vm03 bash[23382]: audit 2026-03-10T07:37:09.631117+0000 mon.a (mon.0) 2808 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-96"}]: dispatch
2026-03-10T07:37:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:10 vm03 bash[23382]: audit 2026-03-10T07:37:09.976446+0000 mon.c (mon.2) 325 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "306.3", "id": [7, 2]}]: dispatch
2026-03-10T07:37:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:10 vm03 bash[23382]: audit 2026-03-10T07:37:09.976669+0000 mon.c (mon.2) 326 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "306.c", "id": [7, 5]}]: dispatch
2026-03-10T07:37:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:10 vm03 bash[23382]: audit 2026-03-10T07:37:09.977071+0000 mon.a (mon.0) 2809 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "306.3", "id": [7, 2]}]: dispatch
2026-03-10T07:37:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:10 vm03 bash[23382]: audit 2026-03-10T07:37:09.977165+0000 mon.a (mon.0) 2810 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "306.c", "id": [7, 5]}]: dispatch
2026-03-10T07:37:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:10 vm00 bash[20701]: cluster 2026-03-10T07:37:08.703438+0000 mgr.y (mgr.24407) 464 : cluster [DBG] pgmap v752: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 614 B/s wr, 2 op/s
2026-03-10T07:37:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:10 vm00 bash[20701]: audit 2026-03-10T07:37:09.595876+0000 mon.c (mon.2) 324 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:37:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:10 vm00 bash[20701]: audit 2026-03-10T07:37:09.628672+0000 mon.b (mon.1) 501 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:37:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:10 vm00 bash[20701]: audit 2026-03-10T07:37:09.629776+0000 mon.b (mon.1) 502 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-96"}]: dispatch
2026-03-10T07:37:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:10 vm00 bash[20701]: audit 2026-03-10T07:37:09.630264+0000 mon.a (mon.0) 2807 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:37:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:10 vm00 bash[20701]: audit 2026-03-10T07:37:09.631117+0000 mon.a (mon.0) 2808 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-96"}]: dispatch
2026-03-10T07:37:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:10 vm00 bash[20701]: audit 2026-03-10T07:37:09.976446+0000 mon.c (mon.2) 325 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "306.3", "id": [7, 2]}]: dispatch
2026-03-10T07:37:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:10 vm00 bash[20701]: audit 2026-03-10T07:37:09.976669+0000 mon.c (mon.2) 326 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "306.c", "id": [7, 5]}]: dispatch
2026-03-10T07:37:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:10 vm00 bash[28005]: cluster 2026-03-10T07:37:08.703438+0000 mgr.y (mgr.24407) 464 : cluster [DBG] pgmap v752: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 614 B/s wr, 2 op/s
2026-03-10T07:37:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:10 vm00 bash[28005]: audit 2026-03-10T07:37:09.595876+0000 mon.c (mon.2) 324 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:37:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:10 vm00 bash[28005]: audit 2026-03-10T07:37:09.628672+0000 mon.b (mon.1) 501 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:37:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:10 vm00 bash[28005]: audit 2026-03-10T07:37:09.629776+0000 mon.b (mon.1) 502 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-96"}]: dispatch
2026-03-10T07:37:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:10 vm00 bash[28005]: audit 2026-03-10T07:37:09.630264+0000 mon.a (mon.0) 2807 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:37:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:10 vm00 bash[28005]: audit 2026-03-10T07:37:09.631117+0000 mon.a (mon.0) 2808 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-96"}]: dispatch
2026-03-10T07:37:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:10 vm00 bash[28005]: audit 2026-03-10T07:37:09.976446+0000 mon.c (mon.2) 325 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "306.3", "id": [7, 2]}]: dispatch
2026-03-10T07:37:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:10 vm00 bash[28005]: audit 2026-03-10T07:37:09.976669+0000 mon.c (mon.2) 326 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "306.c", "id": [7, 5]}]: dispatch
2026-03-10T07:37:10.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:10 vm00 bash[28005]: audit 2026-03-10T07:37:09.977071+0000 mon.a (mon.0) 2809 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "306.3", "id": [7, 2]}]: dispatch
2026-03-10T07:37:10.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:10 vm00 bash[28005]: audit 2026-03-10T07:37:09.977165+0000 mon.a (mon.0) 2810 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "306.c", "id": [7, 5]}]: dispatch
2026-03-10T07:37:10.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:10 vm00 bash[20701]: audit 2026-03-10T07:37:09.977071+0000 mon.a (mon.0) 2809 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "306.3", "id": [7, 2]}]: dispatch
2026-03-10T07:37:10.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:10 vm00 bash[20701]: audit 2026-03-10T07:37:09.977165+0000 mon.a (mon.0) 2810 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "306.c", "id": [7, 5]}]: dispatch
2026-03-10T07:37:11.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:37:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:37:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:37:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:11 vm00 bash[28005]: audit 2026-03-10T07:37:10.120761+0000 mon.a (mon.0) 2811 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "306.3", "id": [7, 2]}]': finished
2026-03-10T07:37:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:11 vm00 bash[28005]: audit 2026-03-10T07:37:10.120883+0000 mon.a (mon.0) 2812 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "306.c", "id": [7, 5]}]': finished
2026-03-10T07:37:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:11 vm00 bash[28005]: cluster 2026-03-10T07:37:10.136804+0000 mon.a (mon.0) 2813 : cluster [DBG] osdmap e488: 8 total, 8 up, 8 in
2026-03-10T07:37:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:11 vm00 bash[28005]: cluster 2026-03-10T07:37:11.130197+0000 mon.a (mon.0) 2814 : cluster [DBG] osdmap e489: 8 total, 8 up, 8 in
2026-03-10T07:37:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:11 vm00 bash[28005]: audit 2026-03-10T07:37:11.130428+0000 mon.b (mon.1) 503 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-98","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:37:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:11 vm00 bash[28005]: audit 2026-03-10T07:37:11.134096+0000 mon.a (mon.0) 2815 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-98","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:37:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:11 vm00 bash[20701]: audit 2026-03-10T07:37:10.120761+0000 mon.a (mon.0) 2811 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "306.3", "id": [7, 2]}]': finished
2026-03-10T07:37:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:11 vm00 bash[20701]: audit 2026-03-10T07:37:10.120883+0000 mon.a (mon.0) 2812 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "306.c", "id": [7, 5]}]': finished
2026-03-10T07:37:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:11 vm00 bash[20701]: cluster 2026-03-10T07:37:10.136804+0000 mon.a (mon.0) 2813 : cluster [DBG] osdmap e488: 8 total, 8 up, 8 in
2026-03-10T07:37:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:11 vm00 bash[20701]: cluster 2026-03-10T07:37:11.130197+0000 mon.a (mon.0) 2814 : cluster [DBG] osdmap e489: 8 total, 8 up, 8 in
2026-03-10T07:37:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:11 vm00 bash[20701]: audit 2026-03-10T07:37:11.130428+0000 mon.b (mon.1) 503 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-98","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:37:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:11 vm00 bash[20701]: audit 2026-03-10T07:37:11.134096+0000 mon.a (mon.0) 2815 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-98","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:37:11.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:11 vm03 bash[23382]: audit 2026-03-10T07:37:10.120761+0000 mon.a (mon.0) 2811 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "306.3", "id": [7, 2]}]': finished
2026-03-10T07:37:11.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:11 vm03 bash[23382]: audit 2026-03-10T07:37:10.120883+0000 mon.a (mon.0) 2812 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "306.c", "id": [7, 5]}]': finished
2026-03-10T07:37:11.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:11 vm03 bash[23382]: cluster 2026-03-10T07:37:10.136804+0000 mon.a (mon.0) 2813 : cluster [DBG] osdmap e488: 8 total, 8 up, 8 in
2026-03-10T07:37:11.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:11 vm03 bash[23382]: cluster 2026-03-10T07:37:11.130197+0000 mon.a (mon.0) 2814 : cluster [DBG] osdmap e489: 8 total, 8 up, 8 in
2026-03-10T07:37:11.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:11 vm03 bash[23382]: audit 2026-03-10T07:37:11.130428+0000 mon.b (mon.1) 503 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-98","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:37:11.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:11 vm03 bash[23382]: audit 2026-03-10T07:37:11.134096+0000 mon.a (mon.0) 2815 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-98","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:37:12.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:12 vm03 bash[23382]: cluster 2026-03-10T07:37:10.703801+0000 mgr.y (mgr.24407) 465 : cluster [DBG] pgmap v754: 260 pgs: 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 op/s
2026-03-10T07:37:12.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:12 vm03 bash[23382]: audit 2026-03-10T07:37:12.128156+0000 mon.a (mon.0) 2816 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-98","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:37:12.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:12 vm03 bash[23382]: cluster 2026-03-10T07:37:12.131170+0000 mon.a (mon.0) 2817 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in
2026-03-10T07:37:12.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:12 vm03 bash[23382]: audit 2026-03-10T07:37:12.159414+0000 mon.b (mon.1) 504 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:37:12.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:12 vm03 bash[23382]: audit 2026-03-10T07:37:12.161074+0000 mon.a (mon.0) 2818 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98", "force_nonempty": "--force-nonempty" }]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:37:12.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:12 vm00 bash[28005]: cluster 2026-03-10T07:37:10.703801+0000 mgr.y (mgr.24407) 465 : cluster [DBG] pgmap v754: 260 pgs: 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 op/s 2026-03-10T07:37:12.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:12 vm00 bash[28005]: cluster 2026-03-10T07:37:10.703801+0000 mgr.y (mgr.24407) 465 : cluster [DBG] pgmap v754: 260 pgs: 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 op/s 2026-03-10T07:37:12.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:12 vm00 bash[28005]: audit 2026-03-10T07:37:12.128156+0000 mon.a (mon.0) 2816 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:12.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:12 vm00 bash[28005]: audit 2026-03-10T07:37:12.128156+0000 mon.a (mon.0) 2816 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:12.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:12 vm00 bash[28005]: cluster 2026-03-10T07:37:12.131170+0000 mon.a (mon.0) 2817 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-10T07:37:12.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:12 vm00 bash[28005]: cluster 2026-03-10T07:37:12.131170+0000 mon.a (mon.0) 2817 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-10T07:37:12.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:12 vm00 bash[28005]: audit 2026-03-10T07:37:12.159414+0000 mon.b (mon.1) 504 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:37:12.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:12 vm00 bash[28005]: audit 2026-03-10T07:37:12.159414+0000 mon.b (mon.1) 504 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:37:12.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:12 vm00 bash[28005]: audit 2026-03-10T07:37:12.161074+0000 mon.a (mon.0) 2818 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:37:12.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:12 vm00 bash[28005]: audit 2026-03-10T07:37:12.161074+0000 mon.a (mon.0) 2818 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:37:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:12 vm00 bash[20701]: cluster 2026-03-10T07:37:10.703801+0000 mgr.y (mgr.24407) 465 : cluster [DBG] pgmap v754: 260 pgs: 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 op/s 2026-03-10T07:37:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:12 vm00 bash[20701]: cluster 2026-03-10T07:37:10.703801+0000 mgr.y (mgr.24407) 465 : cluster [DBG] pgmap v754: 260 pgs: 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 op/s 2026-03-10T07:37:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:12 vm00 bash[20701]: audit 2026-03-10T07:37:12.128156+0000 mon.a (mon.0) 2816 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:12 vm00 bash[20701]: audit 2026-03-10T07:37:12.128156+0000 mon.a (mon.0) 2816 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:12 vm00 bash[20701]: cluster 2026-03-10T07:37:12.131170+0000 mon.a (mon.0) 2817 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-10T07:37:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:12 vm00 bash[20701]: cluster 2026-03-10T07:37:12.131170+0000 mon.a (mon.0) 2817 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-10T07:37:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:12 vm00 bash[20701]: audit 2026-03-10T07:37:12.159414+0000 mon.b (mon.1) 504 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:37:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:12 vm00 bash[20701]: audit 2026-03-10T07:37:12.159414+0000 mon.b (mon.1) 504 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:37:12.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:12 vm00 bash[20701]: audit 2026-03-10T07:37:12.161074+0000 mon.a (mon.0) 2818 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:37:12.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:12 vm00 bash[20701]: audit 2026-03-10T07:37:12.161074+0000 mon.a (mon.0) 2818 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:37:13.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:37:13 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:37:14.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:14 vm03 bash[23382]: cluster 2026-03-10T07:37:12.704144+0000 mgr.y (mgr.24407) 466 : cluster [DBG] pgmap v757: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:37:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:14 vm03 bash[23382]: cluster 2026-03-10T07:37:12.704144+0000 mgr.y (mgr.24407) 466 : cluster [DBG] pgmap v757: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:37:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:14 vm03 bash[23382]: audit 2026-03-10T07:37:13.131890+0000 mon.a (mon.0) 2819 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:37:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:14 vm03 bash[23382]: audit 2026-03-10T07:37:13.131890+0000 mon.a (mon.0) 2819 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:37:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:14 vm03 bash[23382]: audit 2026-03-10T07:37:13.134995+0000 mon.b (mon.1) 505 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-98", "mode": "writeback"}]: dispatch 2026-03-10T07:37:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:14 vm03 bash[23382]: audit 2026-03-10T07:37:13.134995+0000 mon.b (mon.1) 505 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-98", "mode": "writeback"}]: dispatch 2026-03-10T07:37:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:14 vm03 bash[23382]: cluster 2026-03-10T07:37:13.137905+0000 mon.a (mon.0) 2820 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-10T07:37:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:14 vm03 bash[23382]: cluster 2026-03-10T07:37:13.137905+0000 mon.a (mon.0) 2820 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-10T07:37:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:14 vm03 bash[23382]: audit 2026-03-10T07:37:13.143834+0000 mon.a (mon.0) 2821 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-98", "mode": "writeback"}]: dispatch 2026-03-10T07:37:14.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:14 vm03 bash[23382]: audit 2026-03-10T07:37:13.143834+0000 mon.a (mon.0) 2821 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-98", "mode": "writeback"}]: dispatch 2026-03-10T07:37:14.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:14 vm00 bash[28005]: cluster 2026-03-10T07:37:12.704144+0000 mgr.y (mgr.24407) 466 : cluster [DBG] pgmap v757: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:37:14.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:14 vm00 bash[28005]: cluster 2026-03-10T07:37:12.704144+0000 mgr.y (mgr.24407) 466 : cluster [DBG] pgmap v757: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:37:14.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:14 vm00 bash[28005]: audit 2026-03-10T07:37:13.131890+0000 mon.a (mon.0) 2819 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:37:14.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:14 vm00 bash[28005]: audit 2026-03-10T07:37:13.131890+0000 mon.a (mon.0) 2819 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:37:14.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:14 vm00 bash[28005]: audit 2026-03-10T07:37:13.134995+0000 mon.b (mon.1) 505 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-98", "mode": "writeback"}]: dispatch 2026-03-10T07:37:14.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:14 vm00 bash[28005]: audit 2026-03-10T07:37:13.134995+0000 mon.b (mon.1) 505 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-98", "mode": "writeback"}]: dispatch 2026-03-10T07:37:14.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:14 vm00 bash[28005]: cluster 2026-03-10T07:37:13.137905+0000 mon.a (mon.0) 2820 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-10T07:37:14.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:14 vm00 bash[28005]: cluster 2026-03-10T07:37:13.137905+0000 mon.a (mon.0) 2820 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-10T07:37:14.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:14 vm00 bash[28005]: audit 2026-03-10T07:37:13.143834+0000 mon.a (mon.0) 2821 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-98", "mode": "writeback"}]: dispatch 2026-03-10T07:37:14.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:14 vm00 bash[28005]: audit 2026-03-10T07:37:13.143834+0000 mon.a (mon.0) 2821 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-98", "mode": "writeback"}]: dispatch 2026-03-10T07:37:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:14 vm00 bash[20701]: cluster 2026-03-10T07:37:12.704144+0000 mgr.y (mgr.24407) 466 : cluster [DBG] pgmap v757: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:37:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:14 vm00 bash[20701]: cluster 2026-03-10T07:37:12.704144+0000 mgr.y (mgr.24407) 466 : cluster [DBG] pgmap v757: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:37:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:14 vm00 bash[20701]: audit 2026-03-10T07:37:13.131890+0000 mon.a (mon.0) 2819 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:37:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:14 vm00 bash[20701]: audit 2026-03-10T07:37:13.131890+0000 mon.a (mon.0) 2819 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:37:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:14 vm00 bash[20701]: audit 2026-03-10T07:37:13.134995+0000 mon.b (mon.1) 505 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-98", "mode": "writeback"}]: dispatch 2026-03-10T07:37:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:14 vm00 bash[20701]: audit 2026-03-10T07:37:13.134995+0000 mon.b (mon.1) 505 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-98", "mode": "writeback"}]: dispatch 2026-03-10T07:37:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:14 vm00 bash[20701]: cluster 2026-03-10T07:37:13.137905+0000 mon.a (mon.0) 2820 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-10T07:37:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:14 vm00 bash[20701]: cluster 2026-03-10T07:37:13.137905+0000 mon.a (mon.0) 2820 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-10T07:37:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:14 vm00 bash[20701]: audit 2026-03-10T07:37:13.143834+0000 mon.a (mon.0) 2821 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-98", "mode": "writeback"}]: dispatch 2026-03-10T07:37:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:14 vm00 bash[20701]: audit 2026-03-10T07:37:13.143834+0000 mon.a (mon.0) 2821 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-98", "mode": "writeback"}]: dispatch 2026-03-10T07:37:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:15 vm03 bash[23382]: audit 2026-03-10T07:37:13.367589+0000 mgr.y (mgr.24407) 467 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:15 vm03 bash[23382]: audit 2026-03-10T07:37:13.367589+0000 mgr.y (mgr.24407) 467 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:15 vm03 bash[23382]: cluster 2026-03-10T07:37:14.131838+0000 mon.a (mon.0) 2822 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:37:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:15 vm03 bash[23382]: cluster 2026-03-10T07:37:14.131838+0000 mon.a (mon.0) 2822 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:37:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:15 vm03 bash[23382]: audit 2026-03-10T07:37:14.134354+0000 mon.a (mon.0) 2823 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-98", "mode": "writeback"}]': finished 2026-03-10T07:37:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:15 vm03 bash[23382]: audit 2026-03-10T07:37:14.134354+0000 mon.a (mon.0) 2823 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-98", "mode": "writeback"}]': finished 2026-03-10T07:37:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:15 vm03 bash[23382]: audit 2026-03-10T07:37:14.138340+0000 mon.b (mon.1) 506 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:15 vm03 bash[23382]: audit 2026-03-10T07:37:14.138340+0000 mon.b (mon.1) 506 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:15 vm03 bash[23382]: cluster 2026-03-10T07:37:14.139404+0000 mon.a (mon.0) 2824 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-10T07:37:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:15 vm03 bash[23382]: cluster 2026-03-10T07:37:14.139404+0000 mon.a (mon.0) 2824 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-10T07:37:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:15 vm03 bash[23382]: audit 2026-03-10T07:37:14.142934+0000 mon.a (mon.0) 2825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:15 vm03 bash[23382]: audit 2026-03-10T07:37:14.142934+0000 mon.a (mon.0) 2825 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:15 vm03 bash[23382]: audit 2026-03-10T07:37:15.138422+0000 mon.a (mon.0) 2826 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-98"}]': finished 2026-03-10T07:37:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:15 vm03 bash[23382]: audit 2026-03-10T07:37:15.138422+0000 mon.a (mon.0) 2826 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-98"}]': finished 2026-03-10T07:37:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:15 vm03 bash[23382]: cluster 2026-03-10T07:37:15.148533+0000 mon.a (mon.0) 2827 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-10T07:37:15.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:15 vm03 bash[23382]: cluster 2026-03-10T07:37:15.148533+0000 mon.a (mon.0) 2827 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:15 vm00 bash[28005]: audit 2026-03-10T07:37:13.367589+0000 mgr.y (mgr.24407) 467 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:15 vm00 bash[28005]: audit 2026-03-10T07:37:13.367589+0000 mgr.y (mgr.24407) 467 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:15 vm00 bash[28005]: cluster 2026-03-10T07:37:14.131838+0000 mon.a (mon.0) 2822 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:15 vm00 bash[28005]: cluster 2026-03-10T07:37:14.131838+0000 mon.a (mon.0) 2822 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:15 vm00 bash[28005]: audit 2026-03-10T07:37:14.134354+0000 mon.a (mon.0) 2823 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-98", "mode": "writeback"}]': finished 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:15 vm00 bash[28005]: audit 2026-03-10T07:37:14.134354+0000 mon.a (mon.0) 2823 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-98", "mode": "writeback"}]': finished 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:15 vm00 bash[28005]: audit 2026-03-10T07:37:14.138340+0000 mon.b (mon.1) 506 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:15 vm00 bash[28005]: audit 2026-03-10T07:37:14.138340+0000 mon.b (mon.1) 506 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:15 vm00 bash[28005]: cluster 2026-03-10T07:37:14.139404+0000 mon.a (mon.0) 2824 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:15 vm00 bash[28005]: cluster 2026-03-10T07:37:14.139404+0000 mon.a (mon.0) 2824 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:15 vm00 bash[28005]: audit 2026-03-10T07:37:14.142934+0000 mon.a (mon.0) 2825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:15 vm00 bash[28005]: audit 2026-03-10T07:37:14.142934+0000 mon.a (mon.0) 2825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:15 vm00 bash[28005]: audit 2026-03-10T07:37:15.138422+0000 mon.a (mon.0) 2826 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-98"}]': finished 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:15 vm00 bash[28005]: audit 2026-03-10T07:37:15.138422+0000 mon.a (mon.0) 2826 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-98"}]': finished 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:15 vm00 bash[28005]: cluster 2026-03-10T07:37:15.148533+0000 mon.a (mon.0) 2827 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:15 vm00 bash[28005]: cluster 2026-03-10T07:37:15.148533+0000 mon.a (mon.0) 2827 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:15 vm00 bash[20701]: audit 2026-03-10T07:37:13.367589+0000 mgr.y (mgr.24407) 467 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:15 vm00 bash[20701]: audit 2026-03-10T07:37:13.367589+0000 mgr.y (mgr.24407) 467 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:15 vm00 bash[20701]: cluster 2026-03-10T07:37:14.131838+0000 mon.a (mon.0) 2822 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:15 vm00 bash[20701]: cluster 2026-03-10T07:37:14.131838+0000 mon.a (mon.0) 2822 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:15 vm00 bash[20701]: audit 2026-03-10T07:37:14.134354+0000 mon.a (mon.0) 2823 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-98", "mode": "writeback"}]': finished 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:15 vm00 bash[20701]: audit 2026-03-10T07:37:14.134354+0000 mon.a (mon.0) 2823 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-98", "mode": "writeback"}]': finished 2026-03-10T07:37:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:15 vm00 bash[20701]: audit 2026-03-10T07:37:14.138340+0000 mon.b (mon.1) 506 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:15.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:15 vm00 bash[20701]: audit 2026-03-10T07:37:14.138340+0000 mon.b (mon.1) 506 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:15.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:15 vm00 bash[20701]: cluster 2026-03-10T07:37:14.139404+0000 mon.a (mon.0) 2824 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-10T07:37:15.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:15 vm00 bash[20701]: cluster 2026-03-10T07:37:14.139404+0000 mon.a (mon.0) 2824 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-10T07:37:15.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:15 vm00 bash[20701]: audit 2026-03-10T07:37:14.142934+0000 mon.a (mon.0) 2825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:15.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:15 vm00 bash[20701]: audit 2026-03-10T07:37:14.142934+0000 mon.a (mon.0) 2825 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:15.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:15 vm00 bash[20701]: audit 2026-03-10T07:37:15.138422+0000 mon.a (mon.0) 2826 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-98"}]': finished 2026-03-10T07:37:15.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:15 vm00 bash[20701]: audit 2026-03-10T07:37:15.138422+0000 mon.a (mon.0) 2826 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-98"}]': finished 2026-03-10T07:37:15.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:15 vm00 bash[20701]: cluster 2026-03-10T07:37:15.148533+0000 mon.a (mon.0) 2827 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-10T07:37:15.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:15 vm00 bash[20701]: cluster 2026-03-10T07:37:15.148533+0000 mon.a (mon.0) 2827 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-10T07:37:16.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:16 vm03 bash[23382]: cluster 2026-03-10T07:37:14.704610+0000 mgr.y (mgr.24407) 468 : cluster [DBG] pgmap v760: 292 pgs: 14 unknown, 278 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:16 vm03 bash[23382]: cluster 2026-03-10T07:37:14.704610+0000 mgr.y (mgr.24407) 468 : cluster [DBG] pgmap v760: 292 pgs: 14 unknown, 278 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:16 vm03 bash[23382]: audit 2026-03-10T07:37:15.147649+0000 mon.b (mon.1) 507 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:37:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:16 vm03 bash[23382]: audit 2026-03-10T07:37:15.147649+0000 mon.b (mon.1) 507 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:37:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:16 vm03 bash[23382]: audit 2026-03-10T07:37:15.150549+0000 mon.a (mon.0) 2828 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:37:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:16 vm03 bash[23382]: audit 2026-03-10T07:37:15.150549+0000 mon.a (mon.0) 2828 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:37:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:16 vm03 bash[23382]: cluster 2026-03-10T07:37:16.138467+0000 mon.a (mon.0) 2829 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:37:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:16 vm03 bash[23382]: cluster 2026-03-10T07:37:16.138467+0000 mon.a (mon.0) 2829 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:37:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:16 vm03 bash[23382]: audit 2026-03-10T07:37:16.142166+0000 mon.a (mon.0) 2830 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:37:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:16 vm03 bash[23382]: audit 2026-03-10T07:37:16.142166+0000 mon.a (mon.0) 2830 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:37:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:16 vm03 bash[23382]: audit 2026-03-10T07:37:16.151307+0000 mon.b (mon.1) 508 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-10T07:37:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:16 vm03 bash[23382]: audit 2026-03-10T07:37:16.151307+0000 mon.b (mon.1) 508 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-10T07:37:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:16 vm03 bash[23382]: cluster 2026-03-10T07:37:16.153481+0000 mon.a (mon.0) 2831 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-10T07:37:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:16 vm03 bash[23382]: cluster 2026-03-10T07:37:16.153481+0000 mon.a (mon.0) 2831 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-10T07:37:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:16 vm03 bash[23382]: audit 2026-03-10T07:37:16.154182+0000 mon.a (mon.0) 2832 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-10T07:37:16.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:16 vm03 bash[23382]: audit 2026-03-10T07:37:16.154182+0000 mon.a (mon.0) 2832 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-10T07:37:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:16 vm00 bash[28005]: cluster 2026-03-10T07:37:14.704610+0000 mgr.y (mgr.24407) 468 : cluster [DBG] pgmap v760: 292 pgs: 14 unknown, 278 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:16 vm00 bash[28005]: cluster 2026-03-10T07:37:14.704610+0000 mgr.y (mgr.24407) 468 : cluster [DBG] pgmap v760: 292 pgs: 14 unknown, 278 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:16 vm00 bash[28005]: audit 2026-03-10T07:37:15.147649+0000 mon.b (mon.1) 507 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:37:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:16 vm00 bash[28005]: audit 2026-03-10T07:37:15.147649+0000 mon.b (mon.1) 507 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:37:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:16 vm00 bash[28005]: audit 2026-03-10T07:37:15.150549+0000 mon.a (mon.0) 2828 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:37:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:16 vm00 bash[28005]: audit 2026-03-10T07:37:15.150549+0000 mon.a (mon.0) 2828 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:37:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:16 vm00 bash[28005]: cluster 2026-03-10T07:37:16.138467+0000 mon.a (mon.0) 2829 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:37:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:16 vm00 bash[28005]: cluster 2026-03-10T07:37:16.138467+0000 mon.a (mon.0) 2829 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:37:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:16 vm00 bash[28005]: audit 2026-03-10T07:37:16.142166+0000 mon.a (mon.0) 2830 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:37:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:16 vm00 bash[28005]: audit 2026-03-10T07:37:16.142166+0000 mon.a (mon.0) 2830 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:37:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:16 vm00 bash[28005]: audit 2026-03-10T07:37:16.151307+0000 mon.b (mon.1) 508 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-10T07:37:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:16 vm00 bash[28005]: audit 2026-03-10T07:37:16.151307+0000 mon.b (mon.1) 508 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-10T07:37:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:16 vm00 bash[28005]: cluster 2026-03-10T07:37:16.153481+0000 mon.a (mon.0) 2831 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-10T07:37:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:16 vm00 bash[28005]: cluster 2026-03-10T07:37:16.153481+0000 mon.a (mon.0) 2831 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-10T07:37:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:16 vm00 bash[28005]: audit 2026-03-10T07:37:16.154182+0000 mon.a (mon.0) 2832 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-10T07:37:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:16 vm00 bash[28005]: audit 2026-03-10T07:37:16.154182+0000 mon.a (mon.0) 2832 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-10T07:37:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:16 vm00 bash[20701]: cluster 2026-03-10T07:37:14.704610+0000 mgr.y (mgr.24407) 468 : cluster [DBG] pgmap v760: 292 pgs: 14 unknown, 278 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:16 vm00 bash[20701]: cluster 2026-03-10T07:37:14.704610+0000 mgr.y (mgr.24407) 468 : cluster [DBG] pgmap v760: 292 pgs: 14 unknown, 278 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:16 vm00 bash[20701]: audit 2026-03-10T07:37:15.147649+0000 mon.b (mon.1) 507 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:37:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:16 vm00 bash[20701]: audit 2026-03-10T07:37:15.147649+0000 mon.b (mon.1) 507 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:37:16.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:16 vm00 bash[20701]: audit 2026-03-10T07:37:15.150549+0000 mon.a (mon.0) 2828 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:37:16.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:16 vm00 bash[20701]: audit 2026-03-10T07:37:15.150549+0000 mon.a (mon.0) 2828 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:37:16.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:16 vm00 bash[20701]: cluster 2026-03-10T07:37:16.138467+0000 mon.a (mon.0) 2829 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:37:16.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:16 vm00 bash[20701]: cluster 2026-03-10T07:37:16.138467+0000 mon.a (mon.0) 2829 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:37:16.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:16 vm00 bash[20701]: audit 2026-03-10T07:37:16.142166+0000 mon.a (mon.0) 2830 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:37:16.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:16 vm00 bash[20701]: audit 2026-03-10T07:37:16.142166+0000 mon.a (mon.0) 2830 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:37:16.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:16 vm00 bash[20701]: audit 2026-03-10T07:37:16.151307+0000 mon.b (mon.1) 508 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-10T07:37:16.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:16 vm00 bash[20701]: audit 2026-03-10T07:37:16.151307+0000 mon.b (mon.1) 508 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-10T07:37:16.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:16 vm00 bash[20701]: cluster 2026-03-10T07:37:16.153481+0000 mon.a (mon.0) 2831 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-10T07:37:16.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:16 vm00 bash[20701]: cluster 2026-03-10T07:37:16.153481+0000 mon.a (mon.0) 2831 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-10T07:37:16.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:16 vm00 bash[20701]: audit 2026-03-10T07:37:16.154182+0000 mon.a (mon.0) 2832 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-10T07:37:16.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:16 vm00 bash[20701]: audit 2026-03-10T07:37:16.154182+0000 mon.a (mon.0) 2832 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-10T07:37:18.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:18 vm03 bash[23382]: cluster 2026-03-10T07:37:16.704930+0000 mgr.y (mgr.24407) 469 : cluster [DBG] pgmap v763: 292 pgs: 292 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:37:18.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:18 vm03 bash[23382]: cluster 2026-03-10T07:37:16.704930+0000 mgr.y (mgr.24407) 469 : cluster [DBG] pgmap v763: 292 pgs: 292 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:37:18.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:18 vm03 bash[23382]: audit 2026-03-10T07:37:17.145682+0000 mon.a (mon.0) 2833 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_count","val": "1"}]': finished 2026-03-10T07:37:18.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:18 vm03 bash[23382]: audit 2026-03-10T07:37:17.145682+0000 mon.a (mon.0) 2833 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_count","val": "1"}]': finished 2026-03-10T07:37:18.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:18 vm03 bash[23382]: cluster 2026-03-10T07:37:17.149607+0000 mon.a (mon.0) 2834 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-10T07:37:18.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:18 vm03 bash[23382]: cluster 2026-03-10T07:37:17.149607+0000 mon.a (mon.0) 2834 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-10T07:37:18.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:18 vm03 bash[23382]: audit 2026-03-10T07:37:17.149805+0000 mon.b (mon.1) 509 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:18.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:18 vm03 bash[23382]: audit 2026-03-10T07:37:17.149805+0000 mon.b (mon.1) 509 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:18.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:18 vm03 bash[23382]: audit 2026-03-10T07:37:17.151401+0000 mon.a (mon.0) 2835 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:18.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:18 vm03 bash[23382]: audit 2026-03-10T07:37:17.151401+0000 mon.a (mon.0) 2835 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:18.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:18 vm00 bash[28005]: cluster 2026-03-10T07:37:16.704930+0000 mgr.y (mgr.24407) 469 : cluster [DBG] pgmap v763: 292 pgs: 292 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:37:18.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:18 vm00 bash[28005]: cluster 2026-03-10T07:37:16.704930+0000 mgr.y (mgr.24407) 469 : cluster [DBG] pgmap v763: 292 pgs: 292 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:37:18.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:18 vm00 bash[28005]: audit 2026-03-10T07:37:17.145682+0000 mon.a (mon.0) 2833 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_count","val": "1"}]': finished 2026-03-10T07:37:18.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:18 vm00 bash[28005]: audit 2026-03-10T07:37:17.145682+0000 mon.a (mon.0) 2833 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_count","val": "1"}]': finished 2026-03-10T07:37:18.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:18 vm00 bash[28005]: cluster 2026-03-10T07:37:17.149607+0000 mon.a (mon.0) 2834 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-10T07:37:18.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:18 vm00 bash[28005]: cluster 2026-03-10T07:37:17.149607+0000 mon.a (mon.0) 2834 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-10T07:37:18.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:18 vm00 bash[28005]: audit 2026-03-10T07:37:17.149805+0000 mon.b (mon.1) 509 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:18.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:18 vm00 bash[28005]: audit 2026-03-10T07:37:17.149805+0000 mon.b (mon.1) 509 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:18.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:18 vm00 bash[28005]: audit 2026-03-10T07:37:17.151401+0000 mon.a (mon.0) 2835 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:18.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:18 vm00 bash[28005]: audit 2026-03-10T07:37:17.151401+0000 mon.a (mon.0) 2835 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:18.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:18 vm00 bash[20701]: cluster 2026-03-10T07:37:16.704930+0000 mgr.y (mgr.24407) 469 : cluster [DBG] pgmap v763: 292 pgs: 292 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:37:18.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:18 vm00 bash[20701]: cluster 2026-03-10T07:37:16.704930+0000 mgr.y (mgr.24407) 469 : cluster [DBG] pgmap v763: 292 pgs: 292 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:37:18.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:18 vm00 bash[20701]: audit 2026-03-10T07:37:17.145682+0000 mon.a (mon.0) 2833 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_count","val": "1"}]': finished 2026-03-10T07:37:18.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:18 vm00 bash[20701]: audit 2026-03-10T07:37:17.145682+0000 mon.a (mon.0) 2833 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_count","val": "1"}]': finished 2026-03-10T07:37:18.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:18 vm00 bash[20701]: cluster 2026-03-10T07:37:17.149607+0000 mon.a (mon.0) 2834 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-10T07:37:18.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:18 vm00 bash[20701]: cluster 2026-03-10T07:37:17.149607+0000 mon.a (mon.0) 2834 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-10T07:37:18.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:18 vm00 bash[20701]: audit 2026-03-10T07:37:17.149805+0000 mon.b (mon.1) 509 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:18.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:18 vm00 bash[20701]: audit 2026-03-10T07:37:17.149805+0000 mon.b (mon.1) 509 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:18.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:18 vm00 bash[20701]: audit 2026-03-10T07:37:17.151401+0000 mon.a (mon.0) 2835 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:18.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:18 vm00 bash[20701]: audit 2026-03-10T07:37:17.151401+0000 mon.a (mon.0) 2835 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:19.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:19 vm03 bash[23382]: audit 2026-03-10T07:37:18.153492+0000 mon.a (mon.0) 2836 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:37:19.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:19 vm03 bash[23382]: audit 2026-03-10T07:37:18.153492+0000 mon.a (mon.0) 2836 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:37:19.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:19 vm03 bash[23382]: audit 2026-03-10T07:37:18.156188+0000 mon.b (mon.1) 510 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-10T07:37:19.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:19 vm03 bash[23382]: audit 2026-03-10T07:37:18.156188+0000 mon.b (mon.1) 510 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-10T07:37:19.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:19 vm03 bash[23382]: cluster 2026-03-10T07:37:18.162944+0000 mon.a (mon.0) 2837 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-10T07:37:19.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:19 vm03 bash[23382]: cluster 2026-03-10T07:37:18.162944+0000 mon.a (mon.0) 2837 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-10T07:37:19.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:19 vm03 bash[23382]: audit 2026-03-10T07:37:18.168162+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-10T07:37:19.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:19 vm03 bash[23382]: audit 2026-03-10T07:37:18.168162+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-10T07:37:19.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:19 vm00 bash[28005]: audit 2026-03-10T07:37:18.153492+0000 mon.a (mon.0) 2836 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:37:19.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:19 vm00 bash[28005]: audit 2026-03-10T07:37:18.153492+0000 mon.a (mon.0) 2836 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:37:19.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:19 vm00 bash[28005]: audit 2026-03-10T07:37:18.156188+0000 mon.b (mon.1) 510 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-10T07:37:19.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:19 vm00 bash[28005]: audit 2026-03-10T07:37:18.156188+0000 mon.b (mon.1) 510 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-10T07:37:19.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:19 vm00 bash[28005]: cluster 2026-03-10T07:37:18.162944+0000 mon.a (mon.0) 2837 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-10T07:37:19.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:19 vm00 bash[28005]: cluster 2026-03-10T07:37:18.162944+0000 mon.a (mon.0) 2837 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-10T07:37:19.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:19 vm00 bash[28005]: audit 2026-03-10T07:37:18.168162+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-10T07:37:19.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:19 vm00 bash[28005]: audit 2026-03-10T07:37:18.168162+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-10T07:37:19.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:19 vm00 bash[20701]: audit 2026-03-10T07:37:18.153492+0000 mon.a (mon.0) 2836 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:37:19.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:19 vm00 bash[20701]: audit 2026-03-10T07:37:18.153492+0000 mon.a (mon.0) 2836 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:37:19.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:19 vm00 bash[20701]: audit 2026-03-10T07:37:18.156188+0000 mon.b (mon.1) 510 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-10T07:37:19.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:19 vm00 bash[20701]: audit 2026-03-10T07:37:18.156188+0000 mon.b (mon.1) 510 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-10T07:37:19.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:19 vm00 bash[20701]: cluster 2026-03-10T07:37:18.162944+0000 mon.a (mon.0) 2837 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-10T07:37:19.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:19 vm00 bash[20701]: cluster 2026-03-10T07:37:18.162944+0000 mon.a (mon.0) 2837 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-10T07:37:19.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:19 vm00 bash[20701]: audit 2026-03-10T07:37:18.168162+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-10T07:37:19.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:19 vm00 bash[20701]: audit 2026-03-10T07:37:18.168162+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-10T07:37:20.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:20 vm03 bash[23382]: cluster 2026-03-10T07:37:18.705309+0000 mgr.y (mgr.24407) 470 : cluster [DBG] pgmap v766: 292 pgs: 292 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:37:20.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:20 vm03 bash[23382]: cluster 2026-03-10T07:37:18.705309+0000 mgr.y (mgr.24407) 470 : cluster [DBG] pgmap v766: 292 pgs: 292 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:37:20.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:20 vm03 bash[23382]: audit 2026-03-10T07:37:19.217509+0000 mon.a (mon.0) 2839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "target_max_objects","val": "250"}]': finished 2026-03-10T07:37:20.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:20 vm03 bash[23382]: audit 2026-03-10T07:37:19.217509+0000 mon.a (mon.0) 2839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "target_max_objects","val": "250"}]': finished 2026-03-10T07:37:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:20 vm03 bash[23382]: cluster 2026-03-10T07:37:19.220722+0000 mon.a (mon.0) 2840 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-10T07:37:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:20 vm03 bash[23382]: cluster 2026-03-10T07:37:19.220722+0000 mon.a (mon.0) 2840 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-10T07:37:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:20 vm03 bash[23382]: audit 2026-03-10T07:37:19.263556+0000 mon.b (mon.1) 511 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:20 vm03 bash[23382]: audit 2026-03-10T07:37:19.263556+0000 mon.b (mon.1) 511 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:20 vm03 bash[23382]: audit 2026-03-10T07:37:19.265007+0000 mon.a (mon.0) 2841 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:20.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:20 vm03 bash[23382]: audit 2026-03-10T07:37:19.265007+0000 mon.a (mon.0) 2841 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:20 vm00 bash[28005]: cluster 2026-03-10T07:37:18.705309+0000 mgr.y (mgr.24407) 470 : cluster [DBG] pgmap v766: 292 pgs: 292 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:37:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:20 vm00 bash[28005]: cluster 2026-03-10T07:37:18.705309+0000 mgr.y (mgr.24407) 470 : cluster [DBG] pgmap v766: 292 pgs: 292 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:37:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:20 vm00 bash[28005]: audit 2026-03-10T07:37:19.217509+0000 mon.a (mon.0) 2839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "target_max_objects","val": "250"}]': finished 2026-03-10T07:37:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:20 vm00 bash[28005]: audit 2026-03-10T07:37:19.217509+0000 mon.a (mon.0) 2839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "target_max_objects","val": "250"}]': finished 2026-03-10T07:37:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:20 vm00 bash[28005]: cluster 2026-03-10T07:37:19.220722+0000 mon.a (mon.0) 2840 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-10T07:37:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:20 vm00 bash[28005]: cluster 2026-03-10T07:37:19.220722+0000 mon.a (mon.0) 2840 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-10T07:37:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:20 vm00 bash[28005]: audit 2026-03-10T07:37:19.263556+0000 mon.b (mon.1) 511 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:20 vm00 bash[28005]: audit 2026-03-10T07:37:19.263556+0000 mon.b (mon.1) 511 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:20 vm00 bash[28005]: audit 2026-03-10T07:37:19.265007+0000 mon.a (mon.0) 2841 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:20 vm00 bash[28005]: audit 2026-03-10T07:37:19.265007+0000 mon.a (mon.0) 2841 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:20 vm00 bash[20701]: cluster 2026-03-10T07:37:18.705309+0000 mgr.y (mgr.24407) 470 : cluster [DBG] pgmap v766: 292 pgs: 292 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:37:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:20 vm00 bash[20701]: cluster 2026-03-10T07:37:18.705309+0000 mgr.y (mgr.24407) 470 : cluster [DBG] pgmap v766: 292 pgs: 292 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:37:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:20 vm00 bash[20701]: audit 2026-03-10T07:37:19.217509+0000 mon.a (mon.0) 2839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "target_max_objects","val": "250"}]': finished 2026-03-10T07:37:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:20 vm00 bash[20701]: audit 2026-03-10T07:37:19.217509+0000 mon.a (mon.0) 2839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-98","var": "target_max_objects","val": "250"}]': finished 2026-03-10T07:37:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:20 vm00 bash[20701]: cluster 2026-03-10T07:37:19.220722+0000 mon.a (mon.0) 2840 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-10T07:37:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:20 vm00 bash[20701]: cluster 2026-03-10T07:37:19.220722+0000 mon.a (mon.0) 2840 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-10T07:37:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:20 vm00 bash[20701]: audit 2026-03-10T07:37:19.263556+0000 mon.b (mon.1) 511 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:20 vm00 bash[20701]: audit 2026-03-10T07:37:19.263556+0000 mon.b (mon.1) 511 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:20 vm00 bash[20701]: audit 2026-03-10T07:37:19.265007+0000 mon.a (mon.0) 2841 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:20 vm00 bash[20701]: audit 2026-03-10T07:37:19.265007+0000 mon.a (mon.0) 2841 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:21.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:21 vm00 bash[28005]: audit 2026-03-10T07:37:20.260964+0000 mon.a (mon.0) 2842 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:37:21.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:21 vm00 bash[28005]: audit 2026-03-10T07:37:20.260964+0000 mon.a (mon.0) 2842 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:37:21.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:21 vm00 bash[28005]: audit 2026-03-10T07:37:20.262471+0000 mon.b (mon.1) 512 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:21.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:21 vm00 bash[28005]: audit 2026-03-10T07:37:20.262471+0000 mon.b (mon.1) 512 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:21.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:21 vm00 bash[28005]: cluster 2026-03-10T07:37:20.269084+0000 mon.a (mon.0) 2843 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-10T07:37:21.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:21 vm00 bash[28005]: cluster 2026-03-10T07:37:20.269084+0000 mon.a (mon.0) 2843 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-10T07:37:21.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:21 vm00 bash[28005]: audit 2026-03-10T07:37:20.273938+0000 mon.a (mon.0) 2844 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:21.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:21 vm00 bash[28005]: audit 2026-03-10T07:37:20.273938+0000 mon.a (mon.0) 2844 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:21.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:21 vm00 bash[20701]: audit 2026-03-10T07:37:20.260964+0000 mon.a (mon.0) 2842 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:37:21.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:21 vm00 bash[20701]: audit 2026-03-10T07:37:20.260964+0000 mon.a (mon.0) 2842 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:37:21.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:21 vm00 bash[20701]: audit 2026-03-10T07:37:20.262471+0000 mon.b (mon.1) 512 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:21.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:21 vm00 bash[20701]: audit 2026-03-10T07:37:20.262471+0000 mon.b (mon.1) 512 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:21.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:21 vm00 bash[20701]: cluster 2026-03-10T07:37:20.269084+0000 mon.a (mon.0) 2843 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-10T07:37:21.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:21 vm00 bash[20701]: cluster 2026-03-10T07:37:20.269084+0000 mon.a (mon.0) 2843 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-10T07:37:21.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:21 vm00 bash[20701]: audit 2026-03-10T07:37:20.273938+0000 mon.a (mon.0) 2844 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:21.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:21 vm00 bash[20701]: audit 2026-03-10T07:37:20.273938+0000 mon.a (mon.0) 2844 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:21.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:37:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:37:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:37:21.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:21 vm03 bash[23382]: audit 2026-03-10T07:37:20.260964+0000 mon.a (mon.0) 2842 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:37:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:21 vm03 bash[23382]: audit 2026-03-10T07:37:20.260964+0000 mon.a (mon.0) 2842 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished 2026-03-10T07:37:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:21 vm03 bash[23382]: audit 2026-03-10T07:37:20.262471+0000 mon.b (mon.1) 512 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:21 vm03 bash[23382]: audit 2026-03-10T07:37:20.262471+0000 mon.b (mon.1) 512 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:21 vm03 bash[23382]: cluster 2026-03-10T07:37:20.269084+0000 mon.a (mon.0) 2843 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-10T07:37:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:21 vm03 bash[23382]: cluster 2026-03-10T07:37:20.269084+0000 mon.a (mon.0) 2843 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-10T07:37:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:21 vm03 bash[23382]: audit 2026-03-10T07:37:20.273938+0000 mon.a (mon.0) 2844 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:21 vm03 bash[23382]: audit 2026-03-10T07:37:20.273938+0000 mon.a (mon.0) 2844 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98"}]: dispatch 2026-03-10T07:37:22.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:22 vm00 bash[20701]: cluster 2026-03-10T07:37:20.705684+0000 mgr.y (mgr.24407) 471 : cluster [DBG] pgmap v769: 292 pgs: 292 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:37:22.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:22 vm00 bash[20701]: cluster 2026-03-10T07:37:20.705684+0000 mgr.y (mgr.24407) 471 : cluster [DBG] pgmap v769: 292 pgs: 292 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:37:22.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:22 vm00 bash[20701]: audit 2026-03-10T07:37:21.264714+0000 mon.a (mon.0) 2845 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98"}]': finished 2026-03-10T07:37:22.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:22 vm00 bash[20701]: audit 2026-03-10T07:37:21.264714+0000 mon.a (mon.0) 2845 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98"}]': finished 2026-03-10T07:37:22.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:22 vm00 bash[20701]: cluster 2026-03-10T07:37:21.267487+0000 mon.a (mon.0) 2846 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-10T07:37:22.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:22 vm00 bash[20701]: cluster 2026-03-10T07:37:21.267487+0000 mon.a (mon.0) 2846 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-10T07:37:22.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:22 vm00 bash[28005]: cluster 2026-03-10T07:37:20.705684+0000 mgr.y (mgr.24407) 471 : cluster [DBG] pgmap v769: 292 pgs: 292 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:37:22.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:22 vm00 bash[28005]: cluster 2026-03-10T07:37:20.705684+0000 mgr.y (mgr.24407) 471 : cluster [DBG] pgmap v769: 292 pgs: 292 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:37:22.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:22 vm00 bash[28005]: audit 2026-03-10T07:37:21.264714+0000 mon.a (mon.0) 2845 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98"}]': finished 2026-03-10T07:37:22.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:22 vm00 bash[28005]: audit 2026-03-10T07:37:21.264714+0000 mon.a (mon.0) 2845 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98"}]': finished 2026-03-10T07:37:22.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:22 vm00 bash[28005]: cluster 2026-03-10T07:37:21.267487+0000 mon.a (mon.0) 2846 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-10T07:37:22.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:22 vm00 bash[28005]: cluster 2026-03-10T07:37:21.267487+0000 mon.a (mon.0) 2846 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-10T07:37:22.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:22 vm03 bash[23382]: cluster 2026-03-10T07:37:20.705684+0000 mgr.y (mgr.24407) 471 : cluster [DBG] pgmap v769: 292 pgs: 292 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:37:22.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:22 vm03 bash[23382]: cluster 2026-03-10T07:37:20.705684+0000 mgr.y (mgr.24407) 471 : cluster [DBG] pgmap v769: 292 pgs: 292 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:37:22.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:22 vm03 bash[23382]: audit 2026-03-10T07:37:21.264714+0000 mon.a (mon.0) 2845 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98"}]': finished 2026-03-10T07:37:22.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:22 vm03 bash[23382]: audit 2026-03-10T07:37:21.264714+0000 mon.a (mon.0) 2845 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-98"}]': finished 2026-03-10T07:37:22.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:22 vm03 bash[23382]: cluster 2026-03-10T07:37:21.267487+0000 mon.a (mon.0) 2846 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-10T07:37:22.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:22 vm03 bash[23382]: cluster 2026-03-10T07:37:21.267487+0000 mon.a (mon.0) 2846 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-10T07:37:23.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:23 vm00 bash[20701]: cluster 2026-03-10T07:37:22.304176+0000 mon.a (mon.0) 2847 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-10T07:37:23.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:23 vm00 bash[20701]: cluster 2026-03-10T07:37:22.304176+0000 mon.a (mon.0) 2847 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-10T07:37:23.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:23 vm00 bash[28005]: cluster 2026-03-10T07:37:22.304176+0000 mon.a (mon.0) 2847 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-10T07:37:23.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:23 vm00 bash[28005]: cluster 2026-03-10T07:37:22.304176+0000 mon.a (mon.0) 2847 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-10T07:37:23.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:23 vm03 bash[23382]: cluster 2026-03-10T07:37:22.304176+0000 mon.a (mon.0) 2847 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-10T07:37:23.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:23 vm03 bash[23382]: cluster 2026-03-10T07:37:22.304176+0000 mon.a (mon.0) 2847 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-10T07:37:23.764 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 
07:37:23 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:37:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:24 vm03 bash[23382]: cluster 2026-03-10T07:37:22.706099+0000 mgr.y (mgr.24407) 472 : cluster [DBG] pgmap v772: 260 pgs: 260 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:24 vm03 bash[23382]: cluster 2026-03-10T07:37:22.706099+0000 mgr.y (mgr.24407) 472 : cluster [DBG] pgmap v772: 260 pgs: 260 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:24 vm03 bash[23382]: cluster 2026-03-10T07:37:23.335134+0000 mon.a (mon.0) 2848 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-10T07:37:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:24 vm03 bash[23382]: cluster 2026-03-10T07:37:23.335134+0000 mon.a (mon.0) 2848 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-10T07:37:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:24 vm03 bash[23382]: audit 2026-03-10T07:37:23.336435+0000 mon.b (mon.1) 513 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:24 vm03 bash[23382]: audit 2026-03-10T07:37:23.336435+0000 mon.b (mon.1) 513 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:24 vm03 bash[23382]: audit 2026-03-10T07:37:23.340392+0000 mon.a (mon.0) 2849 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:24 vm03 bash[23382]: audit 2026-03-10T07:37:23.340392+0000 mon.a (mon.0) 2849 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:24.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:24 vm00 bash[20701]: cluster 2026-03-10T07:37:22.706099+0000 mgr.y (mgr.24407) 472 : cluster [DBG] pgmap v772: 260 pgs: 260 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:24.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:24 vm00 bash[20701]: cluster 2026-03-10T07:37:22.706099+0000 mgr.y (mgr.24407) 472 : cluster [DBG] pgmap v772: 260 pgs: 260 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:24.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:24 vm00 bash[20701]: cluster 2026-03-10T07:37:23.335134+0000 mon.a (mon.0) 2848 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-10T07:37:24.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:24 vm00 bash[20701]: cluster 2026-03-10T07:37:23.335134+0000 mon.a (mon.0) 2848 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-10T07:37:24.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:24 vm00 bash[20701]: audit 2026-03-10T07:37:23.336435+0000 mon.b (mon.1) 513 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:24.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:24 vm00 bash[20701]: audit 2026-03-10T07:37:23.336435+0000 mon.b (mon.1) 513 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:24.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:24 vm00 bash[20701]: audit 2026-03-10T07:37:23.340392+0000 mon.a (mon.0) 2849 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:24.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:24 vm00 bash[20701]: audit 2026-03-10T07:37:23.340392+0000 mon.a (mon.0) 2849 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:24.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:24 vm00 bash[28005]: cluster 2026-03-10T07:37:22.706099+0000 mgr.y (mgr.24407) 472 : cluster [DBG] pgmap v772: 260 pgs: 260 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:24.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:24 vm00 bash[28005]: cluster 2026-03-10T07:37:22.706099+0000 mgr.y (mgr.24407) 472 : cluster [DBG] pgmap v772: 260 pgs: 260 active+clean; 8.3 MiB data, 955 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:24.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:24 vm00 bash[28005]: cluster 2026-03-10T07:37:23.335134+0000 mon.a (mon.0) 2848 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-10T07:37:24.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:24 vm00 bash[28005]: cluster 2026-03-10T07:37:23.335134+0000 mon.a (mon.0) 2848 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-10T07:37:24.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:24 vm00 bash[28005]: audit 2026-03-10T07:37:23.336435+0000 mon.b (mon.1) 513 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:24.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:24 vm00 bash[28005]: audit 2026-03-10T07:37:23.336435+0000 mon.b (mon.1) 513 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:24.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:24 vm00 bash[28005]: audit 2026-03-10T07:37:23.340392+0000 mon.a (mon.0) 2849 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:24.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:24 vm00 bash[28005]: audit 2026-03-10T07:37:23.340392+0000 mon.a (mon.0) 2849 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: audit 2026-03-10T07:37:23.375703+0000 mgr.y (mgr.24407) 473 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: audit 2026-03-10T07:37:23.375703+0000 mgr.y (mgr.24407) 473 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: audit 2026-03-10T07:37:24.314625+0000 mon.a (mon.0) 2850 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: audit 2026-03-10T07:37:24.314625+0000 mon.a (mon.0) 2850 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: cluster 2026-03-10T07:37:24.326470+0000 mon.a (mon.0) 2851 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: cluster 2026-03-10T07:37:24.326470+0000 mon.a (mon.0) 2851 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: audit 2026-03-10T07:37:24.328955+0000 mon.b (mon.1) 514 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: audit 2026-03-10T07:37:24.328955+0000 mon.b (mon.1) 514 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: audit 2026-03-10T07:37:24.396481+0000 mon.a (mon.0) 2852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: audit 2026-03-10T07:37:24.396481+0000 mon.a (mon.0) 2852 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: audit 2026-03-10T07:37:24.602647+0000 mon.c (mon.2) 327 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: audit 2026-03-10T07:37:24.602647+0000 mon.c (mon.2) 327 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: cluster 2026-03-10T07:37:24.706748+0000 mgr.y (mgr.24407) 474 : cluster [DBG] pgmap v775: 292 pgs: 11 unknown, 281 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: cluster 2026-03-10T07:37:24.706748+0000 mgr.y (mgr.24407) 474 : cluster [DBG] pgmap v775: 292 pgs: 11 unknown, 281 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: audit 2026-03-10T07:37:25.319683+0000 mon.a (mon.0) 2853 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: audit 2026-03-10T07:37:25.319683+0000 mon.a (mon.0) 2853 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: audit 2026-03-10T07:37:25.322391+0000 mon.b (mon.1) 515 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-100"}]: dispatch 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: audit 2026-03-10T07:37:25.322391+0000 mon.b (mon.1) 515 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-100"}]: dispatch 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: cluster 2026-03-10T07:37:25.323770+0000 mon.a (mon.0) 2854 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: cluster 2026-03-10T07:37:25.323770+0000 mon.a (mon.0) 2854 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: audit 2026-03-10T07:37:25.324978+0000 mon.a (mon.0) 2855 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-100"}]: dispatch 2026-03-10T07:37:25.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:25 vm03 bash[23382]: audit 2026-03-10T07:37:25.324978+0000 mon.a (mon.0) 2855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-100"}]: dispatch 2026-03-10T07:37:25.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: audit 2026-03-10T07:37:23.375703+0000 mgr.y (mgr.24407) 473 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:25.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: audit 2026-03-10T07:37:23.375703+0000 mgr.y (mgr.24407) 473 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:25.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: audit 2026-03-10T07:37:24.314625+0000 mon.a (mon.0) 2850 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:25.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: audit 2026-03-10T07:37:24.314625+0000 mon.a (mon.0) 2850 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:25.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: cluster 2026-03-10T07:37:24.326470+0000 mon.a (mon.0) 2851 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-10T07:37:25.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: cluster 2026-03-10T07:37:24.326470+0000 mon.a (mon.0) 2851 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-10T07:37:25.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: audit 2026-03-10T07:37:24.328955+0000 mon.b (mon.1) 514 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:37:25.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: audit 2026-03-10T07:37:24.328955+0000 mon.b (mon.1) 514 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:37:25.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: audit 2026-03-10T07:37:24.396481+0000 mon.a (mon.0) 2852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:37:25.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: audit 2026-03-10T07:37:24.396481+0000 mon.a (mon.0) 2852 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:37:25.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: audit 2026-03-10T07:37:24.602647+0000 mon.c (mon.2) 327 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:37:25.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: audit 2026-03-10T07:37:24.602647+0000 mon.c (mon.2) 327 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:37:25.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: cluster 2026-03-10T07:37:24.706748+0000 mgr.y (mgr.24407) 474 : cluster [DBG] pgmap v775: 292 pgs: 11 unknown, 281 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:25.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: cluster 2026-03-10T07:37:24.706748+0000 mgr.y (mgr.24407) 474 : cluster [DBG] pgmap v775: 292 pgs: 11 unknown, 281 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:25.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: audit 2026-03-10T07:37:25.319683+0000 mon.a (mon.0) 2853 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:37:25.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: audit 2026-03-10T07:37:23.375703+0000 mgr.y (mgr.24407) 473 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:25.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: audit 2026-03-10T07:37:23.375703+0000 mgr.y (mgr.24407) 473 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:25.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: audit 2026-03-10T07:37:24.314625+0000 mon.a (mon.0) 2850 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: audit 2026-03-10T07:37:24.314625+0000 mon.a (mon.0) 2850 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: cluster 2026-03-10T07:37:24.326470+0000 mon.a (mon.0) 2851 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: cluster 2026-03-10T07:37:24.326470+0000 mon.a (mon.0) 2851 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: audit 2026-03-10T07:37:24.328955+0000 mon.b (mon.1) 514 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: audit 2026-03-10T07:37:24.328955+0000 mon.b (mon.1) 514 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: audit 2026-03-10T07:37:24.396481+0000 mon.a (mon.0) 2852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: audit 2026-03-10T07:37:24.396481+0000 mon.a (mon.0) 2852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: audit 2026-03-10T07:37:24.602647+0000 mon.c (mon.2) 327 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: audit 2026-03-10T07:37:24.602647+0000 mon.c (mon.2) 327 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: cluster 2026-03-10T07:37:24.706748+0000 mgr.y (mgr.24407) 474 : cluster [DBG] pgmap v775: 292 pgs: 11 unknown, 281 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: cluster 2026-03-10T07:37:24.706748+0000 mgr.y (mgr.24407) 474 : cluster [DBG] pgmap v775: 292 pgs: 11 unknown, 281 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: audit 2026-03-10T07:37:25.319683+0000 mon.a (mon.0) 2853 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: audit 2026-03-10T07:37:25.319683+0000 mon.a (mon.0) 2853 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: audit 2026-03-10T07:37:25.322391+0000 mon.b (mon.1) 515 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-100"}]: dispatch 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: audit 2026-03-10T07:37:25.322391+0000 mon.b (mon.1) 515 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-100"}]: dispatch 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: cluster 2026-03-10T07:37:25.323770+0000 mon.a (mon.0) 2854 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: cluster 2026-03-10T07:37:25.323770+0000 mon.a (mon.0) 2854 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: audit 2026-03-10T07:37:25.324978+0000 mon.a (mon.0) 2855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-100"}]: dispatch 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:25 vm00 bash[28005]: audit 2026-03-10T07:37:25.324978+0000 mon.a (mon.0) 2855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-100"}]: dispatch 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: audit 2026-03-10T07:37:25.319683+0000 mon.a (mon.0) 2853 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: audit 2026-03-10T07:37:25.322391+0000 mon.b (mon.1) 515 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-100"}]: dispatch 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: audit 2026-03-10T07:37:25.322391+0000 mon.b (mon.1) 515 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-100"}]: dispatch 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: cluster 2026-03-10T07:37:25.323770+0000 mon.a (mon.0) 2854 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: cluster 2026-03-10T07:37:25.323770+0000 mon.a (mon.0) 2854 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: audit 2026-03-10T07:37:25.324978+0000 mon.a (mon.0) 2855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-100"}]: dispatch 2026-03-10T07:37:25.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:25 vm00 bash[20701]: audit 2026-03-10T07:37:25.324978+0000 mon.a (mon.0) 2855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-100"}]: dispatch 2026-03-10T07:37:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:27 vm03 bash[23382]: audit 2026-03-10T07:37:26.324104+0000 mon.a (mon.0) 2856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-100"}]': finished 2026-03-10T07:37:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:27 vm03 bash[23382]: audit 2026-03-10T07:37:26.324104+0000 mon.a (mon.0) 2856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-100"}]': finished 2026-03-10T07:37:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:27 vm03 bash[23382]: audit 2026-03-10T07:37:26.328261+0000 mon.b (mon.1) 516 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-100", "mode": "writeback"}]: dispatch 2026-03-10T07:37:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:27 vm03 bash[23382]: audit 2026-03-10T07:37:26.328261+0000 mon.b (mon.1) 516 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-100", "mode": "writeback"}]: dispatch 2026-03-10T07:37:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:27 vm03 bash[23382]: cluster 2026-03-10T07:37:26.333308+0000 mon.a (mon.0) 2857 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-10T07:37:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:27 vm03 bash[23382]: cluster 2026-03-10T07:37:26.333308+0000 mon.a (mon.0) 2857 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-10T07:37:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:27 vm03 bash[23382]: audit 2026-03-10T07:37:26.334195+0000 mon.a (mon.0) 2858 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-100", "mode": "writeback"}]: dispatch 2026-03-10T07:37:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:27 vm03 bash[23382]: audit 2026-03-10T07:37:26.334195+0000 mon.a (mon.0) 2858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-100", "mode": "writeback"}]: dispatch 2026-03-10T07:37:27.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:27 vm00 bash[20701]: audit 2026-03-10T07:37:26.324104+0000 mon.a (mon.0) 2856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-100"}]': finished 2026-03-10T07:37:27.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:27 vm00 bash[20701]: audit 2026-03-10T07:37:26.324104+0000 mon.a (mon.0) 2856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-100"}]': finished 2026-03-10T07:37:27.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:27 vm00 bash[20701]: audit 2026-03-10T07:37:26.328261+0000 mon.b (mon.1) 516 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-100", "mode": "writeback"}]: dispatch 2026-03-10T07:37:27.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:27 vm00 bash[20701]: audit 2026-03-10T07:37:26.328261+0000 mon.b (mon.1) 516 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-100", "mode": "writeback"}]: dispatch 2026-03-10T07:37:27.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:27 vm00 bash[20701]: cluster 2026-03-10T07:37:26.333308+0000 mon.a (mon.0) 2857 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-10T07:37:27.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:27 vm00 bash[20701]: cluster 2026-03-10T07:37:26.333308+0000 mon.a (mon.0) 2857 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-10T07:37:27.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:27 vm00 bash[20701]: audit 2026-03-10T07:37:26.334195+0000 mon.a (mon.0) 2858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-100", "mode": "writeback"}]: dispatch 2026-03-10T07:37:27.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:27 vm00 bash[20701]: audit 2026-03-10T07:37:26.334195+0000 mon.a (mon.0) 2858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-100", "mode": "writeback"}]: dispatch 2026-03-10T07:37:27.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:27 vm00 bash[28005]: audit 2026-03-10T07:37:26.324104+0000 mon.a (mon.0) 2856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-100"}]': finished 2026-03-10T07:37:27.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:27 vm00 bash[28005]: audit 2026-03-10T07:37:26.324104+0000 mon.a (mon.0) 2856 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-6", "overlaypool": "test-rados-api-vm00-59782-100"}]': finished 2026-03-10T07:37:27.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:27 vm00 bash[28005]: audit 2026-03-10T07:37:26.328261+0000 mon.b (mon.1) 516 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-100", "mode": "writeback"}]: dispatch 2026-03-10T07:37:27.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:27 vm00 bash[28005]: audit 2026-03-10T07:37:26.328261+0000 mon.b (mon.1) 516 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-100", "mode": "writeback"}]: dispatch 2026-03-10T07:37:27.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:27 vm00 bash[28005]: cluster 2026-03-10T07:37:26.333308+0000 mon.a (mon.0) 2857 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-10T07:37:27.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:27 vm00 bash[28005]: cluster 2026-03-10T07:37:26.333308+0000 mon.a (mon.0) 2857 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-10T07:37:27.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:27 vm00 bash[28005]: audit 2026-03-10T07:37:26.334195+0000 mon.a (mon.0) 2858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-100", "mode": "writeback"}]: dispatch 2026-03-10T07:37:27.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:27 vm00 bash[28005]: audit 2026-03-10T07:37:26.334195+0000 mon.a (mon.0) 2858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-100", "mode": "writeback"}]: dispatch 2026-03-10T07:37:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:28 vm03 bash[23382]: cluster 2026-03-10T07:37:26.707093+0000 mgr.y (mgr.24407) 475 : cluster [DBG] pgmap v778: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:28 vm03 bash[23382]: cluster 2026-03-10T07:37:26.707093+0000 mgr.y (mgr.24407) 475 : cluster [DBG] pgmap v778: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:28 vm03 bash[23382]: cluster 2026-03-10T07:37:27.324142+0000 mon.a (mon.0) 2859 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:37:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:28 vm03 bash[23382]: cluster 2026-03-10T07:37:27.324142+0000 mon.a (mon.0) 2859 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:37:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:28 vm03 bash[23382]: audit 2026-03-10T07:37:27.442644+0000 mon.a (mon.0) 2860 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-100", "mode": "writeback"}]': finished 2026-03-10T07:37:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:28 vm03 bash[23382]: audit 2026-03-10T07:37:27.442644+0000 mon.a (mon.0) 2860 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-100", "mode": "writeback"}]': finished 2026-03-10T07:37:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:28 vm03 bash[23382]: audit 2026-03-10T07:37:27.447191+0000 mon.b (mon.1) 517 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:37:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:28 vm03 bash[23382]: audit 2026-03-10T07:37:27.447191+0000 mon.b (mon.1) 517 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:37:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:28 vm03 bash[23382]: cluster 2026-03-10T07:37:27.452694+0000 mon.a (mon.0) 2861 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-10T07:37:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:28 vm03 bash[23382]: cluster 2026-03-10T07:37:27.452694+0000 mon.a (mon.0) 2861 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-10T07:37:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:28 vm03 bash[23382]: audit 2026-03-10T07:37:27.454037+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:37:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:28 vm03 bash[23382]: audit 2026-03-10T07:37:27.454037+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:37:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:28 vm03 bash[23382]: audit 2026-03-10T07:37:28.446242+0000 mon.a (mon.0) 2863 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:37:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:28 vm03 bash[23382]: audit 2026-03-10T07:37:28.446242+0000 mon.a (mon.0) 2863 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:37:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:28 vm03 bash[23382]: audit 2026-03-10T07:37:28.449671+0000 mon.b (mon.1) 518 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:28 vm03 bash[23382]: audit 2026-03-10T07:37:28.449671+0000 mon.b (mon.1) 518 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:28 vm03 bash[23382]: cluster 2026-03-10T07:37:28.453823+0000 mon.a (mon.0) 2864 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-10T07:37:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:28 vm03 bash[23382]: cluster 2026-03-10T07:37:28.453823+0000 mon.a (mon.0) 2864 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-10T07:37:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:28 vm03 bash[23382]: audit 2026-03-10T07:37:28.454440+0000 mon.a (mon.0) 2865 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:28.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:28 vm03 bash[23382]: audit 2026-03-10T07:37:28.454440+0000 mon.a (mon.0) 2865 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:28 vm00 bash[28005]: cluster 2026-03-10T07:37:26.707093+0000 mgr.y (mgr.24407) 475 : cluster [DBG] pgmap v778: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:28 vm00 bash[28005]: cluster 2026-03-10T07:37:26.707093+0000 mgr.y (mgr.24407) 475 : cluster [DBG] pgmap v778: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:28 vm00 bash[28005]: cluster 2026-03-10T07:37:27.324142+0000 mon.a (mon.0) 2859 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:28 vm00 bash[28005]: cluster 2026-03-10T07:37:27.324142+0000 mon.a (mon.0) 2859 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:28 vm00 bash[28005]: audit 2026-03-10T07:37:27.442644+0000 mon.a (mon.0) 2860 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-100", "mode": "writeback"}]': finished 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:28 vm00 bash[28005]: audit 2026-03-10T07:37:27.442644+0000 mon.a (mon.0) 2860 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-100", "mode": "writeback"}]': finished 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:28 vm00 bash[28005]: audit 2026-03-10T07:37:27.447191+0000 mon.b (mon.1) 517 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:28 vm00 bash[28005]: audit 2026-03-10T07:37:27.447191+0000 mon.b (mon.1) 517 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:28 vm00 bash[28005]: cluster 2026-03-10T07:37:27.452694+0000 mon.a (mon.0) 2861 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:28 vm00 bash[28005]: cluster 2026-03-10T07:37:27.452694+0000 mon.a (mon.0) 2861 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:28 vm00 bash[28005]: audit 2026-03-10T07:37:27.454037+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:28 vm00 bash[28005]: audit 2026-03-10T07:37:27.454037+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:28 vm00 bash[28005]: audit 2026-03-10T07:37:28.446242+0000 mon.a (mon.0) 2863 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:28 vm00 bash[28005]: audit 2026-03-10T07:37:28.446242+0000 mon.a (mon.0) 2863 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:28 vm00 bash[28005]: audit 2026-03-10T07:37:28.449671+0000 mon.b (mon.1) 518 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:28 vm00 bash[28005]: audit 2026-03-10T07:37:28.449671+0000 mon.b (mon.1) 518 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:28 vm00 bash[28005]: cluster 2026-03-10T07:37:28.453823+0000 mon.a (mon.0) 2864 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:28 vm00 bash[28005]: cluster 2026-03-10T07:37:28.453823+0000 mon.a (mon.0) 2864 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:28 vm00 bash[28005]: audit 2026-03-10T07:37:28.454440+0000 mon.a (mon.0) 2865 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:28 vm00 bash[28005]: audit 2026-03-10T07:37:28.454440+0000 mon.a (mon.0) 2865 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:28 vm00 bash[20701]: cluster 2026-03-10T07:37:26.707093+0000 mgr.y (mgr.24407) 475 : cluster [DBG] pgmap v778: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:28 vm00 bash[20701]: cluster 2026-03-10T07:37:26.707093+0000 mgr.y (mgr.24407) 475 : cluster [DBG] pgmap v778: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:28 vm00 bash[20701]: cluster 2026-03-10T07:37:27.324142+0000 mon.a (mon.0) 2859 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:28 vm00 bash[20701]: cluster 2026-03-10T07:37:27.324142+0000 mon.a (mon.0) 2859 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:28 vm00 bash[20701]: audit 2026-03-10T07:37:27.442644+0000 mon.a (mon.0) 2860 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-100", "mode": "writeback"}]': finished 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:28 vm00 bash[20701]: audit 2026-03-10T07:37:27.442644+0000 mon.a (mon.0) 2860 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-100", "mode": "writeback"}]': finished 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:28 vm00 bash[20701]: audit 2026-03-10T07:37:27.447191+0000 mon.b (mon.1) 517 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:28 vm00 bash[20701]: audit 2026-03-10T07:37:27.447191+0000 mon.b (mon.1) 517 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:28 vm00 bash[20701]: cluster 2026-03-10T07:37:27.452694+0000 mon.a (mon.0) 2861 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:28 vm00 bash[20701]: cluster 2026-03-10T07:37:27.452694+0000 mon.a (mon.0) 2861 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-10T07:37:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:28 vm00 bash[20701]: audit 2026-03-10T07:37:27.454037+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:37:28.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:28 vm00 bash[20701]: audit 2026-03-10T07:37:27.454037+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:37:28.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:28 vm00 bash[20701]: audit 2026-03-10T07:37:28.446242+0000 mon.a (mon.0) 2863 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:37:28.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:28 vm00 bash[20701]: audit 2026-03-10T07:37:28.446242+0000 mon.a (mon.0) 2863 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:37:28.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:28 vm00 bash[20701]: audit 2026-03-10T07:37:28.449671+0000 mon.b (mon.1) 518 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:28.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:28 vm00 bash[20701]: audit 2026-03-10T07:37:28.449671+0000 mon.b (mon.1) 518 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:28.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:28 vm00 bash[20701]: cluster 2026-03-10T07:37:28.453823+0000 mon.a (mon.0) 2864 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-10T07:37:28.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:28 vm00 bash[20701]: cluster 2026-03-10T07:37:28.453823+0000 mon.a (mon.0) 2864 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-10T07:37:28.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:28 vm00 bash[20701]: audit 2026-03-10T07:37:28.454440+0000 mon.a (mon.0) 2865 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:28.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:28 vm00 bash[20701]: audit 2026-03-10T07:37:28.454440+0000 mon.a (mon.0) 2865 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:37:29.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:29 vm03 bash[23382]: cluster 2026-03-10T07:37:28.707428+0000 mgr.y (mgr.24407) 476 : cluster [DBG] pgmap v781: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:29.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:29 vm03 bash[23382]: cluster 2026-03-10T07:37:28.707428+0000 mgr.y (mgr.24407) 476 : cluster [DBG] pgmap v781: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:29.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:29 vm00 bash[28005]: cluster 2026-03-10T07:37:28.707428+0000 mgr.y (mgr.24407) 476 : cluster [DBG] pgmap v781: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:29.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:29 vm00 bash[28005]: cluster 2026-03-10T07:37:28.707428+0000 mgr.y (mgr.24407) 476 : cluster [DBG] pgmap v781: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:29.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:29 vm00 bash[20701]: cluster 2026-03-10T07:37:28.707428+0000 mgr.y (mgr.24407) 476 : cluster [DBG] pgmap v781: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:29.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:29 vm00 bash[20701]: cluster 2026-03-10T07:37:28.707428+0000 mgr.y (mgr.24407) 476 : cluster [DBG] pgmap v781: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:30 vm03 bash[23382]: audit 2026-03-10T07:37:29.470440+0000 mon.a (mon.0) 2866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:37:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:30 vm03 bash[23382]: audit 2026-03-10T07:37:29.470440+0000 mon.a (mon.0) 2866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:37:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:30 vm03 bash[23382]: audit 2026-03-10T07:37:29.473732+0000 mon.b (mon.1) 519 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:37:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:30 vm03 bash[23382]: audit 2026-03-10T07:37:29.473732+0000 mon.b (mon.1) 519 : audit [INF] from='client.? 
2026-03-10T07:37:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:30 vm03 bash[23382]: cluster 2026-03-10T07:37:29.474090+0000 mon.a (mon.0) 2867 : cluster [DBG] osdmap e507: 8 total, 8 up, 8 in
2026-03-10T07:37:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:30 vm03 bash[23382]: audit 2026-03-10T07:37:29.475970+0000 mon.a (mon.0) 2868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_type","val": "explicit_object"}]: dispatch
2026-03-10T07:37:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:30 vm03 bash[23382]: cluster 2026-03-10T07:37:30.470680+0000 mon.a (mon.0) 2869 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:37:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:30 vm03 bash[23382]: audit 2026-03-10T07:37:30.473839+0000 mon.a (mon.0) 2870 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_type","val": "explicit_object"}]': finished
2026-03-10T07:37:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:30 vm03 bash[23382]: audit 2026-03-10T07:37:30.480969+0000 mon.b (mon.1) 520 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch
2026-03-10T07:37:30.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:30 vm03 bash[23382]: cluster 2026-03-10T07:37:30.482497+0000 mon.a (mon.0) 2871 : cluster [DBG] osdmap e508: 8 total, 8 up, 8 in
2026-03-10T07:37:30.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:30 vm00 bash[28005]: audit 2026-03-10T07:37:29.470440+0000 mon.a (mon.0) 2866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_period","val": "600"}]': finished
2026-03-10T07:37:30.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:30 vm00 bash[28005]: audit 2026-03-10T07:37:29.473732+0000 mon.b (mon.1) 519 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_type","val": "explicit_object"}]: dispatch
2026-03-10T07:37:30.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:30 vm00 bash[28005]: cluster 2026-03-10T07:37:29.474090+0000 mon.a (mon.0) 2867 : cluster [DBG] osdmap e507: 8 total, 8 up, 8 in
2026-03-10T07:37:30.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:30 vm00 bash[28005]: audit 2026-03-10T07:37:29.475970+0000 mon.a (mon.0) 2868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_type","val": "explicit_object"}]: dispatch
2026-03-10T07:37:30.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:30 vm00 bash[28005]: cluster 2026-03-10T07:37:30.470680+0000 mon.a (mon.0) 2869 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:37:30.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:30 vm00 bash[28005]: audit 2026-03-10T07:37:30.473839+0000 mon.a (mon.0) 2870 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_type","val": "explicit_object"}]': finished
2026-03-10T07:37:30.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:30 vm00 bash[28005]: audit 2026-03-10T07:37:30.480969+0000 mon.b (mon.1) 520 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch
2026-03-10T07:37:30.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:30 vm00 bash[28005]: cluster 2026-03-10T07:37:30.482497+0000 mon.a (mon.0) 2871 : cluster [DBG] osdmap e508: 8 total, 8 up, 8 in
2026-03-10T07:37:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:30 vm00 bash[20701]: audit 2026-03-10T07:37:29.470440+0000 mon.a (mon.0) 2866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_period","val": "600"}]': finished
2026-03-10T07:37:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:30 vm00 bash[20701]: audit 2026-03-10T07:37:29.473732+0000 mon.b (mon.1) 519 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_type","val": "explicit_object"}]: dispatch
2026-03-10T07:37:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:30 vm00 bash[20701]: cluster 2026-03-10T07:37:29.474090+0000 mon.a (mon.0) 2867 : cluster [DBG] osdmap e507: 8 total, 8 up, 8 in
2026-03-10T07:37:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:30 vm00 bash[20701]: audit 2026-03-10T07:37:29.475970+0000 mon.a (mon.0) 2868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_type","val": "explicit_object"}]: dispatch
2026-03-10T07:37:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:30 vm00 bash[20701]: cluster 2026-03-10T07:37:30.470680+0000 mon.a (mon.0) 2869 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:37:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:30 vm00 bash[20701]: audit 2026-03-10T07:37:30.473839+0000 mon.a (mon.0) 2870 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "hit_set_type","val": "explicit_object"}]': finished
2026-03-10T07:37:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:30 vm00 bash[20701]: audit 2026-03-10T07:37:30.480969+0000 mon.b (mon.1) 520 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch
2026-03-10T07:37:30.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:30 vm00 bash[20701]: cluster 2026-03-10T07:37:30.482497+0000 mon.a (mon.0) 2871 : cluster [DBG] osdmap e508: 8 total, 8 up, 8 in
2026-03-10T07:37:31.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:37:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:37:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:37:31.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:31 vm03 bash[23382]: audit 2026-03-10T07:37:30.483009+0000 mon.a (mon.0) 2872 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch
2026-03-10T07:37:31.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:31 vm03 bash[23382]: cluster 2026-03-10T07:37:30.707818+0000 mgr.y (mgr.24407) 477 : cluster [DBG] pgmap v784: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:37:31.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:31 vm00 bash[28005]: audit 2026-03-10T07:37:30.483009+0000 mon.a (mon.0) 2872 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch
2026-03-10T07:37:31.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:31 vm00 bash[28005]: cluster 2026-03-10T07:37:30.707818+0000 mgr.y (mgr.24407) 477 : cluster [DBG] pgmap v784: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:37:31.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:31 vm00 bash[20701]: audit 2026-03-10T07:37:30.483009+0000 mon.a (mon.0) 2872 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch
2026-03-10T07:37:31.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:31 vm00 bash[20701]: cluster 2026-03-10T07:37:30.707818+0000 mgr.y (mgr.24407) 477 : cluster [DBG] pgmap v784: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:37:32.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:32 vm03 bash[23382]: audit 2026-03-10T07:37:31.493509+0000 mon.a (mon.0) 2873 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "min_read_recency_for_promote","val": "10000"}]': finished
2026-03-10T07:37:32.786 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:32 vm03 bash[23382]: cluster 2026-03-10T07:37:31.497824+0000 mon.a (mon.0) 2874 : cluster [DBG] osdmap e509: 8 total, 8 up, 8 in
2026-03-10T07:37:32.786 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:32 vm03 bash[23382]: audit 2026-03-10T07:37:31.548080+0000 mon.b (mon.1) 521 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:37:32.786 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:32 vm03 bash[23382]: audit 2026-03-10T07:37:31.549277+0000 mon.a (mon.0) 2875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:37:32.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:32 vm00 bash[20701]: audit 2026-03-10T07:37:31.493509+0000 mon.a (mon.0) 2873 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "min_read_recency_for_promote","val": "10000"}]': finished
2026-03-10T07:37:32.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:32 vm00 bash[20701]: cluster 2026-03-10T07:37:31.497824+0000 mon.a (mon.0) 2874 : cluster [DBG] osdmap e509: 8 total, 8 up, 8 in
2026-03-10T07:37:32.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:32 vm00 bash[20701]: audit 2026-03-10T07:37:31.548080+0000 mon.b (mon.1) 521 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:37:32.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:32 vm00 bash[20701]: audit 2026-03-10T07:37:31.549277+0000 mon.a (mon.0) 2875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:37:32.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:32 vm00 bash[28005]: audit 2026-03-10T07:37:31.493509+0000 mon.a (mon.0) 2873 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-100","var": "min_read_recency_for_promote","val": "10000"}]': finished
2026-03-10T07:37:32.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:32 vm00 bash[28005]: cluster 2026-03-10T07:37:31.497824+0000 mon.a (mon.0) 2874 : cluster [DBG] osdmap e509: 8 total, 8 up, 8 in
2026-03-10T07:37:32.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:32 vm00 bash[28005]: audit 2026-03-10T07:37:31.548080+0000 mon.b (mon.1) 521 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:37:32.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:32 vm00 bash[28005]: audit 2026-03-10T07:37:31.549277+0000 mon.a (mon.0) 2875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch
2026-03-10T07:37:33.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:37:33 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:37:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:33 vm03 bash[23382]: audit 2026-03-10T07:37:32.534353+0000 mon.a (mon.0) 2876 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:37:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:33 vm03 bash[23382]: cluster 2026-03-10T07:37:32.536894+0000 mon.a (mon.0) 2877 : cluster [DBG] osdmap e510: 8 total, 8 up, 8 in
2026-03-10T07:37:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:33 vm03 bash[23382]: audit 2026-03-10T07:37:32.539776+0000 mon.b (mon.1) 522 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100"}]: dispatch
2026-03-10T07:37:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:33 vm03 bash[23382]: audit 2026-03-10T07:37:32.553295+0000 mon.a (mon.0) 2878 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100"}]: dispatch
2026-03-10T07:37:33.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:33 vm03 bash[23382]: cluster 2026-03-10T07:37:32.708218+0000 mgr.y (mgr.24407) 478 : cluster [DBG] pgmap v787: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:37:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:33 vm00 bash[28005]: audit 2026-03-10T07:37:32.534353+0000 mon.a (mon.0) 2876 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:37:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:33 vm00 bash[28005]: cluster 2026-03-10T07:37:32.536894+0000 mon.a (mon.0) 2877 : cluster [DBG] osdmap e510: 8 total, 8 up, 8 in
2026-03-10T07:37:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:33 vm00 bash[28005]: audit 2026-03-10T07:37:32.539776+0000 mon.b (mon.1) 522 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100"}]: dispatch
2026-03-10T07:37:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:33 vm00 bash[28005]: audit 2026-03-10T07:37:32.553295+0000 mon.a (mon.0) 2878 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100"}]: dispatch
2026-03-10T07:37:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:33 vm00 bash[28005]: cluster 2026-03-10T07:37:32.708218+0000 mgr.y (mgr.24407) 478 : cluster [DBG] pgmap v787: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:37:33.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:33 vm00 bash[20701]: audit 2026-03-10T07:37:32.534353+0000 mon.a (mon.0) 2876 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]': finished
2026-03-10T07:37:33.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:33 vm00 bash[20701]: cluster 2026-03-10T07:37:32.536894+0000 mon.a (mon.0) 2877 : cluster [DBG] osdmap e510: 8 total, 8 up, 8 in
2026-03-10T07:37:33.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:33 vm00 bash[20701]: audit 2026-03-10T07:37:32.539776+0000 mon.b (mon.1) 522 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100"}]: dispatch
2026-03-10T07:37:33.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:33 vm00 bash[20701]: audit 2026-03-10T07:37:32.553295+0000 mon.a (mon.0) 2878 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100"}]: dispatch
2026-03-10T07:37:33.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:33 vm00 bash[20701]: cluster 2026-03-10T07:37:32.708218+0000 mgr.y (mgr.24407) 478 : cluster [DBG] pgmap v787: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:37:35.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:34 vm03 bash[23382]: audit 2026-03-10T07:37:33.379829+0000 mgr.y (mgr.24407) 479 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:37:35.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:34 vm03 bash[23382]: audit 2026-03-10T07:37:33.553040+0000 mon.a (mon.0) 2879 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100"}]': finished
2026-03-10T07:37:35.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:34 vm03 bash[23382]: cluster 2026-03-10T07:37:33.559224+0000 mon.a (mon.0) 2880 : cluster [DBG] osdmap e511: 8 total, 8 up, 8 in
2026-03-10T07:37:35.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:34 vm00 bash[28005]: audit 2026-03-10T07:37:33.379829+0000 mgr.y (mgr.24407) 479 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:37:35.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:34 vm00 bash[28005]: audit 2026-03-10T07:37:33.553040+0000 mon.a (mon.0) 2879 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100"}]': finished
2026-03-10T07:37:35.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:34 vm00 bash[28005]: cluster 2026-03-10T07:37:33.559224+0000 mon.a (mon.0) 2880 : cluster [DBG] osdmap e511: 8 total, 8 up, 8 in
2026-03-10T07:37:35.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:34 vm00 bash[20701]: audit 2026-03-10T07:37:33.379829+0000 mgr.y (mgr.24407) 479 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:37:35.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:34 vm00 bash[20701]: audit 2026-03-10T07:37:33.553040+0000 mon.a (mon.0) 2879 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-100"}]': finished
2026-03-10T07:37:35.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:34 vm00 bash[20701]: cluster 2026-03-10T07:37:33.559224+0000 mon.a (mon.0) 2880 : cluster [DBG] osdmap e511: 8 total, 8 up, 8 in
2026-03-10T07:37:36.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:35 vm03 bash[23382]: cluster 2026-03-10T07:37:34.666313+0000 mon.a (mon.0) 2881 : cluster [DBG] osdmap e512: 8 total, 8 up, 8 in
2026-03-10T07:37:36.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:35 vm03 bash[23382]: cluster 2026-03-10T07:37:34.708691+0000 mgr.y (mgr.24407) 480 : cluster [DBG] pgmap v790: 260 pgs: 260 active+clean; 8.3 MiB data, 992 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:37:36.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:35 vm03 bash[23382]: cluster 2026-03-10T07:37:35.668369+0000 mon.a (mon.0) 2882 : cluster [DBG] osdmap e513: 8 total, 8 up, 8 in
2026-03-10T07:37:36.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:35 vm00 bash[28005]: cluster 2026-03-10T07:37:34.666313+0000 mon.a (mon.0) 2881 : cluster [DBG] osdmap e512: 8 total, 8 up, 8 in
2026-03-10T07:37:36.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:35 vm00 bash[28005]: cluster 2026-03-10T07:37:34.708691+0000 mgr.y (mgr.24407) 480 : cluster [DBG] pgmap v790: 260 pgs: 260 active+clean; 8.3 MiB data, 992 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:37:36.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:35 vm00 bash[28005]: cluster 2026-03-10T07:37:35.668369+0000 mon.a (mon.0) 2882 : cluster [DBG] osdmap e513: 8 total, 8 up, 8 in
2026-03-10T07:37:36.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:35 vm00 bash[20701]: cluster 2026-03-10T07:37:34.666313+0000 mon.a (mon.0) 2881 : cluster [DBG] osdmap e512: 8 total, 8 up, 8 in
2026-03-10T07:37:36.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:35 vm00 bash[20701]: cluster 2026-03-10T07:37:34.708691+0000 mgr.y (mgr.24407) 480 : cluster [DBG] pgmap v790: 260 pgs: 260 active+clean; 8.3 MiB data, 992 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:37:36.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:35 vm00 bash[20701]: cluster 2026-03-10T07:37:35.668369+0000 mon.a (mon.0) 2882 : cluster [DBG] osdmap e513: 8 total, 8 up, 8 in
2026-03-10T07:37:37.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:36 vm03 bash[23382]: audit 2026-03-10T07:37:35.680777+0000 mon.b (mon.1) 523 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-102","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:37:37.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:36 vm03 bash[23382]: audit 2026-03-10T07:37:35.681964+0000 mon.a (mon.0) 2883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-102","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:37:37.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:36 vm03 bash[23382]: audit 2026-03-10T07:37:36.665574+0000 mon.a (mon.0) 2884 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-102","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:37:37.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:36 vm03 bash[23382]: cluster 2026-03-10T07:37:36.670403+0000 mon.a (mon.0) 2885 : cluster [DBG] osdmap e514: 8 total, 8 up, 8 in
2026-03-10T07:37:37.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:36 vm00 bash[28005]: audit 2026-03-10T07:37:35.680777+0000 mon.b (mon.1) 523 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-102","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:37:37.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:36 vm00 bash[28005]: audit 2026-03-10T07:37:35.681964+0000 mon.a (mon.0) 2883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-102","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:37:37.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:36 vm00 bash[28005]: audit 2026-03-10T07:37:36.665574+0000 mon.a (mon.0) 2884 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-102","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:37:37.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:36 vm00 bash[28005]: cluster 2026-03-10T07:37:36.670403+0000 mon.a (mon.0) 2885 : cluster [DBG] osdmap e514: 8 total, 8 up, 8 in
2026-03-10T07:37:37.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:36 vm00 bash[20701]: audit 2026-03-10T07:37:35.680777+0000 mon.b (mon.1) 523 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-102","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:37:37.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:36 vm00 bash[20701]: audit 2026-03-10T07:37:35.681964+0000 mon.a (mon.0) 2883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-102","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:37:37.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:36 vm00 bash[20701]: audit 2026-03-10T07:37:36.665574+0000 mon.a (mon.0) 2884 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-102","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:37:37.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:36 vm00 bash[20701]: cluster 2026-03-10T07:37:36.670403+0000 mon.a (mon.0) 2885 : cluster [DBG] osdmap e514: 8 total, 8 up, 8 in
2026-03-10T07:37:38.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:37 vm00 bash[28005]: audit 2026-03-10T07:37:36.693457+0000 mon.b (mon.1) 524 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:37:38.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:37 vm00 bash[28005]: audit 2026-03-10T07:37:36.697755+0000 mon.b (mon.1) 525 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:37:38.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:37 vm00 bash[28005]: cluster 2026-03-10T07:37:36.709076+0000 mgr.y (mgr.24407) 481 : cluster [DBG] pgmap v793: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:37:38.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:37 vm00 bash[28005]: audit 2026-03-10T07:37:36.711662+0000 mon.a (mon.0) 2886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:37:38.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:37 vm00 bash[20701]: audit 2026-03-10T07:37:36.693457+0000 mon.b (mon.1) 524 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:37:38.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:37 vm00 bash[20701]: audit 2026-03-10T07:37:36.697755+0000 mon.b (mon.1) 525 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:37:38.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:37 vm00 bash[20701]: cluster 2026-03-10T07:37:36.709076+0000 mgr.y (mgr.24407) 481 : cluster [DBG] pgmap v793: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:37:38.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:37 vm00 bash[20701]: audit 2026-03-10T07:37:36.711662+0000 mon.a (mon.0) 2886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:37:38.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:37 vm03 bash[23382]: audit 2026-03-10T07:37:36.693457+0000 mon.b (mon.1) 524 : audit [DBG] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T07:37:38.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:37 vm03 bash[23382]: audit 2026-03-10T07:37:36.697755+0000 mon.b (mon.1) 525 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:37:38.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:37 vm03 bash[23382]: cluster 2026-03-10T07:37:36.709076+0000 mgr.y (mgr.24407) 481 : cluster [DBG] pgmap v793: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:37:38.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:37 vm03 bash[23382]: audit 2026-03-10T07:37:36.711662+0000 mon.a (mon.0) 2886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T07:37:39.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:38 vm00 bash[28005]: audit 2026-03-10T07:37:37.781488+0000 mon.a (mon.0) 2887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T07:37:39.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:38 vm00 bash[28005]: cluster 2026-03-10T07:37:37.788535+0000 mon.a (mon.0) 2888 : cluster [DBG] osdmap e515: 8 total, 8 up, 8 in
2026-03-10T07:37:39.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:38 vm00 bash[28005]: audit 2026-03-10T07:37:37.795314+0000 mon.b (mon.1) 526 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch
2026-03-10T07:37:39.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:38 vm00 bash[28005]: audit 2026-03-10T07:37:37.797085+0000 mon.a (mon.0) 2889 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:37:39.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:38 vm00 bash[28005]: audit 2026-03-10T07:37:37.797085+0000 mon.a (mon.0) 2889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:37:39.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:38 vm00 bash[20701]: audit 2026-03-10T07:37:37.781488+0000 mon.a (mon.0) 2887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:37:39.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:38 vm00 bash[20701]: audit 2026-03-10T07:37:37.781488+0000 mon.a (mon.0) 2887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:37:39.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:38 vm00 bash[20701]: cluster 2026-03-10T07:37:37.788535+0000 mon.a (mon.0) 2888 : cluster [DBG] osdmap e515: 8 total, 8 up, 8 in 2026-03-10T07:37:39.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:38 vm00 bash[20701]: cluster 2026-03-10T07:37:37.788535+0000 mon.a (mon.0) 2888 : cluster [DBG] osdmap e515: 8 total, 8 up, 8 in 2026-03-10T07:37:39.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:38 vm00 bash[20701]: audit 2026-03-10T07:37:37.795314+0000 mon.b (mon.1) 526 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:37:39.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:38 vm00 bash[20701]: audit 2026-03-10T07:37:37.795314+0000 mon.b (mon.1) 526 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:37:39.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:38 vm00 bash[20701]: audit 2026-03-10T07:37:37.797085+0000 mon.a (mon.0) 2889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:37:39.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:38 vm00 bash[20701]: audit 2026-03-10T07:37:37.797085+0000 mon.a (mon.0) 2889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:37:39.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:38 vm03 bash[23382]: audit 2026-03-10T07:37:37.781488+0000 mon.a (mon.0) 2887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:37:39.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:38 vm03 bash[23382]: audit 2026-03-10T07:37:37.781488+0000 mon.a (mon.0) 2887 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T07:37:39.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:38 vm03 bash[23382]: cluster 2026-03-10T07:37:37.788535+0000 mon.a (mon.0) 2888 : cluster [DBG] osdmap e515: 8 total, 8 up, 8 in 2026-03-10T07:37:39.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:38 vm03 bash[23382]: cluster 2026-03-10T07:37:37.788535+0000 mon.a (mon.0) 2888 : cluster [DBG] osdmap e515: 8 total, 8 up, 8 in 2026-03-10T07:37:39.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:38 vm03 bash[23382]: audit 2026-03-10T07:37:37.795314+0000 mon.b (mon.1) 526 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:37:39.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:38 vm03 bash[23382]: audit 2026-03-10T07:37:37.795314+0000 mon.b (mon.1) 526 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:37:39.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:38 vm03 bash[23382]: audit 2026-03-10T07:37:37.797085+0000 mon.a (mon.0) 2889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:37:39.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:38 vm03 bash[23382]: audit 2026-03-10T07:37:37.797085+0000 mon.a (mon.0) 2889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:39 vm00 bash[28005]: cluster 2026-03-10T07:37:38.709452+0000 mgr.y (mgr.24407) 482 : cluster [DBG] pgmap v795: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:39 vm00 bash[28005]: cluster 2026-03-10T07:37:38.709452+0000 mgr.y (mgr.24407) 482 : cluster [DBG] pgmap v795: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:39 vm00 bash[28005]: audit 2026-03-10T07:37:38.793650+0000 mon.a (mon.0) 2890 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:39 vm00 bash[28005]: audit 2026-03-10T07:37:38.793650+0000 mon.a (mon.0) 2890 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:39 vm00 bash[28005]: audit 2026-03-10T07:37:38.805583+0000 mon.b (mon.1) 527 : audit [INF] from='client.? 
192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:39 vm00 bash[28005]: audit 2026-03-10T07:37:38.805583+0000 mon.b (mon.1) 527 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:39 vm00 bash[28005]: cluster 2026-03-10T07:37:38.809528+0000 mon.a (mon.0) 2891 : cluster [DBG] osdmap e516: 8 total, 8 up, 8 in 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:39 vm00 bash[28005]: cluster 2026-03-10T07:37:38.809528+0000 mon.a (mon.0) 2891 : cluster [DBG] osdmap e516: 8 total, 8 up, 8 in 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:39 vm00 bash[28005]: audit 2026-03-10T07:37:38.810768+0000 mon.a (mon.0) 2892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:39 vm00 bash[28005]: audit 2026-03-10T07:37:38.810768+0000 mon.a (mon.0) 2892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:39 vm00 bash[28005]: audit 2026-03-10T07:37:39.609505+0000 mon.c (mon.2) 328 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:39 vm00 bash[28005]: audit 2026-03-10T07:37:39.609505+0000 mon.c (mon.2) 328 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:39 vm00 bash[20701]: cluster 2026-03-10T07:37:38.709452+0000 mgr.y (mgr.24407) 482 : cluster [DBG] pgmap v795: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:39 vm00 bash[20701]: cluster 2026-03-10T07:37:38.709452+0000 mgr.y (mgr.24407) 482 : cluster [DBG] pgmap v795: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:39 vm00 bash[20701]: audit 2026-03-10T07:37:38.793650+0000 mon.a (mon.0) 2890 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:39 vm00 bash[20701]: audit 2026-03-10T07:37:38.793650+0000 mon.a (mon.0) 2890 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:39 vm00 bash[20701]: audit 2026-03-10T07:37:38.805583+0000 mon.b (mon.1) 527 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:39 vm00 bash[20701]: audit 2026-03-10T07:37:38.805583+0000 mon.b (mon.1) 527 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:39 vm00 bash[20701]: cluster 2026-03-10T07:37:38.809528+0000 mon.a (mon.0) 2891 : cluster [DBG] osdmap e516: 8 total, 8 up, 8 in 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:39 vm00 bash[20701]: cluster 2026-03-10T07:37:38.809528+0000 mon.a (mon.0) 2891 : cluster [DBG] osdmap e516: 8 total, 8 up, 8 in 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:39 vm00 bash[20701]: audit 2026-03-10T07:37:38.810768+0000 mon.a (mon.0) 2892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:39 vm00 bash[20701]: audit 2026-03-10T07:37:38.810768+0000 mon.a (mon.0) 2892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:39 vm00 bash[20701]: audit 2026-03-10T07:37:39.609505+0000 mon.c (mon.2) 328 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:37:40.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:39 vm00 bash[20701]: audit 2026-03-10T07:37:39.609505+0000 mon.c (mon.2) 328 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:37:40.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:39 vm03 bash[23382]: cluster 2026-03-10T07:37:38.709452+0000 mgr.y (mgr.24407) 482 : cluster [DBG] pgmap v795: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:37:40.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:39 vm03 bash[23382]: cluster 2026-03-10T07:37:38.709452+0000 mgr.y (mgr.24407) 482 : cluster [DBG] pgmap v795: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:37:40.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:39 vm03 bash[23382]: audit 2026-03-10T07:37:38.793650+0000 mon.a (mon.0) 2890 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T07:37:40.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:39 vm03 bash[23382]: audit 2026-03-10T07:37:38.793650+0000 mon.a (mon.0) 2890 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T07:37:40.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:39 vm03 bash[23382]: audit 2026-03-10T07:37:38.805583+0000 mon.b (mon.1) 527 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:37:40.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:39 vm03 bash[23382]: audit 2026-03-10T07:37:38.805583+0000 mon.b (mon.1) 527 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:37:40.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:39 vm03 bash[23382]: cluster 2026-03-10T07:37:38.809528+0000 mon.a (mon.0) 2891 : cluster [DBG] osdmap e516: 8 total, 8 up, 8 in 2026-03-10T07:37:40.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:39 vm03 bash[23382]: cluster 2026-03-10T07:37:38.809528+0000 mon.a (mon.0) 2891 : cluster [DBG] osdmap e516: 8 total, 8 up, 8 in 2026-03-10T07:37:40.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:39 vm03 bash[23382]: audit 2026-03-10T07:37:38.810768+0000 mon.a (mon.0) 2892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:37:40.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:39 vm03 bash[23382]: audit 2026-03-10T07:37:38.810768+0000 mon.a (mon.0) 2892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T07:37:40.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:39 vm03 bash[23382]: audit 2026-03-10T07:37:39.609505+0000 mon.c (mon.2) 328 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:37:40.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:39 vm03 bash[23382]: audit 2026-03-10T07:37:39.609505+0000 mon.c (mon.2) 328 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:37:41.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:40 vm00 bash[28005]: audit 2026-03-10T07:37:39.797319+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T07:37:41.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:40 vm00 bash[28005]: audit 2026-03-10T07:37:39.797319+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T07:37:41.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:40 vm00 bash[28005]: cluster 2026-03-10T07:37:39.805960+0000 mon.a (mon.0) 2894 : cluster [DBG] osdmap e517: 8 total, 8 up, 8 in 2026-03-10T07:37:41.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:40 vm00 bash[28005]: cluster 2026-03-10T07:37:39.805960+0000 mon.a (mon.0) 2894 : cluster [DBG] osdmap e517: 8 total, 8 up, 8 in 2026-03-10T07:37:41.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:40 vm00 bash[28005]: audit 2026-03-10T07:37:39.845946+0000 mon.b (mon.1) 528 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:41.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:40 vm00 bash[28005]: audit 2026-03-10T07:37:39.845946+0000 mon.b (mon.1) 528 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:41.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:40 vm00 bash[28005]: audit 2026-03-10T07:37:39.846918+0000 mon.b (mon.1) 529 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-102"}]: dispatch 2026-03-10T07:37:41.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:40 vm00 bash[28005]: audit 2026-03-10T07:37:39.846918+0000 mon.b (mon.1) 529 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-102"}]: dispatch 2026-03-10T07:37:41.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:40 vm00 bash[28005]: audit 2026-03-10T07:37:39.847134+0000 mon.a (mon.0) 2895 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:41.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:40 vm00 bash[28005]: audit 2026-03-10T07:37:39.847134+0000 mon.a (mon.0) 2895 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:41.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:40 vm00 bash[28005]: audit 2026-03-10T07:37:39.848017+0000 mon.a (mon.0) 2896 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-102"}]: dispatch 2026-03-10T07:37:41.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:40 vm00 bash[28005]: audit 2026-03-10T07:37:39.848017+0000 mon.a (mon.0) 2896 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-102"}]: dispatch 2026-03-10T07:37:41.130 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:37:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:37:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:37:41.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:40 vm00 bash[20701]: audit 2026-03-10T07:37:39.797319+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T07:37:41.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:40 vm00 bash[20701]: audit 2026-03-10T07:37:39.797319+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T07:37:41.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:40 vm00 bash[20701]: cluster 2026-03-10T07:37:39.805960+0000 mon.a (mon.0) 2894 : cluster [DBG] osdmap e517: 8 total, 8 up, 8 in 2026-03-10T07:37:41.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:40 vm00 bash[20701]: cluster 2026-03-10T07:37:39.805960+0000 mon.a (mon.0) 2894 : cluster [DBG] osdmap e517: 8 total, 8 up, 8 in 2026-03-10T07:37:41.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:40 vm00 bash[20701]: audit 2026-03-10T07:37:39.845946+0000 mon.b (mon.1) 528 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:41.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:40 vm00 bash[20701]: audit 2026-03-10T07:37:39.845946+0000 mon.b (mon.1) 528 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:41.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:40 vm00 bash[20701]: audit 2026-03-10T07:37:39.846918+0000 mon.b (mon.1) 529 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-102"}]: dispatch 2026-03-10T07:37:41.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:40 vm00 bash[20701]: audit 2026-03-10T07:37:39.846918+0000 mon.b (mon.1) 529 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-102"}]: dispatch 2026-03-10T07:37:41.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:40 vm00 bash[20701]: audit 2026-03-10T07:37:39.847134+0000 mon.a (mon.0) 2895 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:41.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:40 vm00 bash[20701]: audit 2026-03-10T07:37:39.847134+0000 mon.a (mon.0) 2895 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:41.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:40 vm00 bash[20701]: audit 2026-03-10T07:37:39.848017+0000 mon.a (mon.0) 2896 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-102"}]: dispatch 2026-03-10T07:37:41.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:40 vm00 bash[20701]: audit 2026-03-10T07:37:39.848017+0000 mon.a (mon.0) 2896 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-102"}]: dispatch 2026-03-10T07:37:41.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:40 vm03 bash[23382]: audit 2026-03-10T07:37:39.797319+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T07:37:41.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:40 vm03 bash[23382]: audit 2026-03-10T07:37:39.797319+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T07:37:41.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:40 vm03 bash[23382]: cluster 2026-03-10T07:37:39.805960+0000 mon.a (mon.0) 2894 : cluster [DBG] osdmap e517: 8 total, 8 up, 8 in 2026-03-10T07:37:41.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:40 vm03 bash[23382]: cluster 2026-03-10T07:37:39.805960+0000 mon.a (mon.0) 2894 : cluster [DBG] osdmap e517: 8 total, 8 up, 8 in 2026-03-10T07:37:41.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:40 vm03 bash[23382]: audit 2026-03-10T07:37:39.845946+0000 mon.b (mon.1) 528 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:41.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:40 vm03 bash[23382]: audit 2026-03-10T07:37:39.845946+0000 mon.b (mon.1) 528 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:41.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:40 vm03 bash[23382]: audit 2026-03-10T07:37:39.846918+0000 mon.b (mon.1) 529 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-102"}]: dispatch 2026-03-10T07:37:41.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:40 vm03 bash[23382]: audit 2026-03-10T07:37:39.846918+0000 mon.b (mon.1) 529 : audit [INF] from='client.? 192.168.123.100:0/17629877' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-102"}]: dispatch 2026-03-10T07:37:41.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:40 vm03 bash[23382]: audit 2026-03-10T07:37:39.847134+0000 mon.a (mon.0) 2895 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:41.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:40 vm03 bash[23382]: audit 2026-03-10T07:37:39.847134+0000 mon.a (mon.0) 2895 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-6"}]: dispatch 2026-03-10T07:37:41.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:40 vm03 bash[23382]: audit 2026-03-10T07:37:39.848017+0000 mon.a (mon.0) 2896 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-102"}]: dispatch 2026-03-10T07:37:41.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:40 vm03 bash[23382]: audit 2026-03-10T07:37:39.848017+0000 mon.a (mon.0) 2896 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-6", "tierpool": "test-rados-api-vm00-59782-102"}]: dispatch 2026-03-10T07:37:42.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:41 vm00 bash[28005]: cluster 2026-03-10T07:37:40.709836+0000 mgr.y (mgr.24407) 483 : cluster [DBG] pgmap v798: 292 pgs: 292 active+clean; 8.3 MiB data, 1011 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:42.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:41 vm00 bash[28005]: cluster 2026-03-10T07:37:40.709836+0000 mgr.y (mgr.24407) 483 : cluster [DBG] pgmap v798: 292 pgs: 292 active+clean; 8.3 MiB data, 1011 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:42.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:41 vm00 bash[28005]: cluster 2026-03-10T07:37:40.827164+0000 mon.a (mon.0) 2897 : cluster [DBG] osdmap e518: 8 total, 8 up, 8 in 2026-03-10T07:37:42.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:41 vm00 bash[28005]: cluster 2026-03-10T07:37:40.827164+0000 mon.a (mon.0) 2897 : cluster [DBG] osdmap e518: 8 total, 8 up, 8 in 2026-03-10T07:37:42.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:41 vm00 bash[20701]: cluster 2026-03-10T07:37:40.709836+0000 mgr.y (mgr.24407) 483 : cluster [DBG] pgmap v798: 292 pgs: 292 active+clean; 8.3 MiB data, 1011 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:42.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:41 vm00 bash[20701]: cluster 2026-03-10T07:37:40.709836+0000 mgr.y (mgr.24407) 483 : cluster [DBG] pgmap v798: 292 pgs: 292 active+clean; 8.3 MiB data, 1011 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:42.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:41 vm00 bash[20701]: cluster 2026-03-10T07:37:40.827164+0000 mon.a (mon.0) 2897 : cluster [DBG] osdmap e518: 8 total, 8 up, 8 in 2026-03-10T07:37:42.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:41 vm00 bash[20701]: cluster 2026-03-10T07:37:40.827164+0000 mon.a (mon.0) 2897 : cluster [DBG] osdmap e518: 8 total, 8 up, 8 in 2026-03-10T07:37:42.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:41 vm03 bash[23382]: cluster 2026-03-10T07:37:40.709836+0000 mgr.y (mgr.24407) 483 : cluster [DBG] pgmap v798: 292 pgs: 292 active+clean; 8.3 MiB data, 1011 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:42.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:41 vm03 bash[23382]: cluster 2026-03-10T07:37:40.709836+0000 mgr.y (mgr.24407) 483 : cluster [DBG] pgmap v798: 292 pgs: 
292 active+clean; 8.3 MiB data, 1011 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:42.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:41 vm03 bash[23382]: cluster 2026-03-10T07:37:40.827164+0000 mon.a (mon.0) 2897 : cluster [DBG] osdmap e518: 8 total, 8 up, 8 in 2026-03-10T07:37:42.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:41 vm03 bash[23382]: cluster 2026-03-10T07:37:40.827164+0000 mon.a (mon.0) 2897 : cluster [DBG] osdmap e518: 8 total, 8 up, 8 in 2026-03-10T07:37:43.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:42 vm00 bash[28005]: cluster 2026-03-10T07:37:41.848624+0000 mon.a (mon.0) 2898 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-10T07:37:43.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:42 vm00 bash[28005]: cluster 2026-03-10T07:37:41.848624+0000 mon.a (mon.0) 2898 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-10T07:37:43.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:42 vm00 bash[28005]: audit 2026-03-10T07:37:41.870232+0000 mon.a (mon.0) 2899 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:37:43.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:42 vm00 bash[28005]: audit 2026-03-10T07:37:41.870232+0000 mon.a (mon.0) 2899 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:37:43.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:42 vm00 bash[28005]: audit 2026-03-10T07:37:41.870513+0000 mon.a (mon.0) 2900 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:37:43.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:42 vm00 bash[28005]: audit 2026-03-10T07:37:41.870513+0000 mon.a (mon.0) 2900 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:37:43.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:42 vm00 bash[28005]: audit 2026-03-10T07:37:41.870731+0000 mon.a (mon.0) 2901 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-59782-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:37:43.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:42 vm00 bash[28005]: audit 2026-03-10T07:37:41.870731+0000 mon.a (mon.0) 2901 : audit [INF] from='client.? 
192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-59782-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:37:43.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:42 vm00 bash[20701]: cluster 2026-03-10T07:37:41.848624+0000 mon.a (mon.0) 2898 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-10T07:37:43.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:42 vm00 bash[20701]: cluster 2026-03-10T07:37:41.848624+0000 mon.a (mon.0) 2898 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-10T07:37:43.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:42 vm00 bash[20701]: audit 2026-03-10T07:37:41.870232+0000 mon.a (mon.0) 2899 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:37:43.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:42 vm00 bash[20701]: audit 2026-03-10T07:37:41.870232+0000 mon.a (mon.0) 2899 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:37:43.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:42 vm00 bash[20701]: audit 2026-03-10T07:37:41.870513+0000 mon.a (mon.0) 2900 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:37:43.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:42 vm00 bash[20701]: audit 2026-03-10T07:37:41.870513+0000 mon.a (mon.0) 2900 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:37:43.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:42 vm00 bash[20701]: audit 2026-03-10T07:37:41.870731+0000 mon.a (mon.0) 2901 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-59782-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:37:43.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:42 vm00 bash[20701]: audit 2026-03-10T07:37:41.870731+0000 mon.a (mon.0) 2901 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-59782-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:37:43.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:42 vm03 bash[23382]: cluster 2026-03-10T07:37:41.848624+0000 mon.a (mon.0) 2898 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-10T07:37:43.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:42 vm03 bash[23382]: cluster 2026-03-10T07:37:41.848624+0000 mon.a (mon.0) 2898 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-10T07:37:43.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:42 vm03 bash[23382]: audit 2026-03-10T07:37:41.870232+0000 mon.a (mon.0) 2899 : audit [INF] from='client.? 
192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:37:43.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:42 vm03 bash[23382]: audit 2026-03-10T07:37:41.870232+0000 mon.a (mon.0) 2899 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:37:43.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:42 vm03 bash[23382]: audit 2026-03-10T07:37:41.870513+0000 mon.a (mon.0) 2900 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:37:43.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:42 vm03 bash[23382]: audit 2026-03-10T07:37:41.870513+0000 mon.a (mon.0) 2900 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:37:43.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:42 vm03 bash[23382]: audit 2026-03-10T07:37:41.870731+0000 mon.a (mon.0) 2901 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-59782-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:37:43.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:42 vm03 bash[23382]: audit 2026-03-10T07:37:41.870731+0000 mon.a (mon.0) 2901 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-59782-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T07:37:43.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:37:43 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:37:44.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:43 vm03 bash[23382]: cluster 2026-03-10T07:37:42.710183+0000 mgr.y (mgr.24407) 484 : cluster [DBG] pgmap v801: 228 pgs: 228 active+clean; 455 KiB data, 1011 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:44.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:43 vm03 bash[23382]: cluster 2026-03-10T07:37:42.710183+0000 mgr.y (mgr.24407) 484 : cluster [DBG] pgmap v801: 228 pgs: 228 active+clean; 455 KiB data, 1011 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:44.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:43 vm03 bash[23382]: audit 2026-03-10T07:37:42.854675+0000 mon.a (mon.0) 2902 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-59782-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:37:44.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:43 vm03 bash[23382]: audit 2026-03-10T07:37:42.854675+0000 mon.a (mon.0) 2902 : audit [INF] from='client.? 
192.168.123.100:0/2596799927' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-59782-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:37:44.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:43 vm03 bash[23382]: cluster 2026-03-10T07:37:42.867391+0000 mon.a (mon.0) 2903 : cluster [DBG] osdmap e520: 8 total, 8 up, 8 in 2026-03-10T07:37:44.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:43 vm03 bash[23382]: cluster 2026-03-10T07:37:42.867391+0000 mon.a (mon.0) 2903 : cluster [DBG] osdmap e520: 8 total, 8 up, 8 in 2026-03-10T07:37:44.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:43 vm03 bash[23382]: audit 2026-03-10T07:37:42.867819+0000 mon.a (mon.0) 2904 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-59782-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:37:44.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:43 vm03 bash[23382]: audit 2026-03-10T07:37:42.867819+0000 mon.a (mon.0) 2904 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-59782-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:37:44.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:43 vm00 bash[28005]: cluster 2026-03-10T07:37:42.710183+0000 mgr.y (mgr.24407) 484 : cluster [DBG] pgmap v801: 228 pgs: 228 active+clean; 455 KiB data, 1011 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:44.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:43 vm00 bash[28005]: cluster 2026-03-10T07:37:42.710183+0000 mgr.y (mgr.24407) 484 : cluster [DBG] pgmap v801: 228 pgs: 228 active+clean; 455 KiB data, 1011 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:44.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:43 vm00 bash[28005]: audit 2026-03-10T07:37:42.854675+0000 mon.a (mon.0) 2902 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-59782-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:37:44.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:43 vm00 bash[28005]: audit 2026-03-10T07:37:42.854675+0000 mon.a (mon.0) 2902 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-59782-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:37:44.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:43 vm00 bash[28005]: cluster 2026-03-10T07:37:42.867391+0000 mon.a (mon.0) 2903 : cluster [DBG] osdmap e520: 8 total, 8 up, 8 in 2026-03-10T07:37:44.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:43 vm00 bash[28005]: cluster 2026-03-10T07:37:42.867391+0000 mon.a (mon.0) 2903 : cluster [DBG] osdmap e520: 8 total, 8 up, 8 in 2026-03-10T07:37:44.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:43 vm00 bash[28005]: audit 2026-03-10T07:37:42.867819+0000 mon.a (mon.0) 2904 : audit [INF] from='client.? 
192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-59782-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:37:44.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:43 vm00 bash[28005]: audit 2026-03-10T07:37:42.867819+0000 mon.a (mon.0) 2904 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-59782-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:37:44.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:43 vm00 bash[20701]: cluster 2026-03-10T07:37:42.710183+0000 mgr.y (mgr.24407) 484 : cluster [DBG] pgmap v801: 228 pgs: 228 active+clean; 455 KiB data, 1011 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:44.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:43 vm00 bash[20701]: cluster 2026-03-10T07:37:42.710183+0000 mgr.y (mgr.24407) 484 : cluster [DBG] pgmap v801: 228 pgs: 228 active+clean; 455 KiB data, 1011 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:44.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:43 vm00 bash[20701]: audit 2026-03-10T07:37:42.854675+0000 mon.a (mon.0) 2902 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-59782-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:37:44.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:43 vm00 bash[20701]: audit 2026-03-10T07:37:42.854675+0000 mon.a (mon.0) 2902 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm00-59782-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T07:37:44.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:43 vm00 bash[20701]: cluster 2026-03-10T07:37:42.867391+0000 mon.a (mon.0) 2903 : cluster [DBG] osdmap e520: 8 total, 8 up, 8 in 2026-03-10T07:37:44.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:43 vm00 bash[20701]: cluster 2026-03-10T07:37:42.867391+0000 mon.a (mon.0) 2903 : cluster [DBG] osdmap e520: 8 total, 8 up, 8 in 2026-03-10T07:37:44.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:43 vm00 bash[20701]: audit 2026-03-10T07:37:42.867819+0000 mon.a (mon.0) 2904 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-59782-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:37:44.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:43 vm00 bash[20701]: audit 2026-03-10T07:37:42.867819+0000 mon.a (mon.0) 2904 : audit [INF] from='client.? 
192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-59782-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:37:45.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:44 vm03 bash[23382]: audit 2026-03-10T07:37:43.387523+0000 mgr.y (mgr.24407) 485 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:45.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:44 vm03 bash[23382]: audit 2026-03-10T07:37:43.387523+0000 mgr.y (mgr.24407) 485 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:45.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:44 vm03 bash[23382]: cluster 2026-03-10T07:37:43.899086+0000 mon.a (mon.0) 2905 : cluster [DBG] osdmap e521: 8 total, 8 up, 8 in 2026-03-10T07:37:45.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:44 vm03 bash[23382]: cluster 2026-03-10T07:37:43.899086+0000 mon.a (mon.0) 2905 : cluster [DBG] osdmap e521: 8 total, 8 up, 8 in 2026-03-10T07:37:45.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:44 vm00 bash[28005]: audit 2026-03-10T07:37:43.387523+0000 mgr.y (mgr.24407) 485 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:45.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:44 vm00 bash[28005]: audit 2026-03-10T07:37:43.387523+0000 mgr.y (mgr.24407) 485 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:45.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:44 vm00 bash[28005]: cluster 2026-03-10T07:37:43.899086+0000 mon.a (mon.0) 2905 : cluster [DBG] osdmap e521: 8 total, 8 up, 8 in 2026-03-10T07:37:45.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:44 vm00 bash[28005]: cluster 2026-03-10T07:37:43.899086+0000 mon.a (mon.0) 2905 : cluster [DBG] osdmap e521: 8 total, 8 up, 8 in 2026-03-10T07:37:45.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:44 vm00 bash[20701]: audit 2026-03-10T07:37:43.387523+0000 mgr.y (mgr.24407) 485 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:45.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:44 vm00 bash[20701]: audit 2026-03-10T07:37:43.387523+0000 mgr.y (mgr.24407) 485 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:37:45.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:44 vm00 bash[20701]: cluster 2026-03-10T07:37:43.899086+0000 mon.a (mon.0) 2905 : cluster [DBG] osdmap e521: 8 total, 8 up, 8 in 2026-03-10T07:37:45.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:44 vm00 bash[20701]: cluster 2026-03-10T07:37:43.899086+0000 mon.a (mon.0) 2905 : cluster [DBG] osdmap e521: 8 total, 8 up, 8 in 2026-03-10T07:37:46.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:46 vm03 bash[23382]: cluster 2026-03-10T07:37:44.710584+0000 mgr.y (mgr.24407) 486 : cluster [DBG] pgmap v804: 228 pgs: 228 active+clean; 455 KiB data, 1002 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:46.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:46 
vm03 bash[23382]: cluster 2026-03-10T07:37:44.710584+0000 mgr.y (mgr.24407) 486 : cluster [DBG] pgmap v804: 228 pgs: 228 active+clean; 455 KiB data, 1002 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:46.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:46 vm03 bash[23382]: audit 2026-03-10T07:37:44.987109+0000 mon.a (mon.0) 2906 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-59782-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-59782-104"}]': finished 2026-03-10T07:37:46.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:46 vm03 bash[23382]: audit 2026-03-10T07:37:44.987109+0000 mon.a (mon.0) 2906 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-59782-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-59782-104"}]': finished 2026-03-10T07:37:46.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:46 vm03 bash[23382]: cluster 2026-03-10T07:37:44.996430+0000 mon.a (mon.0) 2907 : cluster [DBG] osdmap e522: 8 total, 8 up, 8 in 2026-03-10T07:37:46.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:46 vm03 bash[23382]: cluster 2026-03-10T07:37:44.996430+0000 mon.a (mon.0) 2907 : cluster [DBG] osdmap e522: 8 total, 8 up, 8 in 2026-03-10T07:37:46.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:46 vm00 bash[28005]: cluster 2026-03-10T07:37:44.710584+0000 mgr.y (mgr.24407) 486 : cluster [DBG] pgmap v804: 228 pgs: 228 active+clean; 455 KiB data, 1002 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:46.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:46 vm00 bash[28005]: cluster 2026-03-10T07:37:44.710584+0000 mgr.y (mgr.24407) 486 : cluster [DBG] pgmap v804: 228 pgs: 228 active+clean; 455 KiB data, 1002 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:46.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:46 vm00 bash[28005]: audit 2026-03-10T07:37:44.987109+0000 mon.a (mon.0) 2906 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-59782-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-59782-104"}]': finished 2026-03-10T07:37:46.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:46 vm00 bash[28005]: audit 2026-03-10T07:37:44.987109+0000 mon.a (mon.0) 2906 : audit [INF] from='client.? 
192.168.123.100:0/2596799927' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-59782-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-59782-104"}]': finished 2026-03-10T07:37:46.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:46 vm00 bash[28005]: cluster 2026-03-10T07:37:44.996430+0000 mon.a (mon.0) 2907 : cluster [DBG] osdmap e522: 8 total, 8 up, 8 in 2026-03-10T07:37:46.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:46 vm00 bash[28005]: cluster 2026-03-10T07:37:44.996430+0000 mon.a (mon.0) 2907 : cluster [DBG] osdmap e522: 8 total, 8 up, 8 in 2026-03-10T07:37:46.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:46 vm00 bash[20701]: cluster 2026-03-10T07:37:44.710584+0000 mgr.y (mgr.24407) 486 : cluster [DBG] pgmap v804: 228 pgs: 228 active+clean; 455 KiB data, 1002 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:46.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:46 vm00 bash[20701]: cluster 2026-03-10T07:37:44.710584+0000 mgr.y (mgr.24407) 486 : cluster [DBG] pgmap v804: 228 pgs: 228 active+clean; 455 KiB data, 1002 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:37:46.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:46 vm00 bash[20701]: audit 2026-03-10T07:37:44.987109+0000 mon.a (mon.0) 2906 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-59782-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-59782-104"}]': finished 2026-03-10T07:37:46.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:46 vm00 bash[20701]: audit 2026-03-10T07:37:44.987109+0000 mon.a (mon.0) 2906 : audit [INF] from='client.? 
192.168.123.100:0/2596799927' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm00-59782-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm00-59782-104"}]': finished 2026-03-10T07:37:46.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:46 vm00 bash[20701]: cluster 2026-03-10T07:37:44.996430+0000 mon.a (mon.0) 2907 : cluster [DBG] osdmap e522: 8 total, 8 up, 8 in 2026-03-10T07:37:46.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:46 vm00 bash[20701]: cluster 2026-03-10T07:37:44.996430+0000 mon.a (mon.0) 2907 : cluster [DBG] osdmap e522: 8 total, 8 up, 8 in 2026-03-10T07:37:47.304 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:47 vm03 bash[23382]: cluster 2026-03-10T07:37:45.997367+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e523: 8 total, 8 up, 8 in 2026-03-10T07:37:47.304 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:47 vm03 bash[23382]: cluster 2026-03-10T07:37:45.997367+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e523: 8 total, 8 up, 8 in 2026-03-10T07:37:47.304 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:47 vm03 bash[23382]: cluster 2026-03-10T07:37:46.999594+0000 mon.a (mon.0) 2909 : cluster [DBG] osdmap e524: 8 total, 8 up, 8 in 2026-03-10T07:37:47.304 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:47 vm03 bash[23382]: cluster 2026-03-10T07:37:46.999594+0000 mon.a (mon.0) 2909 : cluster [DBG] osdmap e524: 8 total, 8 up, 8 in 2026-03-10T07:37:47.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:47 vm00 bash[28005]: cluster 2026-03-10T07:37:45.997367+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e523: 8 total, 8 up, 8 in 2026-03-10T07:37:47.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:47 vm00 bash[28005]: cluster 2026-03-10T07:37:45.997367+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e523: 8 total, 8 up, 8 in 2026-03-10T07:37:47.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:47 vm00 bash[28005]: cluster 2026-03-10T07:37:46.999594+0000 mon.a (mon.0) 2909 : cluster [DBG] osdmap e524: 8 total, 8 up, 8 in 2026-03-10T07:37:47.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:47 vm00 bash[28005]: cluster 2026-03-10T07:37:46.999594+0000 mon.a (mon.0) 2909 : cluster [DBG] osdmap e524: 8 total, 8 up, 8 in 2026-03-10T07:37:47.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:47 vm00 bash[20701]: cluster 2026-03-10T07:37:45.997367+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e523: 8 total, 8 up, 8 in 2026-03-10T07:37:47.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:47 vm00 bash[20701]: cluster 2026-03-10T07:37:45.997367+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e523: 8 total, 8 up, 8 in 2026-03-10T07:37:47.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:47 vm00 bash[20701]: cluster 2026-03-10T07:37:46.999594+0000 mon.a (mon.0) 2909 : cluster [DBG] osdmap e524: 8 total, 8 up, 8 in 2026-03-10T07:37:47.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:47 vm00 bash[20701]: cluster 2026-03-10T07:37:46.999594+0000 mon.a (mon.0) 2909 : cluster [DBG] osdmap e524: 8 total, 8 up, 8 in 2026-03-10T07:37:48.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:48 vm00 bash[28005]: cluster 2026-03-10T07:37:46.710888+0000 mgr.y (mgr.24407) 487 : cluster [DBG] pgmap v807: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:48.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:48 vm00 bash[28005]: cluster 
2026-03-10T07:37:46.710888+0000 mgr.y (mgr.24407) 487 : cluster [DBG] pgmap v807: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:48.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:48 vm00 bash[28005]: audit 2026-03-10T07:37:47.011246+0000 mon.c (mon.2) 329 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:48.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:48 vm00 bash[28005]: audit 2026-03-10T07:37:47.011246+0000 mon.c (mon.2) 329 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:48.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:48 vm00 bash[28005]: cluster 2026-03-10T07:37:47.014489+0000 mon.a (mon.0) 2910 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:37:48.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:48 vm00 bash[28005]: cluster 2026-03-10T07:37:47.014489+0000 mon.a (mon.0) 2910 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:37:48.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:48 vm00 bash[28005]: audit 2026-03-10T07:37:47.019596+0000 mon.a (mon.0) 2911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:48.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:48 vm00 bash[28005]: audit 2026-03-10T07:37:47.019596+0000 mon.a (mon.0) 2911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:48.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:48 vm00 bash[28005]: audit 2026-03-10T07:37:47.997549+0000 mon.a (mon.0) 2912 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:48.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:48 vm00 bash[28005]: audit 2026-03-10T07:37:47.997549+0000 mon.a (mon.0) 2912 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:48.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:48 vm00 bash[28005]: cluster 2026-03-10T07:37:48.018916+0000 mon.a (mon.0) 2913 : cluster [DBG] osdmap e525: 8 total, 8 up, 8 in 2026-03-10T07:37:48.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:48 vm00 bash[28005]: cluster 2026-03-10T07:37:48.018916+0000 mon.a (mon.0) 2913 : cluster [DBG] osdmap e525: 8 total, 8 up, 8 in 2026-03-10T07:37:48.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:48 vm00 bash[20701]: cluster 2026-03-10T07:37:46.710888+0000 mgr.y (mgr.24407) 487 : cluster [DBG] pgmap v807: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:48.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:48 vm00 bash[20701]: cluster 2026-03-10T07:37:46.710888+0000 mgr.y (mgr.24407) 487 : cluster [DBG] pgmap v807: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:48.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:48 vm00 bash[20701]: audit 2026-03-10T07:37:47.011246+0000 mon.c (mon.2) 329 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:48.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:48 vm00 bash[20701]: audit 2026-03-10T07:37:47.011246+0000 mon.c (mon.2) 329 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:48.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:48 vm00 bash[20701]: cluster 2026-03-10T07:37:47.014489+0000 mon.a (mon.0) 2910 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:37:48.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:48 vm00 bash[20701]: cluster 2026-03-10T07:37:47.014489+0000 mon.a (mon.0) 2910 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:37:48.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:48 vm00 bash[20701]: audit 2026-03-10T07:37:47.019596+0000 mon.a (mon.0) 2911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:48.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:48 vm00 bash[20701]: audit 2026-03-10T07:37:47.019596+0000 mon.a (mon.0) 2911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:48.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:48 vm00 bash[20701]: audit 2026-03-10T07:37:47.997549+0000 mon.a (mon.0) 2912 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:48.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:48 vm00 bash[20701]: audit 2026-03-10T07:37:47.997549+0000 mon.a (mon.0) 2912 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:48.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:48 vm00 bash[20701]: cluster 2026-03-10T07:37:48.018916+0000 mon.a (mon.0) 2913 : cluster [DBG] osdmap e525: 8 total, 8 up, 8 in 2026-03-10T07:37:48.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:48 vm00 bash[20701]: cluster 2026-03-10T07:37:48.018916+0000 mon.a (mon.0) 2913 : cluster [DBG] osdmap e525: 8 total, 8 up, 8 in 2026-03-10T07:37:48.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:48 vm03 bash[23382]: cluster 2026-03-10T07:37:46.710888+0000 mgr.y (mgr.24407) 487 : cluster [DBG] pgmap v807: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:48.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:48 vm03 bash[23382]: cluster 2026-03-10T07:37:46.710888+0000 mgr.y (mgr.24407) 487 : cluster [DBG] pgmap v807: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:48.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:48 vm03 bash[23382]: audit 2026-03-10T07:37:47.011246+0000 mon.c (mon.2) 329 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:48.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:48 vm03 bash[23382]: audit 2026-03-10T07:37:47.011246+0000 mon.c (mon.2) 329 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:48.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:48 vm03 bash[23382]: cluster 2026-03-10T07:37:47.014489+0000 mon.a (mon.0) 2910 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:37:48.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:48 vm03 bash[23382]: cluster 2026-03-10T07:37:47.014489+0000 mon.a (mon.0) 2910 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:37:48.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:48 vm03 bash[23382]: audit 2026-03-10T07:37:47.019596+0000 mon.a (mon.0) 2911 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:48.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:48 vm03 bash[23382]: audit 2026-03-10T07:37:47.019596+0000 mon.a (mon.0) 2911 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:48.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:48 vm03 bash[23382]: audit 2026-03-10T07:37:47.997549+0000 mon.a (mon.0) 2912 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:48.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:48 vm03 bash[23382]: audit 2026-03-10T07:37:47.997549+0000 mon.a (mon.0) 2912 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:48.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:48 vm03 bash[23382]: cluster 2026-03-10T07:37:48.018916+0000 mon.a (mon.0) 2913 : cluster [DBG] osdmap e525: 8 total, 8 up, 8 in 2026-03-10T07:37:48.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:48 vm03 bash[23382]: cluster 2026-03-10T07:37:48.018916+0000 mon.a (mon.0) 2913 : cluster [DBG] osdmap e525: 8 total, 8 up, 8 in 2026-03-10T07:37:50.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:50 vm00 bash[28005]: cluster 2026-03-10T07:37:48.711262+0000 mgr.y (mgr.24407) 488 : cluster [DBG] pgmap v810: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:50.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:50 vm00 bash[28005]: cluster 2026-03-10T07:37:48.711262+0000 mgr.y (mgr.24407) 488 : cluster [DBG] pgmap v810: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:50.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:50 vm00 bash[28005]: cluster 2026-03-10T07:37:49.008306+0000 mon.a (mon.0) 2914 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in 2026-03-10T07:37:50.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:50 vm00 bash[28005]: cluster 2026-03-10T07:37:49.008306+0000 mon.a (mon.0) 2914 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in 2026-03-10T07:37:50.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:50 vm00 bash[28005]: audit 2026-03-10T07:37:49.020590+0000 mon.c (mon.2) 330 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:50.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:50 vm00 bash[28005]: audit 2026-03-10T07:37:49.020590+0000 mon.c (mon.2) 330 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:50.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:50 vm00 bash[28005]: audit 2026-03-10T07:37:49.021728+0000 mon.a (mon.0) 2915 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:50.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:50 vm00 bash[28005]: audit 2026-03-10T07:37:49.021728+0000 mon.a (mon.0) 2915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:50.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:50 vm00 bash[20701]: cluster 2026-03-10T07:37:48.711262+0000 mgr.y (mgr.24407) 488 : cluster [DBG] pgmap v810: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:50.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:50 vm00 bash[20701]: cluster 2026-03-10T07:37:48.711262+0000 mgr.y (mgr.24407) 488 : cluster [DBG] pgmap v810: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:50.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:50 vm00 bash[20701]: cluster 2026-03-10T07:37:49.008306+0000 mon.a (mon.0) 2914 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in 2026-03-10T07:37:50.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:50 vm00 bash[20701]: cluster 2026-03-10T07:37:49.008306+0000 mon.a (mon.0) 2914 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in 2026-03-10T07:37:50.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:50 vm00 bash[20701]: audit 2026-03-10T07:37:49.020590+0000 mon.c (mon.2) 330 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:50.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:50 vm00 bash[20701]: audit 2026-03-10T07:37:49.020590+0000 mon.c (mon.2) 330 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:50.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:50 vm00 bash[20701]: audit 2026-03-10T07:37:49.021728+0000 mon.a (mon.0) 2915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:50.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:50 vm00 bash[20701]: audit 2026-03-10T07:37:49.021728+0000 mon.a (mon.0) 2915 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:50 vm03 bash[23382]: cluster 2026-03-10T07:37:48.711262+0000 mgr.y (mgr.24407) 488 : cluster [DBG] pgmap v810: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:50.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:50 vm03 bash[23382]: cluster 2026-03-10T07:37:48.711262+0000 mgr.y (mgr.24407) 488 : cluster [DBG] pgmap v810: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:37:50.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:50 vm03 bash[23382]: cluster 2026-03-10T07:37:49.008306+0000 mon.a (mon.0) 2914 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in 2026-03-10T07:37:50.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:50 vm03 bash[23382]: cluster 2026-03-10T07:37:49.008306+0000 mon.a (mon.0) 2914 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in 2026-03-10T07:37:50.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:50 vm03 bash[23382]: audit 2026-03-10T07:37:49.020590+0000 mon.c (mon.2) 330 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:50.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:50 vm03 bash[23382]: audit 2026-03-10T07:37:49.020590+0000 mon.c (mon.2) 330 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:50.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:50 vm03 bash[23382]: audit 2026-03-10T07:37:49.021728+0000 mon.a (mon.0) 2915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:50.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:50 vm03 bash[23382]: audit 2026-03-10T07:37:49.021728+0000 mon.a (mon.0) 2915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:37:51.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:51 vm00 bash[28005]: audit 2026-03-10T07:37:50.004694+0000 mon.a (mon.0) 2916 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:51.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:51 vm00 bash[28005]: audit 2026-03-10T07:37:50.004694+0000 mon.a (mon.0) 2916 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:51.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:51 vm00 bash[28005]: cluster 2026-03-10T07:37:50.013131+0000 mon.a (mon.0) 2917 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-10T07:37:51.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:51 vm00 bash[28005]: cluster 2026-03-10T07:37:50.013131+0000 mon.a (mon.0) 2917 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-10T07:37:51.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:51 vm00 bash[28005]: audit 2026-03-10T07:37:50.057370+0000 mon.c (mon.2) 331 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:51 vm00 bash[28005]: audit 2026-03-10T07:37:50.057370+0000 mon.c (mon.2) 331 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:51 vm00 bash[28005]: audit 2026-03-10T07:37:50.057941+0000 mon.a (mon.0) 2918 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:51 vm00 bash[28005]: audit 2026-03-10T07:37:50.057941+0000 mon.a (mon.0) 2918 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:51 vm00 bash[28005]: audit 2026-03-10T07:37:51.015945+0000 mon.a (mon.0) 2919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]': finished 2026-03-10T07:37:51.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:51 vm00 bash[28005]: audit 2026-03-10T07:37:51.015945+0000 mon.a (mon.0) 2919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]': finished 2026-03-10T07:37:51.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:51 vm00 bash[28005]: audit 2026-03-10T07:37:51.021874+0000 mon.c (mon.2) 332 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-107", "overlaypool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:51 vm00 bash[28005]: audit 2026-03-10T07:37:51.021874+0000 mon.c (mon.2) 332 : audit [INF] from='client.? 
192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-107", "overlaypool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:51 vm00 bash[28005]: cluster 2026-03-10T07:37:51.023609+0000 mon.a (mon.0) 2920 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in 2026-03-10T07:37:51.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:51 vm00 bash[28005]: cluster 2026-03-10T07:37:51.023609+0000 mon.a (mon.0) 2920 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in 2026-03-10T07:37:51.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:51 vm00 bash[28005]: audit 2026-03-10T07:37:51.024565+0000 mon.a (mon.0) 2921 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-107", "overlaypool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:51 vm00 bash[28005]: audit 2026-03-10T07:37:51.024565+0000 mon.a (mon.0) 2921 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-107", "overlaypool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:37:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:37:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:37:51.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:51 vm00 bash[20701]: audit 2026-03-10T07:37:50.004694+0000 mon.a (mon.0) 2916 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:51.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:51 vm00 bash[20701]: audit 2026-03-10T07:37:50.004694+0000 mon.a (mon.0) 2916 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:51.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:51 vm00 bash[20701]: cluster 2026-03-10T07:37:50.013131+0000 mon.a (mon.0) 2917 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-10T07:37:51.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:51 vm00 bash[20701]: cluster 2026-03-10T07:37:50.013131+0000 mon.a (mon.0) 2917 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-10T07:37:51.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:51 vm00 bash[20701]: audit 2026-03-10T07:37:50.057370+0000 mon.c (mon.2) 331 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:51 vm00 bash[20701]: audit 2026-03-10T07:37:50.057370+0000 mon.c (mon.2) 331 : audit [INF] from='client.? 
192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:51 vm00 bash[20701]: audit 2026-03-10T07:37:50.057941+0000 mon.a (mon.0) 2918 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:51 vm00 bash[20701]: audit 2026-03-10T07:37:50.057941+0000 mon.a (mon.0) 2918 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:51 vm00 bash[20701]: audit 2026-03-10T07:37:51.015945+0000 mon.a (mon.0) 2919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]': finished 2026-03-10T07:37:51.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:51 vm00 bash[20701]: audit 2026-03-10T07:37:51.015945+0000 mon.a (mon.0) 2919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]': finished 2026-03-10T07:37:51.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:51 vm00 bash[20701]: audit 2026-03-10T07:37:51.021874+0000 mon.c (mon.2) 332 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-107", "overlaypool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:51 vm00 bash[20701]: audit 2026-03-10T07:37:51.021874+0000 mon.c (mon.2) 332 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-107", "overlaypool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:51 vm00 bash[20701]: cluster 2026-03-10T07:37:51.023609+0000 mon.a (mon.0) 2920 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in 2026-03-10T07:37:51.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:51 vm00 bash[20701]: cluster 2026-03-10T07:37:51.023609+0000 mon.a (mon.0) 2920 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in 2026-03-10T07:37:51.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:51 vm00 bash[20701]: audit 2026-03-10T07:37:51.024565+0000 mon.a (mon.0) 2921 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-107", "overlaypool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:51 vm00 bash[20701]: audit 2026-03-10T07:37:51.024565+0000 mon.a (mon.0) 2921 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-107", "overlaypool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:51 vm03 bash[23382]: audit 2026-03-10T07:37:50.004694+0000 mon.a (mon.0) 2916 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:51 vm03 bash[23382]: audit 2026-03-10T07:37:50.004694+0000 mon.a (mon.0) 2916 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:37:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:51 vm03 bash[23382]: cluster 2026-03-10T07:37:50.013131+0000 mon.a (mon.0) 2917 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-10T07:37:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:51 vm03 bash[23382]: cluster 2026-03-10T07:37:50.013131+0000 mon.a (mon.0) 2917 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-10T07:37:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:51 vm03 bash[23382]: audit 2026-03-10T07:37:50.057370+0000 mon.c (mon.2) 331 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:51 vm03 bash[23382]: audit 2026-03-10T07:37:50.057370+0000 mon.c (mon.2) 331 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:51 vm03 bash[23382]: audit 2026-03-10T07:37:50.057941+0000 mon.a (mon.0) 2918 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:51 vm03 bash[23382]: audit 2026-03-10T07:37:50.057941+0000 mon.a (mon.0) 2918 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:51 vm03 bash[23382]: audit 2026-03-10T07:37:51.015945+0000 mon.a (mon.0) 2919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]': finished 2026-03-10T07:37:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:51 vm03 bash[23382]: audit 2026-03-10T07:37:51.015945+0000 mon.a (mon.0) 2919 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]': finished 2026-03-10T07:37:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:51 vm03 bash[23382]: audit 2026-03-10T07:37:51.021874+0000 mon.c (mon.2) 332 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-107", "overlaypool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:51 vm03 bash[23382]: audit 2026-03-10T07:37:51.021874+0000 mon.c (mon.2) 332 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-107", "overlaypool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:51 vm03 bash[23382]: cluster 2026-03-10T07:37:51.023609+0000 mon.a (mon.0) 2920 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in 2026-03-10T07:37:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:51 vm03 bash[23382]: cluster 2026-03-10T07:37:51.023609+0000 mon.a (mon.0) 2920 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in 2026-03-10T07:37:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:51 vm03 bash[23382]: audit 2026-03-10T07:37:51.024565+0000 mon.a (mon.0) 2921 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-107", "overlaypool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:51.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:51 vm03 bash[23382]: audit 2026-03-10T07:37:51.024565+0000 mon.a (mon.0) 2921 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-107", "overlaypool": "test-rados-api-vm00-59782-107-cache"}]: dispatch 2026-03-10T07:37:52.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:52 vm00 bash[28005]: cluster 2026-03-10T07:37:50.711628+0000 mgr.y (mgr.24407) 489 : cluster [DBG] pgmap v813: 300 pgs: 32 creating+peering, 268 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:37:52.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:52 vm00 bash[28005]: cluster 2026-03-10T07:37:50.711628+0000 mgr.y (mgr.24407) 489 : cluster [DBG] pgmap v813: 300 pgs: 32 creating+peering, 268 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:37:52.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:52 vm00 bash[28005]: audit 2026-03-10T07:37:52.019153+0000 mon.a (mon.0) 2922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-107", "overlaypool": "test-rados-api-vm00-59782-107-cache"}]': finished 2026-03-10T07:37:52.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:52 vm00 bash[28005]: audit 2026-03-10T07:37:52.019153+0000 mon.a (mon.0) 2922 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-107", "overlaypool": "test-rados-api-vm00-59782-107-cache"}]': finished 2026-03-10T07:37:52.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:52 vm00 bash[28005]: cluster 2026-03-10T07:37:52.028714+0000 mon.a (mon.0) 2923 : cluster [DBG] osdmap e529: 8 total, 8 up, 8 in 2026-03-10T07:37:52.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:52 vm00 bash[28005]: cluster 2026-03-10T07:37:52.028714+0000 mon.a (mon.0) 2923 : cluster [DBG] osdmap e529: 8 total, 8 up, 8 in 2026-03-10T07:37:52.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:52 vm00 bash[28005]: audit 2026-03-10T07:37:52.030598+0000 mon.c (mon.2) 333 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-107-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:37:52.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:52 vm00 bash[28005]: audit 2026-03-10T07:37:52.030598+0000 mon.c (mon.2) 333 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-107-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:37:52.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:52 vm00 bash[28005]: audit 2026-03-10T07:37:52.031248+0000 mon.a (mon.0) 2924 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-107-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:37:52.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:52 vm00 bash[28005]: audit 2026-03-10T07:37:52.031248+0000 mon.a (mon.0) 2924 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-107-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:37:52.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:52 vm00 bash[20701]: cluster 2026-03-10T07:37:50.711628+0000 mgr.y (mgr.24407) 489 : cluster [DBG] pgmap v813: 300 pgs: 32 creating+peering, 268 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:37:52.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:52 vm00 bash[20701]: cluster 2026-03-10T07:37:50.711628+0000 mgr.y (mgr.24407) 489 : cluster [DBG] pgmap v813: 300 pgs: 32 creating+peering, 268 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:37:52.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:52 vm00 bash[20701]: audit 2026-03-10T07:37:52.019153+0000 mon.a (mon.0) 2922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-107", "overlaypool": "test-rados-api-vm00-59782-107-cache"}]': finished 2026-03-10T07:37:52.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:52 vm00 bash[20701]: audit 2026-03-10T07:37:52.019153+0000 mon.a (mon.0) 2922 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-107", "overlaypool": "test-rados-api-vm00-59782-107-cache"}]': finished 2026-03-10T07:37:52.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:52 vm00 bash[20701]: cluster 2026-03-10T07:37:52.028714+0000 mon.a (mon.0) 2923 : cluster [DBG] osdmap e529: 8 total, 8 up, 8 in 2026-03-10T07:37:52.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:52 vm00 bash[20701]: cluster 2026-03-10T07:37:52.028714+0000 mon.a (mon.0) 2923 : cluster [DBG] osdmap e529: 8 total, 8 up, 8 in 2026-03-10T07:37:52.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:52 vm00 bash[20701]: audit 2026-03-10T07:37:52.030598+0000 mon.c (mon.2) 333 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-107-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:37:52.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:52 vm00 bash[20701]: audit 2026-03-10T07:37:52.030598+0000 mon.c (mon.2) 333 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-107-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:37:52.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:52 vm00 bash[20701]: audit 2026-03-10T07:37:52.031248+0000 mon.a (mon.0) 2924 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-107-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:37:52.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:52 vm00 bash[20701]: audit 2026-03-10T07:37:52.031248+0000 mon.a (mon.0) 2924 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-107-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:37:52.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:52 vm03 bash[23382]: cluster 2026-03-10T07:37:50.711628+0000 mgr.y (mgr.24407) 489 : cluster [DBG] pgmap v813: 300 pgs: 32 creating+peering, 268 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:37:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:52 vm03 bash[23382]: cluster 2026-03-10T07:37:50.711628+0000 mgr.y (mgr.24407) 489 : cluster [DBG] pgmap v813: 300 pgs: 32 creating+peering, 268 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T07:37:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:52 vm03 bash[23382]: audit 2026-03-10T07:37:52.019153+0000 mon.a (mon.0) 2922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-107", "overlaypool": "test-rados-api-vm00-59782-107-cache"}]': finished 2026-03-10T07:37:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:52 vm03 bash[23382]: audit 2026-03-10T07:37:52.019153+0000 mon.a (mon.0) 2922 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-107", "overlaypool": "test-rados-api-vm00-59782-107-cache"}]': finished 2026-03-10T07:37:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:52 vm03 bash[23382]: cluster 2026-03-10T07:37:52.028714+0000 mon.a (mon.0) 2923 : cluster [DBG] osdmap e529: 8 total, 8 up, 8 in 2026-03-10T07:37:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:52 vm03 bash[23382]: cluster 2026-03-10T07:37:52.028714+0000 mon.a (mon.0) 2923 : cluster [DBG] osdmap e529: 8 total, 8 up, 8 in 2026-03-10T07:37:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:52 vm03 bash[23382]: audit 2026-03-10T07:37:52.030598+0000 mon.c (mon.2) 333 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-107-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:37:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:52 vm03 bash[23382]: audit 2026-03-10T07:37:52.030598+0000 mon.c (mon.2) 333 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-107-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:37:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:52 vm03 bash[23382]: audit 2026-03-10T07:37:52.031248+0000 mon.a (mon.0) 2924 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-107-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:37:52.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:52 vm03 bash[23382]: audit 2026-03-10T07:37:52.031248+0000 mon.a (mon.0) 2924 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-107-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:37:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:53 vm00 bash[28005]: cluster 2026-03-10T07:37:53.019293+0000 mon.a (mon.0) 2925 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:37:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:53 vm00 bash[28005]: cluster 2026-03-10T07:37:53.019293+0000 mon.a (mon.0) 2925 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:37:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:53 vm00 bash[28005]: audit 2026-03-10T07:37:53.023201+0000 mon.a (mon.0) 2926 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-107-cache", "mode": "writeback"}]': finished 2026-03-10T07:37:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:53 vm00 bash[28005]: audit 2026-03-10T07:37:53.023201+0000 mon.a (mon.0) 2926 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-107-cache", "mode": "writeback"}]': finished 2026-03-10T07:37:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:53 vm00 bash[28005]: cluster 2026-03-10T07:37:53.029565+0000 mon.a (mon.0) 2927 : cluster [DBG] osdmap e530: 8 total, 8 up, 8 in 2026-03-10T07:37:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:53 vm00 bash[28005]: cluster 2026-03-10T07:37:53.029565+0000 mon.a (mon.0) 2927 : cluster [DBG] osdmap e530: 8 total, 8 up, 8 in 2026-03-10T07:37:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:53 vm00 bash[28005]: audit 2026-03-10T07:37:53.051112+0000 mon.c (mon.2) 334 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-107"}]: dispatch 2026-03-10T07:37:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:53 vm00 bash[28005]: audit 2026-03-10T07:37:53.051112+0000 mon.c (mon.2) 334 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-107"}]: dispatch 2026-03-10T07:37:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:53 vm00 bash[28005]: audit 2026-03-10T07:37:53.051420+0000 mon.a (mon.0) 2928 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-107"}]: dispatch 2026-03-10T07:37:53.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:53 vm00 bash[28005]: audit 2026-03-10T07:37:53.051420+0000 mon.a (mon.0) 2928 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-107"}]: dispatch 2026-03-10T07:37:53.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:53 vm00 bash[20701]: cluster 2026-03-10T07:37:53.019293+0000 mon.a (mon.0) 2925 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:37:53.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:53 vm00 bash[20701]: cluster 2026-03-10T07:37:53.019293+0000 mon.a (mon.0) 2925 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:37:53.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:53 vm00 bash[20701]: audit 2026-03-10T07:37:53.023201+0000 mon.a (mon.0) 2926 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-107-cache", "mode": "writeback"}]': finished 2026-03-10T07:37:53.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:53 vm00 bash[20701]: audit 2026-03-10T07:37:53.023201+0000 mon.a (mon.0) 2926 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-107-cache", "mode": "writeback"}]': finished
2026-03-10T07:37:53.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:53 vm00 bash[20701]: cluster 2026-03-10T07:37:53.029565+0000 mon.a (mon.0) 2927 : cluster [DBG] osdmap e530: 8 total, 8 up, 8 in
2026-03-10T07:37:53.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:53 vm00 bash[20701]: audit 2026-03-10T07:37:53.051112+0000 mon.c (mon.2) 334 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-107"}]: dispatch
2026-03-10T07:37:53.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:53 vm00 bash[20701]: audit 2026-03-10T07:37:53.051420+0000 mon.a (mon.0) 2928 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-107"}]: dispatch
2026-03-10T07:37:53.388 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:53 vm03 bash[23382]: cluster 2026-03-10T07:37:53.019293+0000 mon.a (mon.0) 2925 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:37:53.388 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:53 vm03 bash[23382]: audit 2026-03-10T07:37:53.023201+0000 mon.a (mon.0) 2926 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-107-cache", "mode": "writeback"}]': finished
2026-03-10T07:37:53.388 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:53 vm03 bash[23382]: cluster 2026-03-10T07:37:53.029565+0000 mon.a (mon.0) 2927 : cluster [DBG] osdmap e530: 8 total, 8 up, 8 in
2026-03-10T07:37:53.388 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:53 vm03 bash[23382]: audit 2026-03-10T07:37:53.051112+0000 mon.c (mon.2) 334 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-107"}]: dispatch
2026-03-10T07:37:53.389 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:53 vm03 bash[23382]: audit 2026-03-10T07:37:53.051420+0000 mon.a (mon.0) 2928 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-107"}]: dispatch
2026-03-10T07:37:53.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:37:53 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:37:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:54 vm00 bash[28005]: cluster 2026-03-10T07:37:52.711993+0000 mgr.y (mgr.24407) 490 : cluster [DBG] pgmap v816: 300 pgs: 32 creating+peering, 268 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:37:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:54 vm00 bash[28005]: audit 2026-03-10T07:37:54.026232+0000 mon.a (mon.0) 2929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-107"}]': finished
2026-03-10T07:37:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:54 vm00 bash[28005]: cluster 2026-03-10T07:37:54.031144+0000 mon.a (mon.0) 2930 : cluster [DBG] osdmap e531: 8 total, 8 up, 8 in
2026-03-10T07:37:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:54 vm00 bash[28005]: audit 2026-03-10T07:37:54.032173+0000 mon.c (mon.2) 335 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]: dispatch
2026-03-10T07:37:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:54 vm00 bash[28005]: audit 2026-03-10T07:37:54.034858+0000 mon.a (mon.0) 2931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]: dispatch
2026-03-10T07:37:54.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:54 vm00 bash[20701]: cluster 2026-03-10T07:37:52.711993+0000 mgr.y (mgr.24407) 490 : cluster [DBG] pgmap v816: 300 pgs: 32 creating+peering, 268 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:37:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:54 vm00 bash[20701]: audit 2026-03-10T07:37:54.026232+0000 mon.a (mon.0) 2929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-107"}]': finished
2026-03-10T07:37:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:54 vm00 bash[20701]: cluster 2026-03-10T07:37:54.031144+0000 mon.a (mon.0) 2930 : cluster [DBG] osdmap e531: 8 total, 8 up, 8 in
2026-03-10T07:37:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:54 vm00 bash[20701]: audit 2026-03-10T07:37:54.032173+0000 mon.c (mon.2) 335 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]: dispatch
2026-03-10T07:37:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:54 vm00 bash[20701]: audit 2026-03-10T07:37:54.034858+0000 mon.a (mon.0) 2931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]: dispatch
2026-03-10T07:37:54.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:54 vm03 bash[23382]: cluster 2026-03-10T07:37:52.711993+0000 mgr.y (mgr.24407) 490 : cluster [DBG] pgmap v816: 300 pgs: 32 creating+peering, 268 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:37:54.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:54 vm03 bash[23382]: audit 2026-03-10T07:37:54.026232+0000 mon.a (mon.0) 2929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-107"}]': finished
2026-03-10T07:37:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:54 vm03 bash[23382]: cluster 2026-03-10T07:37:54.031144+0000 mon.a (mon.0) 2930 : cluster [DBG] osdmap e531: 8 total, 8 up, 8 in
2026-03-10T07:37:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:54 vm03 bash[23382]: audit 2026-03-10T07:37:54.032173+0000 mon.c (mon.2) 335 : audit [INF] from='client.? 192.168.123.100:0/3510805149' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]: dispatch
2026-03-10T07:37:54.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:54 vm03 bash[23382]: audit 2026-03-10T07:37:54.034858+0000 mon.a (mon.0) 2931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]: dispatch
2026-03-10T07:37:55.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:55 vm00 bash[28005]: audit 2026-03-10T07:37:53.393886+0000 mgr.y (mgr.24407) 491 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:37:55.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:55 vm00 bash[28005]: audit 2026-03-10T07:37:54.628599+0000 mon.a (mon.0) 2932 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:37:55.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:55 vm00 bash[28005]: audit 2026-03-10T07:37:54.631077+0000 mon.c (mon.2) 336 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:37:55.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:55 vm00 bash[28005]: cluster 2026-03-10T07:37:55.026371+0000 mon.a (mon.0) 2933 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:37:55.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:55 vm00 bash[20701]: audit 2026-03-10T07:37:53.393886+0000 mgr.y (mgr.24407) 491 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:37:55.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:55 vm00 bash[20701]: audit 2026-03-10T07:37:54.628599+0000 mon.a (mon.0) 2932 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:37:55.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:55 vm00 bash[20701]: audit 2026-03-10T07:37:54.631077+0000 mon.c (mon.2) 336 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:37:55.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:55 vm00 bash[20701]: cluster 2026-03-10T07:37:55.026371+0000 mon.a (mon.0) 2933 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:37:55.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:55 vm03 bash[23382]: audit 2026-03-10T07:37:53.393886+0000 mgr.y (mgr.24407) 491 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:37:55.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:55 vm03 bash[23382]: audit 2026-03-10T07:37:54.628599+0000 mon.a (mon.0) 2932 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:37:55.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:55 vm03 bash[23382]: audit 2026-03-10T07:37:54.631077+0000 mon.c (mon.2) 336 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:37:55.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:55 vm03 bash[23382]: cluster 2026-03-10T07:37:55.026371+0000 mon.a (mon.0) 2933 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:37:56.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:56 vm00 bash[28005]: cluster 2026-03-10T07:37:54.712893+0000 mgr.y (mgr.24407) 492 : cluster [DBG] pgmap v819: 300 pgs: 12 creating+peering, 288 active+clean; 455 KiB data, 991 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:37:56.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:56 vm00 bash[28005]: audit 2026-03-10T07:37:55.099652+0000 mon.a (mon.0) 2934 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]': finished
2026-03-10T07:37:56.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:56 vm00 bash[28005]: cluster 2026-03-10T07:37:55.104827+0000 mon.a (mon.0) 2935 : cluster [DBG] osdmap e532: 8 total, 8 up, 8 in
2026-03-10T07:37:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:56 vm00 bash[20701]: cluster 2026-03-10T07:37:54.712893+0000 mgr.y (mgr.24407) 492 : cluster [DBG] pgmap v819: 300 pgs: 12 creating+peering, 288 active+clean; 455 KiB data, 991 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:37:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:56 vm00 bash[20701]: audit 2026-03-10T07:37:55.099652+0000 mon.a (mon.0) 2934 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]': finished
2026-03-10T07:37:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:56 vm00 bash[20701]: cluster 2026-03-10T07:37:55.104827+0000 mon.a (mon.0) 2935 : cluster [DBG] osdmap e532: 8 total, 8 up, 8 in
2026-03-10T07:37:56.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:56 vm03 bash[23382]: cluster 2026-03-10T07:37:54.712893+0000 mgr.y (mgr.24407) 492 : cluster [DBG] pgmap v819: 300 pgs: 12 creating+peering, 288 active+clean; 455 KiB data, 991 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:37:56.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:56 vm03 bash[23382]: audit 2026-03-10T07:37:55.099652+0000 mon.a (mon.0) 2934 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-107", "tierpool": "test-rados-api-vm00-59782-107-cache"}]': finished
2026-03-10T07:37:56.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:56 vm03 bash[23382]: cluster 2026-03-10T07:37:55.104827+0000 mon.a (mon.0) 2935 : cluster [DBG] osdmap e532: 8 total, 8 up, 8 in
2026-03-10T07:37:57.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:57 vm03 bash[23382]: cluster 2026-03-10T07:37:56.118802+0000 mon.a (mon.0) 2936 : cluster [DBG] osdmap e533: 8 total, 8 up, 8 in
2026-03-10T07:37:57.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:57 vm00 bash[28005]: cluster 2026-03-10T07:37:56.118802+0000 mon.a (mon.0) 2936 : cluster [DBG] osdmap e533: 8 total, 8 up, 8 in
2026-03-10T07:37:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:57 vm00 bash[20701]: cluster 2026-03-10T07:37:56.118802+0000 mon.a (mon.0) 2936 : cluster [DBG] osdmap e533: 8 total, 8 up, 8 in
2026-03-10T07:37:58.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:58 vm03 bash[23382]: cluster 2026-03-10T07:37:56.713233+0000 mgr.y (mgr.24407) 493 : cluster [DBG] pgmap v822: 268 pgs: 268 active+clean; 455 KiB data, 991 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:37:58.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:58 vm03 bash[23382]: cluster 2026-03-10T07:37:57.153685+0000 mon.a (mon.0) 2937 : cluster [DBG] osdmap e534: 8 total, 8 up, 8 in
2026-03-10T07:37:58.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:58 vm03 bash[23382]: audit 2026-03-10T07:37:57.173259+0000 mon.a (mon.0) 2938 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-109"}]: dispatch
2026-03-10T07:37:58.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:58 vm03 bash[23382]: audit 2026-03-10T07:37:57.174182+0000 mon.a (mon.0) 2939 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-109"}]: dispatch
2026-03-10T07:37:58.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:58 vm03 bash[23382]: audit 2026-03-10T07:37:57.174531+0000 mon.a (mon.0) 2940 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59782-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:37:58.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:58 vm00 bash[28005]: cluster 2026-03-10T07:37:56.713233+0000 mgr.y (mgr.24407) 493 : cluster [DBG] pgmap v822: 268 pgs: 268 active+clean; 455 KiB data, 991 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:37:58.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:58 vm00 bash[28005]: cluster 2026-03-10T07:37:57.153685+0000 mon.a (mon.0) 2937 : cluster [DBG] osdmap e534: 8 total, 8 up, 8 in
2026-03-10T07:37:58.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:58 vm00 bash[28005]: audit 2026-03-10T07:37:57.173259+0000 mon.a (mon.0) 2938 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-109"}]: dispatch
2026-03-10T07:37:58.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:58 vm00 bash[28005]: audit 2026-03-10T07:37:57.174182+0000 mon.a (mon.0) 2939 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-109"}]: dispatch
2026-03-10T07:37:58.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:58 vm00 bash[28005]: audit 2026-03-10T07:37:57.174531+0000 mon.a (mon.0) 2940 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59782-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:37:58.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:58 vm00 bash[20701]: cluster 2026-03-10T07:37:56.713233+0000 mgr.y (mgr.24407) 493 : cluster [DBG] pgmap v822: 268 pgs: 268 active+clean; 455 KiB data, 991 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:37:58.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:58 vm00 bash[20701]: cluster 2026-03-10T07:37:57.153685+0000 mon.a (mon.0) 2937 : cluster [DBG] osdmap e534: 8 total, 8 up, 8 in
2026-03-10T07:37:58.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:58 vm00 bash[20701]: audit 2026-03-10T07:37:57.173259+0000 mon.a (mon.0) 2938 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-109"}]: dispatch
2026-03-10T07:37:58.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:58 vm00 bash[20701]: audit 2026-03-10T07:37:57.174182+0000 mon.a (mon.0) 2939 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-109"}]: dispatch
2026-03-10T07:37:58.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:58 vm00 bash[20701]: audit 2026-03-10T07:37:57.174531+0000 mon.a (mon.0) 2940 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59782-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:37:59.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:59 vm03 bash[23382]: audit 2026-03-10T07:37:58.145487+0000 mon.a (mon.0) 2941 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59782-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:37:59.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:59 vm03 bash[23382]: cluster 2026-03-10T07:37:58.152108+0000 mon.a (mon.0) 2942 : cluster [DBG] osdmap e535: 8 total, 8 up, 8 in
2026-03-10T07:37:59.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:59 vm03 bash[23382]: audit 2026-03-10T07:37:58.154050+0000 mon.a (mon.0) 2943 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59782-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59782-109"}]: dispatch
2026-03-10T07:37:59.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:37:59 vm03 bash[23382]: cluster 2026-03-10T07:37:59.158835+0000 mon.a (mon.0) 2944 : cluster [DBG] osdmap e536: 8 total, 8 up, 8 in
2026-03-10T07:37:59.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:59 vm00 bash[28005]: audit 2026-03-10T07:37:58.145487+0000 mon.a (mon.0) 2941 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59782-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:37:59.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:59 vm00 bash[28005]: cluster 2026-03-10T07:37:58.152108+0000 mon.a (mon.0) 2942 : cluster [DBG] osdmap e535: 8 total, 8 up, 8 in
2026-03-10T07:37:59.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:59 vm00 bash[28005]: audit 2026-03-10T07:37:58.154050+0000 mon.a (mon.0) 2943 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59782-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59782-109"}]: dispatch
2026-03-10T07:37:59.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:37:59 vm00 bash[28005]: cluster 2026-03-10T07:37:59.158835+0000 mon.a (mon.0) 2944 : cluster [DBG] osdmap e536: 8 total, 8 up, 8 in
2026-03-10T07:37:59.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:59 vm00 bash[20701]: audit 2026-03-10T07:37:58.145487+0000 mon.a (mon.0) 2941 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59782-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:37:59.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:59 vm00 bash[20701]: cluster 2026-03-10T07:37:58.152108+0000 mon.a (mon.0) 2942 : cluster [DBG] osdmap e535: 8 total, 8 up, 8 in
2026-03-10T07:37:59.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:59 vm00 bash[20701]: audit 2026-03-10T07:37:58.154050+0000 mon.a (mon.0) 2943 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59782-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59782-109"}]: dispatch
2026-03-10T07:37:59.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:37:59 vm00 bash[20701]: cluster 2026-03-10T07:37:59.158835+0000 mon.a (mon.0) 2944 : cluster [DBG] osdmap e536: 8 total, 8 up, 8 in
2026-03-10T07:38:00.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:00 vm03 bash[23382]: cluster 2026-03-10T07:37:58.713552+0000 mgr.y (mgr.24407) 494 : cluster [DBG] pgmap v825: 236 pgs: 236 active+clean; 455 KiB data, 991 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:38:00.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:00 vm00 bash[28005]: cluster 2026-03-10T07:37:58.713552+0000 mgr.y (mgr.24407) 494 : cluster [DBG] pgmap v825: 236 pgs: 236 active+clean; 455 KiB data, 991 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:38:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:00 vm00 bash[20701]: cluster 2026-03-10T07:37:58.713552+0000 mgr.y (mgr.24407) 494 : cluster [DBG] pgmap v825: 236 pgs: 236 active+clean; 455 KiB data, 991 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:38:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:01 vm00 bash[28005]: audit 2026-03-10T07:38:00.177023+0000 mon.a (mon.0) 2945 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59782-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59782-109"}]': finished
2026-03-10T07:38:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:01 vm00 bash[28005]: cluster 2026-03-10T07:38:00.186421+0000 mon.a (mon.0) 2946 : cluster [DBG] osdmap e537: 8 total, 8 up, 8 in
2026-03-10T07:38:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:01 vm00 bash[28005]: cluster 2026-03-10T07:38:01.186308+0000 mon.a (mon.0) 2947 : cluster [DBG] osdmap e538: 8 total, 8 up, 8 in
2026-03-10T07:38:01.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:38:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:38:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:38:01.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:01 vm00 bash[20701]: audit 2026-03-10T07:38:00.177023+0000 mon.a (mon.0) 2945 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59782-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59782-109"}]': finished
2026-03-10T07:38:01.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:01 vm00 bash[20701]: cluster 2026-03-10T07:38:00.186421+0000 mon.a (mon.0) 2946 : cluster [DBG] osdmap e537: 8 total, 8 up, 8 in
2026-03-10T07:38:01.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:01 vm00 bash[20701]: cluster 2026-03-10T07:38:01.186308+0000 mon.a (mon.0) 2947 : cluster [DBG] osdmap e538: 8 total, 8 up, 8 in
2026-03-10T07:38:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:01 vm03 bash[23382]: audit 2026-03-10T07:38:00.177023+0000 mon.a (mon.0) 2945 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59782-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59782-109"}]': finished
2026-03-10T07:38:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:01 vm03 bash[23382]: cluster 2026-03-10T07:38:00.186421+0000 mon.a (mon.0) 2946 : cluster [DBG] osdmap e537: 8 total, 8 up, 8 in
2026-03-10T07:38:01.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:01 vm03 bash[23382]: cluster 2026-03-10T07:38:01.186308+0000 mon.a (mon.0) 2947 : cluster [DBG] osdmap e538: 8 total, 8 up, 8 in
2026-03-10T07:38:02.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:02 vm03 bash[23382]: cluster 2026-03-10T07:38:00.713917+0000 mgr.y (mgr.24407) 495 : cluster [DBG] pgmap v828: 244 pgs: 8 unknown, 236 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:38:02.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:02 vm03 bash[23382]: cluster 2026-03-10T07:38:01.209693+0000 mon.a (mon.0) 2948 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:38:02.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:02 vm03 bash[23382]: audit 2026-03-10T07:38:01.210650+0000 mon.a (mon.0) 2949 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:38:02.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:02 vm03 bash[23382]: audit 2026-03-10T07:38:02.189194+0000 mon.a (mon.0) 2950 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:38:02.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:02 vm03 bash[23382]: cluster 2026-03-10T07:38:02.212038+0000 mon.a (mon.0) 2951 : cluster [DBG] osdmap e539: 8 total, 8 up, 8 in
2026-03-10T07:38:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:02 vm00 bash[28005]: cluster 2026-03-10T07:38:00.713917+0000 mgr.y (mgr.24407) 495 : cluster [DBG] pgmap v828: 244 pgs: 8 unknown, 236 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:38:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:02 vm00 bash[28005]: cluster 2026-03-10T07:38:01.209693+0000 mon.a (mon.0) 2948 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:38:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:02 vm00 bash[28005]: audit 2026-03-10T07:38:01.210650+0000 mon.a (mon.0) 2949 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:38:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:02 vm00 bash[28005]: audit 2026-03-10T07:38:02.189194+0000 mon.a (mon.0) 2950 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:38:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:02 vm00 bash[28005]: cluster 2026-03-10T07:38:02.212038+0000 mon.a (mon.0) 2951 : cluster [DBG] osdmap e539: 8 total, 8 up, 8 in
2026-03-10T07:38:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:02 vm00 bash[20701]: cluster 2026-03-10T07:38:00.713917+0000 mgr.y (mgr.24407) 495 : cluster [DBG] pgmap v828: 244 pgs: 8 unknown, 236 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:38:02.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:02 vm00 bash[20701]: cluster 2026-03-10T07:38:01.209693+0000 mon.a (mon.0) 2948 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:38:02.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:02 vm00 bash[20701]: audit 2026-03-10T07:38:01.210650+0000 mon.a (mon.0) 2949 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:38:02.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:02 vm00 bash[20701]: audit 2026-03-10T07:38:02.189194+0000 mon.a (mon.0) 2950 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:38:02.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:02 vm00 bash[20701]: cluster 2026-03-10T07:38:02.212038+0000 mon.a (mon.0) 2951 : cluster [DBG] osdmap e539: 8 total, 8 up, 8 in
2026-03-10T07:38:03.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:38:03 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:38:03.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:03 vm03 bash[23382]: audit 2026-03-10T07:38:02.230913+0000 mon.a (mon.0) 2952 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-109", "tierpool": "test-rados-api-vm00-59782-109-cache"}]: dispatch
2026-03-10T07:38:03.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:03 vm03 bash[23382]: audit 2026-03-10T07:38:03.194068+0000 mon.a (mon.0) 2953 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-109", "tierpool": "test-rados-api-vm00-59782-109-cache"}]': finished
2026-03-10T07:38:03.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:03 vm03 bash[23382]: cluster 2026-03-10T07:38:03.197125+0000 mon.a (mon.0) 2954 : cluster [DBG] osdmap e540: 8 total, 8 up, 8 in
2026-03-10T07:38:03.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:03 vm03 bash[23382]: audit 2026-03-10T07:38:03.209224+0000 mon.a (mon.0) 2955 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-109", "overlaypool": "test-rados-api-vm00-59782-109-cache"}]: dispatch
2026-03-10T07:38:03.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:03 vm00 bash[28005]: audit 2026-03-10T07:38:02.230913+0000 mon.a (mon.0) 2952 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-109", "tierpool": "test-rados-api-vm00-59782-109-cache"}]: dispatch
2026-03-10T07:38:03.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:03 vm00 bash[28005]: audit 2026-03-10T07:38:03.194068+0000 mon.a (mon.0) 2953 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-109", "tierpool": "test-rados-api-vm00-59782-109-cache"}]': finished
2026-03-10T07:38:03.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:03 vm00 bash[28005]: cluster 2026-03-10T07:38:03.197125+0000 mon.a (mon.0) 2954 : cluster [DBG] osdmap e540: 8 total, 8 up, 8 in
2026-03-10T07:38:03.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:03 vm00 bash[28005]: audit 2026-03-10T07:38:03.209224+0000 mon.a (mon.0) 2955 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-109", "overlaypool": "test-rados-api-vm00-59782-109-cache"}]: dispatch
2026-03-10T07:38:03.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:03 vm00 bash[20701]: audit 2026-03-10T07:38:02.230913+0000 mon.a (mon.0) 2952 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-109", "tierpool": "test-rados-api-vm00-59782-109-cache"}]: dispatch
2026-03-10T07:38:03.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:03 vm00 bash[20701]: audit 2026-03-10T07:38:03.194068+0000 mon.a (mon.0) 2953 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-109", "tierpool": "test-rados-api-vm00-59782-109-cache"}]': finished
2026-03-10T07:38:03.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:03 vm00 bash[20701]: cluster 2026-03-10T07:38:03.197125+0000 mon.a (mon.0) 2954 : cluster [DBG] osdmap e540: 8 total, 8 up, 8 in
2026-03-10T07:38:03.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:03 vm00 bash[20701]: audit 2026-03-10T07:38:03.209224+0000 mon.a (mon.0) 2955 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-109", "overlaypool": "test-rados-api-vm00-59782-109-cache"}]: dispatch
2026-03-10T07:38:04.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:04 vm03 bash[23382]: cluster 2026-03-10T07:38:02.714284+0000 mgr.y (mgr.24407) 496 : cluster [DBG] pgmap v831: 276 pgs: 40 unknown, 236 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:38:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:04 vm03 bash[23382]: audit 2026-03-10T07:38:04.197896+0000 mon.a (mon.0) 2956 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-109", "overlaypool": "test-rados-api-vm00-59782-109-cache"}]': finished
2026-03-10T07:38:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:04 vm03 bash[23382]: cluster 2026-03-10T07:38:04.208334+0000 mon.a (mon.0) 2957 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in
2026-03-10T07:38:04.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:04 vm03 bash[23382]: audit 2026-03-10T07:38:04.209648+0000 mon.a (mon.0) 2958 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-109-cache", "mode": "writeback"}]: dispatch
2026-03-10T07:38:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:04 vm00 bash[28005]: cluster 2026-03-10T07:38:02.714284+0000 mgr.y (mgr.24407) 496 : cluster [DBG] pgmap v831: 276 pgs: 40 unknown, 236 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:38:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:04 vm00 bash[28005]: audit 2026-03-10T07:38:04.197896+0000 mon.a (mon.0) 2956 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-109", "overlaypool": "test-rados-api-vm00-59782-109-cache"}]': finished 2026-03-10T07:38:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:04 vm00 bash[28005]: audit 2026-03-10T07:38:04.197896+0000 mon.a (mon.0) 2956 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-109", "overlaypool": "test-rados-api-vm00-59782-109-cache"}]': finished 2026-03-10T07:38:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:04 vm00 bash[28005]: cluster 2026-03-10T07:38:04.208334+0000 mon.a (mon.0) 2957 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-10T07:38:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:04 vm00 bash[28005]: cluster 2026-03-10T07:38:04.208334+0000 mon.a (mon.0) 2957 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-10T07:38:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:04 vm00 bash[28005]: audit 2026-03-10T07:38:04.209648+0000 mon.a (mon.0) 2958 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-109-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:38:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:04 vm00 bash[28005]: audit 2026-03-10T07:38:04.209648+0000 mon.a (mon.0) 2958 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-109-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:38:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:04 vm00 bash[20701]: cluster 2026-03-10T07:38:02.714284+0000 mgr.y (mgr.24407) 496 : cluster [DBG] pgmap v831: 276 pgs: 40 unknown, 236 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:38:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:04 vm00 bash[20701]: cluster 2026-03-10T07:38:02.714284+0000 mgr.y (mgr.24407) 496 : cluster [DBG] pgmap v831: 276 pgs: 40 unknown, 236 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:38:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:04 vm00 bash[20701]: audit 2026-03-10T07:38:04.197896+0000 mon.a (mon.0) 2956 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-109", "overlaypool": "test-rados-api-vm00-59782-109-cache"}]': finished 2026-03-10T07:38:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:04 vm00 bash[20701]: audit 2026-03-10T07:38:04.197896+0000 mon.a (mon.0) 2956 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-109", "overlaypool": "test-rados-api-vm00-59782-109-cache"}]': finished 2026-03-10T07:38:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:04 vm00 bash[20701]: cluster 2026-03-10T07:38:04.208334+0000 mon.a (mon.0) 2957 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-10T07:38:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:04 vm00 bash[20701]: cluster 2026-03-10T07:38:04.208334+0000 mon.a (mon.0) 2957 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-10T07:38:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:04 vm00 bash[20701]: audit 2026-03-10T07:38:04.209648+0000 mon.a (mon.0) 2958 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-109-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:38:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:04 vm00 bash[20701]: audit 2026-03-10T07:38:04.209648+0000 mon.a (mon.0) 2958 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-109-cache", "mode": "writeback"}]: dispatch 2026-03-10T07:38:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:05 vm03 bash[23382]: audit 2026-03-10T07:38:03.399548+0000 mgr.y (mgr.24407) 497 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:38:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:05 vm03 bash[23382]: audit 2026-03-10T07:38:03.399548+0000 mgr.y (mgr.24407) 497 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:38:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:05 vm03 bash[23382]: cluster 2026-03-10T07:38:05.197922+0000 mon.a (mon.0) 2959 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:38:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:05 vm03 bash[23382]: cluster 2026-03-10T07:38:05.197922+0000 mon.a (mon.0) 2959 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:38:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:05 vm03 bash[23382]: audit 2026-03-10T07:38:05.201877+0000 mon.a (mon.0) 2960 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-109-cache", "mode": "writeback"}]': finished 2026-03-10T07:38:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:05 vm03 bash[23382]: audit 2026-03-10T07:38:05.201877+0000 mon.a (mon.0) 2960 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-109-cache", "mode": "writeback"}]': finished 2026-03-10T07:38:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:05 vm03 bash[23382]: cluster 2026-03-10T07:38:05.206526+0000 mon.a (mon.0) 2961 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-10T07:38:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:05 vm03 bash[23382]: cluster 2026-03-10T07:38:05.206526+0000 mon.a (mon.0) 2961 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-10T07:38:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:05 vm03 bash[23382]: audit 2026-03-10T07:38:05.209425+0000 mon.a (mon.0) 2962 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:38:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:05 vm03 bash[23382]: audit 2026-03-10T07:38:05.209425+0000 mon.a (mon.0) 2962 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:38:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:05 vm00 bash[28005]: audit 2026-03-10T07:38:03.399548+0000 mgr.y (mgr.24407) 497 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:38:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:05 vm00 bash[28005]: audit 2026-03-10T07:38:03.399548+0000 mgr.y (mgr.24407) 497 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:38:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:05 vm00 bash[28005]: cluster 2026-03-10T07:38:05.197922+0000 mon.a (mon.0) 2959 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:38:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:05 vm00 bash[28005]: cluster 2026-03-10T07:38:05.197922+0000 mon.a (mon.0) 2959 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:38:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:05 vm00 bash[28005]: audit 2026-03-10T07:38:05.201877+0000 mon.a (mon.0) 2960 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-109-cache", "mode": "writeback"}]': finished 2026-03-10T07:38:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:05 vm00 bash[28005]: audit 2026-03-10T07:38:05.201877+0000 mon.a (mon.0) 2960 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-109-cache", "mode": "writeback"}]': finished 2026-03-10T07:38:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:05 vm00 bash[28005]: cluster 2026-03-10T07:38:05.206526+0000 mon.a (mon.0) 2961 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-10T07:38:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:05 vm00 bash[28005]: cluster 2026-03-10T07:38:05.206526+0000 mon.a (mon.0) 2961 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-10T07:38:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:05 vm00 bash[28005]: audit 2026-03-10T07:38:05.209425+0000 mon.a (mon.0) 2962 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:38:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:05 vm00 bash[28005]: audit 2026-03-10T07:38:05.209425+0000 mon.a (mon.0) 2962 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:38:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:05 vm00 bash[20701]: audit 2026-03-10T07:38:03.399548+0000 mgr.y (mgr.24407) 497 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:38:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:05 vm00 bash[20701]: audit 2026-03-10T07:38:03.399548+0000 mgr.y (mgr.24407) 497 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:38:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:05 vm00 bash[20701]: cluster 2026-03-10T07:38:05.197922+0000 mon.a (mon.0) 2959 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:38:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:05 vm00 bash[20701]: cluster 2026-03-10T07:38:05.197922+0000 mon.a (mon.0) 2959 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:38:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:05 vm00 bash[20701]: audit 2026-03-10T07:38:05.201877+0000 mon.a (mon.0) 2960 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-109-cache", "mode": "writeback"}]': finished 2026-03-10T07:38:05.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:05 vm00 bash[20701]: audit 2026-03-10T07:38:05.201877+0000 mon.a (mon.0) 2960 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-109-cache", "mode": "writeback"}]': finished 2026-03-10T07:38:05.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:05 vm00 bash[20701]: cluster 2026-03-10T07:38:05.206526+0000 mon.a (mon.0) 2961 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-10T07:38:05.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:05 vm00 bash[20701]: cluster 2026-03-10T07:38:05.206526+0000 mon.a (mon.0) 2961 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-10T07:38:05.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:05 vm00 bash[20701]: audit 2026-03-10T07:38:05.209425+0000 mon.a (mon.0) 2962 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:38:05.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:05 vm00 bash[20701]: audit 2026-03-10T07:38:05.209425+0000 mon.a (mon.0) 2962 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:38:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:06 vm03 bash[23382]: cluster 2026-03-10T07:38:04.714771+0000 mgr.y (mgr.24407) 498 : cluster [DBG] pgmap v834: 276 pgs: 15 unknown, 261 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:38:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:06 vm03 bash[23382]: cluster 2026-03-10T07:38:04.714771+0000 mgr.y (mgr.24407) 498 : cluster [DBG] pgmap v834: 276 pgs: 15 unknown, 261 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:38:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:06 vm03 bash[23382]: audit 2026-03-10T07:38:06.205862+0000 mon.a (mon.0) 2963 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:38:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:06 vm03 bash[23382]: audit 2026-03-10T07:38:06.205862+0000 mon.a (mon.0) 2963 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:38:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:06 vm03 bash[23382]: cluster 2026-03-10T07:38:06.210432+0000 mon.a (mon.0) 2964 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in 2026-03-10T07:38:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:06 vm03 bash[23382]: cluster 2026-03-10T07:38:06.210432+0000 mon.a (mon.0) 2964 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in 2026-03-10T07:38:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:06 vm03 bash[23382]: audit 2026-03-10T07:38:06.210871+0000 mon.a (mon.0) 2965 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:38:06.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:06 vm03 bash[23382]: audit 2026-03-10T07:38:06.210871+0000 mon.a (mon.0) 2965 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:38:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:06 vm00 bash[28005]: cluster 2026-03-10T07:38:04.714771+0000 mgr.y (mgr.24407) 498 : cluster [DBG] pgmap v834: 276 pgs: 15 unknown, 261 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:38:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:06 vm00 bash[28005]: cluster 2026-03-10T07:38:04.714771+0000 mgr.y (mgr.24407) 498 : cluster [DBG] pgmap v834: 276 pgs: 15 unknown, 261 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:38:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:06 vm00 bash[28005]: audit 2026-03-10T07:38:06.205862+0000 mon.a (mon.0) 2963 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:38:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:06 vm00 bash[28005]: audit 2026-03-10T07:38:06.205862+0000 mon.a (mon.0) 2963 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:38:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:06 vm00 bash[28005]: cluster 2026-03-10T07:38:06.210432+0000 mon.a (mon.0) 2964 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in 2026-03-10T07:38:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:06 vm00 bash[28005]: cluster 2026-03-10T07:38:06.210432+0000 mon.a (mon.0) 2964 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in 2026-03-10T07:38:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:06 vm00 bash[28005]: audit 2026-03-10T07:38:06.210871+0000 mon.a (mon.0) 2965 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:38:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:06 vm00 bash[28005]: audit 2026-03-10T07:38:06.210871+0000 mon.a (mon.0) 2965 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:38:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:06 vm00 bash[20701]: cluster 2026-03-10T07:38:04.714771+0000 mgr.y (mgr.24407) 498 : cluster [DBG] pgmap v834: 276 pgs: 15 unknown, 261 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:38:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:06 vm00 bash[20701]: cluster 2026-03-10T07:38:04.714771+0000 mgr.y (mgr.24407) 498 : cluster [DBG] pgmap v834: 276 pgs: 15 unknown, 261 active+clean; 455 KiB data, 992 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:38:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:06 vm00 bash[20701]: audit 2026-03-10T07:38:06.205862+0000 mon.a (mon.0) 2963 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:38:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:06 vm00 bash[20701]: audit 2026-03-10T07:38:06.205862+0000 mon.a (mon.0) 2963 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:38:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:06 vm00 bash[20701]: cluster 2026-03-10T07:38:06.210432+0000 mon.a (mon.0) 2964 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in 2026-03-10T07:38:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:06 vm00 bash[20701]: cluster 2026-03-10T07:38:06.210432+0000 mon.a (mon.0) 2964 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in 2026-03-10T07:38:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:06 vm00 bash[20701]: audit 2026-03-10T07:38:06.210871+0000 mon.a (mon.0) 2965 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:38:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:06 vm00 bash[20701]: audit 2026-03-10T07:38:06.210871+0000 mon.a (mon.0) 2965 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:38:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:08 vm03 bash[23382]: cluster 2026-03-10T07:38:06.715191+0000 mgr.y (mgr.24407) 499 : cluster [DBG] pgmap v837: 276 pgs: 276 active+clean; 455 KiB data, 996 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:38:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:08 vm03 bash[23382]: cluster 2026-03-10T07:38:06.715191+0000 mgr.y (mgr.24407) 499 : cluster [DBG] pgmap v837: 276 pgs: 276 active+clean; 455 KiB data, 996 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:38:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:08 vm03 bash[23382]: audit 2026-03-10T07:38:07.210435+0000 mon.a (mon.0) 2966 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:38:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:08 vm03 bash[23382]: audit 2026-03-10T07:38:07.210435+0000 mon.a (mon.0) 2966 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:38:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:08 vm03 bash[23382]: cluster 2026-03-10T07:38:07.222251+0000 mon.a (mon.0) 2967 : cluster [DBG] osdmap e544: 8 total, 8 up, 8 in 2026-03-10T07:38:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:08 vm03 bash[23382]: cluster 2026-03-10T07:38:07.222251+0000 mon.a (mon.0) 2967 : cluster [DBG] osdmap e544: 8 total, 8 up, 8 in 2026-03-10T07:38:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:08 vm03 bash[23382]: audit 2026-03-10T07:38:07.223280+0000 mon.a (mon.0) 2968 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:38:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:08 vm03 bash[23382]: audit 2026-03-10T07:38:07.223280+0000 mon.a (mon.0) 2968 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:38:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:08 vm03 bash[23382]: audit 2026-03-10T07:38:07.593030+0000 mon.c (mon.2) 337 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:38:08.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:08 vm03 bash[23382]: audit 2026-03-10T07:38:07.593030+0000 mon.c (mon.2) 337 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:38:08.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:08 vm00 bash[28005]: cluster 2026-03-10T07:38:06.715191+0000 mgr.y (mgr.24407) 499 : cluster [DBG] pgmap v837: 276 pgs: 276 active+clean; 455 KiB data, 996 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:38:08.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:08 vm00 bash[28005]: cluster 2026-03-10T07:38:06.715191+0000 mgr.y (mgr.24407) 499 : cluster [DBG] pgmap v837: 276 pgs: 276 active+clean; 455 KiB data, 996 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:38:08.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:08 vm00 bash[28005]: audit 2026-03-10T07:38:07.210435+0000 mon.a (mon.0) 2966 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:38:08.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:08 vm00 bash[28005]: audit 2026-03-10T07:38:07.210435+0000 mon.a (mon.0) 2966 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:38:08.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:08 vm00 bash[28005]: cluster 2026-03-10T07:38:07.222251+0000 mon.a (mon.0) 2967 : cluster [DBG] osdmap e544: 8 total, 8 up, 8 in 2026-03-10T07:38:08.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:08 vm00 bash[28005]: cluster 2026-03-10T07:38:07.222251+0000 mon.a (mon.0) 2967 : cluster [DBG] osdmap e544: 8 total, 8 up, 8 in 2026-03-10T07:38:08.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:08 vm00 bash[28005]: audit 2026-03-10T07:38:07.223280+0000 mon.a (mon.0) 2968 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:38:08.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:08 vm00 bash[28005]: audit 2026-03-10T07:38:07.223280+0000 mon.a (mon.0) 2968 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:38:08.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:08 vm00 bash[28005]: audit 2026-03-10T07:38:07.593030+0000 mon.c (mon.2) 337 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:38:08.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:08 vm00 bash[28005]: audit 2026-03-10T07:38:07.593030+0000 mon.c (mon.2) 337 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:38:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:08 vm00 bash[20701]: cluster 2026-03-10T07:38:06.715191+0000 mgr.y (mgr.24407) 499 : cluster [DBG] pgmap v837: 276 pgs: 276 active+clean; 455 KiB data, 996 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:38:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:08 vm00 bash[20701]: cluster 2026-03-10T07:38:06.715191+0000 mgr.y (mgr.24407) 499 : cluster [DBG] pgmap v837: 276 pgs: 276 active+clean; 455 KiB data, 996 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:38:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:08 vm00 bash[20701]: audit 2026-03-10T07:38:07.210435+0000 mon.a (mon.0) 2966 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:38:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:08 vm00 bash[20701]: audit 2026-03-10T07:38:07.210435+0000 mon.a (mon.0) 2966 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:38:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:08 vm00 bash[20701]: cluster 2026-03-10T07:38:07.222251+0000 mon.a (mon.0) 2967 : cluster [DBG] osdmap e544: 8 total, 8 up, 8 in 2026-03-10T07:38:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:08 vm00 bash[20701]: cluster 2026-03-10T07:38:07.222251+0000 mon.a (mon.0) 2967 : cluster [DBG] osdmap e544: 8 total, 8 up, 8 in 2026-03-10T07:38:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:08 vm00 bash[20701]: audit 2026-03-10T07:38:07.223280+0000 mon.a (mon.0) 2968 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:38:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:08 vm00 bash[20701]: audit 2026-03-10T07:38:07.223280+0000 mon.a (mon.0) 2968 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:38:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:08 vm00 bash[20701]: audit 2026-03-10T07:38:07.593030+0000 mon.c (mon.2) 337 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:38:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:08 vm00 bash[20701]: audit 2026-03-10T07:38:07.593030+0000 mon.c (mon.2) 337 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:38:09.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:09 vm03 bash[23382]: cluster 2026-03-10T07:38:08.210617+0000 mon.a (mon.0) 2969 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:38:09.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:09 vm03 bash[23382]: cluster 2026-03-10T07:38:08.210617+0000 mon.a (mon.0) 2969 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:38:09.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:09 vm03 bash[23382]: audit 2026-03-10T07:38:08.214422+0000 mon.a (mon.0) 2970 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T07:38:09.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:09 vm03 bash[23382]: audit 2026-03-10T07:38:08.214422+0000 mon.a (mon.0) 2970 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T07:38:09.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:09 vm03 bash[23382]: cluster 2026-03-10T07:38:08.223015+0000 mon.a (mon.0) 2971 : cluster [DBG] osdmap e545: 8 total, 8 up, 8 in 2026-03-10T07:38:09.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:09 vm03 bash[23382]: cluster 2026-03-10T07:38:08.223015+0000 mon.a (mon.0) 2971 : cluster [DBG] osdmap e545: 8 total, 8 up, 8 in 2026-03-10T07:38:09.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:09 vm03 bash[23382]: audit 2026-03-10T07:38:08.227050+0000 mon.a (mon.0) 2972 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-10T07:38:09.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:09 vm03 bash[23382]: audit 2026-03-10T07:38:08.227050+0000 mon.a (mon.0) 2972 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-10T07:38:09.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:09 vm03 bash[23382]: audit 2026-03-10T07:38:09.217814+0000 mon.a (mon.0) 2973 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-10T07:38:09.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:09 vm03 bash[23382]: audit 2026-03-10T07:38:09.217814+0000 mon.a (mon.0) 2973 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-10T07:38:09.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:09 vm03 bash[23382]: cluster 2026-03-10T07:38:09.220555+0000 mon.a (mon.0) 2974 : cluster [DBG] osdmap e546: 8 total, 8 up, 8 in 2026-03-10T07:38:09.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:09 vm03 bash[23382]: cluster 2026-03-10T07:38:09.220555+0000 mon.a (mon.0) 2974 : cluster [DBG] osdmap e546: 8 total, 8 up, 8 in 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:09 vm00 bash[28005]: cluster 2026-03-10T07:38:08.210617+0000 mon.a (mon.0) 2969 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:09 vm00 bash[28005]: cluster 2026-03-10T07:38:08.210617+0000 mon.a (mon.0) 2969 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:09 vm00 bash[28005]: audit 2026-03-10T07:38:08.214422+0000 mon.a (mon.0) 2970 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:09 vm00 bash[28005]: audit 2026-03-10T07:38:08.214422+0000 mon.a (mon.0) 2970 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:09 vm00 bash[28005]: cluster 2026-03-10T07:38:08.223015+0000 mon.a (mon.0) 2971 : cluster [DBG] osdmap e545: 8 total, 8 up, 8 in 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:09 vm00 bash[28005]: cluster 2026-03-10T07:38:08.223015+0000 mon.a (mon.0) 2971 : cluster [DBG] osdmap e545: 8 total, 8 up, 8 in 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:09 vm00 bash[28005]: audit 2026-03-10T07:38:08.227050+0000 mon.a (mon.0) 2972 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:09 vm00 bash[28005]: audit 2026-03-10T07:38:08.227050+0000 mon.a (mon.0) 2972 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:09 vm00 bash[28005]: audit 2026-03-10T07:38:09.217814+0000 mon.a (mon.0) 2973 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:09 vm00 bash[28005]: audit 2026-03-10T07:38:09.217814+0000 mon.a (mon.0) 2973 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:09 vm00 bash[28005]: cluster 2026-03-10T07:38:09.220555+0000 mon.a (mon.0) 2974 : cluster [DBG] osdmap e546: 8 total, 8 up, 8 in 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:09 vm00 bash[28005]: cluster 2026-03-10T07:38:09.220555+0000 mon.a (mon.0) 2974 : cluster [DBG] osdmap e546: 8 total, 8 up, 8 in 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:09 vm00 bash[20701]: cluster 2026-03-10T07:38:08.210617+0000 mon.a (mon.0) 2969 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:09 vm00 bash[20701]: cluster 2026-03-10T07:38:08.210617+0000 mon.a (mon.0) 2969 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:09 vm00 bash[20701]: audit 2026-03-10T07:38:08.214422+0000 mon.a (mon.0) 2970 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:09 vm00 bash[20701]: audit 2026-03-10T07:38:08.214422+0000 mon.a (mon.0) 2970 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:09 vm00 bash[20701]: cluster 2026-03-10T07:38:08.223015+0000 mon.a (mon.0) 2971 : cluster [DBG] osdmap e545: 8 total, 8 up, 8 in 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:09 vm00 bash[20701]: cluster 2026-03-10T07:38:08.223015+0000 mon.a (mon.0) 2971 : cluster [DBG] osdmap e545: 8 total, 8 up, 8 in 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:09 vm00 bash[20701]: audit 2026-03-10T07:38:08.227050+0000 mon.a (mon.0) 2972 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:09 vm00 bash[20701]: audit 2026-03-10T07:38:08.227050+0000 mon.a (mon.0) 2972 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:09 vm00 bash[20701]: audit 2026-03-10T07:38:09.217814+0000 mon.a (mon.0) 2973 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:09 vm00 bash[20701]: audit 2026-03-10T07:38:09.217814+0000 mon.a (mon.0) 2973 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:09 vm00 bash[20701]: cluster 2026-03-10T07:38:09.220555+0000 mon.a (mon.0) 2974 : cluster [DBG] osdmap e546: 8 total, 8 up, 8 in 2026-03-10T07:38:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:09 vm00 bash[20701]: cluster 2026-03-10T07:38:09.220555+0000 mon.a (mon.0) 2974 : cluster [DBG] osdmap e546: 8 total, 8 up, 8 in 2026-03-10T07:38:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:10 vm03 bash[23382]: cluster 2026-03-10T07:38:08.715588+0000 mgr.y (mgr.24407) 500 : cluster [DBG] pgmap v840: 276 pgs: 276 active+clean; 455 KiB data, 996 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:38:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:10 vm03 bash[23382]: cluster 2026-03-10T07:38:08.715588+0000 mgr.y (mgr.24407) 500 : cluster [DBG] pgmap v840: 276 pgs: 276 active+clean; 455 KiB data, 996 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:38:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:10 vm03 bash[23382]: audit 2026-03-10T07:38:09.263857+0000 mon.a (mon.0) 2975 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-109"}]: dispatch 2026-03-10T07:38:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:10 vm03 bash[23382]: audit 2026-03-10T07:38:09.263857+0000 mon.a (mon.0) 2975 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-109"}]: dispatch 2026-03-10T07:38:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:10 vm03 bash[23382]: audit 2026-03-10T07:38:09.671997+0000 mon.a (mon.0) 2976 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:10 vm03 bash[23382]: audit 2026-03-10T07:38:09.671997+0000 mon.a (mon.0) 2976 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:10 vm03 bash[23382]: audit 2026-03-10T07:38:09.675084+0000 mon.c (mon.2) 338 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:38:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:10 vm03 bash[23382]: audit 2026-03-10T07:38:09.675084+0000 mon.c (mon.2) 338 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:38:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:10 vm03 bash[23382]: audit 2026-03-10T07:38:10.139592+0000 mon.c (mon.2) 339 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.c", "id": [6, 0]}]: dispatch 2026-03-10T07:38:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:10 vm03 bash[23382]: audit 2026-03-10T07:38:10.139592+0000 mon.c (mon.2) 339 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.c", "id": [6, 0]}]: dispatch 2026-03-10T07:38:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:10 vm03 bash[23382]: audit 2026-03-10T07:38:10.140010+0000 mon.a (mon.0) 2977 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.c", "id": [6, 0]}]: dispatch 2026-03-10T07:38:10.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:10 vm03 bash[23382]: audit 2026-03-10T07:38:10.140010+0000 mon.a (mon.0) 2977 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.c", "id": [6, 0]}]: dispatch 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:10 vm00 bash[28005]: cluster 2026-03-10T07:38:08.715588+0000 mgr.y (mgr.24407) 500 : cluster [DBG] pgmap v840: 276 pgs: 276 active+clean; 455 KiB data, 996 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:10 vm00 bash[28005]: cluster 2026-03-10T07:38:08.715588+0000 mgr.y (mgr.24407) 500 : cluster [DBG] pgmap v840: 276 pgs: 276 active+clean; 455 KiB data, 996 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:10 vm00 bash[28005]: audit 2026-03-10T07:38:09.263857+0000 mon.a (mon.0) 2975 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-109"}]: dispatch 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:10 vm00 bash[28005]: audit 2026-03-10T07:38:09.263857+0000 mon.a (mon.0) 2975 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-109"}]: dispatch 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:10 vm00 bash[28005]: audit 2026-03-10T07:38:09.671997+0000 mon.a (mon.0) 2976 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:10 vm00 bash[28005]: audit 2026-03-10T07:38:09.671997+0000 mon.a (mon.0) 2976 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:10 vm00 bash[28005]: audit 2026-03-10T07:38:09.675084+0000 mon.c (mon.2) 338 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:10 vm00 bash[28005]: audit 2026-03-10T07:38:09.675084+0000 mon.c (mon.2) 338 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:10 vm00 bash[28005]: audit 2026-03-10T07:38:10.139592+0000 mon.c (mon.2) 339 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.c", "id": [6, 0]}]: dispatch 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:10 vm00 bash[28005]: audit 2026-03-10T07:38:10.139592+0000 mon.c (mon.2) 339 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.c", "id": [6, 0]}]: dispatch 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:10 vm00 bash[28005]: audit 2026-03-10T07:38:10.140010+0000 mon.a (mon.0) 2977 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.c", "id": [6, 0]}]: dispatch 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:10 vm00 bash[28005]: audit 2026-03-10T07:38:10.140010+0000 mon.a (mon.0) 2977 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.c", "id": [6, 0]}]: dispatch 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:10 vm00 bash[20701]: cluster 2026-03-10T07:38:08.715588+0000 mgr.y (mgr.24407) 500 : cluster [DBG] pgmap v840: 276 pgs: 276 active+clean; 455 KiB data, 996 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:10 vm00 bash[20701]: cluster 2026-03-10T07:38:08.715588+0000 mgr.y (mgr.24407) 500 : cluster [DBG] pgmap v840: 276 pgs: 276 active+clean; 455 KiB data, 996 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:10 vm00 bash[20701]: audit 2026-03-10T07:38:09.263857+0000 mon.a (mon.0) 2975 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-109"}]: dispatch 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:10 vm00 bash[20701]: audit 2026-03-10T07:38:09.263857+0000 mon.a (mon.0) 2975 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-109"}]: dispatch 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:10 vm00 bash[20701]: audit 2026-03-10T07:38:09.671997+0000 mon.a (mon.0) 2976 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:10 vm00 bash[20701]: audit 2026-03-10T07:38:09.671997+0000 mon.a (mon.0) 2976 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:10 vm00 bash[20701]: audit 2026-03-10T07:38:09.675084+0000 mon.c (mon.2) 338 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:10 vm00 bash[20701]: audit 2026-03-10T07:38:09.675084+0000 mon.c (mon.2) 338 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:10 vm00 bash[20701]: audit 2026-03-10T07:38:10.139592+0000 mon.c (mon.2) 339 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.c", "id": [6, 0]}]: dispatch 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:10 vm00 bash[20701]: audit 2026-03-10T07:38:10.139592+0000 mon.c (mon.2) 339 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.c", "id": [6, 0]}]: dispatch 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:10 vm00 bash[20701]: audit 2026-03-10T07:38:10.140010+0000 mon.a (mon.0) 2977 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.c", "id": [6, 0]}]: dispatch 2026-03-10T07:38:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:10 vm00 bash[20701]: audit 2026-03-10T07:38:10.140010+0000 mon.a (mon.0) 2977 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.c", "id": [6, 0]}]: dispatch 2026-03-10T07:38:11.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:38:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:38:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:38:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:11 vm00 bash[28005]: audit 2026-03-10T07:38:10.254429+0000 mon.a (mon.0) 2978 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-109"}]': finished 2026-03-10T07:38:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:11 vm00 bash[28005]: audit 2026-03-10T07:38:10.254429+0000 mon.a (mon.0) 2978 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-109"}]': finished 2026-03-10T07:38:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:11 vm00 bash[28005]: audit 2026-03-10T07:38:10.254499+0000 mon.a (mon.0) 2979 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.c", "id": [6, 0]}]': finished 2026-03-10T07:38:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:11 vm00 bash[28005]: audit 2026-03-10T07:38:10.254499+0000 mon.a (mon.0) 2979 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.c", "id": [6, 0]}]': finished 2026-03-10T07:38:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:11 vm00 bash[28005]: cluster 2026-03-10T07:38:10.258314+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-10T07:38:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:11 vm00 bash[28005]: cluster 2026-03-10T07:38:10.258314+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-10T07:38:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:11 vm00 bash[28005]: audit 2026-03-10T07:38:10.259213+0000 mon.a (mon.0) 2981 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-109", "tierpool": "test-rados-api-vm00-59782-109-cache"}]: dispatch 2026-03-10T07:38:11.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:11 vm00 bash[28005]: audit 2026-03-10T07:38:10.259213+0000 mon.a (mon.0) 2981 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-109", "tierpool": "test-rados-api-vm00-59782-109-cache"}]: dispatch 2026-03-10T07:38:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:11 vm00 bash[20701]: audit 2026-03-10T07:38:10.254429+0000 mon.a (mon.0) 2978 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-109"}]': finished 2026-03-10T07:38:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:11 vm00 bash[20701]: audit 2026-03-10T07:38:10.254429+0000 mon.a (mon.0) 2978 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-109"}]': finished 2026-03-10T07:38:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:11 vm00 bash[20701]: audit 2026-03-10T07:38:10.254499+0000 mon.a (mon.0) 2979 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.c", "id": [6, 0]}]': finished 2026-03-10T07:38:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:11 vm00 bash[20701]: audit 2026-03-10T07:38:10.254499+0000 mon.a (mon.0) 2979 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.c", "id": [6, 0]}]': finished 2026-03-10T07:38:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:11 vm00 bash[20701]: cluster 2026-03-10T07:38:10.258314+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-10T07:38:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:11 vm00 bash[20701]: cluster 2026-03-10T07:38:10.258314+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-10T07:38:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:11 vm00 bash[20701]: audit 2026-03-10T07:38:10.259213+0000 mon.a (mon.0) 2981 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-109", "tierpool": "test-rados-api-vm00-59782-109-cache"}]: dispatch 2026-03-10T07:38:11.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:11 vm00 bash[20701]: audit 2026-03-10T07:38:10.259213+0000 mon.a (mon.0) 2981 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-109", "tierpool": "test-rados-api-vm00-59782-109-cache"}]: dispatch 2026-03-10T07:38:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:11 vm03 bash[23382]: audit 2026-03-10T07:38:10.254429+0000 mon.a (mon.0) 2978 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-109"}]': finished 2026-03-10T07:38:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:11 vm03 bash[23382]: audit 2026-03-10T07:38:10.254429+0000 mon.a (mon.0) 2978 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-109"}]': finished 2026-03-10T07:38:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:11 vm03 bash[23382]: audit 2026-03-10T07:38:10.254499+0000 mon.a (mon.0) 2979 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.c", "id": [6, 0]}]': finished 2026-03-10T07:38:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:11 vm03 bash[23382]: audit 2026-03-10T07:38:10.254499+0000 mon.a (mon.0) 2979 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "314.c", "id": [6, 0]}]': finished 2026-03-10T07:38:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:11 vm03 bash[23382]: cluster 2026-03-10T07:38:10.258314+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-10T07:38:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:11 vm03 bash[23382]: cluster 2026-03-10T07:38:10.258314+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-10T07:38:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:11 vm03 bash[23382]: audit 2026-03-10T07:38:10.259213+0000 mon.a (mon.0) 2981 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-109", "tierpool": "test-rados-api-vm00-59782-109-cache"}]: dispatch 2026-03-10T07:38:11.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:11 vm03 bash[23382]: audit 2026-03-10T07:38:10.259213+0000 mon.a (mon.0) 2981 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-109", "tierpool": "test-rados-api-vm00-59782-109-cache"}]: dispatch 2026-03-10T07:38:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:12 vm00 bash[20701]: cluster 2026-03-10T07:38:10.715970+0000 mgr.y (mgr.24407) 501 : cluster [DBG] pgmap v843: 276 pgs: 276 active+clean; 455 KiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:38:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:12 vm00 bash[20701]: cluster 2026-03-10T07:38:10.715970+0000 mgr.y (mgr.24407) 501 : cluster [DBG] pgmap v843: 276 pgs: 276 active+clean; 455 KiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:38:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:12 vm00 bash[20701]: audit 2026-03-10T07:38:11.258234+0000 mon.a (mon.0) 2982 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-109", "tierpool": "test-rados-api-vm00-59782-109-cache"}]': finished 2026-03-10T07:38:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:12 vm00 bash[20701]: audit 2026-03-10T07:38:11.258234+0000 mon.a (mon.0) 2982 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-109", "tierpool": "test-rados-api-vm00-59782-109-cache"}]': finished 2026-03-10T07:38:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:12 vm00 bash[20701]: cluster 2026-03-10T07:38:11.262564+0000 mon.a (mon.0) 2983 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-10T07:38:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:12 vm00 bash[20701]: cluster 2026-03-10T07:38:11.262564+0000 mon.a (mon.0) 2983 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-10T07:38:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:12 vm00 bash[20701]: cluster 2026-03-10T07:38:12.272963+0000 mon.a (mon.0) 2984 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-10T07:38:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:12 vm00 bash[20701]: cluster 2026-03-10T07:38:12.272963+0000 mon.a (mon.0) 2984 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-10T07:38:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:12 vm00 bash[28005]: cluster 2026-03-10T07:38:10.715970+0000 mgr.y (mgr.24407) 501 : cluster [DBG] pgmap v843: 276 pgs: 276 active+clean; 455 KiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:38:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:12 vm00 bash[28005]: cluster 2026-03-10T07:38:10.715970+0000 mgr.y (mgr.24407) 501 : cluster [DBG] pgmap v843: 276 pgs: 276 active+clean; 455 KiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:38:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:12 vm00 bash[28005]: audit 2026-03-10T07:38:11.258234+0000 mon.a (mon.0) 2982 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-109", "tierpool": "test-rados-api-vm00-59782-109-cache"}]': finished 2026-03-10T07:38:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:12 vm00 bash[28005]: audit 2026-03-10T07:38:11.258234+0000 mon.a (mon.0) 2982 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-109", "tierpool": "test-rados-api-vm00-59782-109-cache"}]': finished 2026-03-10T07:38:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:12 vm00 bash[28005]: cluster 2026-03-10T07:38:11.262564+0000 mon.a (mon.0) 2983 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-10T07:38:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:12 vm00 bash[28005]: cluster 2026-03-10T07:38:11.262564+0000 mon.a (mon.0) 2983 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-10T07:38:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:12 vm00 bash[28005]: cluster 2026-03-10T07:38:12.272963+0000 mon.a (mon.0) 2984 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-10T07:38:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:12 vm00 bash[28005]: cluster 2026-03-10T07:38:12.272963+0000 mon.a (mon.0) 2984 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-10T07:38:12.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:12 vm03 bash[23382]: cluster 2026-03-10T07:38:10.715970+0000 mgr.y (mgr.24407) 501 : cluster [DBG] pgmap v843: 276 pgs: 276 active+clean; 455 KiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:38:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:12 vm03 bash[23382]: cluster 2026-03-10T07:38:10.715970+0000 mgr.y (mgr.24407) 501 : cluster [DBG] pgmap v843: 276 pgs: 276 active+clean; 455 KiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:38:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:12 vm03 bash[23382]: audit 2026-03-10T07:38:11.258234+0000 mon.a (mon.0) 2982 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-109", "tierpool": "test-rados-api-vm00-59782-109-cache"}]': finished 2026-03-10T07:38:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:12 vm03 bash[23382]: audit 2026-03-10T07:38:11.258234+0000 mon.a (mon.0) 2982 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-109", "tierpool": "test-rados-api-vm00-59782-109-cache"}]': finished 2026-03-10T07:38:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:12 vm03 bash[23382]: cluster 2026-03-10T07:38:11.262564+0000 mon.a (mon.0) 2983 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-10T07:38:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:12 vm03 bash[23382]: cluster 2026-03-10T07:38:11.262564+0000 mon.a (mon.0) 2983 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-10T07:38:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:12 vm03 bash[23382]: cluster 2026-03-10T07:38:12.272963+0000 mon.a (mon.0) 2984 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-10T07:38:12.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:12 vm03 bash[23382]: cluster 2026-03-10T07:38:12.272963+0000 mon.a (mon.0) 2984 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-10T07:38:13.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:38:13 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:38:14.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:13 vm00 bash[28005]: cluster 2026-03-10T07:38:12.716336+0000 mgr.y (mgr.24407) 502 : cluster [DBG] pgmap v846: 244 pgs: 244 active+clean; 455 KiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:38:14.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:13 vm00 bash[28005]: cluster 2026-03-10T07:38:12.716336+0000 mgr.y (mgr.24407) 502 : cluster [DBG] pgmap v846: 244 pgs: 244 active+clean; 455 KiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:38:14.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:13 vm00 bash[28005]: audit 2026-03-10T07:38:12.836648+0000 mon.a (mon.0) 2985 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:13 vm00 bash[28005]: audit 2026-03-10T07:38:12.836648+0000 mon.a (mon.0) 2985 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:13 vm00 bash[28005]: audit 2026-03-10T07:38:12.845386+0000 mon.a (mon.0) 2986 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:13 vm00 bash[28005]: audit 2026-03-10T07:38:12.845386+0000 mon.a (mon.0) 2986 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:13 vm00 bash[28005]: audit 2026-03-10T07:38:13.032060+0000 mon.a (mon.0) 2987 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:13 vm00 bash[28005]: audit 2026-03-10T07:38:13.032060+0000 mon.a (mon.0) 2987 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:13 vm00 bash[28005]: audit 2026-03-10T07:38:13.041578+0000 mon.a (mon.0) 2988 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:13 vm00 bash[28005]: audit 2026-03-10T07:38:13.041578+0000 mon.a (mon.0) 2988 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:13 vm00 bash[28005]: cluster 2026-03-10T07:38:13.275260+0000 mon.a (mon.0) 2989 : cluster [DBG] osdmap e550: 
8 total, 8 up, 8 in 2026-03-10T07:38:14.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:13 vm00 bash[28005]: cluster 2026-03-10T07:38:13.275260+0000 mon.a (mon.0) 2989 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-10T07:38:14.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:13 vm00 bash[28005]: audit 2026-03-10T07:38:13.276329+0000 mon.a (mon.0) 2990 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-109"}]: dispatch 2026-03-10T07:38:14.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:13 vm00 bash[28005]: audit 2026-03-10T07:38:13.276329+0000 mon.a (mon.0) 2990 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-109"}]: dispatch 2026-03-10T07:38:14.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:13 vm00 bash[28005]: audit 2026-03-10T07:38:13.387759+0000 mon.c (mon.2) 340 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:38:14.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:13 vm00 bash[28005]: audit 2026-03-10T07:38:13.387759+0000 mon.c (mon.2) 340 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:38:14.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:13 vm00 bash[28005]: audit 2026-03-10T07:38:13.388590+0000 mon.c (mon.2) 341 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:38:14.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:13 vm00 bash[28005]: audit 2026-03-10T07:38:13.388590+0000 mon.c (mon.2) 341 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:38:14.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:13 vm00 bash[28005]: audit 2026-03-10T07:38:13.393445+0000 mon.a (mon.0) 2991 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:13 vm00 bash[28005]: audit 2026-03-10T07:38:13.393445+0000 mon.a (mon.0) 2991 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:13 vm00 bash[20701]: cluster 2026-03-10T07:38:12.716336+0000 mgr.y (mgr.24407) 502 : cluster [DBG] pgmap v846: 244 pgs: 244 active+clean; 455 KiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:38:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:13 vm00 bash[20701]: cluster 2026-03-10T07:38:12.716336+0000 mgr.y (mgr.24407) 502 : cluster [DBG] pgmap v846: 244 pgs: 244 active+clean; 455 KiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:38:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:13 vm00 bash[20701]: audit 2026-03-10T07:38:12.836648+0000 mon.a (mon.0) 2985 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:13 vm00 bash[20701]: audit 2026-03-10T07:38:12.836648+0000 mon.a (mon.0) 2985 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
07:38:13 vm00 bash[20701]: audit 2026-03-10T07:38:12.845386+0000 mon.a (mon.0) 2986 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:13 vm00 bash[20701]: audit 2026-03-10T07:38:12.845386+0000 mon.a (mon.0) 2986 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:13 vm00 bash[20701]: audit 2026-03-10T07:38:13.032060+0000 mon.a (mon.0) 2987 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:13 vm00 bash[20701]: audit 2026-03-10T07:38:13.032060+0000 mon.a (mon.0) 2987 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:13 vm00 bash[20701]: audit 2026-03-10T07:38:13.041578+0000 mon.a (mon.0) 2988 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:13 vm00 bash[20701]: audit 2026-03-10T07:38:13.041578+0000 mon.a (mon.0) 2988 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:13 vm00 bash[20701]: cluster 2026-03-10T07:38:13.275260+0000 mon.a (mon.0) 2989 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-10T07:38:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:13 vm00 bash[20701]: cluster 2026-03-10T07:38:13.275260+0000 mon.a (mon.0) 2989 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-10T07:38:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:13 vm00 bash[20701]: audit 2026-03-10T07:38:13.276329+0000 mon.a (mon.0) 2990 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-109"}]: dispatch 2026-03-10T07:38:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:13 vm00 bash[20701]: audit 2026-03-10T07:38:13.276329+0000 mon.a (mon.0) 2990 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-109"}]: dispatch 2026-03-10T07:38:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:13 vm00 bash[20701]: audit 2026-03-10T07:38:13.387759+0000 mon.c (mon.2) 340 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:38:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:13 vm00 bash[20701]: audit 2026-03-10T07:38:13.387759+0000 mon.c (mon.2) 340 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:38:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:13 vm00 bash[20701]: audit 2026-03-10T07:38:13.388590+0000 mon.c (mon.2) 341 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:38:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:13 vm00 bash[20701]: audit 2026-03-10T07:38:13.388590+0000 mon.c (mon.2) 341 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:38:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:13 vm00 bash[20701]: audit 2026-03-10T07:38:13.393445+0000 mon.a (mon.0) 2991 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:13 vm00 bash[20701]: audit 2026-03-10T07:38:13.393445+0000 mon.a (mon.0) 2991 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:13 vm03 bash[23382]: cluster 2026-03-10T07:38:12.716336+0000 mgr.y (mgr.24407) 502 : cluster [DBG] pgmap v846: 244 pgs: 244 active+clean; 455 KiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:38:14.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:13 vm03 bash[23382]: cluster 2026-03-10T07:38:12.716336+0000 mgr.y (mgr.24407) 502 : cluster [DBG] pgmap v846: 244 pgs: 244 active+clean; 455 KiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:38:14.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:13 vm03 bash[23382]: audit 2026-03-10T07:38:12.836648+0000 mon.a (mon.0) 2985 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:13 vm03 bash[23382]: audit 2026-03-10T07:38:12.836648+0000 mon.a (mon.0) 2985 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:13 vm03 bash[23382]: audit 2026-03-10T07:38:12.845386+0000 mon.a (mon.0) 2986 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:13 vm03 bash[23382]: audit 2026-03-10T07:38:12.845386+0000 mon.a (mon.0) 2986 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:13 vm03 bash[23382]: audit 2026-03-10T07:38:13.032060+0000 mon.a (mon.0) 2987 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:13 vm03 bash[23382]: audit 2026-03-10T07:38:13.032060+0000 mon.a (mon.0) 2987 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.264 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:13 vm03 bash[23382]: audit 2026-03-10T07:38:13.041578+0000 mon.a (mon.0) 2988 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:13 vm03 bash[23382]: audit 2026-03-10T07:38:13.041578+0000 mon.a (mon.0) 2988 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:13 vm03 bash[23382]: cluster 2026-03-10T07:38:13.275260+0000 mon.a (mon.0) 2989 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-10T07:38:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:13 vm03 bash[23382]: cluster 2026-03-10T07:38:13.275260+0000 mon.a (mon.0) 2989 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-10T07:38:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:13 vm03 bash[23382]: audit 2026-03-10T07:38:13.276329+0000 mon.a (mon.0) 2990 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-109"}]: dispatch 2026-03-10T07:38:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:13 vm03 bash[23382]: audit 2026-03-10T07:38:13.276329+0000 mon.a (mon.0) 2990 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-109"}]: dispatch 2026-03-10T07:38:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:13 vm03 bash[23382]: audit 2026-03-10T07:38:13.387759+0000 mon.c (mon.2) 340 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:38:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:13 vm03 bash[23382]: audit 2026-03-10T07:38:13.387759+0000 mon.c (mon.2) 340 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:38:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:13 vm03 bash[23382]: audit 2026-03-10T07:38:13.388590+0000 mon.c (mon.2) 341 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:38:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:13 vm03 bash[23382]: audit 2026-03-10T07:38:13.388590+0000 mon.c (mon.2) 341 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:38:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:13 vm03 bash[23382]: audit 2026-03-10T07:38:13.393445+0000 mon.a (mon.0) 2991 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:14.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:13 vm03 bash[23382]: audit 2026-03-10T07:38:13.393445+0000 mon.a (mon.0) 2991 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:38:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:15 vm00 bash[28005]: audit 2026-03-10T07:38:13.407341+0000 mgr.y (mgr.24407) 503 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:38:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:15 vm00 bash[28005]: audit 2026-03-10T07:38:13.407341+0000 mgr.y (mgr.24407) 503 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:38:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:15 vm00 bash[28005]: audit 2026-03-10T07:38:14.275858+0000 mon.a (mon.0) 2992 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-109"}]': finished 2026-03-10T07:38:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:15 vm00 bash[28005]: audit 2026-03-10T07:38:14.275858+0000 mon.a (mon.0) 2992 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-109"}]': finished 2026-03-10T07:38:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:15 vm00 bash[28005]: cluster 2026-03-10T07:38:14.283883+0000 mon.a (mon.0) 2993 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-10T07:38:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:15 vm00 bash[28005]: cluster 2026-03-10T07:38:14.283883+0000 mon.a (mon.0) 2993 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-10T07:38:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:15 vm00 bash[28005]: audit 2026-03-10T07:38:14.284679+0000 mon.a (mon.0) 2994 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-109"}]: dispatch 2026-03-10T07:38:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:15 vm00 bash[28005]: audit 2026-03-10T07:38:14.284679+0000 mon.a (mon.0) 2994 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-109"}]: dispatch 2026-03-10T07:38:15.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:15 vm00 bash[20701]: audit 2026-03-10T07:38:13.407341+0000 mgr.y (mgr.24407) 503 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:38:15.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:15 vm00 bash[20701]: audit 2026-03-10T07:38:13.407341+0000 mgr.y (mgr.24407) 503 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:38:15.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:15 vm00 bash[20701]: audit 2026-03-10T07:38:14.275858+0000 mon.a (mon.0) 2992 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-109"}]': finished 2026-03-10T07:38:15.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:15 vm00 bash[20701]: audit 2026-03-10T07:38:14.275858+0000 mon.a (mon.0) 2992 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-109"}]': finished 2026-03-10T07:38:15.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:15 vm00 bash[20701]: cluster 2026-03-10T07:38:14.283883+0000 mon.a (mon.0) 2993 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-10T07:38:15.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:15 vm00 bash[20701]: cluster 2026-03-10T07:38:14.283883+0000 mon.a (mon.0) 2993 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-10T07:38:15.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:15 vm00 bash[20701]: audit 2026-03-10T07:38:14.284679+0000 mon.a (mon.0) 2994 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-109"}]: dispatch 2026-03-10T07:38:15.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:15 vm00 bash[20701]: audit 2026-03-10T07:38:14.284679+0000 mon.a (mon.0) 2994 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-109"}]: dispatch 2026-03-10T07:38:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:15 vm03 bash[23382]: audit 2026-03-10T07:38:13.407341+0000 mgr.y (mgr.24407) 503 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:38:15.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:15 vm03 bash[23382]: audit 2026-03-10T07:38:13.407341+0000 mgr.y (mgr.24407) 503 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:38:15.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:15 vm03 bash[23382]: audit 2026-03-10T07:38:14.275858+0000 mon.a (mon.0) 2992 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-109"}]': finished 2026-03-10T07:38:15.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:15 vm03 bash[23382]: audit 2026-03-10T07:38:14.275858+0000 mon.a (mon.0) 2992 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-109"}]': finished 2026-03-10T07:38:15.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:15 vm03 bash[23382]: cluster 2026-03-10T07:38:14.283883+0000 mon.a (mon.0) 2993 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-10T07:38:15.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:15 vm03 bash[23382]: cluster 2026-03-10T07:38:14.283883+0000 mon.a (mon.0) 2993 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-10T07:38:15.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:15 vm03 bash[23382]: audit 2026-03-10T07:38:14.284679+0000 mon.a (mon.0) 2994 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-109"}]: dispatch 2026-03-10T07:38:15.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:15 vm03 bash[23382]: audit 2026-03-10T07:38:14.284679+0000 mon.a (mon.0) 2994 : audit [INF] from='client.? 
192.168.123.100:0/2954387079' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-109"}]: dispatch 2026-03-10T07:38:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:16 vm00 bash[28005]: cluster 2026-03-10T07:38:14.716907+0000 mgr.y (mgr.24407) 504 : cluster [DBG] pgmap v849: 236 pgs: 236 active+clean; 455 KiB data, 997 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:38:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:16 vm00 bash[28005]: cluster 2026-03-10T07:38:14.716907+0000 mgr.y (mgr.24407) 504 : cluster [DBG] pgmap v849: 236 pgs: 236 active+clean; 455 KiB data, 997 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:38:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:16 vm00 bash[28005]: cluster 2026-03-10T07:38:15.277345+0000 mon.a (mon.0) 2995 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:38:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:16 vm00 bash[28005]: cluster 2026-03-10T07:38:15.277345+0000 mon.a (mon.0) 2995 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:38:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:16 vm00 bash[28005]: audit 2026-03-10T07:38:15.279692+0000 mon.a (mon.0) 2996 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-109"}]': finished 2026-03-10T07:38:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:16 vm00 bash[28005]: audit 2026-03-10T07:38:15.279692+0000 mon.a (mon.0) 2996 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-109"}]': finished 2026-03-10T07:38:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:16 vm00 bash[28005]: cluster 2026-03-10T07:38:15.291956+0000 mon.a (mon.0) 2997 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-10T07:38:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:16 vm00 bash[28005]: cluster 2026-03-10T07:38:15.291956+0000 mon.a (mon.0) 2997 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-10T07:38:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:16 vm00 bash[28005]: cluster 2026-03-10T07:38:16.286593+0000 mon.a (mon.0) 2998 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-10T07:38:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:16 vm00 bash[28005]: cluster 2026-03-10T07:38:16.286593+0000 mon.a (mon.0) 2998 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-10T07:38:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:16 vm00 bash[28005]: audit 2026-03-10T07:38:16.289031+0000 mon.a (mon.0) 2999 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:38:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:16 vm00 bash[28005]: audit 2026-03-10T07:38:16.289031+0000 mon.a (mon.0) 2999 : audit [INF] from='client.? 
192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:38:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:16 vm00 bash[20701]: cluster 2026-03-10T07:38:14.716907+0000 mgr.y (mgr.24407) 504 : cluster [DBG] pgmap v849: 236 pgs: 236 active+clean; 455 KiB data, 997 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:38:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:16 vm00 bash[20701]: cluster 2026-03-10T07:38:14.716907+0000 mgr.y (mgr.24407) 504 : cluster [DBG] pgmap v849: 236 pgs: 236 active+clean; 455 KiB data, 997 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:38:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:16 vm00 bash[20701]: cluster 2026-03-10T07:38:15.277345+0000 mon.a (mon.0) 2995 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:38:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:16 vm00 bash[20701]: cluster 2026-03-10T07:38:15.277345+0000 mon.a (mon.0) 2995 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:38:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:16 vm00 bash[20701]: audit 2026-03-10T07:38:15.279692+0000 mon.a (mon.0) 2996 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-109"}]': finished 2026-03-10T07:38:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:16 vm00 bash[20701]: audit 2026-03-10T07:38:15.279692+0000 mon.a (mon.0) 2996 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-109"}]': finished 2026-03-10T07:38:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:16 vm00 bash[20701]: cluster 2026-03-10T07:38:15.291956+0000 mon.a (mon.0) 2997 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-10T07:38:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:16 vm00 bash[20701]: cluster 2026-03-10T07:38:15.291956+0000 mon.a (mon.0) 2997 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-10T07:38:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:16 vm00 bash[20701]: cluster 2026-03-10T07:38:16.286593+0000 mon.a (mon.0) 2998 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-10T07:38:16.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:16 vm00 bash[20701]: cluster 2026-03-10T07:38:16.286593+0000 mon.a (mon.0) 2998 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-10T07:38:16.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:16 vm00 bash[20701]: audit 2026-03-10T07:38:16.289031+0000 mon.a (mon.0) 2999 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:38:16.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:16 vm00 bash[20701]: audit 2026-03-10T07:38:16.289031+0000 mon.a (mon.0) 2999 : audit [INF] from='client.? 
192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:38:16.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:16 vm03 bash[23382]: cluster 2026-03-10T07:38:14.716907+0000 mgr.y (mgr.24407) 504 : cluster [DBG] pgmap v849: 236 pgs: 236 active+clean; 455 KiB data, 997 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:38:16.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:16 vm03 bash[23382]: cluster 2026-03-10T07:38:14.716907+0000 mgr.y (mgr.24407) 504 : cluster [DBG] pgmap v849: 236 pgs: 236 active+clean; 455 KiB data, 997 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:38:16.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:16 vm03 bash[23382]: cluster 2026-03-10T07:38:15.277345+0000 mon.a (mon.0) 2995 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:38:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:16 vm03 bash[23382]: cluster 2026-03-10T07:38:15.277345+0000 mon.a (mon.0) 2995 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:38:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:16 vm03 bash[23382]: audit 2026-03-10T07:38:15.279692+0000 mon.a (mon.0) 2996 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-109"}]': finished 2026-03-10T07:38:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:16 vm03 bash[23382]: audit 2026-03-10T07:38:15.279692+0000 mon.a (mon.0) 2996 : audit [INF] from='client.? 192.168.123.100:0/2954387079' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-109"}]': finished 2026-03-10T07:38:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:16 vm03 bash[23382]: cluster 2026-03-10T07:38:15.291956+0000 mon.a (mon.0) 2997 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-10T07:38:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:16 vm03 bash[23382]: cluster 2026-03-10T07:38:15.291956+0000 mon.a (mon.0) 2997 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-10T07:38:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:16 vm03 bash[23382]: cluster 2026-03-10T07:38:16.286593+0000 mon.a (mon.0) 2998 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-10T07:38:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:16 vm03 bash[23382]: cluster 2026-03-10T07:38:16.286593+0000 mon.a (mon.0) 2998 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-10T07:38:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:16 vm03 bash[23382]: audit 2026-03-10T07:38:16.289031+0000 mon.a (mon.0) 2999 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:38:16.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:16 vm03 bash[23382]: audit 2026-03-10T07:38:16.289031+0000 mon.a (mon.0) 2999 : audit [INF] from='client.? 
192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:38:18.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:18 vm00 bash[28005]: cluster 2026-03-10T07:38:16.717265+0000 mgr.y (mgr.24407) 505 : cluster [DBG] pgmap v852: 228 pgs: 228 active+clean; 455 KiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T07:38:18.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:18 vm00 bash[28005]: cluster 2026-03-10T07:38:16.717265+0000 mgr.y (mgr.24407) 505 : cluster [DBG] pgmap v852: 228 pgs: 228 active+clean; 455 KiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T07:38:18.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:18 vm00 bash[28005]: audit 2026-03-10T07:38:17.287200+0000 mon.a (mon.0) 3000 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-59782-104"}]': finished 2026-03-10T07:38:18.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:18 vm00 bash[28005]: audit 2026-03-10T07:38:17.287200+0000 mon.a (mon.0) 3000 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-59782-104"}]': finished 2026-03-10T07:38:18.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:18 vm00 bash[28005]: cluster 2026-03-10T07:38:17.290269+0000 mon.a (mon.0) 3001 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-10T07:38:18.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:18 vm00 bash[28005]: cluster 2026-03-10T07:38:17.290269+0000 mon.a (mon.0) 3001 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-10T07:38:18.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:18 vm00 bash[28005]: audit 2026-03-10T07:38:17.295029+0000 mon.a (mon.0) 3002 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:38:18.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:18 vm00 bash[28005]: audit 2026-03-10T07:38:17.295029+0000 mon.a (mon.0) 3002 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:38:18.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:18 vm00 bash[20701]: cluster 2026-03-10T07:38:16.717265+0000 mgr.y (mgr.24407) 505 : cluster [DBG] pgmap v852: 228 pgs: 228 active+clean; 455 KiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T07:38:18.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:18 vm00 bash[20701]: cluster 2026-03-10T07:38:16.717265+0000 mgr.y (mgr.24407) 505 : cluster [DBG] pgmap v852: 228 pgs: 228 active+clean; 455 KiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T07:38:18.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:18 vm00 bash[20701]: audit 2026-03-10T07:38:17.287200+0000 mon.a (mon.0) 3000 : audit [INF] from='client.? 
192.168.123.100:0/2596799927' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-59782-104"}]': finished 2026-03-10T07:38:18.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:18 vm00 bash[20701]: audit 2026-03-10T07:38:17.287200+0000 mon.a (mon.0) 3000 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-59782-104"}]': finished 2026-03-10T07:38:18.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:18 vm00 bash[20701]: cluster 2026-03-10T07:38:17.290269+0000 mon.a (mon.0) 3001 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-10T07:38:18.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:18 vm00 bash[20701]: cluster 2026-03-10T07:38:17.290269+0000 mon.a (mon.0) 3001 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-10T07:38:18.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:18 vm00 bash[20701]: audit 2026-03-10T07:38:17.295029+0000 mon.a (mon.0) 3002 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:38:18.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:18 vm00 bash[20701]: audit 2026-03-10T07:38:17.295029+0000 mon.a (mon.0) 3002 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-59782-104"}]: dispatch 2026-03-10T07:38:18.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:18 vm03 bash[23382]: cluster 2026-03-10T07:38:16.717265+0000 mgr.y (mgr.24407) 505 : cluster [DBG] pgmap v852: 228 pgs: 228 active+clean; 455 KiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T07:38:18.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:18 vm03 bash[23382]: cluster 2026-03-10T07:38:16.717265+0000 mgr.y (mgr.24407) 505 : cluster [DBG] pgmap v852: 228 pgs: 228 active+clean; 455 KiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T07:38:18.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:18 vm03 bash[23382]: audit 2026-03-10T07:38:17.287200+0000 mon.a (mon.0) 3000 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-59782-104"}]': finished 2026-03-10T07:38:18.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:18 vm03 bash[23382]: audit 2026-03-10T07:38:17.287200+0000 mon.a (mon.0) 3000 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm00-59782-104"}]': finished 2026-03-10T07:38:18.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:18 vm03 bash[23382]: cluster 2026-03-10T07:38:17.290269+0000 mon.a (mon.0) 3001 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-10T07:38:18.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:18 vm03 bash[23382]: cluster 2026-03-10T07:38:17.290269+0000 mon.a (mon.0) 3001 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-10T07:38:18.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:18 vm03 bash[23382]: audit 2026-03-10T07:38:17.295029+0000 mon.a (mon.0) 3002 : audit [INF] from='client.? 
192.168.123.100:0/2596799927' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-59782-104"}]: dispatch
2026-03-10T07:38:19.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:19 vm00 bash[28005]: audit 2026-03-10T07:38:18.290489+0000 mon.a (mon.0) 3003 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-59782-104"}]': finished
2026-03-10T07:38:19.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:19 vm00 bash[28005]: cluster 2026-03-10T07:38:18.294540+0000 mon.a (mon.0) 3004 : cluster [DBG] osdmap e555: 8 total, 8 up, 8 in
2026-03-10T07:38:19.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:19 vm00 bash[28005]: audit 2026-03-10T07:38:18.314701+0000 mon.b (mon.1) 530 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:19.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:19 vm00 bash[28005]: audit 2026-03-10T07:38:18.316280+0000 mon.a (mon.0) 3005 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:19.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:19 vm00 bash[28005]: audit 2026-03-10T07:38:18.316462+0000 mon.b (mon.1) 531 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:19.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:19 vm00 bash[28005]: audit 2026-03-10T07:38:18.316935+0000 mon.b (mon.1) 532 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59782-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:38:19.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:19 vm00 bash[28005]: audit 2026-03-10T07:38:18.317200+0000 mon.a (mon.0) 3006 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:19.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:19 vm00 bash[28005]: audit 2026-03-10T07:38:18.317948+0000 mon.a (mon.0) 3007 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59782-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:38:19.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:19 vm00 bash[20701]: audit 2026-03-10T07:38:18.290489+0000 mon.a (mon.0) 3003 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-59782-104"}]': finished
2026-03-10T07:38:19.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:19 vm00 bash[20701]: cluster 2026-03-10T07:38:18.294540+0000 mon.a (mon.0) 3004 : cluster [DBG] osdmap e555: 8 total, 8 up, 8 in
2026-03-10T07:38:19.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:19 vm00 bash[20701]: audit 2026-03-10T07:38:18.314701+0000 mon.b (mon.1) 530 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:19.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:19 vm00 bash[20701]: audit 2026-03-10T07:38:18.316280+0000 mon.a (mon.0) 3005 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:19.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:19 vm00 bash[20701]: audit 2026-03-10T07:38:18.316462+0000 mon.b (mon.1) 531 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:19.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:19 vm00 bash[20701]: audit 2026-03-10T07:38:18.316935+0000 mon.b (mon.1) 532 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59782-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:38:19.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:19 vm00 bash[20701]: audit 2026-03-10T07:38:18.317200+0000 mon.a (mon.0) 3006 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:19.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:19 vm00 bash[20701]: audit 2026-03-10T07:38:18.317948+0000 mon.a (mon.0) 3007 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59782-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:38:19.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:19 vm03 bash[23382]: audit 2026-03-10T07:38:18.290489+0000 mon.a (mon.0) 3003 : audit [INF] from='client.? 192.168.123.100:0/2596799927' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm00-59782-104"}]': finished
2026-03-10T07:38:19.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:19 vm03 bash[23382]: cluster 2026-03-10T07:38:18.294540+0000 mon.a (mon.0) 3004 : cluster [DBG] osdmap e555: 8 total, 8 up, 8 in
2026-03-10T07:38:19.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:19 vm03 bash[23382]: audit 2026-03-10T07:38:18.314701+0000 mon.b (mon.1) 530 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:19.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:19 vm03 bash[23382]: audit 2026-03-10T07:38:18.316280+0000 mon.a (mon.0) 3005 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:19.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:19 vm03 bash[23382]: audit 2026-03-10T07:38:18.316462+0000 mon.b (mon.1) 531 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-111"}]: dispatch
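
The audit records above show the librados test suite tearing down and recreating its per-run erasure-code profile and CRUSH rule. The workunit issues these as mon commands through librados; a minimal sketch of the equivalent ceph CLI calls, with the profile and rule names taken from the payloads above, would be:

    # Drop leftovers from the previous iteration, then recreate the profile:
    # k=2 data chunks, m=1 coding chunk, failure domain at the OSD level.
    ceph osd erasure-code-profile rm testprofile-test-rados-api-vm00-59782-111
    ceph osd crush rule rm test-rados-api-vm00-59782-111
    ceph osd erasure-code-profile set testprofile-test-rados-api-vm00-59782-111 \
        k=2 m=1 crush-failure-domain=osd

Each command shows up more than once by design: once on the mon that received it (with the client address), again on the leader mon.a after forwarding (address blanked), and finally as a 'finished' record when it commits; every mon daemon's journal then carries its own copy of the same cluster-log lines.
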
2026-03-10T07:38:19.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:19 vm03 bash[23382]: audit 2026-03-10T07:38:18.316935+0000 mon.b (mon.1) 532 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59782-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:38:19.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:19 vm03 bash[23382]: audit 2026-03-10T07:38:18.317200+0000 mon.a (mon.0) 3006 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:19.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:19 vm03 bash[23382]: audit 2026-03-10T07:38:18.317948+0000 mon.a (mon.0) 3007 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59782-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T07:38:20.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:20 vm03 bash[23382]: cluster 2026-03-10T07:38:18.717595+0000 mgr.y (mgr.24407) 506 : cluster [DBG] pgmap v855: 228 pgs: 228 active+clean; 455 KiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s
2026-03-10T07:38:20.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:20 vm03 bash[23382]: audit 2026-03-10T07:38:19.315096+0000 mon.a (mon.0) 3008 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59782-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:38:20.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:20 vm03 bash[23382]: audit 2026-03-10T07:38:19.322879+0000 mon.b (mon.1) 533 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59782-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:20.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:20 vm03 bash[23382]: cluster 2026-03-10T07:38:19.326633+0000 mon.a (mon.0) 3009 : cluster [DBG] osdmap e556: 8 total, 8 up, 8 in
2026-03-10T07:38:20.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:20 vm03 bash[23382]: audit 2026-03-10T07:38:19.327234+0000 mon.a (mon.0) 3010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59782-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:20.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:20 vm00 bash[28005]: cluster 2026-03-10T07:38:18.717595+0000 mgr.y (mgr.24407) 506 : cluster [DBG] pgmap v855: 228 pgs: 228 active+clean; 455 KiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s
2026-03-10T07:38:20.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:20 vm00 bash[28005]: audit 2026-03-10T07:38:19.315096+0000 mon.a (mon.0) 3008 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59782-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:38:20.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:20 vm00 bash[28005]: audit 2026-03-10T07:38:19.322879+0000 mon.b (mon.1) 533 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59782-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:20.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:20 vm00 bash[28005]: cluster 2026-03-10T07:38:19.326633+0000 mon.a (mon.0) 3009 : cluster [DBG] osdmap e556: 8 total, 8 up, 8 in
2026-03-10T07:38:20.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:20 vm00 bash[28005]: audit 2026-03-10T07:38:19.327234+0000 mon.a (mon.0) 3010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59782-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:20.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:20 vm00 bash[20701]: cluster 2026-03-10T07:38:18.717595+0000 mgr.y (mgr.24407) 506 : cluster [DBG] pgmap v855: 228 pgs: 228 active+clean; 455 KiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s
2026-03-10T07:38:20.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:20 vm00 bash[20701]: audit 2026-03-10T07:38:19.315096+0000 mon.a (mon.0) 3008 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm00-59782-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T07:38:20.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:20 vm00 bash[20701]: audit 2026-03-10T07:38:19.322879+0000 mon.b (mon.1) 533 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59782-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:20.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:20 vm00 bash[20701]: cluster 2026-03-10T07:38:19.326633+0000 mon.a (mon.0) 3009 : cluster [DBG] osdmap e556: 8 total, 8 up, 8 in
2026-03-10T07:38:20.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:20 vm00 bash[20701]: audit 2026-03-10T07:38:19.327234+0000 mon.a (mon.0) 3010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59782-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59782-111"}]: dispatch
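
The pool-create payload dispatched above (mon.b 533, forwarded as mon.a 3010) maps onto this CLI form; a sketch with all names and numbers taken from the log:

    # Create an 8-PG erasure-coded pool backed by the k=2/m=1 profile.
    ceph osd pool create test-rados-api-vm00-59782-111 8 8 erasure \
        testprofile-test-rados-api-vm00-59782-111

The osdmap epoch bumps that follow (e556 onward) track the creation being committed by the leader before the 'finished' record appears.
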
2026-03-10T07:38:21.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:38:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:38:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:38:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:21 vm03 bash[23382]: cluster 2026-03-10T07:38:20.416950+0000 mon.a (mon.0) 3011 : cluster [DBG] osdmap e557: 8 total, 8 up, 8 in
2026-03-10T07:38:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:21 vm03 bash[23382]: cluster 2026-03-10T07:38:20.718024+0000 mgr.y (mgr.24407) 507 : cluster [DBG] pgmap v858: 228 pgs: 228 active+clean; 455 KiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:38:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:21 vm03 bash[23382]: audit 2026-03-10T07:38:21.406143+0000 mon.a (mon.0) 3012 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59782-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:38:21.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:21 vm03 bash[23382]: cluster 2026-03-10T07:38:21.414660+0000 mon.a (mon.0) 3013 : cluster [DBG] osdmap e558: 8 total, 8 up, 8 in
2026-03-10T07:38:21.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:21 vm00 bash[28005]: cluster 2026-03-10T07:38:20.416950+0000 mon.a (mon.0) 3011 : cluster [DBG] osdmap e557: 8 total, 8 up, 8 in
2026-03-10T07:38:21.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:21 vm00 bash[28005]: cluster 2026-03-10T07:38:20.718024+0000 mgr.y (mgr.24407) 507 : cluster [DBG] pgmap v858: 228 pgs: 228 active+clean; 455 KiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:38:21.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:21 vm00 bash[28005]: audit 2026-03-10T07:38:21.406143+0000 mon.a (mon.0) 3012 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59782-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:38:21.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:21 vm00 bash[28005]: cluster 2026-03-10T07:38:21.414660+0000 mon.a (mon.0) 3013 : cluster [DBG] osdmap e558: 8 total, 8 up, 8 in
2026-03-10T07:38:21.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:21 vm00 bash[20701]: cluster 2026-03-10T07:38:20.416950+0000 mon.a (mon.0) 3011 : cluster [DBG] osdmap e557: 8 total, 8 up, 8 in
2026-03-10T07:38:21.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:21 vm00 bash[20701]: cluster 2026-03-10T07:38:20.718024+0000 mgr.y (mgr.24407) 507 : cluster [DBG] pgmap v858: 228 pgs: 228 active+clean; 455 KiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:38:21.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:21 vm00 bash[20701]: audit 2026-03-10T07:38:21.406143+0000 mon.a (mon.0) 3012 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm00-59782-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:38:21.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:21 vm00 bash[20701]: cluster 2026-03-10T07:38:21.414660+0000 mon.a (mon.0) 3013 : cluster [DBG] osdmap e558: 8 total, 8 up, 8 in
2026-03-10T07:38:22.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:22 vm03 bash[23382]: cluster 2026-03-10T07:38:21.543949+0000 mon.a (mon.0) 3014 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:38:22.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:22 vm03 bash[23382]: cluster 2026-03-10T07:38:22.430995+0000 mon.a (mon.0) 3015 : cluster [DBG] osdmap e559: 8 total, 8 up, 8 in
2026-03-10T07:38:22.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:22 vm03 bash[23382]: audit 2026-03-10T07:38:22.433582+0000 mon.b (mon.1) 534 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-112","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:38:22.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:22 vm03 bash[23382]: audit 2026-03-10T07:38:22.438200+0000 mon.a (mon.0) 3016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-112","app": "rados","yes_i_really_mean_it": true}]: dispatch
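
The POOL_APP_NOT_ENABLED warning above fires because freshly created test pools carry no application tag; the test then tags one of them explicitly. A rough CLI sketch of the dispatched payload:

    # Tag the pool so the POOL_APP_NOT_ENABLED health check stops counting it.
    ceph osd pool application enable test-rados-api-vm00-59782-112 rados \
        --yes-i-really-mean-it

The --yes-i-really-mean-it flag corresponds to the "yes_i_really_mean_it": true field in the JSON payload.
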
2026-03-10T07:38:22.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:22 vm00 bash[28005]: cluster 2026-03-10T07:38:21.543949+0000 mon.a (mon.0) 3014 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:38:22.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:22 vm00 bash[28005]: cluster 2026-03-10T07:38:22.430995+0000 mon.a (mon.0) 3015 : cluster [DBG] osdmap e559: 8 total, 8 up, 8 in
2026-03-10T07:38:22.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:22 vm00 bash[28005]: audit 2026-03-10T07:38:22.433582+0000 mon.b (mon.1) 534 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-112","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:38:22.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:22 vm00 bash[28005]: audit 2026-03-10T07:38:22.438200+0000 mon.a (mon.0) 3016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-112","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:38:22.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:22 vm00 bash[20701]: cluster 2026-03-10T07:38:21.543949+0000 mon.a (mon.0) 3014 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:38:22.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:22 vm00 bash[20701]: cluster 2026-03-10T07:38:22.430995+0000 mon.a (mon.0) 3015 : cluster [DBG] osdmap e559: 8 total, 8 up, 8 in
2026-03-10T07:38:22.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:22 vm00 bash[20701]: audit 2026-03-10T07:38:22.433582+0000 mon.b (mon.1) 534 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-112","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:38:22.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:22 vm00 bash[20701]: audit 2026-03-10T07:38:22.438200+0000 mon.a (mon.0) 3016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-112","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:38:23.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:38:23 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:38:23.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:23 vm03 bash[23382]: cluster 2026-03-10T07:38:22.718478+0000 mgr.y (mgr.24407) 508 : cluster [DBG] pgmap v861: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:38:23.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:23 vm03 bash[23382]: audit 2026-03-10T07:38:23.412703+0000 mon.a (mon.0) 3017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-112","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:38:23.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:23 vm03 bash[23382]: cluster 2026-03-10T07:38:23.417597+0000 mon.a (mon.0) 3018 : cluster [DBG] osdmap e560: 8 total, 8 up, 8 in
2026-03-10T07:38:23.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:23 vm00 bash[28005]: cluster 2026-03-10T07:38:22.718478+0000 mgr.y (mgr.24407) 508 : cluster [DBG] pgmap v861: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:38:23.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:23 vm00 bash[28005]: audit 2026-03-10T07:38:23.412703+0000 mon.a (mon.0) 3017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-112","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:38:23.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:23 vm00 bash[28005]: cluster 2026-03-10T07:38:23.417597+0000 mon.a (mon.0) 3018 : cluster [DBG] osdmap e560: 8 total, 8 up, 8 in
2026-03-10T07:38:23.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:23 vm00 bash[20701]: cluster 2026-03-10T07:38:22.718478+0000 mgr.y (mgr.24407) 508 : cluster [DBG] pgmap v861: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:38:23.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:23 vm00 bash[20701]: audit 2026-03-10T07:38:23.412703+0000 mon.a (mon.0) 3017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-112","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:38:23.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:23 vm00 bash[20701]: cluster 2026-03-10T07:38:23.417597+0000 mon.a (mon.0) 3018 : cluster [DBG] osdmap e560: 8 total, 8 up, 8 in
2026-03-10T07:38:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:24 vm03 bash[23382]: audit 2026-03-10T07:38:23.416663+0000 mgr.y (mgr.24407) 509 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:38:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:24 vm03 bash[23382]: audit 2026-03-10T07:38:23.460474+0000 mon.b (mon.1) 535 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:38:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:24 vm03 bash[23382]: audit 2026-03-10T07:38:23.462494+0000 mon.a (mon.0) 3019 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:38:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:24 vm03 bash[23382]: audit 2026-03-10T07:38:24.416384+0000 mon.a (mon.0) 3020 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112", "force_nonempty": "--force-nonempty" }]': finished
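
Here the test begins assembling a cache tier: test-rados-api-vm00-59782-112 is attached as a tier of the erasure-coded base pool test-rados-api-vm00-59782-111. A CLI sketch of the payload; the --force-nonempty flag allows attaching a tier pool that already contains objects:

    # Attach test-rados-api-vm00-59782-112 as a cache tier of the EC base pool.
    ceph osd tier add test-rados-api-vm00-59782-111 \
        test-rados-api-vm00-59782-112 --force-nonempty
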
2026-03-10T07:38:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:24 vm03 bash[23382]: audit 2026-03-10T07:38:24.419514+0000 mon.b (mon.1) 536 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-112"}]: dispatch
2026-03-10T07:38:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:24 vm03 bash[23382]: cluster 2026-03-10T07:38:24.423372+0000 mon.a (mon.0) 3021 : cluster [DBG] osdmap e561: 8 total, 8 up, 8 in
2026-03-10T07:38:24.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:24 vm03 bash[23382]: audit 2026-03-10T07:38:24.424538+0000 mon.a (mon.0) 3022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-112"}]: dispatch
2026-03-10T07:38:24.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:24 vm00 bash[28005]: audit 2026-03-10T07:38:23.416663+0000 mgr.y (mgr.24407) 509 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:38:24.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:24 vm00 bash[28005]: audit 2026-03-10T07:38:23.460474+0000 mon.b (mon.1) 535 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:38:24.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:24 vm00 bash[28005]: audit 2026-03-10T07:38:23.462494+0000 mon.a (mon.0) 3019 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:38:24.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:24 vm00 bash[28005]: audit 2026-03-10T07:38:24.416384+0000 mon.a (mon.0) 3020 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:38:24.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:24 vm00 bash[28005]: audit 2026-03-10T07:38:24.419514+0000 mon.b (mon.1) 536 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-112"}]: dispatch
2026-03-10T07:38:24.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:24 vm00 bash[28005]: cluster 2026-03-10T07:38:24.423372+0000 mon.a (mon.0) 3021 : cluster [DBG] osdmap e561: 8 total, 8 up, 8 in
2026-03-10T07:38:24.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:24 vm00 bash[28005]: audit 2026-03-10T07:38:24.424538+0000 mon.a (mon.0) 3022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-112"}]: dispatch
2026-03-10T07:38:24.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:24 vm00 bash[20701]: audit 2026-03-10T07:38:23.416663+0000 mgr.y (mgr.24407) 509 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:38:24.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:24 vm00 bash[20701]: audit 2026-03-10T07:38:23.460474+0000 mon.b (mon.1) 535 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:38:24.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:24 vm00 bash[20701]: audit 2026-03-10T07:38:23.462494+0000 mon.a (mon.0) 3019 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:38:24.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:24 vm00 bash[20701]: audit 2026-03-10T07:38:24.416384+0000 mon.a (mon.0) 3020 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:38:24.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:24 vm00 bash[20701]: audit 2026-03-10T07:38:24.419514+0000 mon.b (mon.1) 536 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-112"}]: dispatch
2026-03-10T07:38:24.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:24 vm00 bash[20701]: cluster 2026-03-10T07:38:24.423372+0000 mon.a (mon.0) 3021 : cluster [DBG] osdmap e561: 8 total, 8 up, 8 in
2026-03-10T07:38:24.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:24 vm00 bash[20701]: audit 2026-03-10T07:38:24.424538+0000 mon.a (mon.0) 3022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-112"}]: dispatch
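
The set-overlay step routes client I/O for the base pool through the cache tier; a few entries further on, the test detaches it again with remove-overlay. A CLI sketch of the two payloads:

    # Make the cache tier the overlay for the base pool,
    ceph osd tier set-overlay test-rados-api-vm00-59782-111 test-rados-api-vm00-59782-112
    # then later tear the overlay back down.
    ceph osd tier remove-overlay test-rados-api-vm00-59782-111
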
2026-03-10T07:38:26.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:25 vm03 bash[23382]: audit 2026-03-10T07:38:24.681913+0000 mon.c (mon.2) 342 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:38:26.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:25 vm03 bash[23382]: cluster 2026-03-10T07:38:24.719022+0000 mgr.y (mgr.24407) 510 : cluster [DBG] pgmap v864: 268 pgs: 13 unknown, 255 active+clean; 455 KiB data, 998 MiB used, 159 GiB / 160 GiB avail; 511 B/s wr, 0 op/s
2026-03-10T07:38:26.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:25 vm00 bash[28005]: audit 2026-03-10T07:38:24.681913+0000 mon.c (mon.2) 342 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:38:26.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:25 vm00 bash[28005]: cluster 2026-03-10T07:38:24.719022+0000 mgr.y (mgr.24407) 510 : cluster [DBG] pgmap v864: 268 pgs: 13 unknown, 255 active+clean; 455 KiB data, 998 MiB used, 159 GiB / 160 GiB avail; 511 B/s wr, 0 op/s
2026-03-10T07:38:26.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:25 vm00 bash[20701]: audit 2026-03-10T07:38:24.681913+0000 mon.c (mon.2) 342 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:38:26.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:25 vm00 bash[20701]: cluster 2026-03-10T07:38:24.719022+0000 mgr.y (mgr.24407) 510 : cluster [DBG] pgmap v864: 268 pgs: 13 unknown, 255 active+clean; 455 KiB data, 998 MiB used, 159 GiB / 160 GiB avail; 511 B/s wr, 0 op/s
2026-03-10T07:38:27.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:26 vm00 bash[28005]: audit 2026-03-10T07:38:25.743225+0000 mon.a (mon.0) 3023 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-112"}]': finished
2026-03-10T07:38:27.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:26 vm00 bash[28005]: cluster 2026-03-10T07:38:25.753233+0000 mon.a (mon.0) 3024 : cluster [DBG] osdmap e562: 8 total, 8 up, 8 in
2026-03-10T07:38:27.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:26 vm00 bash[28005]: audit 2026-03-10T07:38:25.787333+0000 mon.b (mon.1) 537 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:27.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:26 vm00 bash[28005]: audit 2026-03-10T07:38:25.788155+0000 mon.a (mon.0) 3025 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:27.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:26 vm00 bash[28005]: cluster 2026-03-10T07:38:26.544589+0000 mon.a (mon.0) 3026 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:38:27.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:26 vm00 bash[20701]: audit 2026-03-10T07:38:25.743225+0000 mon.a (mon.0) 3023 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-112"}]': finished
2026-03-10T07:38:27.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:26 vm00 bash[20701]: cluster 2026-03-10T07:38:25.753233+0000 mon.a (mon.0) 3024 : cluster [DBG] osdmap e562: 8 total, 8 up, 8 in
2026-03-10T07:38:27.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:26 vm00 bash[20701]: audit 2026-03-10T07:38:25.787333+0000 mon.b (mon.1) 537 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:27.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:26 vm00 bash[20701]: audit 2026-03-10T07:38:25.788155+0000 mon.a (mon.0) 3025 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:27.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:26 vm00 bash[20701]: cluster 2026-03-10T07:38:26.544589+0000 mon.a (mon.0) 3026 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:38:27.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:26 vm03 bash[23382]: audit 2026-03-10T07:38:25.743225+0000 mon.a (mon.0) 3023 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-112"}]': finished
2026-03-10T07:38:27.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:26 vm03 bash[23382]: cluster 2026-03-10T07:38:25.753233+0000 mon.a (mon.0) 3024 : cluster [DBG] osdmap e562: 8 total, 8 up, 8 in
2026-03-10T07:38:27.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:26 vm03 bash[23382]: audit 2026-03-10T07:38:25.787333+0000 mon.b (mon.1) 537 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:27.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:26 vm03 bash[23382]: audit 2026-03-10T07:38:25.788155+0000 mon.a (mon.0) 3025 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:38:27.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:26 vm03 bash[23382]: cluster 2026-03-10T07:38:26.544589+0000 mon.a (mon.0) 3026 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:38:28.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:27 vm00 bash[28005]: cluster 2026-03-10T07:38:26.719384+0000 mgr.y (mgr.24407) 511 : cluster [DBG] pgmap v866: 268 pgs: 268 active+clean; 455 KiB data, 999 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 475 B/s wr, 1 op/s
2026-03-10T07:38:28.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:27 vm00 bash[28005]: audit 2026-03-10T07:38:26.772936+0000 mon.a (mon.0) 3027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:38:28.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:27 vm00 bash[28005]: audit 2026-03-10T07:38:26.772936+0000 mon.a (mon.0) 3027 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:38:28.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:27 vm00 bash[28005]: cluster 2026-03-10T07:38:26.780545+0000 mon.a (mon.0) 3028 : cluster [DBG] osdmap e563: 8 total, 8 up, 8 in 2026-03-10T07:38:28.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:27 vm00 bash[28005]: cluster 2026-03-10T07:38:26.780545+0000 mon.a (mon.0) 3028 : cluster [DBG] osdmap e563: 8 total, 8 up, 8 in 2026-03-10T07:38:28.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:27 vm00 bash[28005]: audit 2026-03-10T07:38:26.786328+0000 mon.b (mon.1) 538 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112"}]: dispatch 2026-03-10T07:38:28.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:27 vm00 bash[28005]: audit 2026-03-10T07:38:26.786328+0000 mon.b (mon.1) 538 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112"}]: dispatch 2026-03-10T07:38:28.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:27 vm00 bash[28005]: audit 2026-03-10T07:38:26.791131+0000 mon.a (mon.0) 3029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112"}]: dispatch 2026-03-10T07:38:28.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:27 vm00 bash[28005]: audit 2026-03-10T07:38:26.791131+0000 mon.a (mon.0) 3029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112"}]: dispatch 2026-03-10T07:38:28.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:27 vm00 bash[20701]: cluster 2026-03-10T07:38:26.719384+0000 mgr.y (mgr.24407) 511 : cluster [DBG] pgmap v866: 268 pgs: 268 active+clean; 455 KiB data, 999 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 475 B/s wr, 1 op/s 2026-03-10T07:38:28.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:27 vm00 bash[20701]: cluster 2026-03-10T07:38:26.719384+0000 mgr.y (mgr.24407) 511 : cluster [DBG] pgmap v866: 268 pgs: 268 active+clean; 455 KiB data, 999 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 475 B/s wr, 1 op/s 2026-03-10T07:38:28.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:27 vm00 bash[20701]: audit 2026-03-10T07:38:26.772936+0000 mon.a (mon.0) 3027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:38:28.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:27 vm00 bash[20701]: audit 2026-03-10T07:38:26.772936+0000 mon.a (mon.0) 3027 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:38:28.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:27 vm00 bash[20701]: cluster 2026-03-10T07:38:26.780545+0000 mon.a (mon.0) 3028 : cluster [DBG] osdmap e563: 8 total, 8 up, 8 in 2026-03-10T07:38:28.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:27 vm00 bash[20701]: cluster 2026-03-10T07:38:26.780545+0000 mon.a (mon.0) 3028 : cluster [DBG] osdmap e563: 8 total, 8 up, 8 in 2026-03-10T07:38:28.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:27 vm00 bash[20701]: audit 2026-03-10T07:38:26.786328+0000 mon.b (mon.1) 538 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112"}]: dispatch 2026-03-10T07:38:28.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:27 vm00 bash[20701]: audit 2026-03-10T07:38:26.786328+0000 mon.b (mon.1) 538 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112"}]: dispatch 2026-03-10T07:38:28.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:27 vm00 bash[20701]: audit 2026-03-10T07:38:26.791131+0000 mon.a (mon.0) 3029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112"}]: dispatch 2026-03-10T07:38:28.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:27 vm00 bash[20701]: audit 2026-03-10T07:38:26.791131+0000 mon.a (mon.0) 3029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112"}]: dispatch 2026-03-10T07:38:28.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:27 vm03 bash[23382]: cluster 2026-03-10T07:38:26.719384+0000 mgr.y (mgr.24407) 511 : cluster [DBG] pgmap v866: 268 pgs: 268 active+clean; 455 KiB data, 999 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 475 B/s wr, 1 op/s 2026-03-10T07:38:28.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:27 vm03 bash[23382]: cluster 2026-03-10T07:38:26.719384+0000 mgr.y (mgr.24407) 511 : cluster [DBG] pgmap v866: 268 pgs: 268 active+clean; 455 KiB data, 999 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 475 B/s wr, 1 op/s 2026-03-10T07:38:28.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:27 vm03 bash[23382]: audit 2026-03-10T07:38:26.772936+0000 mon.a (mon.0) 3027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:38:28.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:27 vm03 bash[23382]: audit 2026-03-10T07:38:26.772936+0000 mon.a (mon.0) 3027 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:38:28.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:27 vm03 bash[23382]: cluster 2026-03-10T07:38:26.780545+0000 mon.a (mon.0) 3028 : cluster [DBG] osdmap e563: 8 total, 8 up, 8 in 2026-03-10T07:38:28.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:27 vm03 bash[23382]: cluster 2026-03-10T07:38:26.780545+0000 mon.a (mon.0) 3028 : cluster [DBG] osdmap e563: 8 total, 8 up, 8 in 2026-03-10T07:38:28.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:27 vm03 bash[23382]: audit 2026-03-10T07:38:26.786328+0000 mon.b (mon.1) 538 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112"}]: dispatch 2026-03-10T07:38:28.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:27 vm03 bash[23382]: audit 2026-03-10T07:38:26.786328+0000 mon.b (mon.1) 538 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112"}]: dispatch 2026-03-10T07:38:28.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:27 vm03 bash[23382]: audit 2026-03-10T07:38:26.791131+0000 mon.a (mon.0) 3029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112"}]: dispatch 2026-03-10T07:38:28.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:27 vm03 bash[23382]: audit 2026-03-10T07:38:26.791131+0000 mon.a (mon.0) 3029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112"}]: dispatch 2026-03-10T07:38:29.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:28 vm00 bash[28005]: audit 2026-03-10T07:38:27.802982+0000 mon.a (mon.0) 3030 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112"}]': finished 2026-03-10T07:38:29.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:28 vm00 bash[28005]: audit 2026-03-10T07:38:27.802982+0000 mon.a (mon.0) 3030 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112"}]': finished 2026-03-10T07:38:29.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:28 vm00 bash[28005]: cluster 2026-03-10T07:38:27.812976+0000 mon.a (mon.0) 3031 : cluster [DBG] osdmap e564: 8 total, 8 up, 8 in 2026-03-10T07:38:29.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:28 vm00 bash[28005]: cluster 2026-03-10T07:38:27.812976+0000 mon.a (mon.0) 3031 : cluster [DBG] osdmap e564: 8 total, 8 up, 8 in 2026-03-10T07:38:29.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:28 vm00 bash[20701]: audit 2026-03-10T07:38:27.802982+0000 mon.a (mon.0) 3030 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112"}]': finished 2026-03-10T07:38:29.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:28 vm00 bash[20701]: audit 2026-03-10T07:38:27.802982+0000 mon.a (mon.0) 3030 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112"}]': finished 2026-03-10T07:38:29.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:28 vm00 bash[20701]: cluster 2026-03-10T07:38:27.812976+0000 mon.a (mon.0) 3031 : cluster [DBG] osdmap e564: 8 total, 8 up, 8 in 2026-03-10T07:38:29.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:28 vm00 bash[20701]: cluster 2026-03-10T07:38:27.812976+0000 mon.a (mon.0) 3031 : cluster [DBG] osdmap e564: 8 total, 8 up, 8 in 2026-03-10T07:38:29.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:28 vm03 bash[23382]: audit 2026-03-10T07:38:27.802982+0000 mon.a (mon.0) 3030 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112"}]': finished 2026-03-10T07:38:29.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:28 vm03 bash[23382]: audit 2026-03-10T07:38:27.802982+0000 mon.a (mon.0) 3030 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-112"}]': finished 2026-03-10T07:38:29.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:28 vm03 bash[23382]: cluster 2026-03-10T07:38:27.812976+0000 mon.a (mon.0) 3031 : cluster [DBG] osdmap e564: 8 total, 8 up, 8 in 2026-03-10T07:38:29.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:28 vm03 bash[23382]: cluster 2026-03-10T07:38:27.812976+0000 mon.a (mon.0) 3031 : cluster [DBG] osdmap e564: 8 total, 8 up, 8 in 2026-03-10T07:38:30.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:29 vm00 bash[28005]: cluster 2026-03-10T07:38:28.719874+0000 mgr.y (mgr.24407) 512 : cluster [DBG] pgmap v869: 268 pgs: 268 active+clean; 455 KiB data, 999 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 476 B/s wr, 1 op/s 2026-03-10T07:38:30.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:29 vm00 bash[28005]: cluster 2026-03-10T07:38:28.719874+0000 mgr.y (mgr.24407) 512 : cluster [DBG] pgmap v869: 268 pgs: 268 active+clean; 455 KiB data, 999 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 476 B/s wr, 1 op/s 2026-03-10T07:38:30.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:29 vm00 bash[28005]: cluster 2026-03-10T07:38:28.848001+0000 mon.a (mon.0) 3032 : cluster [DBG] osdmap e565: 8 total, 8 up, 8 in 2026-03-10T07:38:30.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:29 vm00 bash[28005]: cluster 2026-03-10T07:38:28.848001+0000 mon.a (mon.0) 3032 : cluster [DBG] osdmap e565: 8 total, 8 up, 8 in 2026-03-10T07:38:30.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:29 vm00 bash[20701]: cluster 2026-03-10T07:38:28.719874+0000 mgr.y (mgr.24407) 512 : cluster [DBG] pgmap v869: 268 pgs: 268 active+clean; 455 KiB data, 999 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 476 B/s wr, 1 op/s 2026-03-10T07:38:30.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:29 vm00 bash[20701]: cluster 2026-03-10T07:38:28.719874+0000 mgr.y (mgr.24407) 512 : cluster [DBG] pgmap v869: 268 pgs: 268 active+clean; 455 KiB data, 999 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 476 B/s wr, 1 op/s 2026-03-10T07:38:30.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:29 vm00 bash[20701]: cluster 2026-03-10T07:38:28.848001+0000 mon.a (mon.0) 3032 : cluster [DBG] osdmap e565: 8 total, 8 up, 8 in 2026-03-10T07:38:30.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:29 
2026-03-10T07:38:30.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:29 vm03 bash[23382]: cluster 2026-03-10T07:38:28.719874+0000 mgr.y (mgr.24407) 512 : cluster [DBG] pgmap v869: 268 pgs: 268 active+clean; 455 KiB data, 999 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 476 B/s wr, 1 op/s
2026-03-10T07:38:30.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:29 vm03 bash[23382]: cluster 2026-03-10T07:38:28.848001+0000 mon.a (mon.0) 3032 : cluster [DBG] osdmap e565: 8 total, 8 up, 8 in
2026-03-10T07:38:31.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:30 vm00 bash[28005]: cluster 2026-03-10T07:38:29.846907+0000 mon.a (mon.0) 3033 : cluster [DBG] osdmap e566: 8 total, 8 up, 8 in
2026-03-10T07:38:31.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:30 vm00 bash[28005]: audit 2026-03-10T07:38:29.850128+0000 mon.b (mon.1) 539 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-114","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:38:31.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:30 vm00 bash[28005]: audit 2026-03-10T07:38:29.852520+0000 mon.a (mon.0) 3034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-114","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:38:31.130 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:38:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:38:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:38:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:30 vm00 bash[20701]: cluster 2026-03-10T07:38:29.846907+0000 mon.a (mon.0) 3033 : cluster [DBG] osdmap e566: 8 total, 8 up, 8 in
2026-03-10T07:38:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:30 vm00 bash[20701]: audit 2026-03-10T07:38:29.850128+0000 mon.b (mon.1) 539 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-114","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:38:31.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:30 vm00 bash[20701]: audit 2026-03-10T07:38:29.852520+0000 mon.a (mon.0) 3034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-114","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:38:31.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:30 vm03 bash[23382]: cluster 2026-03-10T07:38:29.846907+0000 mon.a (mon.0) 3033 : cluster [DBG] osdmap e566: 8 total, 8 up, 8 in
2026-03-10T07:38:31.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:30 vm03 bash[23382]: audit 2026-03-10T07:38:29.850128+0000 mon.b (mon.1) 539 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-114","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:38:31.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:30 vm03 bash[23382]: audit 2026-03-10T07:38:29.852520+0000 mon.a (mon.0) 3034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-114","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:38:32.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:31 vm00 bash[28005]: cluster 2026-03-10T07:38:30.720335+0000 mgr.y (mgr.24407) 513 : cluster [DBG] pgmap v872: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 999 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s
2026-03-10T07:38:32.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:31 vm00 bash[28005]: audit 2026-03-10T07:38:30.850047+0000 mon.a (mon.0) 3035 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-114","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:38:32.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:31 vm00 bash[28005]: cluster 2026-03-10T07:38:30.859409+0000 mon.a (mon.0) 3036 : cluster [DBG] osdmap e567: 8 total, 8 up, 8 in
2026-03-10T07:38:32.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:31 vm00 bash[28005]: audit 2026-03-10T07:38:30.888519+0000 mon.b (mon.1) 540 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:38:32.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:31 vm00 bash[28005]: audit 2026-03-10T07:38:30.889281+0000 mon.a (mon.0) 3037 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:38:32.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:31 vm00 bash[28005]: cluster 2026-03-10T07:38:31.545264+0000 mon.a (mon.0) 3038 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:38:32.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:31 vm00 bash[20701]: cluster 2026-03-10T07:38:30.720335+0000 mgr.y (mgr.24407) 513 : cluster [DBG] pgmap v872: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 999 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s
2026-03-10T07:38:32.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:31 vm00 bash[20701]: audit 2026-03-10T07:38:30.850047+0000 mon.a (mon.0) 3035 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-114","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:38:32.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:31 vm00 bash[20701]: cluster 2026-03-10T07:38:30.859409+0000 mon.a (mon.0) 3036 : cluster [DBG] osdmap e567: 8 total, 8 up, 8 in
2026-03-10T07:38:32.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:31 vm00 bash[20701]: audit 2026-03-10T07:38:30.888519+0000 mon.b (mon.1) 540 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:38:32.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:31 vm00 bash[20701]: audit 2026-03-10T07:38:30.889281+0000 mon.a (mon.0) 3037 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:38:32.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:31 vm00 bash[20701]: cluster 2026-03-10T07:38:31.545264+0000 mon.a (mon.0) 3038 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:38:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:31 vm03 bash[23382]: cluster 2026-03-10T07:38:30.720335+0000 mgr.y (mgr.24407) 513 : cluster [DBG] pgmap v872: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 999 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s
2026-03-10T07:38:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:31 vm03 bash[23382]: audit 2026-03-10T07:38:30.850047+0000 mon.a (mon.0) 3035 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-114","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:38:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:31 vm03 bash[23382]: cluster 2026-03-10T07:38:30.859409+0000 mon.a (mon.0) 3036 : cluster [DBG] osdmap e567: 8 total, 8 up, 8 in
2026-03-10T07:38:32.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:31 vm03 bash[23382]: audit 2026-03-10T07:38:30.888519+0000 mon.b (mon.1) 540 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:38:32.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:31 vm03 bash[23382]: audit 2026-03-10T07:38:30.889281+0000 mon.a (mon.0) 3037 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:38:32.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:31 vm03 bash[23382]: cluster 2026-03-10T07:38:31.545264+0000 mon.a (mon.0) 3038 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:38:33.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:32 vm00 bash[28005]: audit 2026-03-10T07:38:31.852971+0000 mon.a (mon.0) 3039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:38:33.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:32 vm00 bash[28005]: cluster 2026-03-10T07:38:31.855474+0000 mon.a (mon.0) 3040 : cluster [DBG] osdmap e568: 8 total, 8 up, 8 in
2026-03-10T07:38:33.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:32 vm00 bash[28005]: audit 2026-03-10T07:38:31.855592+0000 mon.b (mon.1) 541 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-114"}]: dispatch
2026-03-10T07:38:33.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:32 vm00 bash[28005]: audit 2026-03-10T07:38:31.859350+0000 mon.a (mon.0) 3041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-114"}]: dispatch
2026-03-10T07:38:33.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:32 vm00 bash[28005]: audit 2026-03-10T07:38:32.855904+0000 mon.a (mon.0) 3042 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-114"}]': finished
2026-03-10T07:38:33.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:32 vm00 bash[28005]: audit 2026-03-10T07:38:32.859334+0000 mon.b (mon.1) 542 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-114", "mode": "writeback"}]: dispatch
2026-03-10T07:38:33.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:32 vm00 bash[28005]: cluster 2026-03-10T07:38:32.866606+0000 mon.a (mon.0) 3043 : cluster [DBG] osdmap e569: 8 total, 8 up, 8 in
2026-03-10T07:38:33.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:32 vm00 bash[20701]: audit 2026-03-10T07:38:31.852971+0000 mon.a (mon.0) 3039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:38:33.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:32 vm00 bash[20701]: cluster 2026-03-10T07:38:31.855474+0000 mon.a (mon.0) 3040 : cluster [DBG] osdmap e568: 8 total, 8 up, 8 in
2026-03-10T07:38:33.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:32 vm00 bash[20701]: audit 2026-03-10T07:38:31.855592+0000 mon.b (mon.1) 541 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-114"}]: dispatch
2026-03-10T07:38:33.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:32 vm00 bash[20701]: audit 2026-03-10T07:38:31.859350+0000 mon.a (mon.0) 3041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-114"}]: dispatch
2026-03-10T07:38:33.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:32 vm00 bash[20701]: audit 2026-03-10T07:38:32.855904+0000 mon.a (mon.0) 3042 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-114"}]': finished
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-114"}]': finished 2026-03-10T07:38:33.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:32 vm00 bash[20701]: audit 2026-03-10T07:38:32.855904+0000 mon.a (mon.0) 3042 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-114"}]': finished 2026-03-10T07:38:33.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:32 vm00 bash[20701]: audit 2026-03-10T07:38:32.859334+0000 mon.b (mon.1) 542 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-114", "mode": "writeback"}]: dispatch 2026-03-10T07:38:33.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:32 vm00 bash[20701]: audit 2026-03-10T07:38:32.859334+0000 mon.b (mon.1) 542 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-114", "mode": "writeback"}]: dispatch 2026-03-10T07:38:33.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:32 vm00 bash[20701]: cluster 2026-03-10T07:38:32.866606+0000 mon.a (mon.0) 3043 : cluster [DBG] osdmap e569: 8 total, 8 up, 8 in 2026-03-10T07:38:33.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:32 vm00 bash[20701]: cluster 2026-03-10T07:38:32.866606+0000 mon.a (mon.0) 3043 : cluster [DBG] osdmap e569: 8 total, 8 up, 8 in 2026-03-10T07:38:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:32 vm03 bash[23382]: audit 2026-03-10T07:38:31.852971+0000 mon.a (mon.0) 3039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:38:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:32 vm03 bash[23382]: audit 2026-03-10T07:38:31.852971+0000 mon.a (mon.0) 3039 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:38:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:32 vm03 bash[23382]: cluster 2026-03-10T07:38:31.855474+0000 mon.a (mon.0) 3040 : cluster [DBG] osdmap e568: 8 total, 8 up, 8 in 2026-03-10T07:38:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:32 vm03 bash[23382]: cluster 2026-03-10T07:38:31.855474+0000 mon.a (mon.0) 3040 : cluster [DBG] osdmap e568: 8 total, 8 up, 8 in 2026-03-10T07:38:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:32 vm03 bash[23382]: audit 2026-03-10T07:38:31.855592+0000 mon.b (mon.1) 541 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-114"}]: dispatch 2026-03-10T07:38:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:32 vm03 bash[23382]: audit 2026-03-10T07:38:31.855592+0000 mon.b (mon.1) 541 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-114"}]: dispatch 2026-03-10T07:38:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:32 vm03 bash[23382]: audit 2026-03-10T07:38:31.859350+0000 mon.a (mon.0) 3041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-114"}]: dispatch 2026-03-10T07:38:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:32 vm03 bash[23382]: audit 2026-03-10T07:38:31.859350+0000 mon.a (mon.0) 3041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-114"}]: dispatch 2026-03-10T07:38:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:32 vm03 bash[23382]: audit 2026-03-10T07:38:32.855904+0000 mon.a (mon.0) 3042 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-114"}]': finished 2026-03-10T07:38:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:32 vm03 bash[23382]: audit 2026-03-10T07:38:32.855904+0000 mon.a (mon.0) 3042 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-114"}]': finished 2026-03-10T07:38:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:32 vm03 bash[23382]: audit 2026-03-10T07:38:32.859334+0000 mon.b (mon.1) 542 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-114", "mode": "writeback"}]: dispatch 2026-03-10T07:38:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:32 vm03 bash[23382]: audit 2026-03-10T07:38:32.859334+0000 mon.b (mon.1) 542 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-114", "mode": "writeback"}]: dispatch 2026-03-10T07:38:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:32 vm03 bash[23382]: cluster 2026-03-10T07:38:32.866606+0000 mon.a (mon.0) 3043 : cluster [DBG] osdmap e569: 8 total, 8 up, 8 in 2026-03-10T07:38:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:32 vm03 bash[23382]: cluster 2026-03-10T07:38:32.866606+0000 mon.a (mon.0) 3043 : cluster [DBG] osdmap e569: 8 total, 8 up, 8 in 2026-03-10T07:38:33.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:38:33 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:38:34.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:33 vm00 bash[28005]: cluster 2026-03-10T07:38:32.720703+0000 mgr.y (mgr.24407) 514 : cluster [DBG] pgmap v875: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 999 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T07:38:34.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:33 vm00 bash[28005]: cluster 2026-03-10T07:38:32.720703+0000 mgr.y (mgr.24407) 514 : cluster [DBG] pgmap v875: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 999 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T07:38:34.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:33 vm00 bash[28005]: audit 2026-03-10T07:38:32.867061+0000 mon.a (mon.0) 3044 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-114", "mode": "writeback"}]: dispatch 2026-03-10T07:38:34.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:33 vm00 bash[28005]: audit 2026-03-10T07:38:32.867061+0000 mon.a (mon.0) 3044 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-114", "mode": "writeback"}]: dispatch 2026-03-10T07:38:34.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:33 vm00 bash[28005]: cluster 2026-03-10T07:38:33.856028+0000 mon.a (mon.0) 3045 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:38:34.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:33 vm00 bash[28005]: cluster 2026-03-10T07:38:33.856028+0000 mon.a (mon.0) 3045 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:38:34.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:33 vm00 bash[28005]: audit 2026-03-10T07:38:33.858562+0000 mon.a (mon.0) 3046 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-114", "mode": "writeback"}]': finished 2026-03-10T07:38:34.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:33 vm00 bash[28005]: audit 2026-03-10T07:38:33.858562+0000 mon.a (mon.0) 3046 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-114", "mode": "writeback"}]': finished 2026-03-10T07:38:34.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:33 vm00 bash[28005]: cluster 2026-03-10T07:38:33.862252+0000 mon.a (mon.0) 3047 : cluster [DBG] osdmap e570: 8 total, 8 up, 8 in 2026-03-10T07:38:34.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:33 vm00 bash[28005]: cluster 2026-03-10T07:38:33.862252+0000 mon.a (mon.0) 3047 : cluster [DBG] osdmap e570: 8 total, 8 up, 8 in 2026-03-10T07:38:34.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:33 vm00 bash[20701]: cluster 2026-03-10T07:38:32.720703+0000 mgr.y (mgr.24407) 514 : cluster [DBG] pgmap v875: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 999 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T07:38:34.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:33 vm00 bash[20701]: cluster 2026-03-10T07:38:32.720703+0000 mgr.y (mgr.24407) 514 : cluster [DBG] pgmap v875: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 999 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T07:38:34.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:33 vm00 bash[20701]: audit 2026-03-10T07:38:32.867061+0000 mon.a (mon.0) 3044 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-114", "mode": "writeback"}]: dispatch 2026-03-10T07:38:34.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:33 vm00 bash[20701]: audit 2026-03-10T07:38:32.867061+0000 mon.a (mon.0) 3044 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-114", "mode": "writeback"}]: dispatch 2026-03-10T07:38:34.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:33 vm00 bash[20701]: cluster 2026-03-10T07:38:33.856028+0000 mon.a (mon.0) 3045 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:38:34.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:33 vm00 bash[20701]: cluster 2026-03-10T07:38:33.856028+0000 mon.a (mon.0) 3045 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:38:34.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:33 vm00 bash[20701]: audit 2026-03-10T07:38:33.858562+0000 mon.a (mon.0) 3046 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-114", "mode": "writeback"}]': finished 2026-03-10T07:38:34.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:33 vm00 bash[20701]: audit 2026-03-10T07:38:33.858562+0000 mon.a (mon.0) 3046 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-114", "mode": "writeback"}]': finished 2026-03-10T07:38:34.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:33 vm00 bash[20701]: cluster 2026-03-10T07:38:33.862252+0000 mon.a (mon.0) 3047 : cluster [DBG] osdmap e570: 8 total, 8 up, 8 in 2026-03-10T07:38:34.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:33 vm00 bash[20701]: cluster 2026-03-10T07:38:33.862252+0000 mon.a (mon.0) 3047 : cluster [DBG] osdmap e570: 8 total, 8 up, 8 in 2026-03-10T07:38:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:33 vm03 bash[23382]: cluster 2026-03-10T07:38:32.720703+0000 mgr.y (mgr.24407) 514 : cluster [DBG] pgmap v875: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 999 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T07:38:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:33 vm03 bash[23382]: cluster 2026-03-10T07:38:32.720703+0000 mgr.y (mgr.24407) 514 : cluster [DBG] pgmap v875: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 999 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T07:38:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:33 vm03 bash[23382]: audit 2026-03-10T07:38:32.867061+0000 mon.a (mon.0) 3044 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-114", "mode": "writeback"}]: dispatch 2026-03-10T07:38:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:33 vm03 bash[23382]: audit 2026-03-10T07:38:32.867061+0000 mon.a (mon.0) 3044 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-114", "mode": "writeback"}]: dispatch 2026-03-10T07:38:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:33 vm03 bash[23382]: cluster 2026-03-10T07:38:33.856028+0000 mon.a (mon.0) 3045 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:38:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:33 vm03 bash[23382]: cluster 2026-03-10T07:38:33.856028+0000 mon.a (mon.0) 3045 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:38:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:33 vm03 bash[23382]: audit 2026-03-10T07:38:33.858562+0000 mon.a (mon.0) 3046 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-114", "mode": "writeback"}]': finished 2026-03-10T07:38:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:33 vm03 bash[23382]: audit 2026-03-10T07:38:33.858562+0000 mon.a (mon.0) 3046 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-114", "mode": "writeback"}]': finished 2026-03-10T07:38:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:33 vm03 bash[23382]: cluster 2026-03-10T07:38:33.862252+0000 mon.a (mon.0) 3047 : cluster [DBG] osdmap e570: 8 total, 8 up, 8 in 2026-03-10T07:38:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:33 vm03 bash[23382]: cluster 2026-03-10T07:38:33.862252+0000 mon.a (mon.0) 3047 : cluster [DBG] osdmap e570: 8 total, 8 up, 8 in 2026-03-10T07:38:35.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:34 vm00 bash[28005]: audit 2026-03-10T07:38:33.425669+0000 mgr.y (mgr.24407) 515 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:38:35.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:34 vm00 bash[28005]: audit 2026-03-10T07:38:33.425669+0000 mgr.y (mgr.24407) 515 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:38:35.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:34 vm00 bash[28005]: audit 2026-03-10T07:38:33.908686+0000 mon.b (mon.1) 543 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:38:35.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:34 vm00 bash[28005]: audit 2026-03-10T07:38:33.908686+0000 mon.b (mon.1) 543 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:38:35.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:34 vm00 bash[28005]: audit 2026-03-10T07:38:33.909510+0000 mon.a (mon.0) 3048 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:38:35.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:34 vm00 bash[28005]: audit 2026-03-10T07:38:33.909510+0000 mon.a (mon.0) 3048 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:38:35.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:34 vm00 bash[20701]: audit 2026-03-10T07:38:33.425669+0000 mgr.y (mgr.24407) 515 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:38:35.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:34 vm00 bash[20701]: audit 2026-03-10T07:38:33.425669+0000 mgr.y (mgr.24407) 515 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:38:35.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:34 vm00 bash[20701]: audit 2026-03-10T07:38:33.908686+0000 mon.b (mon.1) 543 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:38:35.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:34 vm00 bash[20701]: audit 2026-03-10T07:38:33.908686+0000 mon.b (mon.1) 543 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:38:35.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:34 vm00 bash[20701]: audit 2026-03-10T07:38:33.909510+0000 mon.a (mon.0) 3048 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:38:35.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:34 vm00 bash[20701]: audit 2026-03-10T07:38:33.909510+0000 mon.a (mon.0) 3048 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:38:35.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:34 vm03 bash[23382]: audit 2026-03-10T07:38:33.425669+0000 mgr.y (mgr.24407) 515 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:38:35.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:34 vm03 bash[23382]: audit 2026-03-10T07:38:33.425669+0000 mgr.y (mgr.24407) 515 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:38:35.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:34 vm03 bash[23382]: audit 2026-03-10T07:38:33.908686+0000 mon.b (mon.1) 543 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:38:35.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:34 vm03 bash[23382]: audit 2026-03-10T07:38:33.908686+0000 mon.b (mon.1) 543 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:38:35.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:34 vm03 bash[23382]: audit 2026-03-10T07:38:33.909510+0000 mon.a (mon.0) 3048 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:38:35.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:34 vm03 bash[23382]: audit 2026-03-10T07:38:33.909510+0000 mon.a (mon.0) 3048 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:38:36.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:35 vm03 bash[23382]: cluster 2026-03-10T07:38:34.721453+0000 mgr.y (mgr.24407) 516 : cluster [DBG] pgmap v878: 268 pgs: 14 unknown, 254 active+clean; 455 KiB data, 1000 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:38:36.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:35 vm03 bash[23382]: cluster 2026-03-10T07:38:34.721453+0000 mgr.y (mgr.24407) 516 : cluster [DBG] pgmap v878: 268 pgs: 14 unknown, 254 active+clean; 455 KiB data, 1000 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:38:36.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:35 vm03 bash[23382]: audit 2026-03-10T07:38:34.888261+0000 mon.a (mon.0) 3049 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:38:36.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:35 vm03 bash[23382]: audit 2026-03-10T07:38:34.888261+0000 mon.a (mon.0) 3049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:38:36.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:35 vm03 bash[23382]: audit 2026-03-10T07:38:34.890540+0000 mon.b (mon.1) 544 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114"}]: dispatch 2026-03-10T07:38:36.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:35 vm03 bash[23382]: audit 2026-03-10T07:38:34.890540+0000 mon.b (mon.1) 544 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114"}]: dispatch 2026-03-10T07:38:36.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:35 vm03 bash[23382]: cluster 2026-03-10T07:38:34.896298+0000 mon.a (mon.0) 3050 : cluster [DBG] osdmap e571: 8 total, 8 up, 8 in 2026-03-10T07:38:36.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:35 vm03 bash[23382]: cluster 2026-03-10T07:38:34.896298+0000 mon.a (mon.0) 3050 : cluster [DBG] osdmap e571: 8 total, 8 up, 8 in 2026-03-10T07:38:36.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:35 vm03 bash[23382]: audit 2026-03-10T07:38:34.900297+0000 mon.a (mon.0) 3051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114"}]: dispatch 2026-03-10T07:38:36.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:35 vm03 bash[23382]: audit 2026-03-10T07:38:34.900297+0000 mon.a (mon.0) 3051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114"}]: dispatch 2026-03-10T07:38:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:35 vm00 bash[28005]: cluster 2026-03-10T07:38:34.721453+0000 mgr.y (mgr.24407) 516 : cluster [DBG] pgmap v878: 268 pgs: 14 unknown, 254 active+clean; 455 KiB data, 1000 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:38:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:35 vm00 bash[28005]: cluster 2026-03-10T07:38:34.721453+0000 mgr.y (mgr.24407) 516 : cluster [DBG] pgmap v878: 268 pgs: 14 unknown, 254 active+clean; 455 KiB data, 1000 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:38:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:35 vm00 bash[28005]: audit 2026-03-10T07:38:34.888261+0000 mon.a (mon.0) 3049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:38:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:35 vm00 bash[28005]: audit 2026-03-10T07:38:34.888261+0000 mon.a (mon.0) 3049 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:38:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:35 vm00 bash[28005]: audit 2026-03-10T07:38:34.890540+0000 mon.b (mon.1) 544 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114"}]: dispatch 2026-03-10T07:38:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:35 vm00 bash[28005]: audit 2026-03-10T07:38:34.890540+0000 mon.b (mon.1) 544 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114"}]: dispatch 2026-03-10T07:38:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:35 vm00 bash[28005]: cluster 2026-03-10T07:38:34.896298+0000 mon.a (mon.0) 3050 : cluster [DBG] osdmap e571: 8 total, 8 up, 8 in 2026-03-10T07:38:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:35 vm00 bash[28005]: cluster 2026-03-10T07:38:34.896298+0000 mon.a (mon.0) 3050 : cluster [DBG] osdmap e571: 8 total, 8 up, 8 in 2026-03-10T07:38:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:35 vm00 bash[28005]: audit 2026-03-10T07:38:34.900297+0000 mon.a (mon.0) 3051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114"}]: dispatch 2026-03-10T07:38:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:35 vm00 bash[28005]: audit 2026-03-10T07:38:34.900297+0000 mon.a (mon.0) 3051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114"}]: dispatch 2026-03-10T07:38:36.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:35 vm00 bash[20701]: cluster 2026-03-10T07:38:34.721453+0000 mgr.y (mgr.24407) 516 : cluster [DBG] pgmap v878: 268 pgs: 14 unknown, 254 active+clean; 455 KiB data, 1000 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:38:36.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:35 vm00 bash[20701]: cluster 2026-03-10T07:38:34.721453+0000 mgr.y (mgr.24407) 516 : cluster [DBG] pgmap v878: 268 pgs: 14 unknown, 254 active+clean; 455 KiB data, 1000 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:38:36.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:35 vm00 bash[20701]: audit 2026-03-10T07:38:34.888261+0000 mon.a (mon.0) 3049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:38:36.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:35 vm00 bash[20701]: audit 2026-03-10T07:38:34.888261+0000 mon.a (mon.0) 3049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:38:36.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:35 vm00 bash[20701]: audit 2026-03-10T07:38:34.890540+0000 mon.b (mon.1) 544 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114"}]: dispatch 2026-03-10T07:38:36.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:35 vm00 bash[20701]: audit 2026-03-10T07:38:34.890540+0000 mon.b (mon.1) 544 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114"}]: dispatch 2026-03-10T07:38:36.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:35 vm00 bash[20701]: cluster 2026-03-10T07:38:34.896298+0000 mon.a (mon.0) 3050 : cluster [DBG] osdmap e571: 8 total, 8 up, 8 in 2026-03-10T07:38:36.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:35 vm00 bash[20701]: cluster 2026-03-10T07:38:34.896298+0000 mon.a (mon.0) 3050 : cluster [DBG] osdmap e571: 8 total, 8 up, 8 in 2026-03-10T07:38:36.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:35 vm00 bash[20701]: audit 2026-03-10T07:38:34.900297+0000 mon.a (mon.0) 3051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114"}]: dispatch 2026-03-10T07:38:36.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:35 vm00 bash[20701]: audit 2026-03-10T07:38:34.900297+0000 mon.a (mon.0) 3051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114"}]: dispatch 2026-03-10T07:38:37.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:36 vm03 bash[23382]: cluster 2026-03-10T07:38:35.888458+0000 mon.a (mon.0) 3052 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:38:37.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:36 vm03 bash[23382]: cluster 2026-03-10T07:38:35.888458+0000 mon.a (mon.0) 3052 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:38:37.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:36 vm03 bash[23382]: audit 2026-03-10T07:38:35.891447+0000 mon.a (mon.0) 3053 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114"}]': finished 2026-03-10T07:38:37.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:36 vm03 bash[23382]: audit 2026-03-10T07:38:35.891447+0000 mon.a (mon.0) 3053 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114"}]': finished 2026-03-10T07:38:37.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:36 vm03 bash[23382]: cluster 2026-03-10T07:38:35.901687+0000 mon.a (mon.0) 3054 : cluster [DBG] osdmap e572: 8 total, 8 up, 8 in 2026-03-10T07:38:37.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:36 vm03 bash[23382]: cluster 2026-03-10T07:38:35.901687+0000 mon.a (mon.0) 3054 : cluster [DBG] osdmap e572: 8 total, 8 up, 8 in 2026-03-10T07:38:37.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:36 vm03 bash[23382]: cluster 2026-03-10T07:38:36.546005+0000 mon.a (mon.0) 3055 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:38:37.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:36 vm03 bash[23382]: cluster 2026-03-10T07:38:36.546005+0000 mon.a (mon.0) 3055 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:38:37.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:36 vm00 bash[28005]: cluster 2026-03-10T07:38:35.888458+0000 mon.a (mon.0) 3052 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:38:37.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:36 vm00 bash[28005]: cluster 2026-03-10T07:38:35.888458+0000 mon.a (mon.0) 3052 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:38:37.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:36 vm00 bash[28005]: audit 2026-03-10T07:38:35.891447+0000 mon.a (mon.0) 3053 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114"}]': finished 2026-03-10T07:38:37.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:36 vm00 bash[28005]: audit 2026-03-10T07:38:35.891447+0000 mon.a (mon.0) 3053 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114"}]': finished 2026-03-10T07:38:37.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:36 vm00 bash[28005]: cluster 2026-03-10T07:38:35.901687+0000 mon.a (mon.0) 3054 : cluster [DBG] osdmap e572: 8 total, 8 up, 8 in 2026-03-10T07:38:37.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:36 vm00 bash[28005]: cluster 2026-03-10T07:38:35.901687+0000 mon.a (mon.0) 3054 : cluster [DBG] osdmap e572: 8 total, 8 up, 8 in 2026-03-10T07:38:37.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:36 vm00 bash[28005]: cluster 2026-03-10T07:38:36.546005+0000 mon.a (mon.0) 3055 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:38:37.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:36 vm00 bash[28005]: cluster 2026-03-10T07:38:36.546005+0000 mon.a (mon.0) 3055 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:38:37.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:36 vm00 bash[20701]: cluster 2026-03-10T07:38:35.888458+0000 mon.a (mon.0) 3052 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:38:37.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:36 vm00 bash[20701]: cluster 2026-03-10T07:38:35.888458+0000 mon.a (mon.0) 3052 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:38:37.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:36 vm00 bash[20701]: audit 2026-03-10T07:38:35.891447+0000 mon.a (mon.0) 3053 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114"}]': finished 2026-03-10T07:38:37.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:36 vm00 bash[20701]: audit 2026-03-10T07:38:35.891447+0000 mon.a (mon.0) 3053 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-114"}]': finished 2026-03-10T07:38:37.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:36 vm00 bash[20701]: cluster 2026-03-10T07:38:35.901687+0000 mon.a (mon.0) 3054 : cluster [DBG] osdmap e572: 8 total, 8 up, 8 in 2026-03-10T07:38:37.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:36 vm00 bash[20701]: cluster 2026-03-10T07:38:35.901687+0000 mon.a (mon.0) 3054 : cluster [DBG] osdmap e572: 8 total, 8 up, 8 in 2026-03-10T07:38:37.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:36 vm00 bash[20701]: cluster 2026-03-10T07:38:36.546005+0000 mon.a (mon.0) 3055 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:38:37.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:36 vm00 bash[20701]: cluster 2026-03-10T07:38:36.546005+0000 mon.a (mon.0) 3055 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:38:38.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:37 vm03 bash[23382]: cluster 2026-03-10T07:38:36.721807+0000 mgr.y (mgr.24407) 517 : cluster [DBG] pgmap v881: 268 pgs: 268 active+clean; 455 KiB data, 1000 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:38:38.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:37 vm03 bash[23382]: cluster 2026-03-10T07:38:36.721807+0000 mgr.y (mgr.24407) 517 : cluster [DBG] pgmap v881: 268 pgs: 268 active+clean; 455 KiB data, 1000 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:38:38.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:37 vm03 bash[23382]: cluster 2026-03-10T07:38:36.929069+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e573: 8 total, 8 up, 8 in 2026-03-10T07:38:38.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:37 vm03 bash[23382]: cluster 2026-03-10T07:38:36.929069+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e573: 8 total, 8 up, 8 in 2026-03-10T07:38:38.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:37 vm00 bash[28005]: cluster 2026-03-10T07:38:36.721807+0000 mgr.y (mgr.24407) 517 : cluster [DBG] pgmap v881: 268 pgs: 268 active+clean; 455 KiB data, 1000 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:38:38.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:37 vm00 bash[28005]: cluster 2026-03-10T07:38:36.721807+0000 mgr.y (mgr.24407) 517 : cluster [DBG] pgmap v881: 268 pgs: 268 active+clean; 455 KiB data, 1000 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:38:38.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:37 vm00 bash[28005]: cluster 2026-03-10T07:38:36.929069+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e573: 8 total, 8 up, 8 in 2026-03-10T07:38:38.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:37 vm00 bash[28005]: cluster 2026-03-10T07:38:36.929069+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e573: 8 total, 8 up, 8 in 2026-03-10T07:38:38.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:37 vm00 bash[20701]: cluster 2026-03-10T07:38:36.721807+0000 mgr.y (mgr.24407) 517 : cluster [DBG] pgmap v881: 268 pgs: 268 active+clean; 455 KiB data, 1000 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:38:38.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:37 vm00 bash[20701]: cluster 2026-03-10T07:38:36.721807+0000 
mgr.y (mgr.24407) 517 : cluster [DBG] pgmap v881: 268 pgs: 268 active+clean; 455 KiB data, 1000 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:38:38.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:37 vm00 bash[20701]: cluster 2026-03-10T07:38:36.929069+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e573: 8 total, 8 up, 8 in 2026-03-10T07:38:38.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:37 vm00 bash[20701]: cluster 2026-03-10T07:38:36.929069+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e573: 8 total, 8 up, 8 in 2026-03-10T07:38:39.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:38 vm03 bash[23382]: cluster 2026-03-10T07:38:37.947901+0000 mon.a (mon.0) 3057 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in 2026-03-10T07:38:39.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:38 vm03 bash[23382]: cluster 2026-03-10T07:38:37.947901+0000 mon.a (mon.0) 3057 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in 2026-03-10T07:38:39.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:38 vm03 bash[23382]: audit 2026-03-10T07:38:37.961121+0000 mon.b (mon.1) 545 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:38:39.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:38 vm03 bash[23382]: audit 2026-03-10T07:38:37.961121+0000 mon.b (mon.1) 545 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:38:39.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:38 vm03 bash[23382]: audit 2026-03-10T07:38:37.961917+0000 mon.a (mon.0) 3058 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:38:39.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:38 vm03 bash[23382]: audit 2026-03-10T07:38:37.961917+0000 mon.a (mon.0) 3058 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:38:39.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:38 vm00 bash[28005]: cluster 2026-03-10T07:38:37.947901+0000 mon.a (mon.0) 3057 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in 2026-03-10T07:38:39.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:38 vm00 bash[28005]: cluster 2026-03-10T07:38:37.947901+0000 mon.a (mon.0) 3057 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in 2026-03-10T07:38:39.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:38 vm00 bash[28005]: audit 2026-03-10T07:38:37.961121+0000 mon.b (mon.1) 545 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:38:39.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:38 vm00 bash[28005]: audit 2026-03-10T07:38:37.961121+0000 mon.b (mon.1) 545 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:38:39.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:38 vm00 bash[28005]: audit 2026-03-10T07:38:37.961917+0000 mon.a (mon.0) 3058 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:38:39.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:38 vm00 bash[28005]: audit 2026-03-10T07:38:37.961917+0000 mon.a (mon.0) 3058 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:38:39.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:38 vm00 bash[20701]: cluster 2026-03-10T07:38:37.947901+0000 mon.a (mon.0) 3057 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in 2026-03-10T07:38:39.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:38 vm00 bash[20701]: cluster 2026-03-10T07:38:37.947901+0000 mon.a (mon.0) 3057 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in 2026-03-10T07:38:39.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:38 vm00 bash[20701]: audit 2026-03-10T07:38:37.961121+0000 mon.b (mon.1) 545 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:38:39.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:38 vm00 bash[20701]: audit 2026-03-10T07:38:37.961121+0000 mon.b (mon.1) 545 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:38:39.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:38 vm00 bash[20701]: audit 2026-03-10T07:38:37.961917+0000 mon.a (mon.0) 3058 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:38:39.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:38 vm00 bash[20701]: audit 2026-03-10T07:38:37.961917+0000 mon.a (mon.0) 3058 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:38:40.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:39 vm03 bash[23382]: cluster 2026-03-10T07:38:38.722192+0000 mgr.y (mgr.24407) 518 : cluster [DBG] pgmap v884: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1000 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:38:40.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:39 vm03 bash[23382]: cluster 2026-03-10T07:38:38.722192+0000 mgr.y (mgr.24407) 518 : cluster [DBG] pgmap v884: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1000 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:38:40.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:39 vm03 bash[23382]: audit 2026-03-10T07:38:38.941216+0000 mon.a (mon.0) 3059 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:38:40.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:39 vm03 bash[23382]: audit 2026-03-10T07:38:38.941216+0000 mon.a (mon.0) 3059 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:38:40.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:39 vm03 bash[23382]: cluster 2026-03-10T07:38:38.944875+0000 mon.a (mon.0) 3060 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-10T07:38:40.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:39 vm03 bash[23382]: cluster 2026-03-10T07:38:38.944875+0000 mon.a (mon.0) 3060 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-10T07:38:40.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:39 vm03 bash[23382]: audit 2026-03-10T07:38:39.688536+0000 mon.c (mon.2) 343 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:38:40.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:39 vm03 bash[23382]: audit 2026-03-10T07:38:39.688536+0000 mon.c (mon.2) 343 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:38:40.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:39 vm03 bash[23382]: cluster 2026-03-10T07:38:39.947209+0000 mon.a (mon.0) 3061 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-10T07:38:40.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:39 vm03 bash[23382]: cluster 2026-03-10T07:38:39.947209+0000 mon.a (mon.0) 3061 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-10T07:38:40.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:39 vm00 bash[28005]: cluster 2026-03-10T07:38:38.722192+0000 mgr.y (mgr.24407) 518 : cluster [DBG] pgmap v884: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1000 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:38:40.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:39 vm00 bash[28005]: cluster 2026-03-10T07:38:38.722192+0000 mgr.y (mgr.24407) 518 : cluster [DBG] pgmap v884: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1000 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:38:40.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:39 vm00 bash[28005]: audit 2026-03-10T07:38:38.941216+0000 mon.a (mon.0) 3059 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:38:40.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:39 vm00 bash[28005]: audit 2026-03-10T07:38:38.941216+0000 mon.a (mon.0) 3059 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:38:40.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:39 vm00 bash[28005]: cluster 2026-03-10T07:38:38.944875+0000 mon.a (mon.0) 3060 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-10T07:38:40.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:39 vm00 bash[28005]: cluster 2026-03-10T07:38:38.944875+0000 mon.a (mon.0) 3060 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-10T07:38:40.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:39 vm00 bash[28005]: audit 2026-03-10T07:38:39.688536+0000 mon.c (mon.2) 343 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:38:40.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:39 vm00 bash[28005]: audit 2026-03-10T07:38:39.688536+0000 mon.c (mon.2) 343 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:38:40.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:39 vm00 bash[28005]: cluster 2026-03-10T07:38:39.947209+0000 mon.a (mon.0) 3061 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-10T07:38:40.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:39 vm00 bash[28005]: cluster 2026-03-10T07:38:39.947209+0000 mon.a (mon.0) 3061 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-10T07:38:40.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:39 vm00 bash[20701]: cluster 2026-03-10T07:38:38.722192+0000 mgr.y (mgr.24407) 518 : cluster [DBG] pgmap v884: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1000 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:38:40.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:39 vm00 bash[20701]: cluster 2026-03-10T07:38:38.722192+0000 mgr.y (mgr.24407) 518 : cluster [DBG] pgmap v884: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1000 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:38:40.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:39 vm00 bash[20701]: audit 2026-03-10T07:38:38.941216+0000 mon.a (mon.0) 3059 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:38:40.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:39 vm00 bash[20701]: audit 2026-03-10T07:38:38.941216+0000 mon.a (mon.0) 3059 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:38:40.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:39 vm00 bash[20701]: cluster 2026-03-10T07:38:38.944875+0000 mon.a (mon.0) 3060 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-10T07:38:40.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:39 vm00 bash[20701]: cluster 2026-03-10T07:38:38.944875+0000 mon.a (mon.0) 3060 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-10T07:38:40.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:39 vm00 bash[20701]: audit 2026-03-10T07:38:39.688536+0000 mon.c (mon.2) 343 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:38:40.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:39 vm00 bash[20701]: audit 2026-03-10T07:38:39.688536+0000 mon.c (mon.2) 343 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:38:40.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:39 vm00 bash[20701]: cluster 2026-03-10T07:38:39.947209+0000 mon.a (mon.0) 3061 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-10T07:38:40.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:39 vm00 bash[20701]: cluster 2026-03-10T07:38:39.947209+0000 mon.a (mon.0) 3061 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-10T07:38:41.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:41 vm00 bash[28005]: audit 2026-03-10T07:38:39.971489+0000 mon.b (mon.1) 546 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:38:41.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:41 vm00 bash[28005]: audit 2026-03-10T07:38:39.971489+0000 mon.b (mon.1) 546 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:38:41.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:41 vm00 bash[28005]: audit 2026-03-10T07:38:39.972454+0000 mon.a (mon.0) 3062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:38:41.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:41 vm00 bash[28005]: audit 2026-03-10T07:38:39.972454+0000 mon.a (mon.0) 3062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:38:41.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:38:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:38:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:38:41.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:41 vm00 bash[20701]: audit 2026-03-10T07:38:39.971489+0000 mon.b (mon.1) 546 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:38:41.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:41 vm00 bash[20701]: audit 2026-03-10T07:38:39.971489+0000 mon.b (mon.1) 546 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:38:41.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:41 vm00 bash[20701]: audit 2026-03-10T07:38:39.972454+0000 mon.a (mon.0) 3062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:38:41.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:41 vm00 bash[20701]: audit 2026-03-10T07:38:39.972454+0000 mon.a (mon.0) 3062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:38:41.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:41 vm03 bash[23382]: audit 2026-03-10T07:38:39.971489+0000 mon.b (mon.1) 546 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:38:41.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:41 vm03 bash[23382]: audit 2026-03-10T07:38:39.971489+0000 mon.b (mon.1) 546 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:38:41.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:41 vm03 bash[23382]: audit 2026-03-10T07:38:39.972454+0000 mon.a (mon.0) 3062 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:38:41.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:41 vm03 bash[23382]: audit 2026-03-10T07:38:39.972454+0000 mon.a (mon.0) 3062 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:38:42.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:42 vm00 bash[28005]: cluster 2026-03-10T07:38:40.722592+0000 mgr.y (mgr.24407) 519 : cluster [DBG] pgmap v887: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:42 vm00 bash[28005]: cluster 2026-03-10T07:38:40.722592+0000 mgr.y (mgr.24407) 519 : cluster [DBG] pgmap v887: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:42 vm00 bash[28005]: audit 2026-03-10T07:38:41.081777+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:42 vm00 bash[28005]: audit 2026-03-10T07:38:41.081777+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:42 vm00 bash[28005]: cluster 2026-03-10T07:38:41.085553+0000 mon.a (mon.0) 3064 : cluster [DBG] osdmap e577: 8 total, 8 up, 8 in 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:42 vm00 bash[28005]: cluster 2026-03-10T07:38:41.085553+0000 mon.a (mon.0) 3064 : cluster [DBG] osdmap e577: 8 total, 8 up, 8 in 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:42 vm00 bash[28005]: audit 2026-03-10T07:38:41.085992+0000 mon.b (mon.1) 547 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-116"}]: dispatch 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:42 vm00 bash[28005]: audit 2026-03-10T07:38:41.085992+0000 mon.b (mon.1) 547 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-116"}]: dispatch 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:42 vm00 bash[28005]: audit 2026-03-10T07:38:41.087048+0000 mon.a (mon.0) 3065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-116"}]: dispatch 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:42 vm00 bash[28005]: audit 2026-03-10T07:38:41.087048+0000 mon.a (mon.0) 3065 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-116"}]: dispatch 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:42 vm00 bash[28005]: audit 2026-03-10T07:38:42.084840+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-116"}]': finished 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:42 vm00 bash[28005]: audit 2026-03-10T07:38:42.084840+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-116"}]': finished 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:42 vm00 bash[28005]: audit 2026-03-10T07:38:42.088826+0000 mon.b (mon.1) 548 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-116", "mode": "writeback"}]: dispatch 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:42 vm00 bash[28005]: audit 2026-03-10T07:38:42.088826+0000 mon.b (mon.1) 548 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-116", "mode": "writeback"}]: dispatch 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:42 vm00 bash[28005]: cluster 2026-03-10T07:38:42.094248+0000 mon.a (mon.0) 3067 : cluster [DBG] osdmap e578: 8 total, 8 up, 8 in 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:42 vm00 bash[28005]: cluster 2026-03-10T07:38:42.094248+0000 mon.a (mon.0) 3067 : cluster [DBG] osdmap e578: 8 total, 8 up, 8 in 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:42 vm00 bash[28005]: audit 2026-03-10T07:38:42.095074+0000 mon.a (mon.0) 3068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-116", "mode": "writeback"}]: dispatch 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:42 vm00 bash[28005]: audit 2026-03-10T07:38:42.095074+0000 mon.a (mon.0) 3068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-116", "mode": "writeback"}]: dispatch 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:42 vm00 bash[20701]: cluster 2026-03-10T07:38:40.722592+0000 mgr.y (mgr.24407) 519 : cluster [DBG] pgmap v887: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:42 vm00 bash[20701]: cluster 2026-03-10T07:38:40.722592+0000 mgr.y (mgr.24407) 519 : cluster [DBG] pgmap v887: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:42 vm00 bash[20701]: audit 2026-03-10T07:38:41.081777+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:42 vm00 bash[20701]: audit 2026-03-10T07:38:41.081777+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:42 vm00 bash[20701]: cluster 2026-03-10T07:38:41.085553+0000 mon.a (mon.0) 3064 : cluster [DBG] osdmap e577: 8 total, 8 up, 8 in 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:42 vm00 bash[20701]: cluster 2026-03-10T07:38:41.085553+0000 mon.a (mon.0) 3064 : cluster [DBG] osdmap e577: 8 total, 8 up, 8 in 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:42 vm00 bash[20701]: audit 2026-03-10T07:38:41.085992+0000 mon.b (mon.1) 547 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-116"}]: dispatch 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:42 vm00 bash[20701]: audit 2026-03-10T07:38:41.085992+0000 mon.b (mon.1) 547 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-116"}]: dispatch 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:42 vm00 bash[20701]: audit 2026-03-10T07:38:41.087048+0000 mon.a (mon.0) 3065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-116"}]: dispatch 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:42 vm00 bash[20701]: audit 2026-03-10T07:38:41.087048+0000 mon.a (mon.0) 3065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-116"}]: dispatch 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:42 vm00 bash[20701]: audit 2026-03-10T07:38:42.084840+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-116"}]': finished 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:42 vm00 bash[20701]: audit 2026-03-10T07:38:42.084840+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-116"}]': finished 2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:42 vm00 bash[20701]: audit 2026-03-10T07:38:42.088826+0000 mon.b (mon.1) 548 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-116", "mode": "writeback"}]: dispatch
2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:42 vm00 bash[20701]: cluster 2026-03-10T07:38:42.094248+0000 mon.a (mon.0) 3067 : cluster [DBG] osdmap e578: 8 total, 8 up, 8 in
2026-03-10T07:38:42.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:42 vm00 bash[20701]: audit 2026-03-10T07:38:42.095074+0000 mon.a (mon.0) 3068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-116", "mode": "writeback"}]: dispatch
2026-03-10T07:38:42.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:42 vm03 bash[23382]: cluster 2026-03-10T07:38:40.722592+0000 mgr.y (mgr.24407) 519 : cluster [DBG] pgmap v887: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
2026-03-10T07:38:42.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:42 vm03 bash[23382]: audit 2026-03-10T07:38:41.081777+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:38:42.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:42 vm03 bash[23382]: cluster 2026-03-10T07:38:41.085553+0000 mon.a (mon.0) 3064 : cluster [DBG] osdmap e577: 8 total, 8 up, 8 in
2026-03-10T07:38:42.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:42 vm03 bash[23382]: audit 2026-03-10T07:38:41.085992+0000 mon.b (mon.1) 547 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-116"}]: dispatch
2026-03-10T07:38:42.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:42 vm03 bash[23382]: audit 2026-03-10T07:38:41.087048+0000 mon.a (mon.0) 3065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-116"}]: dispatch
2026-03-10T07:38:42.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:42 vm03 bash[23382]: audit 2026-03-10T07:38:42.084840+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-116"}]': finished
2026-03-10T07:38:42.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:42 vm03 bash[23382]: audit 2026-03-10T07:38:42.088826+0000 mon.b (mon.1) 548 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-116", "mode": "writeback"}]: dispatch
2026-03-10T07:38:42.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:42 vm03 bash[23382]: cluster 2026-03-10T07:38:42.094248+0000 mon.a (mon.0) 3067 : cluster [DBG] osdmap e578: 8 total, 8 up, 8 in
2026-03-10T07:38:42.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:42 vm03 bash[23382]: audit 2026-03-10T07:38:42.095074+0000 mon.a (mon.0) 3068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-116", "mode": "writeback"}]: dispatch
2026-03-10T07:38:43.513 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:38:43 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:38:43.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:43 vm03 bash[23382]: cluster 2026-03-10T07:38:43.084972+0000 mon.a (mon.0) 3069 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:38:43.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:43 vm03 bash[23382]: audit 2026-03-10T07:38:43.089149+0000 mon.a (mon.0) 3070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-116", "mode": "writeback"}]': finished
2026-03-10T07:38:43.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:43 vm03 bash[23382]: cluster 2026-03-10T07:38:43.091725+0000 mon.a (mon.0) 3071 : cluster [DBG] osdmap e579: 8 total, 8 up, 8 in
2026-03-10T07:38:43.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:43 vm00 bash[28005]: cluster 2026-03-10T07:38:43.084972+0000 mon.a (mon.0) 3069 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:38:43.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:43 vm00 bash[28005]: audit 2026-03-10T07:38:43.089149+0000 mon.a (mon.0) 3070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-116", "mode": "writeback"}]': finished
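The audit trail above is the cache-tier setup phase of the RADOS API test: a tier pool is attached to a base pool, made the overlay, and switched to writeback. A minimal CLI sketch of the same sequence, with the pool names taken from the log (the test itself presumably issues these as mon commands through librados rather than via the ceph binary):

  # attach pool ...-116 as a cache tier of base pool ...-111, even though it is not empty
  ceph osd tier add test-rados-api-vm00-59782-111 test-rados-api-vm00-59782-116 --force-nonempty
  # route client I/O for the base pool through the tier
  ceph osd tier set-overlay test-rados-api-vm00-59782-111 test-rados-api-vm00-59782-116
  # writeback caching with no hit set configured raises the CACHE_POOL_NO_HIT_SET warning above
  ceph osd tier cache-mode test-rados-api-vm00-59782-116 writeback
  # a hit set would silence that warning, e.g.:
  # ceph osd pool set test-rados-api-vm00-59782-116 hit_set_type bloom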
2026-03-10T07:38:43.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:43 vm00 bash[28005]: cluster 2026-03-10T07:38:43.091725+0000 mon.a (mon.0) 3071 : cluster [DBG] osdmap e579: 8 total, 8 up, 8 in
2026-03-10T07:38:43.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:43 vm00 bash[20701]: cluster 2026-03-10T07:38:43.084972+0000 mon.a (mon.0) 3069 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:38:43.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:43 vm00 bash[20701]: audit 2026-03-10T07:38:43.089149+0000 mon.a (mon.0) 3070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-116", "mode": "writeback"}]': finished
2026-03-10T07:38:43.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:43 vm00 bash[20701]: cluster 2026-03-10T07:38:43.091725+0000 mon.a (mon.0) 3071 : cluster [DBG] osdmap e579: 8 total, 8 up, 8 in
2026-03-10T07:38:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:44 vm00 bash[28005]: cluster 2026-03-10T07:38:42.723006+0000 mgr.y (mgr.24407) 520 : cluster [DBG] pgmap v890: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s
2026-03-10T07:38:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:44 vm00 bash[28005]: audit 2026-03-10T07:38:43.258757+0000 mon.b (mon.1) 549 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "318.5"}]: dispatch
2026-03-10T07:38:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:44 vm00 bash[28005]: audit 2026-03-10T07:38:43.259489+0000 mgr.y (mgr.24407) 521 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "318.5"}]: dispatch
2026-03-10T07:38:44.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:44 vm00 bash[20701]: cluster 2026-03-10T07:38:42.723006+0000 mgr.y (mgr.24407) 520 : cluster [DBG] pgmap v890: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s
2026-03-10T07:38:44.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:44 vm00 bash[20701]: audit 2026-03-10T07:38:43.258757+0000 mon.b (mon.1) 549 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "318.5"}]: dispatch
2026-03-10T07:38:44.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:44 vm00 bash[20701]: audit 2026-03-10T07:38:43.259489+0000 mgr.y (mgr.24407) 521 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "318.5"}]: dispatch
2026-03-10T07:38:44.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:44 vm03 bash[23382]: cluster 2026-03-10T07:38:42.723006+0000 mgr.y (mgr.24407) 520 : cluster [DBG] pgmap v890: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s
2026-03-10T07:38:44.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:44 vm03 bash[23382]: audit 2026-03-10T07:38:43.258757+0000 mon.b (mon.1) 549 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "318.5"}]: dispatch
2026-03-10T07:38:44.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:44 vm03 bash[23382]: audit 2026-03-10T07:38:43.259489+0000 mgr.y (mgr.24407) 521 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "318.5"}]: dispatch
2026-03-10T07:38:45.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:45 vm00 bash[28005]: audit 2026-03-10T07:38:43.433763+0000 mgr.y (mgr.24407) 522 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:38:45.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:45 vm00 bash[28005]: cluster 2026-03-10T07:38:43.680829+0000 osd.5 (osd.5) 13 : cluster [DBG] 318.5 scrub starts
2026-03-10T07:38:45.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:45 vm00 bash[28005]: cluster 2026-03-10T07:38:43.681870+0000 osd.5 (osd.5) 14 : cluster [DBG] 318.5 scrub ok
2026-03-10T07:38:45.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:45 vm00 bash[20701]: audit 2026-03-10T07:38:43.433763+0000 mgr.y (mgr.24407) 522 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:38:45.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:45 vm00 bash[20701]: cluster 2026-03-10T07:38:43.680829+0000 osd.5 (osd.5) 13 : cluster [DBG] 318.5 scrub starts
2026-03-10T07:38:45.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:45 vm00 bash[20701]: cluster 2026-03-10T07:38:43.681870+0000 osd.5 (osd.5) 14 : cluster [DBG] 318.5 scrub ok
2026-03-10T07:38:45.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:45 vm03 bash[23382]: audit 2026-03-10T07:38:43.433763+0000 mgr.y (mgr.24407) 522 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:38:45.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:45 vm03 bash[23382]: cluster 2026-03-10T07:38:43.680829+0000 osd.5 (osd.5) 13 : cluster [DBG] 318.5 scrub starts
2026-03-10T07:38:45.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:45 vm03 bash[23382]: cluster 2026-03-10T07:38:43.681870+0000 osd.5 (osd.5) 14 : cluster [DBG] 318.5 scrub ok
2026-03-10T07:38:46.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:46 vm00 bash[28005]: cluster 2026-03-10T07:38:44.723768+0000 mgr.y (mgr.24407) 523 : cluster [DBG] pgmap v892: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 214 B/s rd, 643 B/s wr, 1 op/s
2026-03-10T07:38:46.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:46 vm00 bash[20701]: cluster 2026-03-10T07:38:44.723768+0000 mgr.y (mgr.24407) 523 : cluster [DBG] pgmap v892: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 214 B/s rd, 643 B/s wr, 1 op/s
2026-03-10T07:38:46.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:46 vm03 bash[23382]: cluster 2026-03-10T07:38:44.723768+0000 mgr.y (mgr.24407) 523 : cluster [DBG] pgmap v892: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 214 B/s rd, 643 B/s wr, 1 op/s
2026-03-10T07:38:47.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:47 vm00 bash[28005]: cluster 2026-03-10T07:38:46.724304+0000 mgr.y (mgr.24407) 524 : cluster [DBG] pgmap v893: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T07:38:47.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:47 vm00 bash[20701]: cluster 2026-03-10T07:38:46.724304+0000 mgr.y (mgr.24407) 524 : cluster [DBG] pgmap v893: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T07:38:47.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:47 vm03 bash[23382]: cluster 2026-03-10T07:38:46.724304+0000 mgr.y (mgr.24407) 524 : cluster [DBG] pgmap v893: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T07:38:50.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:49 vm00 bash[28005]: cluster 2026-03-10T07:38:48.724642+0000 mgr.y (mgr.24407) 525 : cluster [DBG] pgmap v894: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 939 B/s rd, 402 B/s wr, 1 op/s
2026-03-10T07:38:50.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:49 vm00 bash[20701]: cluster 2026-03-10T07:38:48.724642+0000 mgr.y (mgr.24407) 525 : cluster [DBG] pgmap v894: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 939 B/s rd, 402 B/s wr, 1 op/s
2026-03-10T07:38:50.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:49 vm03 bash[23382]: cluster 2026-03-10T07:38:48.724642+0000 mgr.y (mgr.24407) 525 : cluster [DBG] pgmap v894: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 939 B/s rd, 402 B/s wr, 1 op/s
2026-03-10T07:38:51.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:38:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:38:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:38:52.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:51 vm00 bash[28005]: cluster 2026-03-10T07:38:50.725363+0000 mgr.y (mgr.24407) 526 : cluster [DBG] pgmap v895: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 355 B/s wr, 2 op/s
2026-03-10T07:38:52.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:51 vm00 bash[20701]: cluster 2026-03-10T07:38:50.725363+0000 mgr.y (mgr.24407) 526 : cluster [DBG] pgmap v895: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 355 B/s wr, 2 op/s
2026-03-10T07:38:52.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:51 vm03 bash[23382]: cluster 2026-03-10T07:38:50.725363+0000 mgr.y (mgr.24407) 526 : cluster [DBG] pgmap v895: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 355 B/s wr, 2 op/s
2026-03-10T07:38:53.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:38:53 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:38:54.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:53 vm00 bash[28005]: cluster 2026-03-10T07:38:52.725707+0000 mgr.y (mgr.24407) 527 : cluster [DBG] pgmap v896: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 307 B/s wr, 1 op/s
2026-03-10T07:38:54.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:53 vm00 bash[20701]: cluster 2026-03-10T07:38:52.725707+0000 mgr.y (mgr.24407) 527 : cluster [DBG] pgmap v896: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 307 B/s wr, 1 op/s
2026-03-10T07:38:54.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:53 vm03 bash[23382]: cluster 2026-03-10T07:38:52.725707+0000 mgr.y (mgr.24407) 527 : cluster [DBG] pgmap v896: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 307 B/s wr, 1 op/s
2026-03-10T07:38:55.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:54 vm00 bash[28005]: audit 2026-03-10T07:38:53.435956+0000 mgr.y (mgr.24407) 528 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:38:55.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:54 vm00 bash[28005]: audit 2026-03-10T07:38:54.700444+0000 mon.a (mon.0) 3072 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:38:55.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:54 vm00 bash[28005]: audit 2026-03-10T07:38:54.704417+0000 mon.c (mon.2) 344 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:38:55.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:54 vm00 bash[20701]: audit 2026-03-10T07:38:53.435956+0000 mgr.y (mgr.24407) 528 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:38:55.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:54 vm00 bash[20701]: audit 2026-03-10T07:38:54.700444+0000 mon.a (mon.0) 3072 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:38:55.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:54 vm00 bash[20701]: audit 2026-03-10T07:38:54.704417+0000 mon.c (mon.2) 344 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:38:55.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:54 vm03 bash[23382]: audit 2026-03-10T07:38:53.435956+0000 mgr.y (mgr.24407) 528 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:38:55.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:54 vm03 bash[23382]: audit 2026-03-10T07:38:54.700444+0000 mon.a (mon.0) 3072 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:38:55.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:54 vm03 bash[23382]: audit 2026-03-10T07:38:54.704417+0000 mon.c (mon.2) 344 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:38:56.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:55 vm00 bash[28005]: cluster 2026-03-10T07:38:54.726244+0000 mgr.y (mgr.24407) 529 : cluster [DBG] pgmap v897: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 264 B/s wr, 1 op/s
2026-03-10T07:38:56.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:55 vm00 bash[20701]: cluster 2026-03-10T07:38:54.726244+0000 mgr.y (mgr.24407) 529 : cluster [DBG] pgmap v897: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 264 B/s wr, 1 op/s
2026-03-10T07:38:56.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:55 vm03 bash[23382]: cluster 2026-03-10T07:38:54.726244+0000 mgr.y (mgr.24407) 529 : cluster [DBG] pgmap v897: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 264 B/s wr, 1 op/s
2026-03-10T07:38:58.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:57 vm00 bash[28005]: cluster 2026-03-10T07:38:56.726779+0000 mgr.y (mgr.24407) 530 : cluster [DBG] pgmap v898: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:38:58.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:57 vm00 bash[20701]: cluster 2026-03-10T07:38:56.726779+0000 mgr.y (mgr.24407) 530 : cluster [DBG] pgmap v898: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:38:58.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:57 vm03 bash[23382]: cluster 2026-03-10T07:38:56.726779+0000 mgr.y (mgr.24407) 530 : cluster [DBG] pgmap v898: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:39:00.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:59 vm03 bash[23382]: cluster 2026-03-10T07:38:58.727185+0000 mgr.y (mgr.24407) 531 : cluster [DBG] pgmap v899: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:39:00.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:59 vm03 bash[23382]: cluster 2026-03-10T07:38:58.904190+0000 mon.a (mon.0) 3073 : cluster [DBG] osdmap e580: 8 total, 8 up, 8 in
2026-03-10T07:39:00.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:59 vm03 bash[23382]: audit 2026-03-10T07:38:58.951641+0000 mon.b (mon.1) 550 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:39:00.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:38:59 vm03 bash[23382]: audit 2026-03-10T07:38:58.952213+0000 mon.a (mon.0) 3074 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:00.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:59 vm00 bash[28005]: cluster 2026-03-10T07:38:58.727185+0000 mgr.y (mgr.24407) 531 : cluster [DBG] pgmap v899: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:39:00.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:59 vm00 bash[28005]: cluster 2026-03-10T07:38:58.727185+0000 mgr.y (mgr.24407) 531 : cluster [DBG] pgmap v899: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:39:00.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:59 vm00 bash[28005]: cluster 2026-03-10T07:38:58.904190+0000 mon.a (mon.0) 3073 : cluster [DBG] osdmap e580: 8 total, 8 up, 8 in 2026-03-10T07:39:00.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:59 vm00 bash[28005]: cluster 2026-03-10T07:38:58.904190+0000 mon.a (mon.0) 3073 : cluster [DBG] osdmap e580: 8 total, 8 up, 8 in 2026-03-10T07:39:00.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:59 vm00 bash[28005]: audit 2026-03-10T07:38:58.951641+0000 mon.b (mon.1) 550 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:00.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:59 vm00 bash[28005]: audit 2026-03-10T07:38:58.951641+0000 mon.b (mon.1) 550 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:00.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:59 vm00 bash[28005]: audit 2026-03-10T07:38:58.952213+0000 mon.a (mon.0) 3074 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:00.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:38:59 vm00 bash[28005]: audit 2026-03-10T07:38:58.952213+0000 mon.a (mon.0) 3074 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:59 vm00 bash[20701]: cluster 2026-03-10T07:38:58.727185+0000 mgr.y (mgr.24407) 531 : cluster [DBG] pgmap v899: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:39:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:59 vm00 bash[20701]: cluster 2026-03-10T07:38:58.727185+0000 mgr.y (mgr.24407) 531 : cluster [DBG] pgmap v899: 268 pgs: 268 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:39:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:59 vm00 bash[20701]: cluster 2026-03-10T07:38:58.904190+0000 mon.a (mon.0) 3073 : cluster [DBG] osdmap e580: 8 total, 8 up, 8 in 2026-03-10T07:39:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:59 vm00 bash[20701]: cluster 2026-03-10T07:38:58.904190+0000 mon.a (mon.0) 3073 : cluster [DBG] osdmap e580: 8 total, 8 up, 8 in 2026-03-10T07:39:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:59 vm00 bash[20701]: audit 2026-03-10T07:38:58.951641+0000 mon.b (mon.1) 550 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:59 vm00 bash[20701]: audit 2026-03-10T07:38:58.951641+0000 mon.b (mon.1) 550 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:59 vm00 bash[20701]: audit 2026-03-10T07:38:58.952213+0000 mon.a (mon.0) 3074 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:38:59 vm00 bash[20701]: audit 2026-03-10T07:38:58.952213+0000 mon.a (mon.0) 3074 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:01.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:00 vm03 bash[23382]: audit 2026-03-10T07:38:59.900679+0000 mon.a (mon.0) 3075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:01.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:00 vm03 bash[23382]: audit 2026-03-10T07:38:59.900679+0000 mon.a (mon.0) 3075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:01.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:00 vm03 bash[23382]: cluster 2026-03-10T07:38:59.904488+0000 mon.a (mon.0) 3076 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-10T07:39:01.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:00 vm03 bash[23382]: cluster 2026-03-10T07:38:59.904488+0000 mon.a (mon.0) 3076 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-10T07:39:01.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:00 vm03 bash[23382]: audit 2026-03-10T07:38:59.906632+0000 mon.b (mon.1) 551 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116"}]: dispatch 2026-03-10T07:39:01.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:00 vm03 bash[23382]: audit 2026-03-10T07:38:59.906632+0000 mon.b (mon.1) 551 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116"}]: dispatch 2026-03-10T07:39:01.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:00 vm03 bash[23382]: audit 2026-03-10T07:38:59.909125+0000 mon.a (mon.0) 3077 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116"}]: dispatch 2026-03-10T07:39:01.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:00 vm03 bash[23382]: audit 2026-03-10T07:38:59.909125+0000 mon.a (mon.0) 3077 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116"}]: dispatch 2026-03-10T07:39:01.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:00 vm03 bash[23382]: cluster 2026-03-10T07:39:00.900900+0000 mon.a (mon.0) 3078 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:01.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:00 vm03 bash[23382]: cluster 2026-03-10T07:39:00.900900+0000 mon.a (mon.0) 3078 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:01.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:00 vm03 bash[23382]: audit 2026-03-10T07:39:00.904284+0000 mon.a (mon.0) 3079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116"}]': finished 2026-03-10T07:39:01.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:00 vm03 bash[23382]: audit 2026-03-10T07:39:00.904284+0000 mon.a (mon.0) 3079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116"}]': finished 2026-03-10T07:39:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:00 vm00 bash[28005]: audit 2026-03-10T07:38:59.900679+0000 mon.a (mon.0) 3075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:00 vm00 bash[28005]: audit 2026-03-10T07:38:59.900679+0000 mon.a (mon.0) 3075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:00 vm00 bash[28005]: cluster 2026-03-10T07:38:59.904488+0000 mon.a (mon.0) 3076 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-10T07:39:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:00 vm00 bash[28005]: cluster 2026-03-10T07:38:59.904488+0000 mon.a (mon.0) 3076 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-10T07:39:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:00 vm00 bash[28005]: audit 2026-03-10T07:38:59.906632+0000 mon.b (mon.1) 551 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116"}]: dispatch 2026-03-10T07:39:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:00 vm00 bash[28005]: audit 2026-03-10T07:38:59.906632+0000 mon.b (mon.1) 551 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116"}]: dispatch 2026-03-10T07:39:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:00 vm00 bash[28005]: audit 2026-03-10T07:38:59.909125+0000 mon.a (mon.0) 3077 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116"}]: dispatch 2026-03-10T07:39:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:00 vm00 bash[28005]: audit 2026-03-10T07:38:59.909125+0000 mon.a (mon.0) 3077 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116"}]: dispatch 2026-03-10T07:39:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:00 vm00 bash[28005]: cluster 2026-03-10T07:39:00.900900+0000 mon.a (mon.0) 3078 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:00 vm00 bash[28005]: cluster 2026-03-10T07:39:00.900900+0000 mon.a (mon.0) 3078 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:00 vm00 bash[28005]: audit 2026-03-10T07:39:00.904284+0000 mon.a (mon.0) 3079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116"}]': finished 2026-03-10T07:39:01.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:00 vm00 bash[28005]: audit 2026-03-10T07:39:00.904284+0000 mon.a (mon.0) 3079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116"}]': finished 2026-03-10T07:39:01.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:00 vm00 bash[20701]: audit 2026-03-10T07:38:59.900679+0000 mon.a (mon.0) 3075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:01.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:00 vm00 bash[20701]: audit 2026-03-10T07:38:59.900679+0000 mon.a (mon.0) 3075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:01.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:00 vm00 bash[20701]: cluster 2026-03-10T07:38:59.904488+0000 mon.a (mon.0) 3076 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-10T07:39:01.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:00 vm00 bash[20701]: cluster 2026-03-10T07:38:59.904488+0000 mon.a (mon.0) 3076 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-10T07:39:01.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:00 vm00 bash[20701]: audit 2026-03-10T07:38:59.906632+0000 mon.b (mon.1) 551 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116"}]: dispatch 2026-03-10T07:39:01.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:00 vm00 bash[20701]: audit 2026-03-10T07:38:59.906632+0000 mon.b (mon.1) 551 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116"}]: dispatch 2026-03-10T07:39:01.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:00 vm00 bash[20701]: audit 2026-03-10T07:38:59.909125+0000 mon.a (mon.0) 3077 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116"}]: dispatch 2026-03-10T07:39:01.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:00 vm00 bash[20701]: audit 2026-03-10T07:38:59.909125+0000 mon.a (mon.0) 3077 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116"}]: dispatch 2026-03-10T07:39:01.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:00 vm00 bash[20701]: cluster 2026-03-10T07:39:00.900900+0000 mon.a (mon.0) 3078 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:01.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:00 vm00 bash[20701]: cluster 2026-03-10T07:39:00.900900+0000 mon.a (mon.0) 3078 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:01.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:00 vm00 bash[20701]: audit 2026-03-10T07:39:00.904284+0000 mon.a (mon.0) 3079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116"}]': finished 2026-03-10T07:39:01.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:00 vm00 bash[20701]: audit 2026-03-10T07:39:00.904284+0000 mon.a (mon.0) 3079 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-116"}]': finished 2026-03-10T07:39:01.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:39:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:39:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:39:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:01 vm03 bash[23382]: cluster 2026-03-10T07:39:00.727542+0000 mgr.y (mgr.24407) 532 : cluster [DBG] pgmap v902: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T07:39:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:01 vm03 bash[23382]: cluster 2026-03-10T07:39:00.727542+0000 mgr.y (mgr.24407) 532 : cluster [DBG] pgmap v902: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T07:39:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:01 vm03 bash[23382]: cluster 2026-03-10T07:39:00.913145+0000 mon.a (mon.0) 3080 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-10T07:39:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:01 vm03 bash[23382]: cluster 2026-03-10T07:39:00.913145+0000 mon.a (mon.0) 3080 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-10T07:39:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:01 vm03 bash[23382]: cluster 2026-03-10T07:39:01.565347+0000 mon.a (mon.0) 3081 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-10T07:39:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:01 vm03 bash[23382]: cluster 2026-03-10T07:39:01.565347+0000 mon.a (mon.0) 3081 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-10T07:39:02.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:01 vm00 bash[28005]: cluster 2026-03-10T07:39:00.727542+0000 mgr.y (mgr.24407) 532 : cluster [DBG] pgmap v902: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T07:39:02.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:01 vm00 bash[28005]: cluster 2026-03-10T07:39:00.727542+0000 mgr.y (mgr.24407) 532 : cluster [DBG] pgmap v902: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T07:39:02.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:01 vm00 bash[28005]: cluster 2026-03-10T07:39:00.913145+0000 mon.a (mon.0) 3080 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-10T07:39:02.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:01 vm00 bash[28005]: cluster 2026-03-10T07:39:00.913145+0000 mon.a (mon.0) 3080 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-10T07:39:02.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:01 vm00 bash[28005]: cluster 2026-03-10T07:39:01.565347+0000 mon.a (mon.0) 3081 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-10T07:39:02.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:01 vm00 bash[28005]: cluster 2026-03-10T07:39:01.565347+0000 mon.a (mon.0) 3081 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-10T07:39:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:01 vm00 bash[20701]: cluster 2026-03-10T07:39:00.727542+0000 mgr.y (mgr.24407) 532 : cluster [DBG] pgmap v902: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 1001 
2026-03-10T07:39:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:01 vm00 bash[20701]: cluster 2026-03-10T07:39:00.913145+0000 mon.a (mon.0) 3080 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in
2026-03-10T07:39:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:01 vm00 bash[20701]: cluster 2026-03-10T07:39:01.565347+0000 mon.a (mon.0) 3081 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in
2026-03-10T07:39:03.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:39:03 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:39:03.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:03 vm03 bash[23382]: cluster 2026-03-10T07:39:02.560068+0000 mon.a (mon.0) 3082 : cluster [DBG] osdmap e584: 8 total, 8 up, 8 in
2026-03-10T07:39:03.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:03 vm03 bash[23382]: audit 2026-03-10T07:39:02.568275+0000 mon.b (mon.1) 552 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-118","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:39:03.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:03 vm03 bash[23382]: audit 2026-03-10T07:39:02.569293+0000 mon.a (mon.0) 3083 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-118","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:39:03.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:03 vm03 bash[23382]: cluster 2026-03-10T07:39:02.728022+0000 mgr.y (mgr.24407) 533 : cluster [DBG] pgmap v906: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 802 B/s rd, 1 op/s
2026-03-10T07:39:03.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:03 vm00 bash[28005]: cluster 2026-03-10T07:39:02.560068+0000 mon.a (mon.0) 3082 : cluster [DBG] osdmap e584: 8 total, 8 up, 8 in
2026-03-10T07:39:03.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:03 vm00 bash[28005]: audit 2026-03-10T07:39:02.568275+0000 mon.b (mon.1) 552 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-118","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:39:03.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:03 vm00 bash[28005]: audit 2026-03-10T07:39:02.569293+0000 mon.a (mon.0) 3083 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-118","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:39:03.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:03 vm00 bash[28005]: cluster 2026-03-10T07:39:02.728022+0000 mgr.y (mgr.24407) 533 : cluster [DBG] pgmap v906: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 802 B/s rd, 1 op/s
2026-03-10T07:39:03.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:03 vm00 bash[20701]: cluster 2026-03-10T07:39:02.560068+0000 mon.a (mon.0) 3082 : cluster [DBG] osdmap e584: 8 total, 8 up, 8 in
2026-03-10T07:39:03.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:03 vm00 bash[20701]: audit 2026-03-10T07:39:02.568275+0000 mon.b (mon.1) 552 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-118","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:39:03.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:03 vm00 bash[20701]: audit 2026-03-10T07:39:02.569293+0000 mon.a (mon.0) 3083 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-118","app": "rados","yes_i_really_mean_it": true}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:03.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:03 vm00 bash[20701]: cluster 2026-03-10T07:39:02.728022+0000 mgr.y (mgr.24407) 533 : cluster [DBG] pgmap v906: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 802 B/s rd, 1 op/s 2026-03-10T07:39:03.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:03 vm00 bash[20701]: cluster 2026-03-10T07:39:02.728022+0000 mgr.y (mgr.24407) 533 : cluster [DBG] pgmap v906: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1001 MiB used, 159 GiB / 160 GiB avail; 802 B/s rd, 1 op/s 2026-03-10T07:39:04.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:04 vm00 bash[28005]: audit 2026-03-10T07:39:03.446433+0000 mgr.y (mgr.24407) 534 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:04.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:04 vm00 bash[28005]: audit 2026-03-10T07:39:03.446433+0000 mgr.y (mgr.24407) 534 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:04.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:04 vm00 bash[28005]: cluster 2026-03-10T07:39:03.556180+0000 mon.a (mon.0) 3084 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:39:04.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:04 vm00 bash[28005]: cluster 2026-03-10T07:39:03.556180+0000 mon.a (mon.0) 3084 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:39:04.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:04 vm00 bash[28005]: audit 2026-03-10T07:39:03.564513+0000 mon.a (mon.0) 3085 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:04.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:04 vm00 bash[28005]: audit 2026-03-10T07:39:03.564513+0000 mon.a (mon.0) 3085 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:04.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:04 vm00 bash[28005]: cluster 2026-03-10T07:39:03.582097+0000 mon.a (mon.0) 3086 : cluster [DBG] osdmap e585: 8 total, 8 up, 8 in 2026-03-10T07:39:04.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:04 vm00 bash[28005]: cluster 2026-03-10T07:39:03.582097+0000 mon.a (mon.0) 3086 : cluster [DBG] osdmap e585: 8 total, 8 up, 8 in 2026-03-10T07:39:04.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:04 vm00 bash[20701]: audit 2026-03-10T07:39:03.446433+0000 mgr.y (mgr.24407) 534 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:04.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:04 vm00 bash[20701]: audit 2026-03-10T07:39:03.446433+0000 mgr.y (mgr.24407) 534 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:04.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:04 vm00 bash[20701]: cluster 2026-03-10T07:39:03.556180+0000 mon.a (mon.0) 3084 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:39:04.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:04 vm00 bash[20701]: cluster 2026-03-10T07:39:03.556180+0000 mon.a (mon.0) 3084 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:39:04.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:04 vm00 bash[20701]: audit 2026-03-10T07:39:03.564513+0000 mon.a (mon.0) 3085 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:04.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:04 vm00 bash[20701]: audit 2026-03-10T07:39:03.564513+0000 mon.a (mon.0) 3085 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:04.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:04 vm00 bash[20701]: cluster 2026-03-10T07:39:03.582097+0000 mon.a (mon.0) 3086 : cluster [DBG] osdmap e585: 8 total, 8 up, 8 in 2026-03-10T07:39:04.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:04 vm00 bash[20701]: cluster 2026-03-10T07:39:03.582097+0000 mon.a (mon.0) 3086 : cluster [DBG] osdmap e585: 8 total, 8 up, 8 in 2026-03-10T07:39:05.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:04 vm03 bash[23382]: audit 2026-03-10T07:39:03.446433+0000 mgr.y (mgr.24407) 534 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:05.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:04 vm03 bash[23382]: audit 2026-03-10T07:39:03.446433+0000 mgr.y (mgr.24407) 534 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:05.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:04 vm03 bash[23382]: cluster 2026-03-10T07:39:03.556180+0000 mon.a (mon.0) 3084 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:39:05.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:04 vm03 bash[23382]: cluster 2026-03-10T07:39:03.556180+0000 mon.a (mon.0) 3084 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:39:05.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:04 vm03 bash[23382]: audit 2026-03-10T07:39:03.564513+0000 mon.a (mon.0) 3085 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:05.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:04 vm03 bash[23382]: audit 2026-03-10T07:39:03.564513+0000 mon.a (mon.0) 3085 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:05.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:04 vm03 bash[23382]: cluster 2026-03-10T07:39:03.582097+0000 mon.a (mon.0) 3086 : cluster [DBG] osdmap e585: 8 total, 8 up, 8 in 2026-03-10T07:39:05.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:04 vm03 bash[23382]: cluster 2026-03-10T07:39:03.582097+0000 mon.a (mon.0) 3086 : cluster [DBG] osdmap e585: 8 total, 8 up, 8 in 2026-03-10T07:39:05.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:05 vm00 bash[28005]: cluster 2026-03-10T07:39:04.572317+0000 mon.a (mon.0) 3087 : cluster [DBG] osdmap e586: 8 total, 8 up, 8 in 2026-03-10T07:39:05.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:05 vm00 bash[28005]: cluster 2026-03-10T07:39:04.572317+0000 mon.a (mon.0) 3087 : cluster [DBG] osdmap e586: 8 total, 8 up, 8 in 2026-03-10T07:39:05.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:05 vm00 bash[28005]: audit 2026-03-10T07:39:04.589352+0000 mon.b (mon.1) 553 : audit [INF] from='client.? 
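The audit pairs above (a dispatch logged by the mon that received the command, a second dispatch logged by the leader mon.a with the client address elided, then a finished entry) are the monitor-side trace of the rados_api_tests workunit enabling an application tag on one of its generated pools; the test drives this through the librados mon_command interface rather than the CLI. A rough CLI equivalent, as a sketch only (the pool name is the test's generated one, and running as client.admin on the test node is an assumption):

    # hypothetical manual equivalent of the audited mon command above
    ceph osd pool application enable test-rados-api-vm00-59782-118 rados --yes-i-really-mean-it

The surrounding POOL_APP_NOT_ENABLED health updates (5 pools at seq 3084, dropping to 4 at seq 3106 further down) appear to track the test's short-lived pools gaining their application tags.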
2026-03-10T07:39:05.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:05 vm00 bash[28005]: audit 2026-03-10T07:39:04.589352+0000 mon.b (mon.1) 553 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-118", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:39:05.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:05 vm00 bash[28005]: audit 2026-03-10T07:39:04.649027+0000 mon.a (mon.0) 3088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-118", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:39:05.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:05 vm00 bash[28005]: cluster 2026-03-10T07:39:04.728632+0000 mgr.y (mgr.24407) 535 : cluster [DBG] pgmap v909: 268 pgs: 9 unknown, 259 active+clean; 455 KiB data, 1002 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:39:05.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:05 vm00 bash[20701]: cluster 2026-03-10T07:39:04.572317+0000 mon.a (mon.0) 3087 : cluster [DBG] osdmap e586: 8 total, 8 up, 8 in
2026-03-10T07:39:05.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:05 vm00 bash[20701]: audit 2026-03-10T07:39:04.589352+0000 mon.b (mon.1) 553 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-118", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:39:05.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:05 vm00 bash[20701]: audit 2026-03-10T07:39:04.649027+0000 mon.a (mon.0) 3088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-118", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:39:05.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:05 vm00 bash[20701]: cluster 2026-03-10T07:39:04.728632+0000 mgr.y (mgr.24407) 535 : cluster [DBG] pgmap v909: 268 pgs: 9 unknown, 259 active+clean; 455 KiB data, 1002 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:39:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:05 vm03 bash[23382]: cluster 2026-03-10T07:39:04.572317+0000 mon.a (mon.0) 3087 : cluster [DBG] osdmap e586: 8 total, 8 up, 8 in
2026-03-10T07:39:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:05 vm03 bash[23382]: audit 2026-03-10T07:39:04.589352+0000 mon.b (mon.1) 553 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-118", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:39:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:05 vm03 bash[23382]: audit 2026-03-10T07:39:04.649027+0000 mon.a (mon.0) 3088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-118", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:39:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:05 vm03 bash[23382]: cluster 2026-03-10T07:39:04.728632+0000 mgr.y (mgr.24407) 535 : cluster [DBG] pgmap v909: 268 pgs: 9 unknown, 259 active+clean; 455 KiB data, 1002 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:39:06.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:06 vm00 bash[28005]: audit 2026-03-10T07:39:05.591905+0000 mon.a (mon.0) 3089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-118", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:39:06.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:06 vm00 bash[28005]: audit 2026-03-10T07:39:05.599451+0000 mon.b (mon.1) 554 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-118"}]: dispatch
2026-03-10T07:39:06.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:06 vm00 bash[28005]: cluster 2026-03-10T07:39:05.612867+0000 mon.a (mon.0) 3090 : cluster [DBG] osdmap e587: 8 total, 8 up, 8 in
2026-03-10T07:39:06.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:06 vm00 bash[28005]: audit 2026-03-10T07:39:05.613652+0000 mon.a (mon.0) 3091 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-118"}]: dispatch
2026-03-10T07:39:06.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:06 vm00 bash[28005]: audit 2026-03-10T07:39:06.554652+0000 mon.a (mon.0) 3092 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-118"}]': finished
2026-03-10T07:39:06.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:06 vm00 bash[28005]: cluster 2026-03-10T07:39:06.558467+0000 mon.a (mon.0) 3093 : cluster [DBG] osdmap e588: 8 total, 8 up, 8 in
2026-03-10T07:39:06.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:06 vm00 bash[28005]: audit 2026-03-10T07:39:06.563628+0000 mon.b (mon.1) 555 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-118", "mode": "writeback"}]: dispatch
2026-03-10T07:39:06.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:06 vm00 bash[28005]: audit 2026-03-10T07:39:06.564031+0000 mon.a (mon.0) 3094 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-118", "mode": "writeback"}]: dispatch
2026-03-10T07:39:06.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:06 vm00 bash[20701]: audit 2026-03-10T07:39:05.591905+0000 mon.a (mon.0) 3089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-118", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:39:06.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:06 vm00 bash[20701]: audit 2026-03-10T07:39:05.599451+0000 mon.b (mon.1) 554 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-118"}]: dispatch
2026-03-10T07:39:06.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:06 vm00 bash[20701]: cluster 2026-03-10T07:39:05.612867+0000 mon.a (mon.0) 3090 : cluster [DBG] osdmap e587: 8 total, 8 up, 8 in
2026-03-10T07:39:06.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:06 vm00 bash[20701]: audit 2026-03-10T07:39:05.613652+0000 mon.a (mon.0) 3091 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-118"}]: dispatch
2026-03-10T07:39:06.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:06 vm00 bash[20701]: audit 2026-03-10T07:39:06.554652+0000 mon.a (mon.0) 3092 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-118"}]': finished
2026-03-10T07:39:06.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:06 vm00 bash[20701]: cluster 2026-03-10T07:39:06.558467+0000 mon.a (mon.0) 3093 : cluster [DBG] osdmap e588: 8 total, 8 up, 8 in
2026-03-10T07:39:06.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:06 vm00 bash[20701]: audit 2026-03-10T07:39:06.563628+0000 mon.b (mon.1) 555 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-118", "mode": "writeback"}]: dispatch
2026-03-10T07:39:06.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:06 vm00 bash[20701]: audit 2026-03-10T07:39:06.564031+0000 mon.a (mon.0) 3094 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-118", "mode": "writeback"}]: dispatch
2026-03-10T07:39:07.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:06 vm03 bash[23382]: audit 2026-03-10T07:39:05.591905+0000 mon.a (mon.0) 3089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-118", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:39:07.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:06 vm03 bash[23382]: audit 2026-03-10T07:39:05.599451+0000 mon.b (mon.1) 554 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-118"}]: dispatch
2026-03-10T07:39:07.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:06 vm03 bash[23382]: cluster 2026-03-10T07:39:05.612867+0000 mon.a (mon.0) 3090 : cluster [DBG] osdmap e587: 8 total, 8 up, 8 in
2026-03-10T07:39:07.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:06 vm03 bash[23382]: audit 2026-03-10T07:39:05.613652+0000 mon.a (mon.0) 3091 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-118"}]: dispatch
2026-03-10T07:39:07.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:06 vm03 bash[23382]: audit 2026-03-10T07:39:06.554652+0000 mon.a (mon.0) 3092 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-118"}]': finished
2026-03-10T07:39:07.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:06 vm03 bash[23382]: cluster 2026-03-10T07:39:06.558467+0000 mon.a (mon.0) 3093 : cluster [DBG] osdmap e588: 8 total, 8 up, 8 in
2026-03-10T07:39:07.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:06 vm03 bash[23382]: audit 2026-03-10T07:39:06.563628+0000 mon.b (mon.1) 555 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-118", "mode": "writeback"}]: dispatch
2026-03-10T07:39:07.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:06 vm03 bash[23382]: audit 2026-03-10T07:39:06.564031+0000 mon.a (mon.0) 3094 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-118", "mode": "writeback"}]: dispatch
2026-03-10T07:39:07.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:07 vm00 bash[28005]: cluster 2026-03-10T07:39:06.729126+0000 mgr.y (mgr.24407) 536 : cluster [DBG] pgmap v912: 268 pgs: 268 active+clean; 455 KiB data, 1006 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T07:39:07.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:07 vm00 bash[28005]: cluster 2026-03-10T07:39:07.554708+0000 mon.a (mon.0) 3095 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:39:07.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:07 vm00 bash[28005]: audit 2026-03-10T07:39:07.559204+0000 mon.a (mon.0) 3096 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-118", "mode": "writeback"}]': finished
2026-03-10T07:39:07.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:07 vm00 bash[28005]: cluster 2026-03-10T07:39:07.564071+0000 mon.a (mon.0) 3097 : cluster [DBG] osdmap e589: 8 total, 8 up, 8 in
2026-03-10T07:39:07.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:07 vm00 bash[20701]: cluster 2026-03-10T07:39:06.729126+0000 mgr.y (mgr.24407) 536 : cluster [DBG] pgmap v912: 268 pgs: 268 active+clean; 455 KiB data, 1006 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T07:39:07.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:07 vm00 bash[20701]: cluster 2026-03-10T07:39:07.554708+0000 mon.a (mon.0) 3095 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:39:07.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:07 vm00 bash[20701]: audit 2026-03-10T07:39:07.559204+0000 mon.a (mon.0) 3096 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-118", "mode": "writeback"}]': finished
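Mon commands 553 through 555 (audit seq 3088-3096) trace a complete cache-tier attach: the tier pool is added with --force-nonempty because it already holds test data, the overlay is set so client I/O against the base pool is redirected through the tier, and the cache mode is flipped to writeback. CACHE_POOL_NO_HIT_SET fires immediately afterwards because no hit_set parameters were configured on the new cache tier. As CLI the sequence would look roughly like this (a sketch under the same assumptions as the earlier one; the librados test issues these as raw mon commands):

    ceph osd tier add test-rados-api-vm00-59782-111 test-rados-api-vm00-59782-118 --force-nonempty
    ceph osd tier set-overlay test-rados-api-vm00-59782-111 test-rados-api-vm00-59782-118
    ceph osd tier cache-mode test-rados-api-vm00-59782-118 writeback
    # the hit_sets warning would be avoided by configuring hit sets, e.g.:
    # ceph osd pool set test-rados-api-vm00-59782-118 hit_set_type bloom

The warning is transient here: it clears at seq 3103 below, once the teardown detaches the tier again.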
2026-03-10T07:39:07.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:07 vm00 bash[20701]: cluster 2026-03-10T07:39:07.564071+0000 mon.a (mon.0) 3097 : cluster [DBG] osdmap e589: 8 total, 8 up, 8 in
2026-03-10T07:39:08.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:07 vm03 bash[23382]: cluster 2026-03-10T07:39:06.729126+0000 mgr.y (mgr.24407) 536 : cluster [DBG] pgmap v912: 268 pgs: 268 active+clean; 455 KiB data, 1006 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T07:39:08.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:07 vm03 bash[23382]: cluster 2026-03-10T07:39:07.554708+0000 mon.a (mon.0) 3095 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:39:08.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:07 vm03 bash[23382]: audit 2026-03-10T07:39:07.559204+0000 mon.a (mon.0) 3096 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-118", "mode": "writeback"}]': finished
2026-03-10T07:39:08.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:07 vm03 bash[23382]: cluster 2026-03-10T07:39:07.564071+0000 mon.a (mon.0) 3097 : cluster [DBG] osdmap e589: 8 total, 8 up, 8 in
2026-03-10T07:39:09.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:09 vm00 bash[28005]: cluster 2026-03-10T07:39:08.565210+0000 mon.a (mon.0) 3098 : cluster [DBG] osdmap e590: 8 total, 8 up, 8 in
2026-03-10T07:39:09.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:09 vm00 bash[28005]: audit 2026-03-10T07:39:08.611796+0000 mon.b (mon.1) 556 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:39:09.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:09 vm00 bash[28005]: audit 2026-03-10T07:39:08.612301+0000 mon.a (mon.0) 3099 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:39:09.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:09 vm00 bash[28005]: cluster 2026-03-10T07:39:08.729537+0000 mgr.y (mgr.24407) 537 : cluster [DBG] pgmap v915: 268 pgs: 268 active+clean; 455 KiB data, 1006 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:39:09.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:09 vm00 bash[20701]: cluster 2026-03-10T07:39:08.565210+0000 mon.a (mon.0) 3098 : cluster [DBG] osdmap e590: 8 total, 8 up, 8 in
2026-03-10T07:39:09.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:09 vm00 bash[20701]: audit 2026-03-10T07:39:08.611796+0000 mon.b (mon.1) 556 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:39:09.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:09 vm00 bash[20701]: audit 2026-03-10T07:39:08.612301+0000 mon.a (mon.0) 3099 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:39:09.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:09 vm00 bash[20701]: cluster 2026-03-10T07:39:08.729537+0000 mgr.y (mgr.24407) 537 : cluster [DBG] pgmap v915: 268 pgs: 268 active+clean; 455 KiB data, 1006 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:39:10.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:09 vm03 bash[23382]: cluster 2026-03-10T07:39:08.565210+0000 mon.a (mon.0) 3098 : cluster [DBG] osdmap e590: 8 total, 8 up, 8 in
2026-03-10T07:39:10.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:09 vm03 bash[23382]: audit 2026-03-10T07:39:08.611796+0000 mon.b (mon.1) 556 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:39:10.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:09 vm03 bash[23382]: audit 2026-03-10T07:39:08.612301+0000 mon.a (mon.0) 3099 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:39:10.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:09 vm03 bash[23382]: cluster 2026-03-10T07:39:08.729537+0000 mgr.y (mgr.24407) 537 : cluster [DBG] pgmap v915: 268 pgs: 268 active+clean; 455 KiB data, 1006 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:39:10.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:10 vm00 bash[28005]: audit 2026-03-10T07:39:09.572853+0000 mon.a (mon.0) 3100 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:39:10.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:10 vm00 bash[28005]: cluster 2026-03-10T07:39:09.580373+0000 mon.a (mon.0) 3101 : cluster [DBG] osdmap e591: 8 total, 8 up, 8 in
2026-03-10T07:39:10.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:10 vm00 bash[28005]: audit 2026-03-10T07:39:09.587441+0000 mon.b (mon.1) 557 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-118"}]: dispatch
2026-03-10T07:39:10.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:10 vm00 bash[28005]: audit 2026-03-10T07:39:09.587727+0000 mon.a (mon.0) 3102 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-118"}]: dispatch
2026-03-10T07:39:10.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:10 vm00 bash[28005]: audit 2026-03-10T07:39:09.711185+0000 mon.c (mon.2) 345 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:39:10.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:10 vm00 bash[20701]: audit 2026-03-10T07:39:09.572853+0000 mon.a (mon.0) 3100 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:39:10.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:10 vm00 bash[20701]: cluster 2026-03-10T07:39:09.580373+0000 mon.a (mon.0) 3101 : cluster [DBG] osdmap e591: 8 total, 8 up, 8 in
2026-03-10T07:39:10.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:10 vm00 bash[20701]: audit 2026-03-10T07:39:09.587441+0000 mon.b (mon.1) 557 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-118"}]: dispatch
2026-03-10T07:39:10.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:10 vm00 bash[20701]: audit 2026-03-10T07:39:09.587727+0000 mon.a (mon.0) 3102 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-118"}]: dispatch
2026-03-10T07:39:10.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:10 vm00 bash[20701]: audit 2026-03-10T07:39:09.711185+0000 mon.c (mon.2) 345 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:39:11.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:10 vm03 bash[23382]: audit 2026-03-10T07:39:09.572853+0000 mon.a (mon.0) 3100 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:39:11.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:10 vm03 bash[23382]: cluster 2026-03-10T07:39:09.580373+0000 mon.a (mon.0) 3101 : cluster [DBG] osdmap e591: 8 total, 8 up, 8 in
2026-03-10T07:39:11.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:10 vm03 bash[23382]: audit 2026-03-10T07:39:09.587441+0000 mon.b (mon.1) 557 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-118"}]: dispatch
2026-03-10T07:39:11.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:10 vm03 bash[23382]: audit 2026-03-10T07:39:09.587727+0000 mon.a (mon.0) 3102 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-118"}]: dispatch
2026-03-10T07:39:11.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:10 vm03 bash[23382]: audit 2026-03-10T07:39:09.711185+0000 mon.c (mon.2) 345 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:39:11.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:39:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:39:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:39:12.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:11 vm03 bash[23382]: cluster 2026-03-10T07:39:10.573257+0000 mon.a (mon.0) 3103 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:39:12.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:11 vm03 bash[23382]: audit 2026-03-10T07:39:10.576209+0000 mon.a (mon.0) 3104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-118"}]': finished
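The teardown mirrors the setup in reverse: the overlay is removed first, then the tier is detached, after which CACHE_POOL_NO_HIT_SET clears. A CLI sketch of mon commands 556-557 (audit seq 3099-3104), with the same caveats as above:

    ceph osd tier remove-overlay test-rados-api-vm00-59782-111
    ceph osd tier remove test-rados-api-vm00-59782-111 test-rados-api-vm00-59782-118

The interleaved 'osd blocklist ls' entry from mgr.y is routine mgr polling rather than part of the test, and the single 503 answered to the Prometheus scrape of the mgr's /metrics endpoint suggests the exporter briefly had nothing to serve; neither affects the workunit.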
2026-03-10T07:39:12.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:11 vm03 bash[23382]: cluster 2026-03-10T07:39:10.583667+0000 mon.a (mon.0) 3105 : cluster [DBG] osdmap e592: 8 total, 8 up, 8 in
2026-03-10T07:39:12.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:11 vm03 bash[23382]: cluster 2026-03-10T07:39:10.729908+0000 mgr.y (mgr.24407) 538 : cluster [DBG] pgmap v918: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:39:12.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:11 vm03 bash[23382]: cluster 2026-03-10T07:39:11.552716+0000 mon.a (mon.0) 3106 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:39:12.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:11 vm00 bash[28005]: cluster 2026-03-10T07:39:10.573257+0000 mon.a (mon.0) 3103 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:39:12.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:11 vm00 bash[28005]: audit 2026-03-10T07:39:10.576209+0000 mon.a (mon.0) 3104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-118"}]': finished
2026-03-10T07:39:12.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:11 vm00 bash[28005]: cluster 2026-03-10T07:39:10.583667+0000 mon.a (mon.0) 3105 : cluster [DBG] osdmap e592: 8 total, 8 up, 8 in
2026-03-10T07:39:12.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:11 vm00 bash[28005]: cluster 2026-03-10T07:39:10.729908+0000 mgr.y (mgr.24407) 538 : cluster [DBG] pgmap v918: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:39:12.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:11 vm00 bash[28005]: cluster 2026-03-10T07:39:11.552716+0000 mon.a (mon.0) 3106 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:39:12.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:11 vm00 bash[20701]: cluster 2026-03-10T07:39:10.573257+0000 mon.a (mon.0) 3103 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:39:12.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:11 vm00 bash[20701]: audit 2026-03-10T07:39:10.576209+0000 mon.a (mon.0) 3104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-118"}]': finished
2026-03-10T07:39:12.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:11 vm00 bash[20701]: cluster 2026-03-10T07:39:10.583667+0000 mon.a (mon.0) 3105 : cluster [DBG] osdmap e592: 8 total, 8 up, 8 in
2026-03-10T07:39:12.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:11 vm00 bash[20701]: cluster 2026-03-10T07:39:10.729908+0000 mgr.y (mgr.24407) 538 : cluster [DBG] pgmap v918: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:39:12.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:11 vm00 bash[20701]: cluster 2026-03-10T07:39:11.552716+0000 mon.a (mon.0) 3106 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:39:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:12 vm03 bash[23382]: cluster 2026-03-10T07:39:11.672906+0000 mon.a (mon.0) 3107 : cluster [DBG] osdmap e593: 8 total, 8 up, 8 in
2026-03-10T07:39:13.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:12 vm00 bash[28005]: cluster 2026-03-10T07:39:11.672906+0000 mon.a (mon.0) 3107 : cluster [DBG] osdmap e593: 8 total, 8 up, 8 in
2026-03-10T07:39:13.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:12 vm00 bash[20701]: cluster 2026-03-10T07:39:11.672906+0000 mon.a (mon.0) 3107 : cluster [DBG] osdmap e593: 8 total, 8 up, 8 in
2026-03-10T07:39:13.738 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:13 vm03 bash[23382]: cluster 2026-03-10T07:39:12.730413+0000 mgr.y (mgr.24407) 539 : cluster [DBG] pgmap v920: 236 pgs: 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1007 MiB used, 159 GiB
2026-03-10T07:39:13.739 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:39:13 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:39:14.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:13 vm03 bash[23382]: cluster 2026-03-10T07:39:12.739290+0000 mon.a (mon.0) 3108 : cluster [DBG] osdmap e594: 8 total, 8 up, 8 in
2026-03-10T07:39:14.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:13 vm03 bash[23382]: audit 2026-03-10T07:39:12.751474+0000 mon.b (mon.1) 558 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-120","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:39:14.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:13 vm03 bash[23382]: audit 2026-03-10T07:39:12.753606+0000 mon.a (mon.0) 3109 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-120","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:39:14.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:13 vm03 bash[23382]: audit 2026-03-10T07:39:13.437669+0000 mon.c (mon.2) 346 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:39:14.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:13 vm00 bash[28005]: cluster 2026-03-10T07:39:12.730413+0000 mgr.y (mgr.24407) 539 : cluster [DBG] pgmap v920: 236 pgs: 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 984 B/s rd, 1 op/s
2026-03-10T07:39:14.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:13 vm00 bash[28005]: cluster 2026-03-10T07:39:12.739290+0000 mon.a (mon.0) 3108 : cluster [DBG] osdmap e594: 8 total, 8 up, 8 in
2026-03-10T07:39:14.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:13 vm00 bash[28005]: audit 2026-03-10T07:39:12.751474+0000 mon.b (mon.1) 558 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-120","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:39:14.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:13 vm00 bash[28005]: audit 2026-03-10T07:39:12.753606+0000 mon.a (mon.0) 3109 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-120","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:39:14.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:13 vm00 bash[28005]: audit 2026-03-10T07:39:13.437669+0000 mon.c (mon.2) 346 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:39:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:13 vm00 bash[20701]: cluster 2026-03-10T07:39:12.730413+0000 mgr.y (mgr.24407) 539 : cluster [DBG] pgmap v920: 236 pgs: 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 984 B/s rd, 1 op/s
2026-03-10T07:39:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:13 vm00 bash[20701]: cluster 2026-03-10T07:39:12.739290+0000 mon.a (mon.0) 3108 : cluster [DBG] osdmap e594: 8 total, 8 up, 8 in
2026-03-10T07:39:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:13 vm00 bash[20701]: audit 2026-03-10T07:39:12.751474+0000 mon.b (mon.1) 558 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-120","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:39:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:13 vm00 bash[20701]: audit 2026-03-10T07:39:12.753606+0000 mon.a (mon.0) 3109 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-120","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:39:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:13 vm00 bash[20701]: audit 2026-03-10T07:39:13.437669+0000 mon.c (mon.2) 346 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:39:15.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:14 vm00 bash[28005]: audit 2026-03-10T07:39:13.454621+0000 mgr.y (mgr.24407) 540 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:39:15.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:14 vm00 bash[28005]: audit 2026-03-10T07:39:13.768087+0000 mon.a (mon.0) 3110 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-120","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:39:15.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:14 vm00 bash[28005]: cluster 2026-03-10T07:39:13.799038+0000 mon.a (mon.0) 3111 : cluster [DBG] osdmap e595: 8 total, 8 up, 8 in
2026-03-10T07:39:15.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:14 vm00 bash[28005]: audit 2026-03-10T07:39:13.804479+0000 mon.b (mon.1) 559 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:39:15.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:14 vm00 bash[28005]: audit 2026-03-10T07:39:13.814324+0000 mon.a (mon.0) 3112 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:39:15.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:14 vm00 bash[28005]: audit 2026-03-10T07:39:13.826514+0000 mon.c (mon.2) 347 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:39:15.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:14 vm00 bash[28005]: audit 2026-03-10T07:39:13.827708+0000 mon.c (mon.2) 348 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:39:15.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:14 vm00 bash[28005]: audit 2026-03-10T07:39:13.835585+0000 mon.a (mon.0) 3113 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:39:15.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:14 vm00 bash[20701]: audit 2026-03-10T07:39:13.454621+0000 mgr.y (mgr.24407) 540 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:39:15.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:14 vm00 bash[20701]: audit 2026-03-10T07:39:13.768087+0000 mon.a (mon.0) 3110 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-120","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:39:15.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:14 vm00 bash[20701]: cluster 2026-03-10T07:39:13.799038+0000 mon.a (mon.0) 3111 : cluster [DBG] osdmap e595: 8 total, 8 up, 8 in
2026-03-10T07:39:15.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:14 vm00 bash[20701]: audit 2026-03-10T07:39:13.804479+0000 mon.b (mon.1) 559 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:39:15.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:14 vm00 bash[20701]: audit 2026-03-10T07:39:13.814324+0000 mon.a (mon.0) 3112 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:39:15.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:14 vm00 bash[20701]: audit 2026-03-10T07:39:13.826514+0000 mon.c (mon.2) 347 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:39:15.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:14 vm00 bash[20701]: audit 2026-03-10T07:39:13.827708+0000 mon.c (mon.2) 348 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:39:15.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:14 vm00 bash[20701]: audit 2026-03-10T07:39:13.835585+0000 mon.a (mon.0) 3113 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:39:15.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:14 vm03 bash[23382]: audit 2026-03-10T07:39:13.454621+0000 mgr.y (mgr.24407) 540 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:39:15.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:14 vm03 bash[23382]: audit 2026-03-10T07:39:13.768087+0000 mon.a (mon.0) 3110 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-120","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:39:15.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:14 vm03 bash[23382]: cluster 2026-03-10T07:39:13.799038+0000 mon.a (mon.0) 3111 : cluster [DBG] osdmap e595: 8 total, 8 up, 8 in
2026-03-10T07:39:15.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:14 vm03 bash[23382]: audit 2026-03-10T07:39:13.804479+0000 mon.b (mon.1) 559 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:39:15.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:14 vm03 bash[23382]: audit 2026-03-10T07:39:13.814324+0000 mon.a (mon.0) 3112 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:39:15.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:14 vm03 bash[23382]: audit 2026-03-10T07:39:13.826514+0000 mon.c (mon.2) 347 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:39:15.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:14 vm03 bash[23382]: audit 2026-03-10T07:39:13.827708+0000 mon.c (mon.2) 348 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:39:15.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:14 vm03 bash[23382]: audit 2026-03-10T07:39:13.835585+0000 mon.a (mon.0) 3113 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:39:16.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:15 vm00 bash[28005]: cluster 2026-03-10T07:39:14.730970+0000 mgr.y (mgr.24407) 541 : cluster [DBG] pgmap v923: 268 pgs: 9 unknown, 259 active+clean; 455 KiB data, 1007 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:39:16.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:15 vm00 bash[28005]: audit 2026-03-10T07:39:14.776744+0000 mon.a (mon.0) 3114 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:39:16.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:15 vm00 bash[28005]: cluster 2026-03-10T07:39:14.779505+0000 mon.a (mon.0) 3115 : cluster [DBG] osdmap e596: 8 total, 8 up, 8 in
2026-03-10T07:39:16.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:15 vm00 bash[28005]: audit 2026-03-10T07:39:14.789325+0000 mon.b (mon.1) 560 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-120"}]: dispatch
2026-03-10T07:39:16.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:15 vm00 bash[28005]: audit 2026-03-10T07:39:14.790530+0000 mon.a (mon.0) 3116 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-120"}]: dispatch
2026-03-10T07:39:16.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:15 vm00 bash[28005]: audit 2026-03-10T07:39:15.781307+0000 mon.a (mon.0) 3117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-120"}]': finished
2026-03-10T07:39:16.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:15 vm00 bash[28005]: audit 2026-03-10T07:39:15.789764+0000 mon.b (mon.1) 561 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-120", "mode": "writeback"}]: dispatch
2026-03-10T07:39:16.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:15 vm00 bash[28005]: cluster 2026-03-10T07:39:15.789900+0000 mon.a (mon.0) 3118 : cluster [DBG] osdmap e597: 8 total, 8 up, 8 in
2026-03-10T07:39:16.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:15 vm00 bash[28005]: audit 2026-03-10T07:39:15.792476+0000 mon.a (mon.0) 3119 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-120", "mode": "writeback"}]: dispatch
2026-03-10T07:39:16.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:15 vm00 bash[20701]: cluster 2026-03-10T07:39:14.730970+0000 mgr.y (mgr.24407) 541 : cluster [DBG] pgmap v923: 268 pgs: 9 unknown, 259 active+clean; 455 KiB data, 1007 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:39:16.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:15 vm00 bash[20701]: audit 2026-03-10T07:39:14.776744+0000 mon.a (mon.0) 3114 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:39:16.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:15 vm00 bash[20701]: cluster 2026-03-10T07:39:14.779505+0000 mon.a (mon.0) 3115 : cluster [DBG] osdmap e596: 8 total, 8 up, 8 in
2026-03-10T07:39:16.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:15 vm00 bash[20701]: audit 2026-03-10T07:39:14.789325+0000 mon.b (mon.1) 560 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-120"}]: dispatch
2026-03-10T07:39:16.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:15 vm00 bash[20701]: audit 2026-03-10T07:39:14.790530+0000 mon.a (mon.0) 3116 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-120"}]: dispatch
2026-03-10T07:39:16.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:15 vm00 bash[20701]: audit 2026-03-10T07:39:15.781307+0000 mon.a (mon.0) 3117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-120"}]': finished
2026-03-10T07:39:16.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:15 vm00 bash[20701]: audit 2026-03-10T07:39:15.789764+0000 mon.b (mon.1) 561 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-120", "mode": "writeback"}]: dispatch
2026-03-10T07:39:16.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:15 vm00 bash[20701]: cluster 2026-03-10T07:39:15.789900+0000 mon.a (mon.0) 3118 : cluster [DBG] osdmap e597: 8 total, 8 up, 8 in
2026-03-10T07:39:16.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:15 vm00 bash[20701]: audit 2026-03-10T07:39:15.792476+0000 mon.a (mon.0) 3119 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-120", "mode": "writeback"}]: dispatch
2026-03-10T07:39:16.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:15 vm03 bash[23382]: cluster 2026-03-10T07:39:14.730970+0000 mgr.y (mgr.24407) 541 : cluster [DBG] pgmap v923: 268 pgs: 9 unknown, 259 active+clean; 455 KiB data, 1007 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:39:16.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:15 vm03 bash[23382]: audit 2026-03-10T07:39:14.776744+0000 mon.a (mon.0) 3114 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:39:16.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:15 vm03 bash[23382]: cluster 2026-03-10T07:39:14.779505+0000 mon.a (mon.0) 3115 : cluster [DBG] osdmap e596: 8 total, 8 up, 8 in
2026-03-10T07:39:16.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:15 vm03 bash[23382]: audit 2026-03-10T07:39:14.789325+0000 mon.b (mon.1) 560 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-120"}]: dispatch
2026-03-10T07:39:16.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:15 vm03 bash[23382]: audit 2026-03-10T07:39:14.790530+0000 mon.a (mon.0) 3116 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-120"}]: dispatch
2026-03-10T07:39:16.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:15 vm03 bash[23382]: audit 2026-03-10T07:39:15.781307+0000 mon.a (mon.0) 3117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-120"}]': finished
2026-03-10T07:39:16.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:15 vm03 bash[23382]: audit 2026-03-10T07:39:15.789764+0000 mon.b (mon.1) 561 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-120", "mode": "writeback"}]: dispatch
2026-03-10T07:39:16.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:15 vm03 bash[23382]: cluster 2026-03-10T07:39:15.789900+0000 mon.a (mon.0) 3118 : cluster [DBG] osdmap e597: 8 total, 8 up, 8 in
2026-03-10T07:39:16.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:15 vm03 bash[23382]: audit 2026-03-10T07:39:15.792476+0000 mon.a (mon.0) 3119 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-120", "mode": "writeback"}]: dispatch
2026-03-10T07:39:17.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:16 vm00 bash[28005]: cluster 2026-03-10T07:39:16.554122+0000 mon.a (mon.0) 3120 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:39:17.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:16 vm00 bash[28005]: audit 2026-03-10T07:39:16.557696+0000 mon.a (mon.0) 3121 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-120", "mode": "writeback"}]': finished
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-120", "mode": "writeback"}]': finished 2026-03-10T07:39:17.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:16 vm00 bash[28005]: cluster 2026-03-10T07:39:16.563220+0000 mon.a (mon.0) 3122 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-10T07:39:17.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:16 vm00 bash[28005]: cluster 2026-03-10T07:39:16.563220+0000 mon.a (mon.0) 3122 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-10T07:39:17.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:16 vm00 bash[28005]: audit 2026-03-10T07:39:16.616489+0000 mon.b (mon.1) 562 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:17.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:16 vm00 bash[28005]: audit 2026-03-10T07:39:16.616489+0000 mon.b (mon.1) 562 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:17.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:16 vm00 bash[28005]: audit 2026-03-10T07:39:16.616968+0000 mon.a (mon.0) 3123 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:17.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:16 vm00 bash[28005]: audit 2026-03-10T07:39:16.616968+0000 mon.a (mon.0) 3123 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:17.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:16 vm00 bash[20701]: cluster 2026-03-10T07:39:16.554122+0000 mon.a (mon.0) 3120 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:17.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:16 vm00 bash[20701]: cluster 2026-03-10T07:39:16.554122+0000 mon.a (mon.0) 3120 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:17.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:16 vm00 bash[20701]: audit 2026-03-10T07:39:16.557696+0000 mon.a (mon.0) 3121 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-120", "mode": "writeback"}]': finished 2026-03-10T07:39:17.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:16 vm00 bash[20701]: audit 2026-03-10T07:39:16.557696+0000 mon.a (mon.0) 3121 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-120", "mode": "writeback"}]': finished 2026-03-10T07:39:17.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:16 vm00 bash[20701]: cluster 2026-03-10T07:39:16.563220+0000 mon.a (mon.0) 3122 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-10T07:39:17.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:16 vm00 bash[20701]: cluster 2026-03-10T07:39:16.563220+0000 mon.a (mon.0) 3122 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-10T07:39:17.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:16 vm00 bash[20701]: audit 2026-03-10T07:39:16.616489+0000 mon.b (mon.1) 562 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:17.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:16 vm00 bash[20701]: audit 2026-03-10T07:39:16.616489+0000 mon.b (mon.1) 562 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:17.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:16 vm00 bash[20701]: audit 2026-03-10T07:39:16.616968+0000 mon.a (mon.0) 3123 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:17.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:16 vm00 bash[20701]: audit 2026-03-10T07:39:16.616968+0000 mon.a (mon.0) 3123 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:17.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:16 vm03 bash[23382]: cluster 2026-03-10T07:39:16.554122+0000 mon.a (mon.0) 3120 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:17.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:16 vm03 bash[23382]: cluster 2026-03-10T07:39:16.554122+0000 mon.a (mon.0) 3120 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:17.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:16 vm03 bash[23382]: audit 2026-03-10T07:39:16.557696+0000 mon.a (mon.0) 3121 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-120", "mode": "writeback"}]': finished 2026-03-10T07:39:17.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:16 vm03 bash[23382]: audit 2026-03-10T07:39:16.557696+0000 mon.a (mon.0) 3121 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-120", "mode": "writeback"}]': finished 2026-03-10T07:39:17.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:16 vm03 bash[23382]: cluster 2026-03-10T07:39:16.563220+0000 mon.a (mon.0) 3122 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-10T07:39:17.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:16 vm03 bash[23382]: cluster 2026-03-10T07:39:16.563220+0000 mon.a (mon.0) 3122 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-10T07:39:17.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:16 vm03 bash[23382]: audit 2026-03-10T07:39:16.616489+0000 mon.b (mon.1) 562 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:17.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:16 vm03 bash[23382]: audit 2026-03-10T07:39:16.616489+0000 mon.b (mon.1) 562 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:17.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:16 vm03 bash[23382]: audit 2026-03-10T07:39:16.616968+0000 mon.a (mon.0) 3123 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:17.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:16 vm03 bash[23382]: audit 2026-03-10T07:39:16.616968+0000 mon.a (mon.0) 3123 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:18.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:17 vm00 bash[28005]: cluster 2026-03-10T07:39:16.731378+0000 mgr.y (mgr.24407) 542 : cluster [DBG] pgmap v927: 268 pgs: 268 active+clean; 455 KiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 256 B/s wr, 1 op/s 2026-03-10T07:39:18.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:17 vm00 bash[28005]: cluster 2026-03-10T07:39:16.731378+0000 mgr.y (mgr.24407) 542 : cluster [DBG] pgmap v927: 268 pgs: 268 active+clean; 455 KiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 256 B/s wr, 1 op/s 2026-03-10T07:39:18.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:17 vm00 bash[28005]: audit 2026-03-10T07:39:17.581738+0000 mon.a (mon.0) 3124 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:18.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:17 vm00 bash[28005]: audit 2026-03-10T07:39:17.581738+0000 mon.a (mon.0) 3124 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:18.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:17 vm00 bash[28005]: audit 2026-03-10T07:39:17.585318+0000 mon.b (mon.1) 563 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120"}]: dispatch 2026-03-10T07:39:18.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:17 vm00 bash[28005]: audit 2026-03-10T07:39:17.585318+0000 mon.b (mon.1) 563 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120"}]: dispatch 2026-03-10T07:39:18.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:17 vm00 bash[28005]: cluster 2026-03-10T07:39:17.593867+0000 mon.a (mon.0) 3125 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-10T07:39:18.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:17 vm00 bash[28005]: cluster 2026-03-10T07:39:17.593867+0000 mon.a (mon.0) 3125 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-10T07:39:18.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:17 vm00 bash[28005]: audit 2026-03-10T07:39:17.595927+0000 mon.a (mon.0) 3126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120"}]: dispatch 2026-03-10T07:39:18.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:17 vm00 bash[28005]: audit 2026-03-10T07:39:17.595927+0000 mon.a (mon.0) 3126 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120"}]: dispatch 2026-03-10T07:39:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:17 vm00 bash[20701]: cluster 2026-03-10T07:39:16.731378+0000 mgr.y (mgr.24407) 542 : cluster [DBG] pgmap v927: 268 pgs: 268 active+clean; 455 KiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 256 B/s wr, 1 op/s 2026-03-10T07:39:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:17 vm00 bash[20701]: cluster 2026-03-10T07:39:16.731378+0000 mgr.y (mgr.24407) 542 : cluster [DBG] pgmap v927: 268 pgs: 268 active+clean; 455 KiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 256 B/s wr, 1 op/s 2026-03-10T07:39:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:17 vm00 bash[20701]: audit 2026-03-10T07:39:17.581738+0000 mon.a (mon.0) 3124 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:17 vm00 bash[20701]: audit 2026-03-10T07:39:17.581738+0000 mon.a (mon.0) 3124 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:17 vm00 bash[20701]: audit 2026-03-10T07:39:17.585318+0000 mon.b (mon.1) 563 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120"}]: dispatch 2026-03-10T07:39:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:17 vm00 bash[20701]: audit 2026-03-10T07:39:17.585318+0000 mon.b (mon.1) 563 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120"}]: dispatch 2026-03-10T07:39:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:17 vm00 bash[20701]: cluster 2026-03-10T07:39:17.593867+0000 mon.a (mon.0) 3125 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-10T07:39:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:17 vm00 bash[20701]: cluster 2026-03-10T07:39:17.593867+0000 mon.a (mon.0) 3125 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-10T07:39:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:17 vm00 bash[20701]: audit 2026-03-10T07:39:17.595927+0000 mon.a (mon.0) 3126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120"}]: dispatch 2026-03-10T07:39:18.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:17 vm00 bash[20701]: audit 2026-03-10T07:39:17.595927+0000 mon.a (mon.0) 3126 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120"}]: dispatch 2026-03-10T07:39:18.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:17 vm03 bash[23382]: cluster 2026-03-10T07:39:16.731378+0000 mgr.y (mgr.24407) 542 : cluster [DBG] pgmap v927: 268 pgs: 268 active+clean; 455 KiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 256 B/s wr, 1 op/s 2026-03-10T07:39:18.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:17 vm03 bash[23382]: cluster 2026-03-10T07:39:16.731378+0000 mgr.y (mgr.24407) 542 : cluster [DBG] pgmap v927: 268 pgs: 268 active+clean; 455 KiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 256 B/s wr, 1 op/s 2026-03-10T07:39:18.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:17 vm03 bash[23382]: audit 2026-03-10T07:39:17.581738+0000 mon.a (mon.0) 3124 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:18.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:17 vm03 bash[23382]: audit 2026-03-10T07:39:17.581738+0000 mon.a (mon.0) 3124 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:18.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:17 vm03 bash[23382]: audit 2026-03-10T07:39:17.585318+0000 mon.b (mon.1) 563 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120"}]: dispatch 2026-03-10T07:39:18.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:17 vm03 bash[23382]: audit 2026-03-10T07:39:17.585318+0000 mon.b (mon.1) 563 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120"}]: dispatch 2026-03-10T07:39:18.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:17 vm03 bash[23382]: cluster 2026-03-10T07:39:17.593867+0000 mon.a (mon.0) 3125 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-10T07:39:18.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:17 vm03 bash[23382]: cluster 2026-03-10T07:39:17.593867+0000 mon.a (mon.0) 3125 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-10T07:39:18.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:17 vm03 bash[23382]: audit 2026-03-10T07:39:17.595927+0000 mon.a (mon.0) 3126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120"}]: dispatch 2026-03-10T07:39:18.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:17 vm03 bash[23382]: audit 2026-03-10T07:39:17.595927+0000 mon.a (mon.0) 3126 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120"}]: dispatch 2026-03-10T07:39:19.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:18 vm00 bash[28005]: cluster 2026-03-10T07:39:18.581902+0000 mon.a (mon.0) 3127 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:19.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:18 vm00 bash[28005]: cluster 2026-03-10T07:39:18.581902+0000 mon.a (mon.0) 3127 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:19.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:18 vm00 bash[28005]: audit 2026-03-10T07:39:18.584999+0000 mon.a (mon.0) 3128 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120"}]': finished 2026-03-10T07:39:19.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:18 vm00 bash[28005]: audit 2026-03-10T07:39:18.584999+0000 mon.a (mon.0) 3128 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120"}]': finished 2026-03-10T07:39:19.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:18 vm00 bash[28005]: cluster 2026-03-10T07:39:18.593934+0000 mon.a (mon.0) 3129 : cluster [DBG] osdmap e600: 8 total, 8 up, 8 in 2026-03-10T07:39:19.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:18 vm00 bash[28005]: cluster 2026-03-10T07:39:18.593934+0000 mon.a (mon.0) 3129 : cluster [DBG] osdmap e600: 8 total, 8 up, 8 in 2026-03-10T07:39:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:18 vm00 bash[20701]: cluster 2026-03-10T07:39:18.581902+0000 mon.a (mon.0) 3127 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:18 vm00 bash[20701]: cluster 2026-03-10T07:39:18.581902+0000 mon.a (mon.0) 3127 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:18 vm00 bash[20701]: audit 2026-03-10T07:39:18.584999+0000 mon.a (mon.0) 3128 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120"}]': finished 2026-03-10T07:39:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:18 vm00 bash[20701]: audit 2026-03-10T07:39:18.584999+0000 mon.a (mon.0) 3128 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120"}]': finished 2026-03-10T07:39:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:18 vm00 bash[20701]: cluster 2026-03-10T07:39:18.593934+0000 mon.a (mon.0) 3129 : cluster [DBG] osdmap e600: 8 total, 8 up, 8 in 2026-03-10T07:39:19.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:18 vm00 bash[20701]: cluster 2026-03-10T07:39:18.593934+0000 mon.a (mon.0) 3129 : cluster [DBG] osdmap e600: 8 total, 8 up, 8 in 2026-03-10T07:39:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:18 vm03 bash[23382]: cluster 2026-03-10T07:39:18.581902+0000 mon.a (mon.0) 3127 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:18 vm03 bash[23382]: cluster 2026-03-10T07:39:18.581902+0000 mon.a (mon.0) 3127 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:18 vm03 bash[23382]: audit 2026-03-10T07:39:18.584999+0000 mon.a (mon.0) 3128 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120"}]': finished 2026-03-10T07:39:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:18 vm03 bash[23382]: audit 2026-03-10T07:39:18.584999+0000 mon.a (mon.0) 3128 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-120"}]': finished 2026-03-10T07:39:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:18 vm03 bash[23382]: cluster 2026-03-10T07:39:18.593934+0000 mon.a (mon.0) 3129 : cluster [DBG] osdmap e600: 8 total, 8 up, 8 in 2026-03-10T07:39:19.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:18 vm03 bash[23382]: cluster 2026-03-10T07:39:18.593934+0000 mon.a (mon.0) 3129 : cluster [DBG] osdmap e600: 8 total, 8 up, 8 in 2026-03-10T07:39:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:19 vm00 bash[28005]: cluster 2026-03-10T07:39:18.731789+0000 mgr.y (mgr.24407) 543 : cluster [DBG] pgmap v930: 268 pgs: 268 active+clean; 455 KiB data, 1007 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:39:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:19 vm00 bash[28005]: cluster 2026-03-10T07:39:18.731789+0000 mgr.y (mgr.24407) 543 : cluster [DBG] pgmap v930: 268 pgs: 268 active+clean; 455 KiB data, 1007 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:39:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:19 vm00 bash[28005]: cluster 2026-03-10T07:39:19.584923+0000 mon.a (mon.0) 3130 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:39:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:19 vm00 bash[28005]: cluster 2026-03-10T07:39:19.584923+0000 mon.a (mon.0) 3130 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:39:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:19 vm00 bash[28005]: cluster 2026-03-10T07:39:19.596619+0000 mon.a (mon.0) 3131 : cluster [DBG] osdmap e601: 8 total, 8 up, 8 in 2026-03-10T07:39:20.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:19 vm00 bash[28005]: cluster 
2026-03-10T07:39:19.596619+0000 mon.a (mon.0) 3131 : cluster [DBG] osdmap e601: 8 total, 8 up, 8 in 2026-03-10T07:39:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:19 vm00 bash[20701]: cluster 2026-03-10T07:39:18.731789+0000 mgr.y (mgr.24407) 543 : cluster [DBG] pgmap v930: 268 pgs: 268 active+clean; 455 KiB data, 1007 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:39:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:19 vm00 bash[20701]: cluster 2026-03-10T07:39:18.731789+0000 mgr.y (mgr.24407) 543 : cluster [DBG] pgmap v930: 268 pgs: 268 active+clean; 455 KiB data, 1007 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:39:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:19 vm00 bash[20701]: cluster 2026-03-10T07:39:19.584923+0000 mon.a (mon.0) 3130 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:39:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:19 vm00 bash[20701]: cluster 2026-03-10T07:39:19.584923+0000 mon.a (mon.0) 3130 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:39:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:19 vm00 bash[20701]: cluster 2026-03-10T07:39:19.596619+0000 mon.a (mon.0) 3131 : cluster [DBG] osdmap e601: 8 total, 8 up, 8 in 2026-03-10T07:39:20.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:19 vm00 bash[20701]: cluster 2026-03-10T07:39:19.596619+0000 mon.a (mon.0) 3131 : cluster [DBG] osdmap e601: 8 total, 8 up, 8 in 2026-03-10T07:39:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:19 vm03 bash[23382]: cluster 2026-03-10T07:39:18.731789+0000 mgr.y (mgr.24407) 543 : cluster [DBG] pgmap v930: 268 pgs: 268 active+clean; 455 KiB data, 1007 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:39:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:19 vm03 bash[23382]: cluster 2026-03-10T07:39:18.731789+0000 mgr.y (mgr.24407) 543 : cluster [DBG] pgmap v930: 268 pgs: 268 active+clean; 455 KiB data, 1007 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:39:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:19 vm03 bash[23382]: cluster 2026-03-10T07:39:19.584923+0000 mon.a (mon.0) 3130 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:39:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:19 vm03 bash[23382]: cluster 2026-03-10T07:39:19.584923+0000 mon.a (mon.0) 3130 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:39:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:19 vm03 bash[23382]: cluster 2026-03-10T07:39:19.596619+0000 mon.a (mon.0) 3131 : cluster [DBG] osdmap e601: 8 total, 8 up, 8 in 2026-03-10T07:39:20.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:19 vm03 bash[23382]: cluster 2026-03-10T07:39:19.596619+0000 mon.a (mon.0) 3131 : cluster [DBG] osdmap e601: 8 total, 8 up, 8 in 2026-03-10T07:39:21.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:39:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:39:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:39:21.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:21 vm00 bash[28005]: cluster 2026-03-10T07:39:20.607931+0000 mon.a (mon.0) 3132 : cluster [DBG] osdmap e602: 8 total, 8 up, 8 in 2026-03-10T07:39:21.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:21 vm00 bash[28005]: cluster 
2026-03-10T07:39:20.607931+0000 mon.a (mon.0) 3132 : cluster [DBG] osdmap e602: 8 total, 8 up, 8 in 2026-03-10T07:39:21.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:21 vm00 bash[28005]: audit 2026-03-10T07:39:20.609879+0000 mon.b (mon.1) 564 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:21.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:21 vm00 bash[28005]: audit 2026-03-10T07:39:20.609879+0000 mon.b (mon.1) 564 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:21.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:21 vm00 bash[28005]: audit 2026-03-10T07:39:20.610159+0000 mon.a (mon.0) 3133 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:21.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:21 vm00 bash[28005]: audit 2026-03-10T07:39:20.610159+0000 mon.a (mon.0) 3133 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:21.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:21 vm00 bash[28005]: cluster 2026-03-10T07:39:20.732186+0000 mgr.y (mgr.24407) 544 : cluster [DBG] pgmap v933: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:21.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:21 vm00 bash[28005]: cluster 2026-03-10T07:39:20.732186+0000 mgr.y (mgr.24407) 544 : cluster [DBG] pgmap v933: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:21.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:21 vm00 bash[20701]: cluster 2026-03-10T07:39:20.607931+0000 mon.a (mon.0) 3132 : cluster [DBG] osdmap e602: 8 total, 8 up, 8 in 2026-03-10T07:39:21.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:21 vm00 bash[20701]: cluster 2026-03-10T07:39:20.607931+0000 mon.a (mon.0) 3132 : cluster [DBG] osdmap e602: 8 total, 8 up, 8 in 2026-03-10T07:39:21.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:21 vm00 bash[20701]: audit 2026-03-10T07:39:20.609879+0000 mon.b (mon.1) 564 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:21.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:21 vm00 bash[20701]: audit 2026-03-10T07:39:20.609879+0000 mon.b (mon.1) 564 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:21.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:21 vm00 bash[20701]: audit 2026-03-10T07:39:20.610159+0000 mon.a (mon.0) 3133 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:21.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:21 vm00 bash[20701]: audit 2026-03-10T07:39:20.610159+0000 mon.a (mon.0) 3133 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:21.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:21 vm00 bash[20701]: cluster 2026-03-10T07:39:20.732186+0000 mgr.y (mgr.24407) 544 : cluster [DBG] pgmap v933: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:21.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:21 vm00 bash[20701]: cluster 2026-03-10T07:39:20.732186+0000 mgr.y (mgr.24407) 544 : cluster [DBG] pgmap v933: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:22.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:21 vm03 bash[23382]: cluster 2026-03-10T07:39:20.607931+0000 mon.a (mon.0) 3132 : cluster [DBG] osdmap e602: 8 total, 8 up, 8 in 2026-03-10T07:39:22.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:21 vm03 bash[23382]: cluster 2026-03-10T07:39:20.607931+0000 mon.a (mon.0) 3132 : cluster [DBG] osdmap e602: 8 total, 8 up, 8 in 2026-03-10T07:39:22.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:21 vm03 bash[23382]: audit 2026-03-10T07:39:20.609879+0000 mon.b (mon.1) 564 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:22.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:21 vm03 bash[23382]: audit 2026-03-10T07:39:20.609879+0000 mon.b (mon.1) 564 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:22.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:21 vm03 bash[23382]: audit 2026-03-10T07:39:20.610159+0000 mon.a (mon.0) 3133 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:22.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:21 vm03 bash[23382]: audit 2026-03-10T07:39:20.610159+0000 mon.a (mon.0) 3133 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:22.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:21 vm03 bash[23382]: cluster 2026-03-10T07:39:20.732186+0000 mgr.y (mgr.24407) 544 : cluster [DBG] pgmap v933: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:22.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:21 vm03 bash[23382]: cluster 2026-03-10T07:39:20.732186+0000 mgr.y (mgr.24407) 544 : cluster [DBG] pgmap v933: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:23.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:22 vm03 bash[23382]: audit 2026-03-10T07:39:21.597700+0000 mon.a (mon.0) 3134 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:23.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:22 vm03 bash[23382]: audit 2026-03-10T07:39:21.597700+0000 mon.a (mon.0) 3134 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:23.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:22 vm03 bash[23382]: cluster 2026-03-10T07:39:21.611164+0000 mon.a (mon.0) 3135 : cluster [DBG] osdmap e603: 8 total, 8 up, 8 in 2026-03-10T07:39:23.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:22 vm03 bash[23382]: cluster 2026-03-10T07:39:21.611164+0000 mon.a (mon.0) 3135 : cluster [DBG] osdmap e603: 8 total, 8 up, 8 in 2026-03-10T07:39:23.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:22 vm03 bash[23382]: audit 2026-03-10T07:39:21.640675+0000 mon.b (mon.1) 565 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:23.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:22 vm03 bash[23382]: audit 2026-03-10T07:39:21.640675+0000 mon.b (mon.1) 565 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:23.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:22 vm03 bash[23382]: audit 2026-03-10T07:39:21.640750+0000 mon.a (mon.0) 3136 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:23.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:22 vm03 bash[23382]: audit 2026-03-10T07:39:21.640750+0000 mon.a (mon.0) 3136 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:23.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:22 vm00 bash[28005]: audit 2026-03-10T07:39:21.597700+0000 mon.a (mon.0) 3134 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:23.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:22 vm00 bash[28005]: audit 2026-03-10T07:39:21.597700+0000 mon.a (mon.0) 3134 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:23.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:22 vm00 bash[28005]: cluster 2026-03-10T07:39:21.611164+0000 mon.a (mon.0) 3135 : cluster [DBG] osdmap e603: 8 total, 8 up, 8 in 2026-03-10T07:39:23.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:22 vm00 bash[28005]: cluster 2026-03-10T07:39:21.611164+0000 mon.a (mon.0) 3135 : cluster [DBG] osdmap e603: 8 total, 8 up, 8 in 2026-03-10T07:39:23.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:22 vm00 bash[28005]: audit 2026-03-10T07:39:21.640675+0000 mon.b (mon.1) 565 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:23.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:22 vm00 bash[28005]: audit 2026-03-10T07:39:21.640675+0000 mon.b (mon.1) 565 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:23.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:22 vm00 bash[28005]: audit 2026-03-10T07:39:21.640750+0000 mon.a (mon.0) 3136 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:23.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:22 vm00 bash[28005]: audit 2026-03-10T07:39:21.640750+0000 mon.a (mon.0) 3136 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:23.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:22 vm00 bash[20701]: audit 2026-03-10T07:39:21.597700+0000 mon.a (mon.0) 3134 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:23.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:22 vm00 bash[20701]: audit 2026-03-10T07:39:21.597700+0000 mon.a (mon.0) 3134 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:23.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:22 vm00 bash[20701]: cluster 2026-03-10T07:39:21.611164+0000 mon.a (mon.0) 3135 : cluster [DBG] osdmap e603: 8 total, 8 up, 8 in 2026-03-10T07:39:23.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:22 vm00 bash[20701]: cluster 2026-03-10T07:39:21.611164+0000 mon.a (mon.0) 3135 : cluster [DBG] osdmap e603: 8 total, 8 up, 8 in 2026-03-10T07:39:23.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:22 vm00 bash[20701]: audit 2026-03-10T07:39:21.640675+0000 mon.b (mon.1) 565 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:23.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:22 vm00 bash[20701]: audit 2026-03-10T07:39:21.640675+0000 mon.b (mon.1) 565 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:23.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:22 vm00 bash[20701]: audit 2026-03-10T07:39:21.640750+0000 mon.a (mon.0) 3136 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:23.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:22 vm00 bash[20701]: audit 2026-03-10T07:39:21.640750+0000 mon.a (mon.0) 3136 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:23.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:39:23 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:39:23.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:23 vm03 bash[23382]: audit 2026-03-10T07:39:22.627390+0000 mon.a (mon.0) 3137 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:23.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:23 vm03 bash[23382]: audit 2026-03-10T07:39:22.627390+0000 mon.a (mon.0) 3137 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:23.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:23 vm03 bash[23382]: cluster 2026-03-10T07:39:22.630913+0000 mon.a (mon.0) 3138 : cluster [DBG] osdmap e604: 8 total, 8 up, 8 in 2026-03-10T07:39:23.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:23 vm03 bash[23382]: cluster 2026-03-10T07:39:22.630913+0000 mon.a (mon.0) 3138 : cluster [DBG] osdmap e604: 8 total, 8 up, 8 in 2026-03-10T07:39:23.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:23 vm03 bash[23382]: audit 2026-03-10T07:39:22.631669+0000 mon.b (mon.1) 566 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-122"}]: dispatch 2026-03-10T07:39:23.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:23 vm03 bash[23382]: audit 2026-03-10T07:39:22.631669+0000 mon.b (mon.1) 566 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-122"}]: dispatch 2026-03-10T07:39:23.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:23 vm03 bash[23382]: audit 2026-03-10T07:39:22.631994+0000 mon.a (mon.0) 3139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-122"}]: dispatch 2026-03-10T07:39:23.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:23 vm03 bash[23382]: audit 2026-03-10T07:39:22.631994+0000 mon.a (mon.0) 3139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-122"}]: dispatch 2026-03-10T07:39:23.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:23 vm03 bash[23382]: cluster 2026-03-10T07:39:22.732609+0000 mgr.y (mgr.24407) 545 : cluster [DBG] pgmap v936: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:23.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:23 vm03 bash[23382]: cluster 2026-03-10T07:39:22.732609+0000 mgr.y (mgr.24407) 545 : cluster [DBG] pgmap v936: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:23.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:23 vm03 bash[23382]: audit 2026-03-10T07:39:23.630100+0000 mon.a (mon.0) 3140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-122"}]': finished 2026-03-10T07:39:23.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:23 vm03 bash[23382]: audit 2026-03-10T07:39:23.630100+0000 mon.a (mon.0) 3140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-122"}]': finished 2026-03-10T07:39:23.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:23 vm03 bash[23382]: audit 2026-03-10T07:39:23.633688+0000 mon.b (mon.1) 567 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-122", "mode": "writeback"}]: dispatch 2026-03-10T07:39:23.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:23 vm03 bash[23382]: audit 2026-03-10T07:39:23.633688+0000 mon.b (mon.1) 567 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-122", "mode": "writeback"}]: dispatch 2026-03-10T07:39:23.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:23 vm03 bash[23382]: cluster 2026-03-10T07:39:23.638565+0000 mon.a (mon.0) 3141 : cluster [DBG] osdmap e605: 8 total, 8 up, 8 in 2026-03-10T07:39:23.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:23 vm03 bash[23382]: cluster 2026-03-10T07:39:23.638565+0000 mon.a (mon.0) 3141 : cluster [DBG] osdmap e605: 8 total, 8 up, 8 in 2026-03-10T07:39:24.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:23 vm00 bash[28005]: audit 2026-03-10T07:39:22.627390+0000 mon.a (mon.0) 3137 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:24.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:23 vm00 bash[28005]: audit 2026-03-10T07:39:22.627390+0000 mon.a (mon.0) 3137 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:24.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:23 vm00 bash[28005]: cluster 2026-03-10T07:39:22.630913+0000 mon.a (mon.0) 3138 : cluster [DBG] osdmap e604: 8 total, 8 up, 8 in 2026-03-10T07:39:24.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:23 vm00 bash[28005]: cluster 2026-03-10T07:39:22.630913+0000 mon.a (mon.0) 3138 : cluster [DBG] osdmap e604: 8 total, 8 up, 8 in 2026-03-10T07:39:24.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:23 vm00 bash[28005]: audit 2026-03-10T07:39:22.631669+0000 mon.b (mon.1) 566 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-122"}]: dispatch 2026-03-10T07:39:24.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:23 vm00 bash[28005]: audit 2026-03-10T07:39:22.631669+0000 mon.b (mon.1) 566 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-122"}]: dispatch 2026-03-10T07:39:24.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:23 vm00 bash[28005]: audit 2026-03-10T07:39:22.631994+0000 mon.a (mon.0) 3139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-122"}]: dispatch 2026-03-10T07:39:24.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:23 vm00 bash[28005]: audit 2026-03-10T07:39:22.631994+0000 mon.a (mon.0) 3139 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-122"}]: dispatch 2026-03-10T07:39:24.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:23 vm00 bash[28005]: cluster 2026-03-10T07:39:22.732609+0000 mgr.y (mgr.24407) 545 : cluster [DBG] pgmap v936: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:24.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:23 vm00 bash[28005]: cluster 2026-03-10T07:39:22.732609+0000 mgr.y (mgr.24407) 545 : cluster [DBG] pgmap v936: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:24.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:23 vm00 bash[28005]: audit 2026-03-10T07:39:23.630100+0000 mon.a (mon.0) 3140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-122"}]': finished 2026-03-10T07:39:24.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:23 vm00 bash[28005]: audit 2026-03-10T07:39:23.630100+0000 mon.a (mon.0) 3140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-122"}]': finished 2026-03-10T07:39:24.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:23 vm00 bash[28005]: audit 2026-03-10T07:39:23.633688+0000 mon.b (mon.1) 567 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-122", "mode": "writeback"}]: dispatch 2026-03-10T07:39:24.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:23 vm00 bash[28005]: audit 2026-03-10T07:39:23.633688+0000 mon.b (mon.1) 567 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-122", "mode": "writeback"}]: dispatch 2026-03-10T07:39:24.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:23 vm00 bash[28005]: cluster 2026-03-10T07:39:23.638565+0000 mon.a (mon.0) 3141 : cluster [DBG] osdmap e605: 8 total, 8 up, 8 in 2026-03-10T07:39:24.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:23 vm00 bash[28005]: cluster 2026-03-10T07:39:23.638565+0000 mon.a (mon.0) 3141 : cluster [DBG] osdmap e605: 8 total, 8 up, 8 in 2026-03-10T07:39:24.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:23 vm00 bash[20701]: audit 2026-03-10T07:39:22.627390+0000 mon.a (mon.0) 3137 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:24.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:23 vm00 bash[20701]: audit 2026-03-10T07:39:22.627390+0000 mon.a (mon.0) 3137 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:24.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:23 vm00 bash[20701]: cluster 2026-03-10T07:39:22.630913+0000 mon.a (mon.0) 3138 : cluster [DBG] osdmap e604: 8 total, 8 up, 8 in 2026-03-10T07:39:24.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:23 vm00 bash[20701]: cluster 2026-03-10T07:39:22.630913+0000 mon.a (mon.0) 3138 : cluster [DBG] osdmap e604: 8 total, 8 up, 8 in 2026-03-10T07:39:24.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:23 vm00 bash[20701]: audit 2026-03-10T07:39:22.631669+0000 mon.b (mon.1) 566 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-122"}]: dispatch 2026-03-10T07:39:24.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:23 vm00 bash[20701]: audit 2026-03-10T07:39:22.631669+0000 mon.b (mon.1) 566 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-122"}]: dispatch 2026-03-10T07:39:24.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:23 vm00 bash[20701]: audit 2026-03-10T07:39:22.631994+0000 mon.a (mon.0) 3139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-122"}]: dispatch 2026-03-10T07:39:24.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:23 vm00 bash[20701]: audit 2026-03-10T07:39:22.631994+0000 mon.a (mon.0) 3139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-122"}]: dispatch 2026-03-10T07:39:24.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:23 vm00 bash[20701]: cluster 2026-03-10T07:39:22.732609+0000 mgr.y (mgr.24407) 545 : cluster [DBG] pgmap v936: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:24.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:23 vm00 bash[20701]: cluster 2026-03-10T07:39:22.732609+0000 mgr.y (mgr.24407) 545 : cluster [DBG] pgmap v936: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:24.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:23 vm00 bash[20701]: audit 2026-03-10T07:39:23.630100+0000 mon.a (mon.0) 3140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-122"}]': finished 2026-03-10T07:39:24.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:23 vm00 bash[20701]: audit 2026-03-10T07:39:23.630100+0000 mon.a (mon.0) 3140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-122"}]': finished 2026-03-10T07:39:24.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:23 vm00 bash[20701]: audit 2026-03-10T07:39:23.633688+0000 mon.b (mon.1) 567 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-122", "mode": "writeback"}]: dispatch 2026-03-10T07:39:24.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:23 vm00 bash[20701]: audit 2026-03-10T07:39:23.633688+0000 mon.b (mon.1) 567 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-122", "mode": "writeback"}]: dispatch 2026-03-10T07:39:24.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:23 vm00 bash[20701]: cluster 2026-03-10T07:39:23.638565+0000 mon.a (mon.0) 3141 : cluster [DBG] osdmap e605: 8 total, 8 up, 8 in 2026-03-10T07:39:24.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:23 vm00 bash[20701]: cluster 2026-03-10T07:39:23.638565+0000 mon.a (mon.0) 3141 : cluster [DBG] osdmap e605: 8 total, 8 up, 8 in 2026-03-10T07:39:25.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:24 vm03 bash[23382]: audit 2026-03-10T07:39:23.464987+0000 mgr.y (mgr.24407) 546 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:25.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:24 vm03 bash[23382]: audit 2026-03-10T07:39:23.464987+0000 mgr.y (mgr.24407) 546 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:25.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:24 vm03 bash[23382]: audit 2026-03-10T07:39:23.639209+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-122", "mode": "writeback"}]: dispatch 2026-03-10T07:39:25.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:24 vm03 bash[23382]: audit 2026-03-10T07:39:23.639209+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-122", "mode": "writeback"}]: dispatch 2026-03-10T07:39:25.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:24 vm03 bash[23382]: cluster 2026-03-10T07:39:24.630174+0000 mon.a (mon.0) 3143 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:25.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:24 vm03 bash[23382]: cluster 2026-03-10T07:39:24.630174+0000 mon.a (mon.0) 3143 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:25.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:24 vm03 bash[23382]: audit 2026-03-10T07:39:24.633678+0000 mon.a (mon.0) 3144 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-122", "mode": "writeback"}]': finished 2026-03-10T07:39:25.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:24 vm03 bash[23382]: audit 2026-03-10T07:39:24.633678+0000 mon.a (mon.0) 3144 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-122", "mode": "writeback"}]': finished 2026-03-10T07:39:25.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:24 vm03 bash[23382]: cluster 2026-03-10T07:39:24.642447+0000 mon.a (mon.0) 3145 : cluster [DBG] osdmap e606: 8 total, 8 up, 8 in 2026-03-10T07:39:25.014 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:24 vm03 bash[23382]: cluster 2026-03-10T07:39:24.642447+0000 mon.a (mon.0) 3145 : cluster [DBG] osdmap e606: 8 total, 8 up, 8 in 2026-03-10T07:39:25.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:24 vm00 bash[28005]: audit 2026-03-10T07:39:23.464987+0000 mgr.y (mgr.24407) 546 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:25.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:24 vm00 bash[28005]: audit 2026-03-10T07:39:23.464987+0000 mgr.y (mgr.24407) 546 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:25.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:24 vm00 bash[28005]: audit 2026-03-10T07:39:23.639209+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-122", "mode": "writeback"}]: dispatch 2026-03-10T07:39:25.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:24 vm00 bash[28005]: audit 2026-03-10T07:39:23.639209+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-122", "mode": "writeback"}]: dispatch 2026-03-10T07:39:25.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:24 vm00 bash[28005]: cluster 2026-03-10T07:39:24.630174+0000 mon.a (mon.0) 3143 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:25.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:24 vm00 bash[28005]: cluster 2026-03-10T07:39:24.630174+0000 mon.a (mon.0) 3143 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:25.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:24 vm00 bash[28005]: audit 2026-03-10T07:39:24.633678+0000 mon.a (mon.0) 3144 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-122", "mode": "writeback"}]': finished 2026-03-10T07:39:25.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:24 vm00 bash[28005]: audit 2026-03-10T07:39:24.633678+0000 mon.a (mon.0) 3144 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-122", "mode": "writeback"}]': finished 2026-03-10T07:39:25.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:24 vm00 bash[28005]: cluster 2026-03-10T07:39:24.642447+0000 mon.a (mon.0) 3145 : cluster [DBG] osdmap e606: 8 total, 8 up, 8 in 2026-03-10T07:39:25.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:24 vm00 bash[28005]: cluster 2026-03-10T07:39:24.642447+0000 mon.a (mon.0) 3145 : cluster [DBG] osdmap e606: 8 total, 8 up, 8 in 2026-03-10T07:39:25.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:24 vm00 bash[20701]: audit 2026-03-10T07:39:23.464987+0000 mgr.y (mgr.24407) 546 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:25.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:24 vm00 bash[20701]: audit 2026-03-10T07:39:23.464987+0000 mgr.y (mgr.24407) 546 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:25.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:24 vm00 bash[20701]: audit 2026-03-10T07:39:23.639209+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-122", "mode": "writeback"}]: dispatch 2026-03-10T07:39:25.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:24 vm00 bash[20701]: audit 2026-03-10T07:39:23.639209+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-122", "mode": "writeback"}]: dispatch 2026-03-10T07:39:25.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:24 vm00 bash[20701]: cluster 2026-03-10T07:39:24.630174+0000 mon.a (mon.0) 3143 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:25.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:24 vm00 bash[20701]: cluster 2026-03-10T07:39:24.630174+0000 mon.a (mon.0) 3143 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:25.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:24 vm00 bash[20701]: audit 2026-03-10T07:39:24.633678+0000 mon.a (mon.0) 3144 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-122", "mode": "writeback"}]': finished 2026-03-10T07:39:25.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:24 vm00 bash[20701]: audit 2026-03-10T07:39:24.633678+0000 mon.a (mon.0) 3144 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-122", "mode": "writeback"}]': finished 2026-03-10T07:39:25.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:24 vm00 bash[20701]: cluster 2026-03-10T07:39:24.642447+0000 mon.a (mon.0) 3145 : cluster [DBG] osdmap e606: 8 total, 8 up, 8 in 2026-03-10T07:39:25.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:24 vm00 bash[20701]: cluster 2026-03-10T07:39:24.642447+0000 mon.a (mon.0) 3145 : cluster [DBG] osdmap e606: 8 total, 8 up, 8 in 2026-03-10T07:39:26.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:25 vm03 bash[23382]: audit 2026-03-10T07:39:24.700922+0000 mon.b (mon.1) 568 : audit [INF] from='client.? 
2026-03-10T07:39:26.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:25 vm03 bash[23382]: audit 2026-03-10T07:39:24.700922+0000 mon.b (mon.1) 568 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:39:26.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:25 vm03 bash[23382]: audit 2026-03-10T07:39:24.701052+0000 mon.a (mon.0) 3146 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:39:26.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:25 vm03 bash[23382]: audit 2026-03-10T07:39:24.717744+0000 mon.c (mon.2) 349 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:39:26.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:25 vm03 bash[23382]: cluster 2026-03-10T07:39:24.733132+0000 mgr.y (mgr.24407) 547 : cluster [DBG] pgmap v939: 268 pgs: 13 unknown, 255 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:39:26.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:25 vm00 bash[28005]: audit 2026-03-10T07:39:24.700922+0000 mon.b (mon.1) 568 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:39:26.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:25 vm00 bash[28005]: audit 2026-03-10T07:39:24.701052+0000 mon.a (mon.0) 3146 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:39:26.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:25 vm00 bash[28005]: audit 2026-03-10T07:39:24.717744+0000 mon.c (mon.2) 349 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:39:26.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:25 vm00 bash[28005]: cluster 2026-03-10T07:39:24.733132+0000 mgr.y (mgr.24407) 547 : cluster [DBG] pgmap v939: 268 pgs: 13 unknown, 255 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:39:26.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:25 vm00 bash[20701]: audit 2026-03-10T07:39:24.700922+0000 mon.b (mon.1) 568 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:39:26.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:25 vm00 bash[20701]: audit 2026-03-10T07:39:24.701052+0000 mon.a (mon.0) 3146 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:39:26.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:25 vm00 bash[20701]: audit 2026-03-10T07:39:24.717744+0000 mon.c (mon.2) 349 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:39:26.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:25 vm00 bash[20701]: cluster 2026-03-10T07:39:24.733132+0000 mgr.y (mgr.24407) 547 : cluster [DBG] pgmap v939: 268 pgs: 13 unknown, 255 active+clean; 455 KiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:39:27.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:26 vm03 bash[23382]: audit 2026-03-10T07:39:25.671724+0000 mon.a (mon.0) 3147 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:39:27.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:26 vm03 bash[23382]: audit 2026-03-10T07:39:25.677865+0000 mon.b (mon.1) 569 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122"}]: dispatch
2026-03-10T07:39:27.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:26 vm03 bash[23382]: cluster 2026-03-10T07:39:25.682346+0000 mon.a (mon.0) 3148 : cluster [DBG] osdmap e607: 8 total, 8 up, 8 in
2026-03-10T07:39:27.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:26 vm03 bash[23382]: audit 2026-03-10T07:39:25.683143+0000 mon.a (mon.0) 3149 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122"}]: dispatch
2026-03-10T07:39:27.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:26 vm03 bash[23382]: cluster 2026-03-10T07:39:26.555987+0000 mon.a (mon.0) 3150 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:39:27.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:26 vm00 bash[28005]: audit 2026-03-10T07:39:25.671724+0000 mon.a (mon.0) 3147 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:39:27.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:26 vm00 bash[28005]: audit 2026-03-10T07:39:25.677865+0000 mon.b (mon.1) 569 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122"}]: dispatch
2026-03-10T07:39:27.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:26 vm00 bash[28005]: cluster 2026-03-10T07:39:25.682346+0000 mon.a (mon.0) 3148 : cluster [DBG] osdmap e607: 8 total, 8 up, 8 in
2026-03-10T07:39:27.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:26 vm00 bash[28005]: audit 2026-03-10T07:39:25.683143+0000 mon.a (mon.0) 3149 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122"}]: dispatch
2026-03-10T07:39:27.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:26 vm00 bash[28005]: cluster 2026-03-10T07:39:26.555987+0000 mon.a (mon.0) 3150 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:39:27.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:26 vm00 bash[20701]: audit 2026-03-10T07:39:25.671724+0000 mon.a (mon.0) 3147 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:39:27.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:26 vm00 bash[20701]: audit 2026-03-10T07:39:25.677865+0000 mon.b (mon.1) 569 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122"}]: dispatch
2026-03-10T07:39:27.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:26 vm00 bash[20701]: cluster 2026-03-10T07:39:25.682346+0000 mon.a (mon.0) 3148 : cluster [DBG] osdmap e607: 8 total, 8 up, 8 in
2026-03-10T07:39:27.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:26 vm00 bash[20701]: audit 2026-03-10T07:39:25.683143+0000 mon.a (mon.0) 3149 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122"}]: dispatch 2026-03-10T07:39:27.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:26 vm00 bash[20701]: cluster 2026-03-10T07:39:26.555987+0000 mon.a (mon.0) 3150 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:39:27.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:26 vm00 bash[20701]: cluster 2026-03-10T07:39:26.555987+0000 mon.a (mon.0) 3150 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:39:28.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:28 vm00 bash[28005]: cluster 2026-03-10T07:39:26.671718+0000 mon.a (mon.0) 3151 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:28.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:28 vm00 bash[28005]: cluster 2026-03-10T07:39:26.671718+0000 mon.a (mon.0) 3151 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:28.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:28 vm00 bash[28005]: audit 2026-03-10T07:39:26.674789+0000 mon.a (mon.0) 3152 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122"}]': finished 2026-03-10T07:39:28.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:28 vm00 bash[28005]: audit 2026-03-10T07:39:26.674789+0000 mon.a (mon.0) 3152 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122"}]': finished 2026-03-10T07:39:28.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:28 vm00 bash[28005]: cluster 2026-03-10T07:39:26.694327+0000 mon.a (mon.0) 3153 : cluster [DBG] osdmap e608: 8 total, 8 up, 8 in 2026-03-10T07:39:28.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:28 vm00 bash[28005]: cluster 2026-03-10T07:39:26.694327+0000 mon.a (mon.0) 3153 : cluster [DBG] osdmap e608: 8 total, 8 up, 8 in 2026-03-10T07:39:28.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:28 vm00 bash[28005]: cluster 2026-03-10T07:39:26.733610+0000 mgr.y (mgr.24407) 548 : cluster [DBG] pgmap v942: 268 pgs: 268 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:39:28.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:28 vm00 bash[28005]: cluster 2026-03-10T07:39:26.733610+0000 mgr.y (mgr.24407) 548 : cluster [DBG] pgmap v942: 268 pgs: 268 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:39:28.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:28 vm00 bash[20701]: cluster 2026-03-10T07:39:26.671718+0000 mon.a (mon.0) 3151 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:28.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:28 vm00 bash[20701]: cluster 2026-03-10T07:39:26.671718+0000 mon.a (mon.0) 3151 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:28.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:28 vm00 bash[20701]: audit 2026-03-10T07:39:26.674789+0000 mon.a (mon.0) 
3152 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122"}]': finished 2026-03-10T07:39:28.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:28 vm00 bash[20701]: audit 2026-03-10T07:39:26.674789+0000 mon.a (mon.0) 3152 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122"}]': finished 2026-03-10T07:39:28.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:28 vm00 bash[20701]: cluster 2026-03-10T07:39:26.694327+0000 mon.a (mon.0) 3153 : cluster [DBG] osdmap e608: 8 total, 8 up, 8 in 2026-03-10T07:39:28.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:28 vm00 bash[20701]: cluster 2026-03-10T07:39:26.694327+0000 mon.a (mon.0) 3153 : cluster [DBG] osdmap e608: 8 total, 8 up, 8 in 2026-03-10T07:39:28.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:28 vm00 bash[20701]: cluster 2026-03-10T07:39:26.733610+0000 mgr.y (mgr.24407) 548 : cluster [DBG] pgmap v942: 268 pgs: 268 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:39:28.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:28 vm00 bash[20701]: cluster 2026-03-10T07:39:26.733610+0000 mgr.y (mgr.24407) 548 : cluster [DBG] pgmap v942: 268 pgs: 268 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:39:28.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:28 vm03 bash[23382]: cluster 2026-03-10T07:39:26.671718+0000 mon.a (mon.0) 3151 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:28.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:28 vm03 bash[23382]: cluster 2026-03-10T07:39:26.671718+0000 mon.a (mon.0) 3151 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:28 vm03 bash[23382]: audit 2026-03-10T07:39:26.674789+0000 mon.a (mon.0) 3152 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122"}]': finished 2026-03-10T07:39:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:28 vm03 bash[23382]: audit 2026-03-10T07:39:26.674789+0000 mon.a (mon.0) 3152 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-122"}]': finished 2026-03-10T07:39:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:28 vm03 bash[23382]: cluster 2026-03-10T07:39:26.694327+0000 mon.a (mon.0) 3153 : cluster [DBG] osdmap e608: 8 total, 8 up, 8 in 2026-03-10T07:39:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:28 vm03 bash[23382]: cluster 2026-03-10T07:39:26.694327+0000 mon.a (mon.0) 3153 : cluster [DBG] osdmap e608: 8 total, 8 up, 8 in 2026-03-10T07:39:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:28 vm03 bash[23382]: cluster 2026-03-10T07:39:26.733610+0000 mgr.y (mgr.24407) 548 : cluster [DBG] pgmap v942: 268 pgs: 268 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:39:28.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:28 vm03 bash[23382]: cluster 2026-03-10T07:39:26.733610+0000 mgr.y (mgr.24407) 548 : cluster [DBG] pgmap v942: 268 pgs: 268 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T07:39:29.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:29 vm03 bash[23382]: cluster 2026-03-10T07:39:28.265297+0000 mon.a (mon.0) 3154 : cluster [DBG] osdmap e609: 8 total, 8 up, 8 in 2026-03-10T07:39:29.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:29 vm03 bash[23382]: cluster 2026-03-10T07:39:28.265297+0000 mon.a (mon.0) 3154 : cluster [DBG] osdmap e609: 8 total, 8 up, 8 in 2026-03-10T07:39:29.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:29 vm00 bash[28005]: cluster 2026-03-10T07:39:28.265297+0000 mon.a (mon.0) 3154 : cluster [DBG] osdmap e609: 8 total, 8 up, 8 in 2026-03-10T07:39:29.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:29 vm00 bash[28005]: cluster 2026-03-10T07:39:28.265297+0000 mon.a (mon.0) 3154 : cluster [DBG] osdmap e609: 8 total, 8 up, 8 in 2026-03-10T07:39:29.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:29 vm00 bash[20701]: cluster 2026-03-10T07:39:28.265297+0000 mon.a (mon.0) 3154 : cluster [DBG] osdmap e609: 8 total, 8 up, 8 in 2026-03-10T07:39:29.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:29 vm00 bash[20701]: cluster 2026-03-10T07:39:28.265297+0000 mon.a (mon.0) 3154 : cluster [DBG] osdmap e609: 8 total, 8 up, 8 in 2026-03-10T07:39:30.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:30 vm03 bash[23382]: cluster 2026-03-10T07:39:28.734083+0000 mgr.y (mgr.24407) 549 : cluster [DBG] pgmap v944: 236 pgs: 236 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 249 B/s wr, 1 op/s 2026-03-10T07:39:30.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:30 vm03 bash[23382]: cluster 2026-03-10T07:39:28.734083+0000 mgr.y (mgr.24407) 549 : cluster [DBG] pgmap v944: 236 pgs: 236 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 249 B/s wr, 1 op/s 2026-03-10T07:39:30.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:30 vm03 bash[23382]: cluster 2026-03-10T07:39:29.154350+0000 mon.a (mon.0) 3155 : cluster [DBG] osdmap e610: 8 total, 8 up, 8 in 2026-03-10T07:39:30.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:30 vm03 bash[23382]: cluster 2026-03-10T07:39:29.154350+0000 mon.a (mon.0) 3155 : cluster [DBG] osdmap e610: 8 total, 8 up, 8 in 2026-03-10T07:39:30.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:30 vm03 bash[23382]: audit 2026-03-10T07:39:29.157838+0000 mon.b 
(mon.1) 570 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:30.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:30 vm03 bash[23382]: audit 2026-03-10T07:39:29.157838+0000 mon.b (mon.1) 570 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:30.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:30 vm03 bash[23382]: audit 2026-03-10T07:39:29.158079+0000 mon.a (mon.0) 3156 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:30.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:30 vm03 bash[23382]: audit 2026-03-10T07:39:29.158079+0000 mon.a (mon.0) 3156 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:30.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:30 vm00 bash[28005]: cluster 2026-03-10T07:39:28.734083+0000 mgr.y (mgr.24407) 549 : cluster [DBG] pgmap v944: 236 pgs: 236 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 249 B/s wr, 1 op/s 2026-03-10T07:39:30.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:30 vm00 bash[28005]: cluster 2026-03-10T07:39:28.734083+0000 mgr.y (mgr.24407) 549 : cluster [DBG] pgmap v944: 236 pgs: 236 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 249 B/s wr, 1 op/s 2026-03-10T07:39:30.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:30 vm00 bash[28005]: cluster 2026-03-10T07:39:29.154350+0000 mon.a (mon.0) 3155 : cluster [DBG] osdmap e610: 8 total, 8 up, 8 in 2026-03-10T07:39:30.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:30 vm00 bash[28005]: cluster 2026-03-10T07:39:29.154350+0000 mon.a (mon.0) 3155 : cluster [DBG] osdmap e610: 8 total, 8 up, 8 in 2026-03-10T07:39:30.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:30 vm00 bash[28005]: audit 2026-03-10T07:39:29.157838+0000 mon.b (mon.1) 570 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:30.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:30 vm00 bash[28005]: audit 2026-03-10T07:39:29.157838+0000 mon.b (mon.1) 570 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:30.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:30 vm00 bash[28005]: audit 2026-03-10T07:39:29.158079+0000 mon.a (mon.0) 3156 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:30.636 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:30 vm00 bash[28005]: audit 2026-03-10T07:39:29.158079+0000 mon.a (mon.0) 3156 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:30.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:30 vm00 bash[20701]: cluster 2026-03-10T07:39:28.734083+0000 mgr.y (mgr.24407) 549 : cluster [DBG] pgmap v944: 236 pgs: 236 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 249 B/s wr, 1 op/s 2026-03-10T07:39:30.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:30 vm00 bash[20701]: cluster 2026-03-10T07:39:28.734083+0000 mgr.y (mgr.24407) 549 : cluster [DBG] pgmap v944: 236 pgs: 236 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 249 B/s wr, 1 op/s 2026-03-10T07:39:30.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:30 vm00 bash[20701]: cluster 2026-03-10T07:39:29.154350+0000 mon.a (mon.0) 3155 : cluster [DBG] osdmap e610: 8 total, 8 up, 8 in 2026-03-10T07:39:30.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:30 vm00 bash[20701]: cluster 2026-03-10T07:39:29.154350+0000 mon.a (mon.0) 3155 : cluster [DBG] osdmap e610: 8 total, 8 up, 8 in 2026-03-10T07:39:30.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:30 vm00 bash[20701]: audit 2026-03-10T07:39:29.157838+0000 mon.b (mon.1) 570 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:30.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:30 vm00 bash[20701]: audit 2026-03-10T07:39:29.157838+0000 mon.b (mon.1) 570 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:30.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:30 vm00 bash[20701]: audit 2026-03-10T07:39:29.158079+0000 mon.a (mon.0) 3156 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:30.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:30 vm00 bash[20701]: audit 2026-03-10T07:39:29.158079+0000 mon.a (mon.0) 3156 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:31.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:39:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:39:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:39:31.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:31 vm00 bash[28005]: audit 2026-03-10T07:39:30.140457+0000 mon.a (mon.0) 3157 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:31.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:31 vm00 bash[28005]: audit 2026-03-10T07:39:30.140457+0000 mon.a (mon.0) 3157 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:31.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:31 vm00 bash[28005]: cluster 2026-03-10T07:39:30.149083+0000 mon.a (mon.0) 3158 : cluster [DBG] osdmap e611: 8 total, 8 up, 8 in 2026-03-10T07:39:31.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:31 vm00 bash[28005]: cluster 2026-03-10T07:39:30.149083+0000 mon.a (mon.0) 3158 : cluster [DBG] osdmap e611: 8 total, 8 up, 8 in 2026-03-10T07:39:31.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:31 vm00 bash[20701]: audit 2026-03-10T07:39:30.140457+0000 mon.a (mon.0) 3157 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:31.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:31 vm00 bash[20701]: audit 2026-03-10T07:39:30.140457+0000 mon.a (mon.0) 3157 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:31.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:31 vm00 bash[20701]: cluster 2026-03-10T07:39:30.149083+0000 mon.a (mon.0) 3158 : cluster [DBG] osdmap e611: 8 total, 8 up, 8 in 2026-03-10T07:39:31.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:31 vm00 bash[20701]: cluster 2026-03-10T07:39:30.149083+0000 mon.a (mon.0) 3158 : cluster [DBG] osdmap e611: 8 total, 8 up, 8 in 2026-03-10T07:39:32.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:31 vm03 bash[23382]: audit 2026-03-10T07:39:30.140457+0000 mon.a (mon.0) 3157 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:32.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:31 vm03 bash[23382]: audit 2026-03-10T07:39:30.140457+0000 mon.a (mon.0) 3157 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:32.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:31 vm03 bash[23382]: cluster 2026-03-10T07:39:30.149083+0000 mon.a (mon.0) 3158 : cluster [DBG] osdmap e611: 8 total, 8 up, 8 in 2026-03-10T07:39:32.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:31 vm03 bash[23382]: cluster 2026-03-10T07:39:30.149083+0000 mon.a (mon.0) 3158 : cluster [DBG] osdmap e611: 8 total, 8 up, 8 in 2026-03-10T07:39:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:32 vm03 bash[23382]: cluster 2026-03-10T07:39:30.734493+0000 mgr.y (mgr.24407) 550 : cluster [DBG] pgmap v947: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1009 B/s rd, 252 B/s wr, 1 op/s 2026-03-10T07:39:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:32 vm03 bash[23382]: cluster 2026-03-10T07:39:30.734493+0000 mgr.y (mgr.24407) 550 : cluster [DBG] pgmap v947: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1009 B/s rd, 252 B/s wr, 1 op/s 2026-03-10T07:39:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:32 vm03 bash[23382]: cluster 2026-03-10T07:39:31.703009+0000 mon.a (mon.0) 3159 : cluster [DBG] osdmap e612: 8 total, 8 up, 8 in 2026-03-10T07:39:33.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:32 vm03 bash[23382]: cluster 2026-03-10T07:39:31.703009+0000 mon.a (mon.0) 3159 : cluster [DBG] osdmap e612: 8 total, 8 up, 8 in 2026-03-10T07:39:33.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:32 vm00 bash[28005]: cluster 2026-03-10T07:39:30.734493+0000 mgr.y (mgr.24407) 550 : cluster [DBG] pgmap v947: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1009 B/s rd, 252 B/s wr, 1 op/s 2026-03-10T07:39:33.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:32 vm00 bash[28005]: cluster 2026-03-10T07:39:30.734493+0000 mgr.y (mgr.24407) 550 : cluster [DBG] pgmap v947: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1009 B/s rd, 252 B/s wr, 1 op/s 2026-03-10T07:39:33.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:32 vm00 bash[28005]: cluster 2026-03-10T07:39:31.703009+0000 mon.a (mon.0) 3159 : cluster [DBG] osdmap e612: 8 total, 8 up, 8 in 2026-03-10T07:39:33.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:32 vm00 bash[28005]: cluster 2026-03-10T07:39:31.703009+0000 mon.a (mon.0) 3159 : cluster [DBG] osdmap e612: 8 total, 8 up, 8 in 2026-03-10T07:39:33.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:32 vm00 bash[20701]: cluster 2026-03-10T07:39:30.734493+0000 mgr.y (mgr.24407) 550 : cluster [DBG] pgmap v947: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1009 B/s rd, 252 B/s wr, 1 op/s 2026-03-10T07:39:33.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:32 vm00 bash[20701]: cluster 2026-03-10T07:39:30.734493+0000 mgr.y (mgr.24407) 550 : cluster [DBG] pgmap v947: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1009 B/s rd, 252 B/s wr, 1 op/s 2026-03-10T07:39:33.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:32 vm00 bash[20701]: cluster 2026-03-10T07:39:31.703009+0000 mon.a (mon.0) 3159 : cluster [DBG] osdmap e612: 8 total, 8 up, 8 in 2026-03-10T07:39:33.381 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:32 vm00 bash[20701]: cluster 2026-03-10T07:39:31.703009+0000 mon.a (mon.0) 3159 : cluster [DBG] osdmap e612: 8 total, 8 up, 8 in 2026-03-10T07:39:33.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:39:33 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:39:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:33 vm03 bash[23382]: cluster 2026-03-10T07:39:32.734889+0000 mgr.y (mgr.24407) 551 : cluster [DBG] pgmap v949: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 222 B/s wr, 1 op/s 2026-03-10T07:39:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:33 vm03 bash[23382]: cluster 2026-03-10T07:39:32.734889+0000 mgr.y (mgr.24407) 551 : cluster [DBG] pgmap v949: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 222 B/s wr, 1 op/s 2026-03-10T07:39:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:33 vm03 bash[23382]: cluster 2026-03-10T07:39:32.904779+0000 mon.a (mon.0) 3160 : cluster [DBG] osdmap e613: 8 total, 8 up, 8 in 2026-03-10T07:39:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:33 vm03 bash[23382]: cluster 2026-03-10T07:39:32.904779+0000 mon.a (mon.0) 3160 : cluster [DBG] osdmap e613: 8 total, 8 up, 8 in 2026-03-10T07:39:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:33 vm03 bash[23382]: audit 2026-03-10T07:39:32.938877+0000 mon.a (mon.0) 3161 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:34.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:33 vm03 bash[23382]: audit 2026-03-10T07:39:32.938877+0000 mon.a (mon.0) 3161 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:34.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:33 vm03 bash[23382]: audit 2026-03-10T07:39:32.938926+0000 mon.b (mon.1) 571 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:34.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:33 vm03 bash[23382]: audit 2026-03-10T07:39:32.938926+0000 mon.b (mon.1) 571 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:34.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:33 vm00 bash[28005]: cluster 2026-03-10T07:39:32.734889+0000 mgr.y (mgr.24407) 551 : cluster [DBG] pgmap v949: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 222 B/s wr, 1 op/s 2026-03-10T07:39:34.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:33 vm00 bash[28005]: cluster 2026-03-10T07:39:32.734889+0000 mgr.y (mgr.24407) 551 : cluster [DBG] pgmap v949: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 222 B/s wr, 1 op/s 2026-03-10T07:39:34.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:33 vm00 bash[28005]: cluster 2026-03-10T07:39:32.904779+0000 mon.a (mon.0) 3160 : cluster [DBG] osdmap e613: 8 total, 8 up, 8 in 2026-03-10T07:39:34.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:33 vm00 bash[28005]: cluster 2026-03-10T07:39:32.904779+0000 mon.a (mon.0) 3160 : cluster [DBG] osdmap e613: 8 total, 8 up, 8 in 2026-03-10T07:39:34.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:33 vm00 bash[28005]: audit 2026-03-10T07:39:32.938877+0000 mon.a (mon.0) 3161 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:34.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:33 vm00 bash[28005]: audit 2026-03-10T07:39:32.938877+0000 mon.a (mon.0) 3161 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:34.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:33 vm00 bash[28005]: audit 2026-03-10T07:39:32.938926+0000 mon.b (mon.1) 571 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:34.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:33 vm00 bash[28005]: audit 2026-03-10T07:39:32.938926+0000 mon.b (mon.1) 571 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:34.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:33 vm00 bash[20701]: cluster 2026-03-10T07:39:32.734889+0000 mgr.y (mgr.24407) 551 : cluster [DBG] pgmap v949: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 222 B/s wr, 1 op/s 2026-03-10T07:39:34.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:33 vm00 bash[20701]: cluster 2026-03-10T07:39:32.734889+0000 mgr.y (mgr.24407) 551 : cluster [DBG] pgmap v949: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 222 B/s wr, 1 op/s 2026-03-10T07:39:34.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:33 vm00 bash[20701]: cluster 2026-03-10T07:39:32.904779+0000 mon.a (mon.0) 3160 : cluster [DBG] osdmap e613: 8 total, 8 up, 8 in 2026-03-10T07:39:34.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:33 vm00 bash[20701]: cluster 2026-03-10T07:39:32.904779+0000 mon.a (mon.0) 3160 : cluster [DBG] osdmap e613: 8 total, 8 up, 8 in 2026-03-10T07:39:34.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:33 vm00 bash[20701]: audit 2026-03-10T07:39:32.938877+0000 mon.a (mon.0) 3161 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:34.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:33 vm00 bash[20701]: audit 2026-03-10T07:39:32.938877+0000 mon.a (mon.0) 3161 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:34.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:33 vm00 bash[20701]: audit 2026-03-10T07:39:32.938926+0000 mon.b (mon.1) 571 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:34.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:33 vm00 bash[20701]: audit 2026-03-10T07:39:32.938926+0000 mon.b (mon.1) 571 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:35.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:34 vm03 bash[23382]: audit 2026-03-10T07:39:33.474382+0000 mgr.y (mgr.24407) 552 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:35.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:34 vm03 bash[23382]: audit 2026-03-10T07:39:33.474382+0000 mgr.y (mgr.24407) 552 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:35.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:34 vm03 bash[23382]: audit 2026-03-10T07:39:33.921936+0000 mon.a (mon.0) 3162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:35.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:34 vm03 bash[23382]: audit 2026-03-10T07:39:33.921936+0000 mon.a (mon.0) 3162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:35.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:34 vm03 bash[23382]: cluster 2026-03-10T07:39:33.926994+0000 mon.a (mon.0) 3163 : cluster [DBG] osdmap e614: 8 total, 8 up, 8 in 2026-03-10T07:39:35.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:34 vm03 bash[23382]: cluster 2026-03-10T07:39:33.926994+0000 mon.a (mon.0) 3163 : cluster [DBG] osdmap e614: 8 total, 8 up, 8 in 2026-03-10T07:39:35.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:34 vm03 bash[23382]: audit 2026-03-10T07:39:33.927150+0000 mon.b (mon.1) 572 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:35.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:34 vm03 bash[23382]: audit 2026-03-10T07:39:33.927150+0000 mon.b (mon.1) 572 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:35.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:34 vm03 bash[23382]: audit 2026-03-10T07:39:33.927634+0000 mon.a (mon.0) 3164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:35.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:34 vm03 bash[23382]: audit 2026-03-10T07:39:33.927634+0000 mon.a (mon.0) 3164 : audit [INF] from='client.? 
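The records for mon.a 3161-3164 are the setup half of the lifecycle for the next cache pool: osd tier add attaches test-rados-api-vm00-59782-124 in front of ...-111, with force_nonempty acknowledging that the would-be cache pool already contains objects (the mon normally rejects a non-empty tier), and osd tier set-overlay then starts redirecting client I/O through it. A sketch of the same sequence, payloads copied from the audit lines and connection details assumed:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed path
    cluster.connect()
    for payload in (
        # Attach the tier; force_nonempty mirrors the flag in the audit log.
        {"prefix": "osd tier add",
         "pool": "test-rados-api-vm00-59782-111",
         "tierpool": "test-rados-api-vm00-59782-124",
         "force_nonempty": "--force-nonempty"},
        # Then route reads/writes for the base pool through the new tier.
        {"prefix": "osd tier set-overlay",
         "pool": "test-rados-api-vm00-59782-111",
         "overlaypool": "test-rados-api-vm00-59782-124"},
    ):
        ret, _, outs = cluster.mon_command(json.dumps(payload), b'')
        assert ret == 0, outs
    cluster.shutdown()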
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:34 vm00 bash[28005]: audit 2026-03-10T07:39:33.474382+0000 mgr.y (mgr.24407) 552 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:34 vm00 bash[28005]: audit 2026-03-10T07:39:33.474382+0000 mgr.y (mgr.24407) 552 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:34 vm00 bash[28005]: audit 2026-03-10T07:39:33.921936+0000 mon.a (mon.0) 3162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:34 vm00 bash[28005]: audit 2026-03-10T07:39:33.921936+0000 mon.a (mon.0) 3162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:34 vm00 bash[28005]: cluster 2026-03-10T07:39:33.926994+0000 mon.a (mon.0) 3163 : cluster [DBG] osdmap e614: 8 total, 8 up, 8 in 2026-03-10T07:39:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:34 vm00 bash[28005]: cluster 2026-03-10T07:39:33.926994+0000 mon.a (mon.0) 3163 : cluster [DBG] osdmap e614: 8 total, 8 up, 8 in 2026-03-10T07:39:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:34 vm00 bash[28005]: audit 2026-03-10T07:39:33.927150+0000 mon.b (mon.1) 572 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:34 vm00 bash[28005]: audit 2026-03-10T07:39:33.927150+0000 mon.b (mon.1) 572 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:34 vm00 bash[28005]: audit 2026-03-10T07:39:33.927634+0000 mon.a (mon.0) 3164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:34 vm00 bash[28005]: audit 2026-03-10T07:39:33.927634+0000 mon.a (mon.0) 3164 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:35.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:34 vm00 bash[20701]: audit 2026-03-10T07:39:33.474382+0000 mgr.y (mgr.24407) 552 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:35.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:34 vm00 bash[20701]: audit 2026-03-10T07:39:33.474382+0000 mgr.y (mgr.24407) 552 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:35.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:34 vm00 bash[20701]: audit 2026-03-10T07:39:33.921936+0000 mon.a (mon.0) 3162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:35.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:34 vm00 bash[20701]: audit 2026-03-10T07:39:33.921936+0000 mon.a (mon.0) 3162 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:35.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:34 vm00 bash[20701]: cluster 2026-03-10T07:39:33.926994+0000 mon.a (mon.0) 3163 : cluster [DBG] osdmap e614: 8 total, 8 up, 8 in 2026-03-10T07:39:35.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:34 vm00 bash[20701]: cluster 2026-03-10T07:39:33.926994+0000 mon.a (mon.0) 3163 : cluster [DBG] osdmap e614: 8 total, 8 up, 8 in 2026-03-10T07:39:35.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:34 vm00 bash[20701]: audit 2026-03-10T07:39:33.927150+0000 mon.b (mon.1) 572 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:35.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:34 vm00 bash[20701]: audit 2026-03-10T07:39:33.927150+0000 mon.b (mon.1) 572 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:35.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:34 vm00 bash[20701]: audit 2026-03-10T07:39:33.927634+0000 mon.a (mon.0) 3164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:35.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:34 vm00 bash[20701]: audit 2026-03-10T07:39:33.927634+0000 mon.a (mon.0) 3164 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:36.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:35 vm03 bash[23382]: cluster 2026-03-10T07:39:34.735497+0000 mgr.y (mgr.24407) 553 : cluster [DBG] pgmap v952: 268 pgs: 15 creating+peering, 253 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 668 B/s wr, 0 op/s 2026-03-10T07:39:36.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:35 vm03 bash[23382]: cluster 2026-03-10T07:39:34.735497+0000 mgr.y (mgr.24407) 553 : cluster [DBG] pgmap v952: 268 pgs: 15 creating+peering, 253 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 668 B/s wr, 0 op/s 2026-03-10T07:39:36.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:35 vm03 bash[23382]: audit 2026-03-10T07:39:34.930928+0000 mon.a (mon.0) 3165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-124"}]': finished 2026-03-10T07:39:36.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:35 vm03 bash[23382]: audit 2026-03-10T07:39:34.930928+0000 mon.a (mon.0) 3165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-124"}]': finished 2026-03-10T07:39:36.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:35 vm03 bash[23382]: cluster 2026-03-10T07:39:34.945110+0000 mon.a (mon.0) 3166 : cluster [DBG] osdmap e615: 8 total, 8 up, 8 in 2026-03-10T07:39:36.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:35 vm03 bash[23382]: cluster 2026-03-10T07:39:34.945110+0000 mon.a (mon.0) 3166 : cluster [DBG] osdmap e615: 8 total, 8 up, 8 in 2026-03-10T07:39:36.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:35 vm03 bash[23382]: audit 2026-03-10T07:39:34.945203+0000 mon.b (mon.1) 573 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-124", "mode": "writeback"}]: dispatch 2026-03-10T07:39:36.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:35 vm03 bash[23382]: audit 2026-03-10T07:39:34.945203+0000 mon.b (mon.1) 573 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-124", "mode": "writeback"}]: dispatch 2026-03-10T07:39:36.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:35 vm03 bash[23382]: audit 2026-03-10T07:39:34.945812+0000 mon.a (mon.0) 3167 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-124", "mode": "writeback"}]: dispatch 2026-03-10T07:39:36.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:35 vm03 bash[23382]: audit 2026-03-10T07:39:34.945812+0000 mon.a (mon.0) 3167 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-124", "mode": "writeback"}]: dispatch 2026-03-10T07:39:36.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:35 vm00 bash[28005]: cluster 2026-03-10T07:39:34.735497+0000 mgr.y (mgr.24407) 553 : cluster [DBG] pgmap v952: 268 pgs: 15 creating+peering, 253 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 668 B/s wr, 0 op/s 2026-03-10T07:39:36.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:35 vm00 bash[28005]: cluster 2026-03-10T07:39:34.735497+0000 mgr.y (mgr.24407) 553 : cluster [DBG] pgmap v952: 268 pgs: 15 creating+peering, 253 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 668 B/s wr, 0 op/s 2026-03-10T07:39:36.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:35 vm00 bash[28005]: audit 2026-03-10T07:39:34.930928+0000 mon.a (mon.0) 3165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-124"}]': finished 2026-03-10T07:39:36.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:35 vm00 bash[28005]: audit 2026-03-10T07:39:34.930928+0000 mon.a (mon.0) 3165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-124"}]': finished 2026-03-10T07:39:36.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:35 vm00 bash[28005]: cluster 2026-03-10T07:39:34.945110+0000 mon.a (mon.0) 3166 : cluster [DBG] osdmap e615: 8 total, 8 up, 8 in 2026-03-10T07:39:36.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:35 vm00 bash[28005]: cluster 2026-03-10T07:39:34.945110+0000 mon.a (mon.0) 3166 : cluster [DBG] osdmap e615: 8 total, 8 up, 8 in 2026-03-10T07:39:36.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:35 vm00 bash[28005]: audit 2026-03-10T07:39:34.945203+0000 mon.b (mon.1) 573 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-124", "mode": "writeback"}]: dispatch 2026-03-10T07:39:36.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:35 vm00 bash[28005]: audit 2026-03-10T07:39:34.945203+0000 mon.b (mon.1) 573 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-124", "mode": "writeback"}]: dispatch 2026-03-10T07:39:36.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:35 vm00 bash[28005]: audit 2026-03-10T07:39:34.945812+0000 mon.a (mon.0) 3167 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-124", "mode": "writeback"}]: dispatch 2026-03-10T07:39:36.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:35 vm00 bash[28005]: audit 2026-03-10T07:39:34.945812+0000 mon.a (mon.0) 3167 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-124", "mode": "writeback"}]: dispatch 2026-03-10T07:39:36.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:35 vm00 bash[20701]: cluster 2026-03-10T07:39:34.735497+0000 mgr.y (mgr.24407) 553 : cluster [DBG] pgmap v952: 268 pgs: 15 creating+peering, 253 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 668 B/s wr, 0 op/s 2026-03-10T07:39:36.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:35 vm00 bash[20701]: cluster 2026-03-10T07:39:34.735497+0000 mgr.y (mgr.24407) 553 : cluster [DBG] pgmap v952: 268 pgs: 15 creating+peering, 253 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 668 B/s wr, 0 op/s 2026-03-10T07:39:36.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:35 vm00 bash[20701]: audit 2026-03-10T07:39:34.930928+0000 mon.a (mon.0) 3165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-124"}]': finished 2026-03-10T07:39:36.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:35 vm00 bash[20701]: audit 2026-03-10T07:39:34.930928+0000 mon.a (mon.0) 3165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-124"}]': finished 2026-03-10T07:39:36.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:35 vm00 bash[20701]: cluster 2026-03-10T07:39:34.945110+0000 mon.a (mon.0) 3166 : cluster [DBG] osdmap e615: 8 total, 8 up, 8 in 2026-03-10T07:39:36.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:35 vm00 bash[20701]: cluster 2026-03-10T07:39:34.945110+0000 mon.a (mon.0) 3166 : cluster [DBG] osdmap e615: 8 total, 8 up, 8 in 2026-03-10T07:39:36.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:35 vm00 bash[20701]: audit 2026-03-10T07:39:34.945203+0000 mon.b (mon.1) 573 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-124", "mode": "writeback"}]: dispatch 2026-03-10T07:39:36.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:35 vm00 bash[20701]: audit 2026-03-10T07:39:34.945203+0000 mon.b (mon.1) 573 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-124", "mode": "writeback"}]: dispatch 2026-03-10T07:39:36.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:35 vm00 bash[20701]: audit 2026-03-10T07:39:34.945812+0000 mon.a (mon.0) 3167 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-124", "mode": "writeback"}]: dispatch 2026-03-10T07:39:36.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:35 vm00 bash[20701]: audit 2026-03-10T07:39:34.945812+0000 mon.a (mon.0) 3167 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-124", "mode": "writeback"}]: dispatch 2026-03-10T07:39:37.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:36 vm03 bash[23382]: cluster 2026-03-10T07:39:35.931190+0000 mon.a (mon.0) 3168 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:37.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:36 vm03 bash[23382]: cluster 2026-03-10T07:39:35.931190+0000 mon.a (mon.0) 3168 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:37.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:36 vm03 bash[23382]: audit 2026-03-10T07:39:35.935003+0000 mon.a (mon.0) 3169 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-124", "mode": "writeback"}]': finished 2026-03-10T07:39:37.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:36 vm03 bash[23382]: audit 2026-03-10T07:39:35.935003+0000 mon.a (mon.0) 3169 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-124", "mode": "writeback"}]': finished 2026-03-10T07:39:37.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:36 vm03 bash[23382]: cluster 2026-03-10T07:39:35.938174+0000 mon.a (mon.0) 3170 : cluster [DBG] osdmap e616: 8 total, 8 up, 8 in 2026-03-10T07:39:37.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:36 vm03 bash[23382]: cluster 2026-03-10T07:39:35.938174+0000 mon.a (mon.0) 3170 : cluster [DBG] osdmap e616: 8 total, 8 up, 8 in 2026-03-10T07:39:37.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:36 vm00 bash[28005]: cluster 2026-03-10T07:39:35.931190+0000 mon.a (mon.0) 3168 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:37.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:36 vm00 bash[28005]: cluster 2026-03-10T07:39:35.931190+0000 mon.a (mon.0) 3168 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:37.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:36 vm00 bash[28005]: audit 2026-03-10T07:39:35.935003+0000 mon.a (mon.0) 3169 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-124", "mode": "writeback"}]': finished 2026-03-10T07:39:37.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:36 vm00 bash[28005]: audit 2026-03-10T07:39:35.935003+0000 mon.a (mon.0) 3169 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-124", "mode": "writeback"}]': finished 2026-03-10T07:39:37.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:36 vm00 bash[28005]: cluster 2026-03-10T07:39:35.938174+0000 mon.a (mon.0) 3170 : cluster [DBG] osdmap e616: 8 total, 8 up, 8 in 2026-03-10T07:39:37.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:36 vm00 bash[28005]: cluster 2026-03-10T07:39:35.938174+0000 mon.a (mon.0) 3170 : cluster [DBG] osdmap e616: 8 total, 8 up, 8 in 2026-03-10T07:39:37.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:36 vm00 bash[20701]: cluster 2026-03-10T07:39:35.931190+0000 mon.a (mon.0) 3168 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:37.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:36 vm00 bash[20701]: cluster 2026-03-10T07:39:35.931190+0000 mon.a (mon.0) 3168 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:37.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:36 vm00 bash[20701]: audit 2026-03-10T07:39:35.935003+0000 mon.a (mon.0) 3169 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-124", "mode": "writeback"}]': finished 2026-03-10T07:39:37.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:36 vm00 bash[20701]: audit 2026-03-10T07:39:35.935003+0000 mon.a (mon.0) 3169 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-124", "mode": "writeback"}]': finished 2026-03-10T07:39:37.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:36 vm00 bash[20701]: cluster 2026-03-10T07:39:35.938174+0000 mon.a (mon.0) 3170 : cluster [DBG] osdmap e616: 8 total, 8 up, 8 in 2026-03-10T07:39:37.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:36 vm00 bash[20701]: cluster 2026-03-10T07:39:35.938174+0000 mon.a (mon.0) 3170 : cluster [DBG] osdmap e616: 8 total, 8 up, 8 in 2026-03-10T07:39:38.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:38 vm00 bash[28005]: cluster 2026-03-10T07:39:36.736022+0000 mgr.y (mgr.24407) 554 : cluster [DBG] pgmap v955: 268 pgs: 268 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T07:39:38.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:38 vm00 bash[28005]: cluster 2026-03-10T07:39:36.736022+0000 mgr.y (mgr.24407) 554 : cluster [DBG] pgmap v955: 268 pgs: 268 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T07:39:38.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:38 vm00 bash[28005]: cluster 2026-03-10T07:39:36.978319+0000 mon.a (mon.0) 3171 : cluster [DBG] osdmap e617: 8 total, 8 up, 8 in 2026-03-10T07:39:38.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:38 vm00 bash[28005]: cluster 2026-03-10T07:39:36.978319+0000 mon.a (mon.0) 3171 : cluster [DBG] osdmap e617: 8 total, 8 up, 8 in 2026-03-10T07:39:38.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:38 vm00 bash[28005]: audit 2026-03-10T07:39:37.035972+0000 mon.a (mon.0) 3172 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:38.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:38 vm00 bash[28005]: audit 2026-03-10T07:39:37.035972+0000 mon.a (mon.0) 3172 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:38.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:38 vm00 bash[28005]: audit 2026-03-10T07:39:37.035983+0000 mon.b (mon.1) 574 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:38.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:38 vm00 bash[28005]: audit 2026-03-10T07:39:37.035983+0000 mon.b (mon.1) 574 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:38.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:38 vm00 bash[20701]: cluster 2026-03-10T07:39:36.736022+0000 mgr.y (mgr.24407) 554 : cluster [DBG] pgmap v955: 268 pgs: 268 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T07:39:38.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:38 vm00 bash[20701]: cluster 2026-03-10T07:39:36.736022+0000 mgr.y (mgr.24407) 554 : cluster [DBG] pgmap v955: 268 pgs: 268 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T07:39:38.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:38 vm00 bash[20701]: cluster 2026-03-10T07:39:36.978319+0000 mon.a (mon.0) 3171 : cluster [DBG] osdmap e617: 8 total, 8 up, 8 in 2026-03-10T07:39:38.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:38 vm00 bash[20701]: cluster 2026-03-10T07:39:36.978319+0000 mon.a (mon.0) 3171 : cluster [DBG] osdmap e617: 8 total, 8 up, 8 in 2026-03-10T07:39:38.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:38 vm00 bash[20701]: audit 2026-03-10T07:39:37.035972+0000 mon.a (mon.0) 3172 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:38.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:38 vm00 bash[20701]: audit 2026-03-10T07:39:37.035972+0000 mon.a (mon.0) 3172 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:38.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:38 vm00 bash[20701]: audit 2026-03-10T07:39:37.035983+0000 mon.b (mon.1) 574 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:38.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:38 vm00 bash[20701]: audit 2026-03-10T07:39:37.035983+0000 mon.b (mon.1) 574 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:38.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:38 vm03 bash[23382]: cluster 2026-03-10T07:39:36.736022+0000 mgr.y (mgr.24407) 554 : cluster [DBG] pgmap v955: 268 pgs: 268 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T07:39:38.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:38 vm03 bash[23382]: cluster 2026-03-10T07:39:36.736022+0000 mgr.y (mgr.24407) 554 : cluster [DBG] pgmap v955: 268 pgs: 268 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T07:39:38.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:38 vm03 bash[23382]: cluster 2026-03-10T07:39:36.978319+0000 mon.a (mon.0) 3171 : cluster [DBG] osdmap e617: 8 total, 8 up, 8 in 2026-03-10T07:39:38.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:38 vm03 bash[23382]: cluster 2026-03-10T07:39:36.978319+0000 mon.a (mon.0) 3171 : cluster [DBG] osdmap e617: 8 total, 8 up, 8 in 2026-03-10T07:39:38.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:38 vm03 bash[23382]: audit 2026-03-10T07:39:37.035972+0000 mon.a (mon.0) 3172 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:38.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:38 vm03 bash[23382]: audit 2026-03-10T07:39:37.035972+0000 mon.a (mon.0) 3172 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:38.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:38 vm03 bash[23382]: audit 2026-03-10T07:39:37.035983+0000 mon.b (mon.1) 574 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:38.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:38 vm03 bash[23382]: audit 2026-03-10T07:39:37.035983+0000 mon.b (mon.1) 574 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:39.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:39 vm03 bash[23382]: audit 2026-03-10T07:39:38.148561+0000 mon.a (mon.0) 3173 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:39.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:39 vm03 bash[23382]: audit 2026-03-10T07:39:38.148561+0000 mon.a (mon.0) 3173 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:39.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:39 vm03 bash[23382]: cluster 2026-03-10T07:39:38.210176+0000 mon.a (mon.0) 3174 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-10T07:39:39.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:39 vm03 bash[23382]: cluster 2026-03-10T07:39:38.210176+0000 mon.a (mon.0) 3174 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-10T07:39:39.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:39 vm03 bash[23382]: audit 2026-03-10T07:39:38.237703+0000 mon.b (mon.1) 575 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:39.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:39 vm03 bash[23382]: audit 2026-03-10T07:39:38.237703+0000 mon.b (mon.1) 575 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:39.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:39 vm03 bash[23382]: audit 2026-03-10T07:39:38.249798+0000 mon.a (mon.0) 3175 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:39.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:39 vm03 bash[23382]: audit 2026-03-10T07:39:38.249798+0000 mon.a (mon.0) 3175 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:39.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:39 vm03 bash[23382]: cluster 2026-03-10T07:39:39.148582+0000 mon.a (mon.0) 3176 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:39.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:39 vm03 bash[23382]: cluster 2026-03-10T07:39:39.148582+0000 mon.a (mon.0) 3176 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:39.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:39 vm03 bash[23382]: audit 2026-03-10T07:39:39.191926+0000 mon.a (mon.0) 3177 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124"}]': finished 2026-03-10T07:39:39.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:39 vm03 bash[23382]: audit 2026-03-10T07:39:39.191926+0000 mon.a (mon.0) 3177 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124"}]': finished 2026-03-10T07:39:39.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:39 vm03 bash[23382]: cluster 2026-03-10T07:39:39.202645+0000 mon.a (mon.0) 3178 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-10T07:39:39.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:39 vm03 bash[23382]: cluster 2026-03-10T07:39:39.202645+0000 mon.a (mon.0) 3178 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:39 vm00 bash[28005]: audit 2026-03-10T07:39:38.148561+0000 mon.a (mon.0) 3173 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:39 vm00 bash[28005]: audit 2026-03-10T07:39:38.148561+0000 mon.a (mon.0) 3173 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:39 vm00 bash[28005]: cluster 2026-03-10T07:39:38.210176+0000 mon.a (mon.0) 3174 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:39 vm00 bash[28005]: cluster 2026-03-10T07:39:38.210176+0000 mon.a (mon.0) 3174 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:39 vm00 bash[28005]: audit 2026-03-10T07:39:38.237703+0000 mon.b (mon.1) 575 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:39 vm00 bash[28005]: audit 2026-03-10T07:39:38.237703+0000 mon.b (mon.1) 575 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:39 vm00 bash[28005]: audit 2026-03-10T07:39:38.249798+0000 mon.a (mon.0) 3175 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:39 vm00 bash[28005]: audit 2026-03-10T07:39:38.249798+0000 mon.a (mon.0) 3175 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:39 vm00 bash[28005]: cluster 2026-03-10T07:39:39.148582+0000 mon.a (mon.0) 3176 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:39 vm00 bash[28005]: cluster 2026-03-10T07:39:39.148582+0000 mon.a (mon.0) 3176 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:39 vm00 bash[28005]: audit 2026-03-10T07:39:39.191926+0000 mon.a (mon.0) 3177 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124"}]': finished 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:39 vm00 bash[28005]: audit 2026-03-10T07:39:39.191926+0000 mon.a (mon.0) 3177 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124"}]': finished 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:39 vm00 bash[28005]: cluster 2026-03-10T07:39:39.202645+0000 mon.a (mon.0) 3178 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:39 vm00 bash[28005]: cluster 2026-03-10T07:39:39.202645+0000 mon.a (mon.0) 3178 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:39 vm00 bash[20701]: audit 2026-03-10T07:39:38.148561+0000 mon.a (mon.0) 3173 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:39 vm00 bash[20701]: audit 2026-03-10T07:39:38.148561+0000 mon.a (mon.0) 3173 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:39 vm00 bash[20701]: cluster 2026-03-10T07:39:38.210176+0000 mon.a (mon.0) 3174 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:39 vm00 bash[20701]: cluster 2026-03-10T07:39:38.210176+0000 mon.a (mon.0) 3174 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:39 vm00 bash[20701]: audit 2026-03-10T07:39:38.237703+0000 mon.b (mon.1) 575 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:39 vm00 bash[20701]: audit 2026-03-10T07:39:38.237703+0000 mon.b (mon.1) 575 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:39 vm00 bash[20701]: audit 2026-03-10T07:39:38.249798+0000 mon.a (mon.0) 3175 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:39.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:39 vm00 bash[20701]: audit 2026-03-10T07:39:38.249798+0000 mon.a (mon.0) 3175 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124"}]: dispatch 2026-03-10T07:39:39.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:39 vm00 bash[20701]: cluster 2026-03-10T07:39:39.148582+0000 mon.a (mon.0) 3176 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:39.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:39 vm00 bash[20701]: cluster 2026-03-10T07:39:39.148582+0000 mon.a (mon.0) 3176 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:39.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:39 vm00 bash[20701]: audit 2026-03-10T07:39:39.191926+0000 mon.a (mon.0) 3177 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124"}]': finished 2026-03-10T07:39:39.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:39 vm00 bash[20701]: audit 2026-03-10T07:39:39.191926+0000 mon.a (mon.0) 3177 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-124"}]': finished 2026-03-10T07:39:39.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:39 vm00 bash[20701]: cluster 2026-03-10T07:39:39.202645+0000 mon.a (mon.0) 3178 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-10T07:39:39.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:39 vm00 bash[20701]: cluster 2026-03-10T07:39:39.202645+0000 mon.a (mon.0) 3178 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-10T07:39:40.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:40 vm03 bash[23382]: cluster 2026-03-10T07:39:38.736373+0000 mgr.y (mgr.24407) 555 : cluster [DBG] pgmap v958: 268 pgs: 268 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:39:40.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:40 vm03 bash[23382]: cluster 2026-03-10T07:39:38.736373+0000 mgr.y (mgr.24407) 555 : cluster [DBG] pgmap v958: 268 pgs: 268 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:39:40.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:40 vm03 bash[23382]: audit 2026-03-10T07:39:39.730401+0000 mon.a (mon.0) 3179 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:39:40.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:40 vm03 bash[23382]: audit 2026-03-10T07:39:39.730401+0000 mon.a (mon.0) 3179 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:39:40.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:40 vm03 bash[23382]: audit 2026-03-10T07:39:39.734854+0000 mon.c (mon.2) 350 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:39:40.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:40 vm03 bash[23382]: audit 2026-03-10T07:39:39.734854+0000 mon.c (mon.2) 350 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:39:40.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:40 vm03 bash[23382]: cluster 2026-03-10T07:39:40.199004+0000 mon.a (mon.0) 3180 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-10T07:39:40.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:40 vm03 bash[23382]: cluster 2026-03-10T07:39:40.199004+0000 mon.a (mon.0) 3180 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-10T07:39:40.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:40 vm00 bash[28005]: cluster 2026-03-10T07:39:38.736373+0000 mgr.y (mgr.24407) 555 : cluster [DBG] pgmap v958: 268 pgs: 268 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:39:40.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:40 vm00 bash[28005]: cluster 2026-03-10T07:39:38.736373+0000 mgr.y (mgr.24407) 555 : cluster [DBG] pgmap v958: 268 pgs: 268 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:39:40.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:40 vm00 bash[28005]: audit 2026-03-10T07:39:39.730401+0000 mon.a (mon.0) 3179 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:39:40.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:40 vm00 bash[28005]: audit 2026-03-10T07:39:39.730401+0000 mon.a (mon.0) 3179 : audit [INF] 
from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:39:40.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:40 vm00 bash[28005]: audit 2026-03-10T07:39:39.734854+0000 mon.c (mon.2) 350 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:39:40.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:40 vm00 bash[28005]: audit 2026-03-10T07:39:39.734854+0000 mon.c (mon.2) 350 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:39:40.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:40 vm00 bash[28005]: cluster 2026-03-10T07:39:40.199004+0000 mon.a (mon.0) 3180 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-10T07:39:40.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:40 vm00 bash[28005]: cluster 2026-03-10T07:39:40.199004+0000 mon.a (mon.0) 3180 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-10T07:39:40.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:40 vm00 bash[20701]: cluster 2026-03-10T07:39:38.736373+0000 mgr.y (mgr.24407) 555 : cluster [DBG] pgmap v958: 268 pgs: 268 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:39:40.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:40 vm00 bash[20701]: cluster 2026-03-10T07:39:38.736373+0000 mgr.y (mgr.24407) 555 : cluster [DBG] pgmap v958: 268 pgs: 268 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:39:40.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:40 vm00 bash[20701]: audit 2026-03-10T07:39:39.730401+0000 mon.a (mon.0) 3179 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:39:40.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:40 vm00 bash[20701]: audit 2026-03-10T07:39:39.730401+0000 mon.a (mon.0) 3179 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:39:40.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:40 vm00 bash[20701]: audit 2026-03-10T07:39:39.734854+0000 mon.c (mon.2) 350 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:39:40.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:40 vm00 bash[20701]: audit 2026-03-10T07:39:39.734854+0000 mon.c (mon.2) 350 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:39:40.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:40 vm00 bash[20701]: cluster 2026-03-10T07:39:40.199004+0000 mon.a (mon.0) 3180 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-10T07:39:40.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:40 vm00 bash[20701]: cluster 2026-03-10T07:39:40.199004+0000 mon.a (mon.0) 3180 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-10T07:39:41.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:39:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:39:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:39:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:42 vm03 bash[23382]: cluster 2026-03-10T07:39:40.736821+0000 mgr.y (mgr.24407) 556 : cluster [DBG] pgmap v961: 236 pgs: 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
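The audit entries above trace one cache-tier teardown in the rados API test: an overlay and writeback cache-mode are set on pool test-rados-api-vm00-59782-124, which raises the CACHE_POOL_NO_HIT_SET health warning (the test configures no hit_sets on the cache pool), and the warning clears once the overlay and the tier are removed. Reconstructed from the mon-command JSON in the audit records, the equivalent ceph CLI sequence is roughly:

    ceph osd tier set-overlay test-rados-api-vm00-59782-111 test-rados-api-vm00-59782-124
    ceph osd tier cache-mode test-rados-api-vm00-59782-124 writeback    # raises CACHE_POOL_NO_HIT_SET
    ceph osd tier remove-overlay test-rados-api-vm00-59782-111
    ceph osd tier remove test-rados-api-vm00-59782-111 test-rados-api-vm00-59782-124    # warning clears

A production cache tier would normally set hit_set parameters (for example, ceph osd pool set test-rados-api-vm00-59782-124 hit_set_type bloom) before enabling writeback; since the test skips that step, the transient warning appears to be expected here.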
2026-03-10T07:39:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:42 vm03 bash[23382]: cluster 2026-03-10T07:39:40.736821+0000 mgr.y (mgr.24407) 556 : cluster [DBG] pgmap v961: 236 pgs: 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:39:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:42 vm03 bash[23382]: cluster 2026-03-10T07:39:41.206448+0000 mon.a (mon.0) 3181 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-10T07:39:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:42 vm03 bash[23382]: cluster 2026-03-10T07:39:41.206448+0000 mon.a (mon.0) 3181 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-10T07:39:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:42 vm03 bash[23382]: audit 2026-03-10T07:39:41.207508+0000 mon.b (mon.1) 576 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:42 vm03 bash[23382]: audit 2026-03-10T07:39:41.207508+0000 mon.b (mon.1) 576 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:42 vm03 bash[23382]: audit 2026-03-10T07:39:41.211621+0000 mon.a (mon.0) 3182 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:42 vm03 bash[23382]: audit 2026-03-10T07:39:41.211621+0000 mon.a (mon.0) 3182 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:42.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:42 vm00 bash[28005]: cluster 2026-03-10T07:39:40.736821+0000 mgr.y (mgr.24407) 556 : cluster [DBG] pgmap v961: 236 pgs: 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:39:42.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:42 vm00 bash[28005]: cluster 2026-03-10T07:39:40.736821+0000 mgr.y (mgr.24407) 556 : cluster [DBG] pgmap v961: 236 pgs: 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:39:42.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:42 vm00 bash[28005]: cluster 2026-03-10T07:39:41.206448+0000 mon.a (mon.0) 3181 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-10T07:39:42.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:42 vm00 bash[28005]: cluster 2026-03-10T07:39:41.206448+0000 mon.a (mon.0) 3181 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-10T07:39:42.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:42 vm00 bash[28005]: audit 2026-03-10T07:39:41.207508+0000 mon.b (mon.1) 576 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:42.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:42 vm00 bash[28005]: audit 2026-03-10T07:39:41.207508+0000 mon.b (mon.1) 576 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:42.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:42 vm00 bash[28005]: audit 2026-03-10T07:39:41.211621+0000 mon.a (mon.0) 3182 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:42.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:42 vm00 bash[28005]: audit 2026-03-10T07:39:41.211621+0000 mon.a (mon.0) 3182 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:42.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:42 vm00 bash[20701]: cluster 2026-03-10T07:39:40.736821+0000 mgr.y (mgr.24407) 556 : cluster [DBG] pgmap v961: 236 pgs: 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:39:42.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:42 vm00 bash[20701]: cluster 2026-03-10T07:39:40.736821+0000 mgr.y (mgr.24407) 556 : cluster [DBG] pgmap v961: 236 pgs: 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:39:42.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:42 vm00 bash[20701]: cluster 2026-03-10T07:39:41.206448+0000 mon.a (mon.0) 3181 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-10T07:39:42.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:42 vm00 bash[20701]: cluster 2026-03-10T07:39:41.206448+0000 mon.a (mon.0) 3181 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-10T07:39:42.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:42 vm00 bash[20701]: audit 2026-03-10T07:39:41.207508+0000 mon.b (mon.1) 576 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:42.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:42 vm00 bash[20701]: audit 2026-03-10T07:39:41.207508+0000 mon.b (mon.1) 576 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:42.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:42 vm00 bash[20701]: audit 2026-03-10T07:39:41.211621+0000 mon.a (mon.0) 3182 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:42.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:42 vm00 bash[20701]: audit 2026-03-10T07:39:41.211621+0000 mon.a (mon.0) 3182 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:39:43.480 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:43 vm03 bash[23382]: audit 2026-03-10T07:39:42.204004+0000 mon.a (mon.0) 3183 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:43.480 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:43 vm03 bash[23382]: audit 2026-03-10T07:39:42.204004+0000 mon.a (mon.0) 3183 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:43.480 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:43 vm03 bash[23382]: cluster 2026-03-10T07:39:42.208746+0000 mon.a (mon.0) 3184 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-10T07:39:43.480 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:43 vm03 bash[23382]: cluster 2026-03-10T07:39:42.208746+0000 mon.a (mon.0) 3184 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-10T07:39:43.480 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:43 vm03 bash[23382]: audit 2026-03-10T07:39:42.213998+0000 mon.b (mon.1) 577 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:43.480 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:43 vm03 bash[23382]: audit 2026-03-10T07:39:42.213998+0000 mon.b (mon.1) 577 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:43.480 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:43 vm03 bash[23382]: audit 2026-03-10T07:39:42.218157+0000 mon.a (mon.0) 3185 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:43.480 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:43 vm03 bash[23382]: audit 2026-03-10T07:39:42.218157+0000 mon.a (mon.0) 3185 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:43.480 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:43 vm03 bash[23382]: audit 2026-03-10T07:39:43.207016+0000 mon.a (mon.0) 3186 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:43.480 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:43 vm03 bash[23382]: audit 2026-03-10T07:39:43.207016+0000 mon.a (mon.0) 3186 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:43.480 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:43 vm03 bash[23382]: cluster 2026-03-10T07:39:43.209941+0000 mon.a (mon.0) 3187 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-10T07:39:43.480 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:43 vm03 bash[23382]: cluster 2026-03-10T07:39:43.209941+0000 mon.a (mon.0) 3187 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-10T07:39:43.480 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:43 vm03 bash[23382]: audit 2026-03-10T07:39:43.211325+0000 mon.a (mon.0) 3188 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:43.480 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:43 vm03 bash[23382]: audit 2026-03-10T07:39:43.211325+0000 mon.a (mon.0) 3188 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:43.480 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:43 vm03 bash[23382]: audit 2026-03-10T07:39:43.211502+0000 mon.b (mon.1) 578 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:43.480 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:43 vm03 bash[23382]: audit 2026-03-10T07:39:43.211502+0000 mon.b (mon.1) 578 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:43 vm00 bash[28005]: audit 2026-03-10T07:39:42.204004+0000 mon.a (mon.0) 3183 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:43 vm00 bash[28005]: audit 2026-03-10T07:39:42.204004+0000 mon.a (mon.0) 3183 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:43 vm00 bash[28005]: cluster 2026-03-10T07:39:42.208746+0000 mon.a (mon.0) 3184 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:43 vm00 bash[28005]: cluster 2026-03-10T07:39:42.208746+0000 mon.a (mon.0) 3184 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:43 vm00 bash[28005]: audit 2026-03-10T07:39:42.213998+0000 mon.b (mon.1) 577 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:43 vm00 bash[28005]: audit 2026-03-10T07:39:42.213998+0000 mon.b (mon.1) 577 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:43 vm00 bash[28005]: audit 2026-03-10T07:39:42.218157+0000 mon.a (mon.0) 3185 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:43 vm00 bash[28005]: audit 2026-03-10T07:39:42.218157+0000 mon.a (mon.0) 3185 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:43 vm00 bash[28005]: audit 2026-03-10T07:39:43.207016+0000 mon.a (mon.0) 3186 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:43 vm00 bash[28005]: audit 2026-03-10T07:39:43.207016+0000 mon.a (mon.0) 3186 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:43 vm00 bash[28005]: cluster 2026-03-10T07:39:43.209941+0000 mon.a (mon.0) 3187 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:43 vm00 bash[28005]: cluster 2026-03-10T07:39:43.209941+0000 mon.a (mon.0) 3187 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:43 vm00 bash[28005]: audit 2026-03-10T07:39:43.211325+0000 mon.a (mon.0) 3188 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:43 vm00 bash[28005]: audit 2026-03-10T07:39:43.211325+0000 mon.a (mon.0) 3188 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:43 vm00 bash[28005]: audit 2026-03-10T07:39:43.211502+0000 mon.b (mon.1) 578 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:43 vm00 bash[28005]: audit 2026-03-10T07:39:43.211502+0000 mon.b (mon.1) 578 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:43 vm00 bash[20701]: audit 2026-03-10T07:39:42.204004+0000 mon.a (mon.0) 3183 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:43 vm00 bash[20701]: audit 2026-03-10T07:39:42.204004+0000 mon.a (mon.0) 3183 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:43 vm00 bash[20701]: cluster 2026-03-10T07:39:42.208746+0000 mon.a (mon.0) 3184 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:43 vm00 bash[20701]: cluster 2026-03-10T07:39:42.208746+0000 mon.a (mon.0) 3184 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:43 vm00 bash[20701]: audit 2026-03-10T07:39:42.213998+0000 mon.b (mon.1) 577 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:43 vm00 bash[20701]: audit 2026-03-10T07:39:42.213998+0000 mon.b (mon.1) 577 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:43 vm00 bash[20701]: audit 2026-03-10T07:39:42.218157+0000 mon.a (mon.0) 3185 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:43 vm00 bash[20701]: audit 2026-03-10T07:39:42.218157+0000 mon.a (mon.0) 3185 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:43 vm00 bash[20701]: audit 2026-03-10T07:39:43.207016+0000 mon.a (mon.0) 3186 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:43.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:43 vm00 bash[20701]: audit 2026-03-10T07:39:43.207016+0000 mon.a (mon.0) 3186 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:43.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:43 vm00 bash[20701]: cluster 2026-03-10T07:39:43.209941+0000 mon.a (mon.0) 3187 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-10T07:39:43.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:43 vm00 bash[20701]: cluster 2026-03-10T07:39:43.209941+0000 mon.a (mon.0) 3187 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-10T07:39:43.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:43 vm00 bash[20701]: audit 2026-03-10T07:39:43.211325+0000 mon.a (mon.0) 3188 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:43.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:43 vm00 bash[20701]: audit 2026-03-10T07:39:43.211325+0000 mon.a (mon.0) 3188 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:43.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:43 vm00 bash[20701]: audit 2026-03-10T07:39:43.211502+0000 mon.b (mon.1) 578 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:43.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:43 vm00 bash[20701]: audit 2026-03-10T07:39:43.211502+0000 mon.b (mon.1) 578 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:43.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:39:43 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:39:44.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:44 vm03 bash[23382]: cluster 2026-03-10T07:39:42.737174+0000 mgr.y (mgr.24407) 557 : cluster [DBG] pgmap v964: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:39:44.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:44 vm03 bash[23382]: cluster 2026-03-10T07:39:42.737174+0000 mgr.y (mgr.24407) 557 : cluster [DBG] pgmap v964: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:39:44.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:44 vm03 bash[23382]: audit 2026-03-10T07:39:44.210112+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-126"}]': finished 2026-03-10T07:39:44.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:44 vm03 bash[23382]: audit 2026-03-10T07:39:44.210112+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-126"}]': finished 2026-03-10T07:39:44.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:44 vm03 bash[23382]: cluster 2026-03-10T07:39:44.213399+0000 mon.a (mon.0) 3190 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-10T07:39:44.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:44 vm03 bash[23382]: cluster 2026-03-10T07:39:44.213399+0000 mon.a (mon.0) 3190 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-10T07:39:44.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:44 vm03 bash[23382]: audit 2026-03-10T07:39:44.214271+0000 mon.b (mon.1) 579 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-126", "mode": "writeback"}]: dispatch 2026-03-10T07:39:44.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:44 vm03 bash[23382]: audit 2026-03-10T07:39:44.214271+0000 mon.b (mon.1) 579 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-126", "mode": "writeback"}]: dispatch 2026-03-10T07:39:44.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:44 vm03 bash[23382]: audit 2026-03-10T07:39:44.214749+0000 mon.a (mon.0) 3191 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-126", "mode": "writeback"}]: dispatch 2026-03-10T07:39:44.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:44 vm03 bash[23382]: audit 2026-03-10T07:39:44.214749+0000 mon.a (mon.0) 3191 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-126", "mode": "writeback"}]: dispatch 2026-03-10T07:39:44.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:44 vm00 bash[28005]: cluster 2026-03-10T07:39:42.737174+0000 mgr.y (mgr.24407) 557 : cluster [DBG] pgmap v964: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:39:44.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:44 vm00 bash[28005]: cluster 2026-03-10T07:39:42.737174+0000 mgr.y (mgr.24407) 557 : cluster [DBG] pgmap v964: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:39:44.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:44 vm00 bash[28005]: audit 2026-03-10T07:39:44.210112+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-126"}]': finished 2026-03-10T07:39:44.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:44 vm00 bash[28005]: audit 2026-03-10T07:39:44.210112+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-126"}]': finished 2026-03-10T07:39:44.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:44 vm00 bash[28005]: cluster 2026-03-10T07:39:44.213399+0000 mon.a (mon.0) 3190 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-10T07:39:44.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:44 vm00 bash[28005]: cluster 2026-03-10T07:39:44.213399+0000 mon.a (mon.0) 3190 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-10T07:39:44.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:44 vm00 bash[28005]: audit 2026-03-10T07:39:44.214271+0000 mon.b (mon.1) 579 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-126", "mode": "writeback"}]: dispatch 2026-03-10T07:39:44.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:44 vm00 bash[28005]: audit 2026-03-10T07:39:44.214271+0000 mon.b (mon.1) 579 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-126", "mode": "writeback"}]: dispatch 2026-03-10T07:39:44.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:44 vm00 bash[28005]: audit 2026-03-10T07:39:44.214749+0000 mon.a (mon.0) 3191 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-126", "mode": "writeback"}]: dispatch 2026-03-10T07:39:44.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:44 vm00 bash[28005]: audit 2026-03-10T07:39:44.214749+0000 mon.a (mon.0) 3191 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-126", "mode": "writeback"}]: dispatch 2026-03-10T07:39:44.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:44 vm00 bash[20701]: cluster 2026-03-10T07:39:42.737174+0000 mgr.y (mgr.24407) 557 : cluster [DBG] pgmap v964: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:39:44.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:44 vm00 bash[20701]: cluster 2026-03-10T07:39:42.737174+0000 mgr.y (mgr.24407) 557 : cluster [DBG] pgmap v964: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:39:44.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:44 vm00 bash[20701]: audit 2026-03-10T07:39:44.210112+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-126"}]': finished 2026-03-10T07:39:44.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:44 vm00 bash[20701]: audit 2026-03-10T07:39:44.210112+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-126"}]': finished 2026-03-10T07:39:44.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:44 vm00 bash[20701]: cluster 2026-03-10T07:39:44.213399+0000 mon.a (mon.0) 3190 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-10T07:39:44.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:44 vm00 bash[20701]: cluster 2026-03-10T07:39:44.213399+0000 mon.a (mon.0) 3190 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-10T07:39:44.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:44 vm00 bash[20701]: audit 2026-03-10T07:39:44.214271+0000 mon.b (mon.1) 579 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-126", "mode": "writeback"}]: dispatch 2026-03-10T07:39:44.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:44 vm00 bash[20701]: audit 2026-03-10T07:39:44.214271+0000 mon.b (mon.1) 579 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-126", "mode": "writeback"}]: dispatch 2026-03-10T07:39:44.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:44 vm00 bash[20701]: audit 2026-03-10T07:39:44.214749+0000 mon.a (mon.0) 3191 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-126", "mode": "writeback"}]: dispatch 2026-03-10T07:39:44.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:44 vm00 bash[20701]: audit 2026-03-10T07:39:44.214749+0000 mon.a (mon.0) 3191 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-126", "mode": "writeback"}]: dispatch 2026-03-10T07:39:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:45 vm03 bash[23382]: audit 2026-03-10T07:39:43.484886+0000 mgr.y (mgr.24407) 558 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:45 vm03 bash[23382]: audit 2026-03-10T07:39:43.484886+0000 mgr.y (mgr.24407) 558 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:45 vm03 bash[23382]: cluster 2026-03-10T07:39:45.210141+0000 mon.a (mon.0) 3192 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:45 vm03 bash[23382]: cluster 2026-03-10T07:39:45.210141+0000 mon.a (mon.0) 3192 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:45 vm03 bash[23382]: audit 2026-03-10T07:39:45.213350+0000 mon.a (mon.0) 3193 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-126", "mode": "writeback"}]': finished 2026-03-10T07:39:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:45 vm03 bash[23382]: audit 2026-03-10T07:39:45.213350+0000 mon.a (mon.0) 3193 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-126", "mode": "writeback"}]': finished 2026-03-10T07:39:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:45 vm03 bash[23382]: cluster 2026-03-10T07:39:45.224136+0000 mon.a (mon.0) 3194 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-10T07:39:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:45 vm03 bash[23382]: cluster 2026-03-10T07:39:45.224136+0000 mon.a (mon.0) 3194 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-10T07:39:45.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:45 vm00 bash[28005]: audit 2026-03-10T07:39:43.484886+0000 mgr.y (mgr.24407) 558 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:45.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:45 vm00 bash[28005]: audit 2026-03-10T07:39:43.484886+0000 mgr.y (mgr.24407) 558 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:45.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:45 vm00 bash[28005]: cluster 2026-03-10T07:39:45.210141+0000 mon.a (mon.0) 3192 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:45.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:45 vm00 bash[28005]: cluster 2026-03-10T07:39:45.210141+0000 mon.a (mon.0) 3192 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:45.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:45 vm00 bash[28005]: audit 2026-03-10T07:39:45.213350+0000 mon.a (mon.0) 3193 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-126", "mode": "writeback"}]': finished 2026-03-10T07:39:45.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:45 vm00 bash[28005]: audit 2026-03-10T07:39:45.213350+0000 mon.a (mon.0) 3193 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-126", "mode": "writeback"}]': finished 2026-03-10T07:39:45.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:45 vm00 bash[28005]: cluster 2026-03-10T07:39:45.224136+0000 mon.a (mon.0) 3194 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-10T07:39:45.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:45 vm00 bash[28005]: cluster 2026-03-10T07:39:45.224136+0000 mon.a (mon.0) 3194 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-10T07:39:45.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:45 vm00 bash[20701]: audit 2026-03-10T07:39:43.484886+0000 mgr.y (mgr.24407) 558 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:45.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:45 vm00 bash[20701]: audit 2026-03-10T07:39:43.484886+0000 mgr.y (mgr.24407) 558 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:45.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:45 vm00 bash[20701]: cluster 2026-03-10T07:39:45.210141+0000 mon.a (mon.0) 3192 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:45.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:45 vm00 bash[20701]: cluster 2026-03-10T07:39:45.210141+0000 mon.a (mon.0) 3192 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:45.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:45 vm00 bash[20701]: audit 2026-03-10T07:39:45.213350+0000 mon.a (mon.0) 3193 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-126", "mode": "writeback"}]': finished 2026-03-10T07:39:45.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:45 vm00 bash[20701]: audit 2026-03-10T07:39:45.213350+0000 mon.a (mon.0) 3193 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-126", "mode": "writeback"}]': finished 2026-03-10T07:39:45.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:45 vm00 bash[20701]: cluster 2026-03-10T07:39:45.224136+0000 mon.a (mon.0) 3194 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-10T07:39:45.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:45 vm00 bash[20701]: cluster 2026-03-10T07:39:45.224136+0000 mon.a (mon.0) 3194 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-10T07:39:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:46 vm03 bash[23382]: cluster 2026-03-10T07:39:44.737715+0000 mgr.y (mgr.24407) 559 : cluster [DBG] pgmap v967: 268 pgs: 14 unknown, 254 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:39:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:46 vm03 bash[23382]: cluster 2026-03-10T07:39:44.737715+0000 mgr.y (mgr.24407) 559 : cluster [DBG] pgmap v967: 268 pgs: 14 unknown, 254 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:39:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:46 vm03 bash[23382]: audit 2026-03-10T07:39:45.284699+0000 mon.a (mon.0) 3195 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:46 vm03 bash[23382]: audit 2026-03-10T07:39:45.284699+0000 mon.a (mon.0) 3195 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:46 vm03 bash[23382]: audit 2026-03-10T07:39:45.284869+0000 mon.b (mon.1) 580 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:46 vm03 bash[23382]: audit 2026-03-10T07:39:45.284869+0000 mon.b (mon.1) 580 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:46.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:46 vm00 bash[28005]: cluster 2026-03-10T07:39:44.737715+0000 mgr.y (mgr.24407) 559 : cluster [DBG] pgmap v967: 268 pgs: 14 unknown, 254 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:39:46.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:46 vm00 bash[28005]: cluster 2026-03-10T07:39:44.737715+0000 mgr.y (mgr.24407) 559 : cluster [DBG] pgmap v967: 268 pgs: 14 unknown, 254 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:39:46.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:46 vm00 bash[28005]: audit 2026-03-10T07:39:45.284699+0000 mon.a (mon.0) 3195 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:46.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:46 vm00 bash[28005]: audit 2026-03-10T07:39:45.284699+0000 mon.a (mon.0) 3195 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:46.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:46 vm00 bash[28005]: audit 2026-03-10T07:39:45.284869+0000 mon.b (mon.1) 580 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:46.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:46 vm00 bash[28005]: audit 2026-03-10T07:39:45.284869+0000 mon.b (mon.1) 580 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:46.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:46 vm00 bash[20701]: cluster 2026-03-10T07:39:44.737715+0000 mgr.y (mgr.24407) 559 : cluster [DBG] pgmap v967: 268 pgs: 14 unknown, 254 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:39:46.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:46 vm00 bash[20701]: cluster 2026-03-10T07:39:44.737715+0000 mgr.y (mgr.24407) 559 : cluster [DBG] pgmap v967: 268 pgs: 14 unknown, 254 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:39:46.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:46 vm00 bash[20701]: audit 2026-03-10T07:39:45.284699+0000 mon.a (mon.0) 3195 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:46.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:46 vm00 bash[20701]: audit 2026-03-10T07:39:45.284699+0000 mon.a (mon.0) 3195 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:46.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:46 vm00 bash[20701]: audit 2026-03-10T07:39:45.284869+0000 mon.b (mon.1) 580 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:46.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:46 vm00 bash[20701]: audit 2026-03-10T07:39:45.284869+0000 mon.b (mon.1) 580 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:47 vm03 bash[23382]: audit 2026-03-10T07:39:46.256559+0000 mon.a (mon.0) 3196 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:47 vm03 bash[23382]: audit 2026-03-10T07:39:46.256559+0000 mon.a (mon.0) 3196 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:47 vm03 bash[23382]: audit 2026-03-10T07:39:46.261969+0000 mon.b (mon.1) 581 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:47 vm03 bash[23382]: audit 2026-03-10T07:39:46.261969+0000 mon.b (mon.1) 581 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:47 vm03 bash[23382]: cluster 2026-03-10T07:39:46.266523+0000 mon.a (mon.0) 3197 : cluster [DBG] osdmap e626: 8 total, 8 up, 8 in 2026-03-10T07:39:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:47 vm03 bash[23382]: cluster 2026-03-10T07:39:46.266523+0000 mon.a (mon.0) 3197 : cluster [DBG] osdmap e626: 8 total, 8 up, 8 in 2026-03-10T07:39:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:47 vm03 bash[23382]: audit 2026-03-10T07:39:46.267099+0000 mon.a (mon.0) 3198 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:47 vm03 bash[23382]: audit 2026-03-10T07:39:46.267099+0000 mon.a (mon.0) 3198 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:47 vm03 bash[23382]: cluster 2026-03-10T07:39:46.784596+0000 mon.a (mon.0) 3199 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:47.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:47 vm03 bash[23382]: cluster 2026-03-10T07:39:46.784596+0000 mon.a (mon.0) 3199 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:47.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:47 vm03 bash[23382]: audit 2026-03-10T07:39:46.787616+0000 mon.a (mon.0) 3200 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126"}]': finished 2026-03-10T07:39:47.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:47 vm03 bash[23382]: audit 2026-03-10T07:39:46.787616+0000 mon.a (mon.0) 3200 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126"}]': finished 2026-03-10T07:39:47.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:47 vm03 bash[23382]: cluster 2026-03-10T07:39:46.793136+0000 mon.a (mon.0) 3201 : cluster [DBG] osdmap e627: 8 total, 8 up, 8 in 2026-03-10T07:39:47.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:47 vm03 bash[23382]: cluster 2026-03-10T07:39:46.793136+0000 mon.a (mon.0) 3201 : cluster [DBG] osdmap e627: 8 total, 8 up, 8 in 2026-03-10T07:39:47.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:47 vm00 bash[28005]: audit 2026-03-10T07:39:46.256559+0000 mon.a (mon.0) 3196 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:47.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:47 vm00 bash[28005]: audit 2026-03-10T07:39:46.256559+0000 mon.a (mon.0) 3196 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:47.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:47 vm00 bash[28005]: audit 2026-03-10T07:39:46.261969+0000 mon.b (mon.1) 581 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:47.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:47 vm00 bash[28005]: audit 2026-03-10T07:39:46.261969+0000 mon.b (mon.1) 581 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:47.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:47 vm00 bash[28005]: cluster 2026-03-10T07:39:46.266523+0000 mon.a (mon.0) 3197 : cluster [DBG] osdmap e626: 8 total, 8 up, 8 in 2026-03-10T07:39:47.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:47 vm00 bash[28005]: cluster 2026-03-10T07:39:46.266523+0000 mon.a (mon.0) 3197 : cluster [DBG] osdmap e626: 8 total, 8 up, 8 in 2026-03-10T07:39:47.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:47 vm00 bash[28005]: audit 2026-03-10T07:39:46.267099+0000 mon.a (mon.0) 3198 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:47.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:47 vm00 bash[28005]: audit 2026-03-10T07:39:46.267099+0000 mon.a (mon.0) 3198 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:47.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:47 vm00 bash[28005]: cluster 2026-03-10T07:39:46.784596+0000 mon.a (mon.0) 3199 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:47.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:47 vm00 bash[28005]: cluster 2026-03-10T07:39:46.784596+0000 mon.a (mon.0) 3199 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:47.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:47 vm00 bash[28005]: audit 2026-03-10T07:39:46.787616+0000 mon.a (mon.0) 3200 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126"}]': finished 2026-03-10T07:39:47.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:47 vm00 bash[28005]: audit 2026-03-10T07:39:46.787616+0000 mon.a (mon.0) 3200 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126"}]': finished 2026-03-10T07:39:47.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:47 vm00 bash[28005]: cluster 2026-03-10T07:39:46.793136+0000 mon.a (mon.0) 3201 : cluster [DBG] osdmap e627: 8 total, 8 up, 8 in 2026-03-10T07:39:47.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:47 vm00 bash[28005]: cluster 2026-03-10T07:39:46.793136+0000 mon.a (mon.0) 3201 : cluster [DBG] osdmap e627: 8 total, 8 up, 8 in 2026-03-10T07:39:47.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:47 vm00 bash[20701]: audit 2026-03-10T07:39:46.256559+0000 mon.a (mon.0) 3196 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:47.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:47 vm00 bash[20701]: audit 2026-03-10T07:39:46.256559+0000 mon.a (mon.0) 3196 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:39:47.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:47 vm00 bash[20701]: audit 2026-03-10T07:39:46.261969+0000 mon.b (mon.1) 581 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:47.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:47 vm00 bash[20701]: audit 2026-03-10T07:39:46.261969+0000 mon.b (mon.1) 581 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:47.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:47 vm00 bash[20701]: cluster 2026-03-10T07:39:46.266523+0000 mon.a (mon.0) 3197 : cluster [DBG] osdmap e626: 8 total, 8 up, 8 in 2026-03-10T07:39:47.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:47 vm00 bash[20701]: cluster 2026-03-10T07:39:46.266523+0000 mon.a (mon.0) 3197 : cluster [DBG] osdmap e626: 8 total, 8 up, 8 in 2026-03-10T07:39:47.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:47 vm00 bash[20701]: audit 2026-03-10T07:39:46.267099+0000 mon.a (mon.0) 3198 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:47.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:47 vm00 bash[20701]: audit 2026-03-10T07:39:46.267099+0000 mon.a (mon.0) 3198 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126"}]: dispatch 2026-03-10T07:39:47.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:47 vm00 bash[20701]: cluster 2026-03-10T07:39:46.784596+0000 mon.a (mon.0) 3199 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:47.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:47 vm00 bash[20701]: cluster 2026-03-10T07:39:46.784596+0000 mon.a (mon.0) 3199 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:39:47.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:47 vm00 bash[20701]: audit 2026-03-10T07:39:46.787616+0000 mon.a (mon.0) 3200 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126"}]': finished 2026-03-10T07:39:47.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:47 vm00 bash[20701]: audit 2026-03-10T07:39:46.787616+0000 mon.a (mon.0) 3200 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-126"}]': finished 2026-03-10T07:39:47.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:47 vm00 bash[20701]: cluster 2026-03-10T07:39:46.793136+0000 mon.a (mon.0) 3201 : cluster [DBG] osdmap e627: 8 total, 8 up, 8 in 2026-03-10T07:39:47.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:47 vm00 bash[20701]: cluster 2026-03-10T07:39:46.793136+0000 mon.a (mon.0) 3201 : cluster [DBG] osdmap e627: 8 total, 8 up, 8 in 2026-03-10T07:39:47.804 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TierFlushDuringFlush (9258 ms) 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapHasChunk 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapHasChunk (6056 ms) 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRollback 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRollback (5185 ms) 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRollbackRefcount 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRollbackRefcount (25908 ms) 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestEvictRollback 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestEvictRollback (13764 ms) 2026-03-10T07:39:47.805 
INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PropagateBaseTierError 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PropagateBaseTierError (12168 ms) 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HelloWriteReturn 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: 00000000 79 6f 75 20 6d 69 67 68 74 20 73 65 65 20 74 68 |you might see th| 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: 00000010 69 73 |is| 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: 00000012 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HelloWriteReturn (12362 ms) 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TierFlushDuringUnsetDedupTier 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TierFlushDuringUnsetDedupTier (6161 ms) 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [----------] 48 tests from LibRadosTwoPoolsPP (560881 ms total) 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [----------] 4 tests from LibRadosTierECPP 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.Dirty 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.Dirty (1027 ms) 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.FlushWriteRaces 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.FlushWriteRaces (11137 ms) 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.CallForcesPromote 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.CallForcesPromote (18137 ms) 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.HitSetNone 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.HitSetNone (3 ms) 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [----------] 4 tests from LibRadosTierECPP (30304 ms total) 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [----------] 22 tests from LibRadosTwoPoolsECPP 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Overlay 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Overlay (7433 ms) 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Promote 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Promote (8086 ms) 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.PromoteSnap 2026-03-10T07:39:47.805 
INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: waiting for scrub... 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: done waiting 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.PromoteSnap (24630 ms) 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.PromoteSnapTrimRace 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.PromoteSnapTrimRace (10108 ms) 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Whiteout 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Whiteout (7925 ms) 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Evict 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Evict (8668 ms) 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.EvictSnap 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.EvictSnap (11942 ms) 2026-03-10T07:39:47.805 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.TryFlush 2026-03-10T07:39:48.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:48 vm00 bash[28005]: cluster 2026-03-10T07:39:46.738088+0000 mgr.y (mgr.24407) 560 : cluster [DBG] pgmap v970: 268 pgs: 268 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:39:48.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:48 vm00 bash[28005]: cluster 2026-03-10T07:39:46.738088+0000 mgr.y (mgr.24407) 560 : cluster [DBG] pgmap v970: 268 pgs: 268 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:39:48.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:48 vm00 bash[28005]: cluster 2026-03-10T07:39:47.808975+0000 mon.a (mon.0) 3202 : cluster [DBG] osdmap e628: 8 total, 8 up, 8 in 2026-03-10T07:39:48.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:48 vm00 bash[28005]: cluster 2026-03-10T07:39:47.808975+0000 mon.a (mon.0) 3202 : cluster [DBG] osdmap e628: 8 total, 8 up, 8 in 2026-03-10T07:39:48.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:48 vm00 bash[20701]: cluster 2026-03-10T07:39:46.738088+0000 mgr.y (mgr.24407) 560 : cluster [DBG] pgmap v970: 268 pgs: 268 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:39:48.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:48 vm00 bash[20701]: cluster 2026-03-10T07:39:46.738088+0000 mgr.y (mgr.24407) 560 : cluster [DBG] pgmap v970: 268 pgs: 268 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:39:48.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:48 vm00 bash[20701]: cluster 2026-03-10T07:39:47.808975+0000 mon.a (mon.0) 3202 : cluster [DBG] osdmap e628: 8 total, 8 up, 8 in 2026-03-10T07:39:48.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:48 vm00 bash[20701]: cluster 2026-03-10T07:39:47.808975+0000 mon.a (mon.0) 3202 : cluster [DBG] osdmap e628: 8 total, 8 up, 8 in 2026-03-10T07:39:48.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:48 vm03 bash[23382]: cluster 2026-03-10T07:39:46.738088+0000 mgr.y (mgr.24407) 560 : cluster 
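The api_tier_pp lines above are googletest output streamed from the rados/test.sh workunit. If a single case here needed rerunning in isolation, googletest's filter flag is the usual tool — a sketch that assumes the ceph_test_rados_api_tier_pp binary from the ceph-test packages is present on the client node:

    # rerun one test case against the cluster's default ceph.conf
    ceph_test_rados_api_tier_pp --gtest_filter='LibRadosTwoPoolsECPP.TryFlush'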
2026-03-10T07:39:48.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:48 vm03 bash[23382]: cluster 2026-03-10T07:39:47.808975+0000 mon.a (mon.0) 3202 : cluster [DBG] osdmap e628: 8 total, 8 up, 8 in
2026-03-10T07:39:50.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:49 vm00 bash[28005]: cluster 2026-03-10T07:39:48.738504+0000 mgr.y (mgr.24407) 561 : cluster [DBG] pgmap v973: 236 pgs: 236 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:39:50.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:49 vm00 bash[28005]: cluster 2026-03-10T07:39:48.810768+0000 mon.a (mon.0) 3203 : cluster [DBG] osdmap e629: 8 total, 8 up, 8 in
2026-03-10T07:39:50.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:49 vm00 bash[28005]: audit 2026-03-10T07:39:48.811942+0000 mon.b (mon.1) 582 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-128","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:39:50.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:49 vm00 bash[28005]: audit 2026-03-10T07:39:48.819775+0000 mon.a (mon.0) 3204 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-128","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:39:50.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:49 vm00 bash[20701]: cluster 2026-03-10T07:39:48.738504+0000 mgr.y (mgr.24407) 561 : cluster [DBG] pgmap v973: 236 pgs: 236 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:39:50.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:49 vm00 bash[20701]: cluster 2026-03-10T07:39:48.810768+0000 mon.a (mon.0) 3203 : cluster [DBG] osdmap e629: 8 total, 8 up, 8 in
2026-03-10T07:39:50.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:49 vm00 bash[20701]: audit 2026-03-10T07:39:48.811942+0000 mon.b (mon.1) 582 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-128","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:39:50.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:49 vm00 bash[20701]: audit 2026-03-10T07:39:48.819775+0000 mon.a (mon.0) 3204 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-128","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:39:50.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:49 vm03 bash[23382]: cluster 2026-03-10T07:39:48.738504+0000 mgr.y (mgr.24407) 561 : cluster [DBG] pgmap v973: 236 pgs: 236 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:39:50.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:49 vm03 bash[23382]: cluster 2026-03-10T07:39:48.810768+0000 mon.a (mon.0) 3203 : cluster [DBG] osdmap e629: 8 total, 8 up, 8 in
2026-03-10T07:39:50.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:49 vm03 bash[23382]: audit 2026-03-10T07:39:48.811942+0000 mon.b (mon.1) 582 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-128","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:39:50.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:49 vm03 bash[23382]: audit 2026-03-10T07:39:48.819775+0000 mon.a (mon.0) 3204 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-128","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:39:51.131 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:39:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:39:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:39:51.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:50 vm00 bash[28005]: audit 2026-03-10T07:39:49.797661+0000 mon.a (mon.0) 3205 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-128","app": "rados","yes_i_really_mean_it": true}]': finished
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:51.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:50 vm00 bash[28005]: audit 2026-03-10T07:39:49.812016+0000 mon.b (mon.1) 583 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:51.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:50 vm00 bash[28005]: audit 2026-03-10T07:39:49.812016+0000 mon.b (mon.1) 583 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:51.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:50 vm00 bash[28005]: cluster 2026-03-10T07:39:49.819126+0000 mon.a (mon.0) 3206 : cluster [DBG] osdmap e630: 8 total, 8 up, 8 in 2026-03-10T07:39:51.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:50 vm00 bash[28005]: cluster 2026-03-10T07:39:49.819126+0000 mon.a (mon.0) 3206 : cluster [DBG] osdmap e630: 8 total, 8 up, 8 in 2026-03-10T07:39:51.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:50 vm00 bash[28005]: audit 2026-03-10T07:39:49.831734+0000 mon.a (mon.0) 3207 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:51.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:50 vm00 bash[28005]: audit 2026-03-10T07:39:49.831734+0000 mon.a (mon.0) 3207 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:51.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:50 vm00 bash[28005]: audit 2026-03-10T07:39:50.799120+0000 mon.a (mon.0) 3208 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:51.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:50 vm00 bash[28005]: audit 2026-03-10T07:39:50.799120+0000 mon.a (mon.0) 3208 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:51.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:50 vm00 bash[28005]: audit 2026-03-10T07:39:50.803741+0000 mon.b (mon.1) 584 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:39:51.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:50 vm00 bash[28005]: audit 2026-03-10T07:39:50.803741+0000 mon.b (mon.1) 584 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:39:51.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:50 vm00 bash[28005]: cluster 2026-03-10T07:39:50.810511+0000 mon.a (mon.0) 3209 : cluster [DBG] osdmap e631: 8 total, 8 up, 8 in 2026-03-10T07:39:51.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:50 vm00 bash[28005]: cluster 2026-03-10T07:39:50.810511+0000 mon.a (mon.0) 3209 : cluster [DBG] osdmap e631: 8 total, 8 up, 8 in 2026-03-10T07:39:51.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:50 vm00 bash[28005]: audit 2026-03-10T07:39:50.811255+0000 mon.a (mon.0) 3210 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:39:51.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:50 vm00 bash[28005]: audit 2026-03-10T07:39:50.811255+0000 mon.a (mon.0) 3210 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:39:51.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:50 vm00 bash[20701]: audit 2026-03-10T07:39:49.797661+0000 mon.a (mon.0) 3205 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:51.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:50 vm00 bash[20701]: audit 2026-03-10T07:39:49.797661+0000 mon.a (mon.0) 3205 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:51.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:50 vm00 bash[20701]: audit 2026-03-10T07:39:49.812016+0000 mon.b (mon.1) 583 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:51.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:50 vm00 bash[20701]: audit 2026-03-10T07:39:49.812016+0000 mon.b (mon.1) 583 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:51.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:50 vm00 bash[20701]: cluster 2026-03-10T07:39:49.819126+0000 mon.a (mon.0) 3206 : cluster [DBG] osdmap e630: 8 total, 8 up, 8 in 2026-03-10T07:39:51.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:50 vm00 bash[20701]: cluster 2026-03-10T07:39:49.819126+0000 mon.a (mon.0) 3206 : cluster [DBG] osdmap e630: 8 total, 8 up, 8 in 2026-03-10T07:39:51.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:50 vm00 bash[20701]: audit 2026-03-10T07:39:49.831734+0000 mon.a (mon.0) 3207 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:51.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:50 vm00 bash[20701]: audit 2026-03-10T07:39:49.831734+0000 mon.a (mon.0) 3207 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:51.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:50 vm00 bash[20701]: audit 2026-03-10T07:39:50.799120+0000 mon.a (mon.0) 3208 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:51.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:50 vm00 bash[20701]: audit 2026-03-10T07:39:50.799120+0000 mon.a (mon.0) 3208 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:51.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:50 vm00 bash[20701]: audit 2026-03-10T07:39:50.803741+0000 mon.b (mon.1) 584 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:39:51.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:50 vm00 bash[20701]: audit 2026-03-10T07:39:50.803741+0000 mon.b (mon.1) 584 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:39:51.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:50 vm00 bash[20701]: cluster 2026-03-10T07:39:50.810511+0000 mon.a (mon.0) 3209 : cluster [DBG] osdmap e631: 8 total, 8 up, 8 in 2026-03-10T07:39:51.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:50 vm00 bash[20701]: cluster 2026-03-10T07:39:50.810511+0000 mon.a (mon.0) 3209 : cluster [DBG] osdmap e631: 8 total, 8 up, 8 in 2026-03-10T07:39:51.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:50 vm00 bash[20701]: audit 2026-03-10T07:39:50.811255+0000 mon.a (mon.0) 3210 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:39:51.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:50 vm00 bash[20701]: audit 2026-03-10T07:39:50.811255+0000 mon.a (mon.0) 3210 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:39:51.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:50 vm03 bash[23382]: audit 2026-03-10T07:39:49.797661+0000 mon.a (mon.0) 3205 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:51.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:50 vm03 bash[23382]: audit 2026-03-10T07:39:49.797661+0000 mon.a (mon.0) 3205 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:39:51.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:50 vm03 bash[23382]: audit 2026-03-10T07:39:49.812016+0000 mon.b (mon.1) 583 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:51.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:50 vm03 bash[23382]: audit 2026-03-10T07:39:49.812016+0000 mon.b (mon.1) 583 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:51.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:50 vm03 bash[23382]: cluster 2026-03-10T07:39:49.819126+0000 mon.a (mon.0) 3206 : cluster [DBG] osdmap e630: 8 total, 8 up, 8 in 2026-03-10T07:39:51.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:50 vm03 bash[23382]: cluster 2026-03-10T07:39:49.819126+0000 mon.a (mon.0) 3206 : cluster [DBG] osdmap e630: 8 total, 8 up, 8 in 2026-03-10T07:39:51.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:50 vm03 bash[23382]: audit 2026-03-10T07:39:49.831734+0000 mon.a (mon.0) 3207 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:51.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:50 vm03 bash[23382]: audit 2026-03-10T07:39:49.831734+0000 mon.a (mon.0) 3207 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:39:51.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:50 vm03 bash[23382]: audit 2026-03-10T07:39:50.799120+0000 mon.a (mon.0) 3208 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:51.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:50 vm03 bash[23382]: audit 2026-03-10T07:39:50.799120+0000 mon.a (mon.0) 3208 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:39:51.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:50 vm03 bash[23382]: audit 2026-03-10T07:39:50.803741+0000 mon.b (mon.1) 584 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:39:51.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:50 vm03 bash[23382]: audit 2026-03-10T07:39:50.803741+0000 mon.b (mon.1) 584 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:39:51.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:50 vm03 bash[23382]: cluster 2026-03-10T07:39:50.810511+0000 mon.a (mon.0) 3209 : cluster [DBG] osdmap e631: 8 total, 8 up, 8 in 2026-03-10T07:39:51.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:50 vm03 bash[23382]: cluster 2026-03-10T07:39:50.810511+0000 mon.a (mon.0) 3209 : cluster [DBG] osdmap e631: 8 total, 8 up, 8 in 2026-03-10T07:39:51.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:50 vm03 bash[23382]: audit 2026-03-10T07:39:50.811255+0000 mon.a (mon.0) 3210 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:39:51.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:50 vm03 bash[23382]: audit 2026-03-10T07:39:50.811255+0000 mon.a (mon.0) 3210 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:39:52.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:51 vm00 bash[28005]: cluster 2026-03-10T07:39:50.738876+0000 mgr.y (mgr.24407) 562 : cluster [DBG] pgmap v976: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:52.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:51 vm00 bash[28005]: cluster 2026-03-10T07:39:50.738876+0000 mgr.y (mgr.24407) 562 : cluster [DBG] pgmap v976: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:52.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:51 vm00 bash[28005]: audit 2026-03-10T07:39:51.801534+0000 mon.a (mon.0) 3211 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-128"}]': finished 2026-03-10T07:39:52.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:51 vm00 bash[28005]: audit 2026-03-10T07:39:51.801534+0000 mon.a (mon.0) 3211 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-128"}]': finished 2026-03-10T07:39:52.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:51 vm00 bash[28005]: cluster 2026-03-10T07:39:51.804275+0000 mon.a (mon.0) 3212 : cluster [DBG] osdmap e632: 8 total, 8 up, 8 in 2026-03-10T07:39:52.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:51 vm00 bash[28005]: cluster 2026-03-10T07:39:51.804275+0000 mon.a (mon.0) 3212 : cluster [DBG] osdmap e632: 8 total, 8 up, 8 in 2026-03-10T07:39:52.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:51 vm00 bash[28005]: audit 2026-03-10T07:39:51.806203+0000 mon.a (mon.0) 3213 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-128", "mode": "writeback"}]: dispatch 2026-03-10T07:39:52.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:51 vm00 bash[28005]: audit 2026-03-10T07:39:51.806203+0000 mon.a (mon.0) 3213 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-128", "mode": "writeback"}]: dispatch 2026-03-10T07:39:52.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:51 vm00 bash[28005]: audit 2026-03-10T07:39:51.806491+0000 mon.b (mon.1) 585 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-128", "mode": "writeback"}]: dispatch 2026-03-10T07:39:52.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:51 vm00 bash[28005]: audit 2026-03-10T07:39:51.806491+0000 mon.b (mon.1) 585 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-128", "mode": "writeback"}]: dispatch 2026-03-10T07:39:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:51 vm00 bash[20701]: cluster 2026-03-10T07:39:50.738876+0000 mgr.y (mgr.24407) 562 : cluster [DBG] pgmap v976: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:51 vm00 bash[20701]: cluster 2026-03-10T07:39:50.738876+0000 mgr.y (mgr.24407) 562 : cluster [DBG] pgmap v976: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:51 vm00 bash[20701]: audit 2026-03-10T07:39:51.801534+0000 mon.a (mon.0) 3211 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-128"}]': finished 2026-03-10T07:39:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:51 vm00 bash[20701]: audit 2026-03-10T07:39:51.801534+0000 mon.a (mon.0) 3211 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-128"}]': finished 2026-03-10T07:39:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:51 vm00 bash[20701]: cluster 2026-03-10T07:39:51.804275+0000 mon.a (mon.0) 3212 : cluster [DBG] osdmap e632: 8 total, 8 up, 8 in 2026-03-10T07:39:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:51 vm00 bash[20701]: cluster 2026-03-10T07:39:51.804275+0000 mon.a (mon.0) 3212 : cluster [DBG] osdmap e632: 8 total, 8 up, 8 in 2026-03-10T07:39:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:51 vm00 bash[20701]: audit 2026-03-10T07:39:51.806203+0000 mon.a (mon.0) 3213 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-128", "mode": "writeback"}]: dispatch 2026-03-10T07:39:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:51 vm00 bash[20701]: audit 2026-03-10T07:39:51.806203+0000 mon.a (mon.0) 3213 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-128", "mode": "writeback"}]: dispatch 2026-03-10T07:39:52.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:51 vm00 bash[20701]: audit 2026-03-10T07:39:51.806491+0000 mon.b (mon.1) 585 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-128", "mode": "writeback"}]: dispatch 2026-03-10T07:39:52.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:51 vm00 bash[20701]: audit 2026-03-10T07:39:51.806491+0000 mon.b (mon.1) 585 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-128", "mode": "writeback"}]: dispatch 2026-03-10T07:39:52.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:51 vm03 bash[23382]: cluster 2026-03-10T07:39:50.738876+0000 mgr.y (mgr.24407) 562 : cluster [DBG] pgmap v976: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:52.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:51 vm03 bash[23382]: cluster 2026-03-10T07:39:50.738876+0000 mgr.y (mgr.24407) 562 : cluster [DBG] pgmap v976: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:52.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:51 vm03 bash[23382]: audit 2026-03-10T07:39:51.801534+0000 mon.a (mon.0) 3211 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-128"}]': finished 2026-03-10T07:39:52.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:51 vm03 bash[23382]: audit 2026-03-10T07:39:51.801534+0000 mon.a (mon.0) 3211 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-128"}]': finished 2026-03-10T07:39:52.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:51 vm03 bash[23382]: cluster 2026-03-10T07:39:51.804275+0000 mon.a (mon.0) 3212 : cluster [DBG] osdmap e632: 8 total, 8 up, 8 in 2026-03-10T07:39:52.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:51 vm03 bash[23382]: cluster 2026-03-10T07:39:51.804275+0000 mon.a (mon.0) 3212 : cluster [DBG] osdmap e632: 8 total, 8 up, 8 in 2026-03-10T07:39:52.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:51 vm03 bash[23382]: audit 2026-03-10T07:39:51.806203+0000 mon.a (mon.0) 3213 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-128", "mode": "writeback"}]: dispatch 2026-03-10T07:39:52.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:51 vm03 bash[23382]: audit 2026-03-10T07:39:51.806203+0000 mon.a (mon.0) 3213 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-128", "mode": "writeback"}]: dispatch 2026-03-10T07:39:52.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:51 vm03 bash[23382]: audit 2026-03-10T07:39:51.806491+0000 mon.b (mon.1) 585 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-128", "mode": "writeback"}]: dispatch 2026-03-10T07:39:52.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:51 vm03 bash[23382]: audit 2026-03-10T07:39:51.806491+0000 mon.b (mon.1) 585 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-128", "mode": "writeback"}]: dispatch 2026-03-10T07:39:53.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:52 vm00 bash[28005]: cluster 2026-03-10T07:39:52.801621+0000 mon.a (mon.0) 3214 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:53.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:52 vm00 bash[28005]: cluster 2026-03-10T07:39:52.801621+0000 mon.a (mon.0) 3214 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:53.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:52 vm00 bash[28005]: audit 2026-03-10T07:39:52.805057+0000 mon.a (mon.0) 3215 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-128", "mode": "writeback"}]': finished 2026-03-10T07:39:53.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:52 vm00 bash[28005]: audit 2026-03-10T07:39:52.805057+0000 mon.a (mon.0) 3215 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-128", "mode": "writeback"}]': finished 2026-03-10T07:39:53.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:52 vm00 bash[28005]: cluster 2026-03-10T07:39:52.818188+0000 mon.a (mon.0) 3216 : cluster [DBG] osdmap e633: 8 total, 8 up, 8 in 2026-03-10T07:39:53.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:52 vm00 bash[28005]: cluster 2026-03-10T07:39:52.818188+0000 mon.a (mon.0) 3216 : cluster [DBG] osdmap e633: 8 total, 8 up, 8 in 2026-03-10T07:39:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:52 vm00 bash[20701]: cluster 2026-03-10T07:39:52.801621+0000 mon.a (mon.0) 3214 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:52 vm00 bash[20701]: cluster 2026-03-10T07:39:52.801621+0000 mon.a (mon.0) 3214 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:52 vm00 bash[20701]: audit 2026-03-10T07:39:52.805057+0000 mon.a (mon.0) 3215 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-128", "mode": "writeback"}]': finished 2026-03-10T07:39:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:52 vm00 bash[20701]: audit 2026-03-10T07:39:52.805057+0000 mon.a (mon.0) 3215 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-128", "mode": "writeback"}]': finished 2026-03-10T07:39:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:52 vm00 bash[20701]: cluster 2026-03-10T07:39:52.818188+0000 mon.a (mon.0) 3216 : cluster [DBG] osdmap e633: 8 total, 8 up, 8 in 2026-03-10T07:39:53.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:52 vm00 bash[20701]: cluster 2026-03-10T07:39:52.818188+0000 mon.a (mon.0) 3216 : cluster [DBG] osdmap e633: 8 total, 8 up, 8 in 2026-03-10T07:39:53.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:52 vm03 bash[23382]: cluster 2026-03-10T07:39:52.801621+0000 mon.a (mon.0) 3214 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:53.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:52 vm03 bash[23382]: cluster 2026-03-10T07:39:52.801621+0000 mon.a (mon.0) 3214 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:39:53.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:52 vm03 bash[23382]: audit 2026-03-10T07:39:52.805057+0000 mon.a (mon.0) 3215 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-128", "mode": "writeback"}]': finished 2026-03-10T07:39:53.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:52 vm03 bash[23382]: audit 2026-03-10T07:39:52.805057+0000 mon.a (mon.0) 3215 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-128", "mode": "writeback"}]': finished 2026-03-10T07:39:53.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:52 vm03 bash[23382]: cluster 2026-03-10T07:39:52.818188+0000 mon.a (mon.0) 3216 : cluster [DBG] osdmap e633: 8 total, 8 up, 8 in 2026-03-10T07:39:53.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:52 vm03 bash[23382]: cluster 2026-03-10T07:39:52.818188+0000 mon.a (mon.0) 3216 : cluster [DBG] osdmap e633: 8 total, 8 up, 8 in 2026-03-10T07:39:53.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:39:53 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:39:54.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:53 vm00 bash[28005]: cluster 2026-03-10T07:39:52.739229+0000 mgr.y (mgr.24407) 563 : cluster [DBG] pgmap v979: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:54.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:53 vm00 bash[28005]: cluster 2026-03-10T07:39:52.739229+0000 mgr.y (mgr.24407) 563 : cluster [DBG] pgmap v979: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:54.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:53 vm00 bash[20701]: cluster 2026-03-10T07:39:52.739229+0000 mgr.y (mgr.24407) 563 : cluster [DBG] pgmap v979: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:54.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:53 vm00 bash[20701]: cluster 2026-03-10T07:39:52.739229+0000 mgr.y (mgr.24407) 563 : cluster [DBG] pgmap v979: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:54.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:53 vm03 bash[23382]: cluster 2026-03-10T07:39:52.739229+0000 mgr.y (mgr.24407) 563 : cluster [DBG] pgmap v979: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:54.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:53 vm03 bash[23382]: cluster 2026-03-10T07:39:52.739229+0000 mgr.y (mgr.24407) 563 : cluster [DBG] pgmap v979: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:39:55.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:54 vm00 bash[28005]: audit 2026-03-10T07:39:53.494752+0000 mgr.y (mgr.24407) 564 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:55.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:54 vm00 bash[28005]: audit 2026-03-10T07:39:53.494752+0000 mgr.y (mgr.24407) 564 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:55.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:54 vm00 bash[28005]: audit 2026-03-10T07:39:54.749404+0000 mon.a (mon.0) 3217 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:39:55.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:54 vm00 bash[28005]: audit 2026-03-10T07:39:54.749404+0000 mon.a (mon.0) 3217 : audit [INF] 
from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:39:55.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:54 vm00 bash[28005]: audit 2026-03-10T07:39:54.752699+0000 mon.c (mon.2) 351 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:39:55.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:54 vm00 bash[28005]: audit 2026-03-10T07:39:54.752699+0000 mon.c (mon.2) 351 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:39:55.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:54 vm00 bash[20701]: audit 2026-03-10T07:39:53.494752+0000 mgr.y (mgr.24407) 564 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:55.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:54 vm00 bash[20701]: audit 2026-03-10T07:39:53.494752+0000 mgr.y (mgr.24407) 564 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:55.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:54 vm00 bash[20701]: audit 2026-03-10T07:39:54.749404+0000 mon.a (mon.0) 3217 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:39:55.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:54 vm00 bash[20701]: audit 2026-03-10T07:39:54.749404+0000 mon.a (mon.0) 3217 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:39:55.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:54 vm00 bash[20701]: audit 2026-03-10T07:39:54.752699+0000 mon.c (mon.2) 351 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:39:55.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:54 vm00 bash[20701]: audit 2026-03-10T07:39:54.752699+0000 mon.c (mon.2) 351 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:39:55.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:54 vm03 bash[23382]: audit 2026-03-10T07:39:53.494752+0000 mgr.y (mgr.24407) 564 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:55.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:54 vm03 bash[23382]: audit 2026-03-10T07:39:53.494752+0000 mgr.y (mgr.24407) 564 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:39:55.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:54 vm03 bash[23382]: audit 2026-03-10T07:39:54.749404+0000 mon.a (mon.0) 3217 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:39:55.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:54 vm03 bash[23382]: audit 2026-03-10T07:39:54.749404+0000 mon.a (mon.0) 3217 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:39:55.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:54 vm03 bash[23382]: audit 2026-03-10T07:39:54.752699+0000 mon.c (mon.2) 351 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:39:55.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:54 vm03 bash[23382]: audit 
2026-03-10T07:39:54.752699+0000 mon.c (mon.2) 351 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:39:56.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:55 vm00 bash[28005]: cluster 2026-03-10T07:39:54.739822+0000 mgr.y (mgr.24407) 565 : cluster [DBG] pgmap v981: 268 pgs: 16 creating+peering, 252 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 207 B/s rd, 414 B/s wr, 0 op/s 2026-03-10T07:39:56.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:55 vm00 bash[28005]: cluster 2026-03-10T07:39:54.739822+0000 mgr.y (mgr.24407) 565 : cluster [DBG] pgmap v981: 268 pgs: 16 creating+peering, 252 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 207 B/s rd, 414 B/s wr, 0 op/s 2026-03-10T07:39:56.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:55 vm00 bash[20701]: cluster 2026-03-10T07:39:54.739822+0000 mgr.y (mgr.24407) 565 : cluster [DBG] pgmap v981: 268 pgs: 16 creating+peering, 252 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 207 B/s rd, 414 B/s wr, 0 op/s 2026-03-10T07:39:56.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:55 vm00 bash[20701]: cluster 2026-03-10T07:39:54.739822+0000 mgr.y (mgr.24407) 565 : cluster [DBG] pgmap v981: 268 pgs: 16 creating+peering, 252 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 207 B/s rd, 414 B/s wr, 0 op/s 2026-03-10T07:39:56.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:55 vm03 bash[23382]: cluster 2026-03-10T07:39:54.739822+0000 mgr.y (mgr.24407) 565 : cluster [DBG] pgmap v981: 268 pgs: 16 creating+peering, 252 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 207 B/s rd, 414 B/s wr, 0 op/s 2026-03-10T07:39:56.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:55 vm03 bash[23382]: cluster 2026-03-10T07:39:54.739822+0000 mgr.y (mgr.24407) 565 : cluster [DBG] pgmap v981: 268 pgs: 16 creating+peering, 252 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 207 B/s rd, 414 B/s wr, 0 op/s 2026-03-10T07:39:58.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:57 vm00 bash[28005]: cluster 2026-03-10T07:39:56.740472+0000 mgr.y (mgr.24407) 566 : cluster [DBG] pgmap v982: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s 2026-03-10T07:39:58.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:57 vm00 bash[28005]: cluster 2026-03-10T07:39:56.740472+0000 mgr.y (mgr.24407) 566 : cluster [DBG] pgmap v982: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s 2026-03-10T07:39:58.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:57 vm00 bash[20701]: cluster 2026-03-10T07:39:56.740472+0000 mgr.y (mgr.24407) 566 : cluster [DBG] pgmap v982: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s 2026-03-10T07:39:58.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:57 vm00 bash[20701]: cluster 2026-03-10T07:39:56.740472+0000 mgr.y (mgr.24407) 566 : cluster [DBG] pgmap v982: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s 2026-03-10T07:39:58.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:57 vm03 bash[23382]: cluster 2026-03-10T07:39:56.740472+0000 mgr.y (mgr.24407) 566 : cluster [DBG] pgmap v982: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 
GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s 2026-03-10T07:39:58.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:57 vm03 bash[23382]: cluster 2026-03-10T07:39:56.740472+0000 mgr.y (mgr.24407) 566 : cluster [DBG] pgmap v982: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s 2026-03-10T07:39:59.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:58 vm00 bash[28005]: audit 2026-03-10T07:39:57.883525+0000 mon.a (mon.0) 3218 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:59.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:58 vm00 bash[28005]: audit 2026-03-10T07:39:57.883525+0000 mon.a (mon.0) 3218 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:59.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:58 vm00 bash[28005]: audit 2026-03-10T07:39:57.883940+0000 mon.b (mon.1) 586 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:59.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:58 vm00 bash[28005]: audit 2026-03-10T07:39:57.883940+0000 mon.b (mon.1) 586 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:59.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:58 vm00 bash[20701]: audit 2026-03-10T07:39:57.883525+0000 mon.a (mon.0) 3218 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:59.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:58 vm00 bash[20701]: audit 2026-03-10T07:39:57.883525+0000 mon.a (mon.0) 3218 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:59.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:58 vm00 bash[20701]: audit 2026-03-10T07:39:57.883940+0000 mon.b (mon.1) 586 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:59.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:58 vm00 bash[20701]: audit 2026-03-10T07:39:57.883940+0000 mon.b (mon.1) 586 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:59.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:58 vm03 bash[23382]: audit 2026-03-10T07:39:57.883525+0000 mon.a (mon.0) 3218 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:59.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:58 vm03 bash[23382]: audit 2026-03-10T07:39:57.883525+0000 mon.a (mon.0) 3218 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:59.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:58 vm03 bash[23382]: audit 2026-03-10T07:39:57.883940+0000 mon.b (mon.1) 586 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:39:59.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:58 vm03 bash[23382]: audit 2026-03-10T07:39:57.883940+0000 mon.b (mon.1) 586 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:00.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:59 vm03 bash[23382]: cluster 2026-03-10T07:39:58.740842+0000 mgr.y (mgr.24407) 567 : cluster [DBG] pgmap v983: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 773 B/s rd, 257 B/s wr, 1 op/s 2026-03-10T07:40:00.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:59 vm03 bash[23382]: cluster 2026-03-10T07:39:58.740842+0000 mgr.y (mgr.24407) 567 : cluster [DBG] pgmap v983: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 773 B/s rd, 257 B/s wr, 1 op/s 2026-03-10T07:40:00.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:59 vm03 bash[23382]: audit 2026-03-10T07:39:58.891203+0000 mon.a (mon.0) 3219 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:40:00.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:59 vm03 bash[23382]: audit 2026-03-10T07:39:58.891203+0000 mon.a (mon.0) 3219 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:40:00.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:59 vm03 bash[23382]: cluster 2026-03-10T07:39:58.893266+0000 mon.a (mon.0) 3220 : cluster [DBG] osdmap e634: 8 total, 8 up, 8 in 2026-03-10T07:40:00.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:59 vm03 bash[23382]: cluster 2026-03-10T07:39:58.893266+0000 mon.a (mon.0) 3220 : cluster [DBG] osdmap e634: 8 total, 8 up, 8 in 2026-03-10T07:40:00.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:59 vm03 bash[23382]: audit 2026-03-10T07:39:58.897480+0000 mon.b (mon.1) 587 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:40:00.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:59 vm03 bash[23382]: audit 2026-03-10T07:39:58.897480+0000 mon.b (mon.1) 587 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:40:00.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:59 vm03 bash[23382]: audit 2026-03-10T07:39:58.898573+0000 mon.a (mon.0) 3221 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:40:00.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:39:59 vm03 bash[23382]: audit 2026-03-10T07:39:58.898573+0000 mon.a (mon.0) 3221 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:40:00.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:59 vm00 bash[28005]: cluster 2026-03-10T07:39:58.740842+0000 mgr.y (mgr.24407) 567 : cluster [DBG] pgmap v983: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 773 B/s rd, 257 B/s wr, 1 op/s 2026-03-10T07:40:00.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:59 vm00 bash[28005]: cluster 2026-03-10T07:39:58.740842+0000 mgr.y (mgr.24407) 567 : cluster [DBG] pgmap v983: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 773 B/s rd, 257 B/s wr, 1 op/s 2026-03-10T07:40:00.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:59 vm00 bash[28005]: audit 2026-03-10T07:39:58.891203+0000 mon.a (mon.0) 3219 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:40:00.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:59 vm00 bash[28005]: audit 2026-03-10T07:39:58.891203+0000 mon.a (mon.0) 3219 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:40:00.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:59 vm00 bash[28005]: cluster 2026-03-10T07:39:58.893266+0000 mon.a (mon.0) 3220 : cluster [DBG] osdmap e634: 8 total, 8 up, 8 in 2026-03-10T07:40:00.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:59 vm00 bash[28005]: cluster 2026-03-10T07:39:58.893266+0000 mon.a (mon.0) 3220 : cluster [DBG] osdmap e634: 8 total, 8 up, 8 in 2026-03-10T07:40:00.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:59 vm00 bash[28005]: audit 2026-03-10T07:39:58.897480+0000 mon.b (mon.1) 587 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:40:00.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:59 vm00 bash[28005]: audit 2026-03-10T07:39:58.897480+0000 mon.b (mon.1) 587 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:40:00.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:59 vm00 bash[28005]: audit 2026-03-10T07:39:58.898573+0000 mon.a (mon.0) 3221 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:40:00.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:39:59 vm00 bash[28005]: audit 2026-03-10T07:39:58.898573+0000 mon.a (mon.0) 3221 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:40:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:59 vm00 bash[20701]: cluster 2026-03-10T07:39:58.740842+0000 mgr.y (mgr.24407) 567 : cluster [DBG] pgmap v983: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 773 B/s rd, 257 B/s wr, 1 op/s 2026-03-10T07:40:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:59 vm00 bash[20701]: cluster 2026-03-10T07:39:58.740842+0000 mgr.y (mgr.24407) 567 : cluster [DBG] pgmap v983: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 773 B/s rd, 257 B/s wr, 1 op/s 2026-03-10T07:40:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:59 vm00 bash[20701]: audit 2026-03-10T07:39:58.891203+0000 mon.a (mon.0) 3219 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:40:00.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:59 vm00 bash[20701]: audit 2026-03-10T07:39:58.891203+0000 mon.a (mon.0) 3219 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:40:00.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:59 vm00 bash[20701]: cluster 2026-03-10T07:39:58.893266+0000 mon.a (mon.0) 3220 : cluster [DBG] osdmap e634: 8 total, 8 up, 8 in 2026-03-10T07:40:00.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:59 vm00 bash[20701]: cluster 2026-03-10T07:39:58.893266+0000 mon.a (mon.0) 3220 : cluster [DBG] osdmap e634: 8 total, 8 up, 8 in 2026-03-10T07:40:00.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:59 vm00 bash[20701]: audit 2026-03-10T07:39:58.897480+0000 mon.b (mon.1) 587 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:40:00.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:59 vm00 bash[20701]: audit 2026-03-10T07:39:58.897480+0000 mon.b (mon.1) 587 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:40:00.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:59 vm00 bash[20701]: audit 2026-03-10T07:39:58.898573+0000 mon.a (mon.0) 3221 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:40:00.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:39:59 vm00 bash[20701]: audit 2026-03-10T07:39:58.898573+0000 mon.a (mon.0) 3221 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128"}]: dispatch 2026-03-10T07:40:01.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:00 vm03 bash[23382]: cluster 2026-03-10T07:39:59.891548+0000 mon.a (mon.0) 3222 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:40:01.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:00 vm03 bash[23382]: cluster 2026-03-10T07:39:59.891548+0000 mon.a (mon.0) 3222 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:40:01.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:00 vm03 bash[23382]: audit 2026-03-10T07:39:59.894594+0000 mon.a (mon.0) 3223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128"}]': finished 2026-03-10T07:40:01.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:00 vm03 bash[23382]: audit 2026-03-10T07:39:59.894594+0000 mon.a (mon.0) 3223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128"}]': finished 2026-03-10T07:40:01.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:00 vm03 bash[23382]: cluster 2026-03-10T07:39:59.902407+0000 mon.a (mon.0) 3224 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-10T07:40:01.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:00 vm03 bash[23382]: cluster 2026-03-10T07:39:59.902407+0000 mon.a (mon.0) 3224 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-10T07:40:01.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:00 vm03 bash[23382]: cluster 2026-03-10T07:40:00.000142+0000 mon.a (mon.0) 3225 : cluster [WRN] overall HEALTH_WARN 4 pool(s) do not have an application enabled 2026-03-10T07:40:01.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:00 vm03 bash[23382]: cluster 2026-03-10T07:40:00.000142+0000 mon.a (mon.0) 3225 : cluster [WRN] overall HEALTH_WARN 4 pool(s) do not have an application enabled 2026-03-10T07:40:01.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:00 vm00 bash[28005]: cluster 2026-03-10T07:39:59.891548+0000 mon.a (mon.0) 3222 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:40:01.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:00 vm00 bash[28005]: cluster 2026-03-10T07:39:59.891548+0000 mon.a (mon.0) 3222 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:40:01.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:00 vm00 bash[28005]: audit 2026-03-10T07:39:59.894594+0000 mon.a (mon.0) 3223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128"}]': finished 2026-03-10T07:40:01.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:00 vm00 bash[28005]: audit 2026-03-10T07:39:59.894594+0000 mon.a (mon.0) 3223 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128"}]': finished 2026-03-10T07:40:01.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:00 vm00 bash[28005]: cluster 2026-03-10T07:39:59.902407+0000 mon.a (mon.0) 3224 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-10T07:40:01.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:00 vm00 bash[28005]: cluster 2026-03-10T07:39:59.902407+0000 mon.a (mon.0) 3224 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-10T07:40:01.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:00 vm00 bash[28005]: cluster 2026-03-10T07:40:00.000142+0000 mon.a (mon.0) 3225 : cluster [WRN] overall HEALTH_WARN 4 pool(s) do not have an application enabled 2026-03-10T07:40:01.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:00 vm00 bash[28005]: cluster 2026-03-10T07:40:00.000142+0000 mon.a (mon.0) 3225 : cluster [WRN] overall HEALTH_WARN 4 pool(s) do not have an application enabled 2026-03-10T07:40:01.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:40:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:40:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:40:01.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:00 vm00 bash[20701]: cluster 2026-03-10T07:39:59.891548+0000 mon.a (mon.0) 3222 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:40:01.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:00 vm00 bash[20701]: cluster 2026-03-10T07:39:59.891548+0000 mon.a (mon.0) 3222 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:40:01.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:00 vm00 bash[20701]: audit 2026-03-10T07:39:59.894594+0000 mon.a (mon.0) 3223 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128"}]': finished 2026-03-10T07:40:01.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:00 vm00 bash[20701]: audit 2026-03-10T07:39:59.894594+0000 mon.a (mon.0) 3223 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-128"}]': finished 2026-03-10T07:40:01.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:00 vm00 bash[20701]: cluster 2026-03-10T07:39:59.902407+0000 mon.a (mon.0) 3224 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-10T07:40:01.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:00 vm00 bash[20701]: cluster 2026-03-10T07:39:59.902407+0000 mon.a (mon.0) 3224 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-10T07:40:01.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:00 vm00 bash[20701]: cluster 2026-03-10T07:40:00.000142+0000 mon.a (mon.0) 3225 : cluster [WRN] overall HEALTH_WARN 4 pool(s) do not have an application enabled 2026-03-10T07:40:01.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:00 vm00 bash[20701]: cluster 2026-03-10T07:40:00.000142+0000 mon.a (mon.0) 3225 : cluster [WRN] overall HEALTH_WARN 4 pool(s) do not have an application enabled 2026-03-10T07:40:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:01 vm03 bash[23382]: cluster 2026-03-10T07:40:00.741323+0000 mgr.y (mgr.24407) 568 : cluster [DBG] pgmap v986: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 258 B/s wr, 1 op/s 2026-03-10T07:40:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:01 vm03 bash[23382]: cluster 2026-03-10T07:40:00.741323+0000 mgr.y (mgr.24407) 568 : cluster [DBG] pgmap v986: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 258 B/s wr, 1 op/s 2026-03-10T07:40:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:01 vm03 bash[23382]: cluster 2026-03-10T07:40:00.905155+0000 mon.a (mon.0) 3226 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:40:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:01 vm03 bash[23382]: cluster 2026-03-10T07:40:00.905155+0000 mon.a (mon.0) 3226 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:40:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:01 vm03 bash[23382]: cluster 2026-03-10T07:40:00.923832+0000 mon.a (mon.0) 3227 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in 2026-03-10T07:40:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:01 vm03 bash[23382]: cluster 2026-03-10T07:40:00.923832+0000 mon.a (mon.0) 3227 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in 2026-03-10T07:40:02.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:01 vm00 bash[28005]: cluster 2026-03-10T07:40:00.741323+0000 mgr.y (mgr.24407) 568 : cluster [DBG] pgmap v986: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 258 B/s wr, 1 op/s 2026-03-10T07:40:02.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:01 vm00 bash[28005]: cluster 2026-03-10T07:40:00.741323+0000 mgr.y (mgr.24407) 568 : cluster [DBG] pgmap v986: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 258 B/s wr, 1 op/s 2026-03-10T07:40:02.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:01 vm00 bash[28005]: cluster 2026-03-10T07:40:00.905155+0000 mon.a (mon.0) 3226 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:40:02.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:01 vm00 bash[28005]: cluster 
2026-03-10T07:40:00.905155+0000 mon.a (mon.0) 3226 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:40:02.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:01 vm00 bash[28005]: cluster 2026-03-10T07:40:00.923832+0000 mon.a (mon.0) 3227 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in
2026-03-10T07:40:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:01 vm00 bash[20701]: cluster 2026-03-10T07:40:00.741323+0000 mgr.y (mgr.24407) 568 : cluster [DBG] pgmap v986: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 258 B/s wr, 1 op/s
2026-03-10T07:40:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:01 vm00 bash[20701]: cluster 2026-03-10T07:40:00.905155+0000 mon.a (mon.0) 3226 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:40:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:01 vm00 bash[20701]: cluster 2026-03-10T07:40:00.923832+0000 mon.a (mon.0) 3227 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in
2026-03-10T07:40:03.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:02 vm03 bash[23382]: cluster 2026-03-10T07:40:01.938598+0000 mon.a (mon.0) 3228 : cluster [DBG] osdmap e637: 8 total, 8 up, 8 in
2026-03-10T07:40:03.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:02 vm03 bash[23382]: audit 2026-03-10T07:40:01.949373+0000 mon.a (mon.0) 3229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-130","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:40:03.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:02 vm03 bash[23382]: audit 2026-03-10T07:40:01.949725+0000 mon.b (mon.1) 588 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-130","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:40:03.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:02 vm00 bash[28005]: cluster 2026-03-10T07:40:01.938598+0000 mon.a (mon.0) 3228 : cluster [DBG] osdmap e637: 8 total, 8 up, 8 in
2026-03-10T07:40:03.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:02 vm00 bash[28005]: audit 2026-03-10T07:40:01.949373+0000 mon.a (mon.0) 3229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-130","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:40:03.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:02 vm00 bash[28005]: audit 2026-03-10T07:40:01.949725+0000 mon.b (mon.1) 588 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-130","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:40:03.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:02 vm00 bash[20701]: cluster 2026-03-10T07:40:01.938598+0000 mon.a (mon.0) 3228 : cluster [DBG] osdmap e637: 8 total, 8 up, 8 in
2026-03-10T07:40:03.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:02 vm00 bash[20701]: audit 2026-03-10T07:40:01.949373+0000 mon.a (mon.0) 3229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-130","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:40:03.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:02 vm00 bash[20701]: audit 2026-03-10T07:40:01.949725+0000 mon.b (mon.1) 588 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-130","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:40:03.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:40:03 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:40:04.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:04 vm00 bash[28005]: cluster 2026-03-10T07:40:02.741699+0000 mgr.y (mgr.24407) 569 : cluster [DBG] pgmap v989: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:40:04.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:04 vm00 bash[28005]: audit 2026-03-10T07:40:02.932111+0000 mon.a (mon.0) 3230 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-130","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:40:04.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:04 vm00 bash[28005]: cluster 2026-03-10T07:40:02.942247+0000 mon.a (mon.0) 3231 : cluster [DBG] osdmap e638: 8 total, 8 up, 8 in
2026-03-10T07:40:04.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:04 vm00 bash[28005]: audit 2026-03-10T07:40:02.953993+0000 mon.b (mon.1) 589 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-130", "force_nonempty": "--force-nonempty" }]: dispatch
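The audit records above show the rados_api_tests workunit tagging the pool it just created so the POOL_APP_NOT_ENABLED warning can clear; mon.a logs the dispatch with an empty client address while the mon.b relay records the caller at 192.168.123.100. A minimal CLI sketch of the same mon command, using the pool name generated by this run, would be:

    ceph osd pool application enable test-rados-api-vm00-59782-130 rados --yes-i-really-mean-it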
2026-03-10T07:40:04.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:04 vm00 bash[28005]: audit 2026-03-10T07:40:02.961650+0000 mon.a (mon.0) 3232 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-130", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:40:04.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:04 vm00 bash[20701]: cluster 2026-03-10T07:40:02.741699+0000 mgr.y (mgr.24407) 569 : cluster [DBG] pgmap v989: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:40:04.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:04 vm00 bash[20701]: audit 2026-03-10T07:40:02.932111+0000 mon.a (mon.0) 3230 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-130","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:40:04.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:04 vm00 bash[20701]: cluster 2026-03-10T07:40:02.942247+0000 mon.a (mon.0) 3231 : cluster [DBG] osdmap e638: 8 total, 8 up, 8 in
2026-03-10T07:40:04.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:04 vm00 bash[20701]: audit 2026-03-10T07:40:02.953993+0000 mon.b (mon.1) 589 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-130", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:40:04.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:04 vm00 bash[20701]: audit 2026-03-10T07:40:02.961650+0000 mon.a (mon.0) 3232 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-130", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:40:04.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:04 vm03 bash[23382]: cluster 2026-03-10T07:40:02.741699+0000 mgr.y (mgr.24407) 569 : cluster [DBG] pgmap v989: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:40:04.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:04 vm03 bash[23382]: audit 2026-03-10T07:40:02.932111+0000 mon.a (mon.0) 3230 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-130","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:40:04.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:04 vm03 bash[23382]: cluster 2026-03-10T07:40:02.942247+0000 mon.a (mon.0) 3231 : cluster [DBG] osdmap e638: 8 total, 8 up, 8 in
2026-03-10T07:40:04.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:04 vm03 bash[23382]: audit 2026-03-10T07:40:02.953993+0000 mon.b (mon.1) 589 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-130", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:40:04.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:04 vm03 bash[23382]: audit 2026-03-10T07:40:02.961650+0000 mon.a (mon.0) 3232 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-130", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:40:05.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:05 vm00 bash[28005]: audit 2026-03-10T07:40:03.503000+0000 mgr.y (mgr.24407) 570 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:40:05.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:05 vm00 bash[28005]: audit 2026-03-10T07:40:04.058245+0000 mon.a (mon.0) 3233 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-130", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:40:05.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:05 vm00 bash[28005]: cluster 2026-03-10T07:40:04.061116+0000 mon.a (mon.0) 3234 : cluster [DBG] osdmap e639: 8 total, 8 up, 8 in
2026-03-10T07:40:05.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:05 vm00 bash[28005]: audit 2026-03-10T07:40:04.063711+0000 mon.b (mon.1) 590 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-130"}]: dispatch
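The "osd tier add" dispatch and finished records above attach pool test-rados-api-vm00-59782-130 as a cache tier in front of base pool test-rados-api-vm00-59782-111; force_nonempty is passed because the prospective cache pool already holds test objects. Sketched from the mon payload in this run, the equivalent admin command would be:

    ceph osd tier add test-rados-api-vm00-59782-111 test-rados-api-vm00-59782-130 --force-nonempty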
2026-03-10T07:40:05.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:05 vm00 bash[28005]: audit 2026-03-10T07:40:04.069594+0000 mon.a (mon.0) 3235 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-130"}]: dispatch
2026-03-10T07:40:05.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:05 vm00 bash[28005]: audit 2026-03-10T07:40:05.063522+0000 mon.a (mon.0) 3236 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-130"}]': finished
2026-03-10T07:40:05.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:05 vm00 bash[28005]: cluster 2026-03-10T07:40:05.068157+0000 mon.a (mon.0) 3237 : cluster [DBG] osdmap e640: 8 total, 8 up, 8 in
2026-03-10T07:40:05.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:05 vm00 bash[28005]: audit 2026-03-10T07:40:05.068511+0000 mon.a (mon.0) 3238 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-130", "mode": "writeback"}]: dispatch
2026-03-10T07:40:05.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:05 vm00 bash[28005]: audit 2026-03-10T07:40:05.068837+0000 mon.b (mon.1) 591 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-130", "mode": "writeback"}]: dispatch
2026-03-10T07:40:05.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:05 vm00 bash[20701]: audit 2026-03-10T07:40:03.503000+0000 mgr.y (mgr.24407) 570 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:40:05.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:05 vm00 bash[20701]: audit 2026-03-10T07:40:04.058245+0000 mon.a (mon.0) 3233 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-130", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:40:05.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:05 vm00 bash[20701]: cluster 2026-03-10T07:40:04.061116+0000 mon.a (mon.0) 3234 : cluster [DBG] osdmap e639: 8 total, 8 up, 8 in
2026-03-10T07:40:05.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:05 vm00 bash[20701]: audit 2026-03-10T07:40:04.063711+0000 mon.b (mon.1) 590 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-130"}]: dispatch
2026-03-10T07:40:05.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:05 vm00 bash[20701]: audit 2026-03-10T07:40:04.069594+0000 mon.a (mon.0) 3235 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-130"}]: dispatch
2026-03-10T07:40:05.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:05 vm00 bash[20701]: audit 2026-03-10T07:40:05.063522+0000 mon.a (mon.0) 3236 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-130"}]': finished
2026-03-10T07:40:05.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:05 vm00 bash[20701]: cluster 2026-03-10T07:40:05.068157+0000 mon.a (mon.0) 3237 : cluster [DBG] osdmap e640: 8 total, 8 up, 8 in
2026-03-10T07:40:05.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:05 vm00 bash[20701]: audit 2026-03-10T07:40:05.068511+0000 mon.a (mon.0) 3238 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-130", "mode": "writeback"}]: dispatch
2026-03-10T07:40:05.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:05 vm00 bash[20701]: audit 2026-03-10T07:40:05.068837+0000 mon.b (mon.1) 591 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-130", "mode": "writeback"}]: dispatch
2026-03-10T07:40:05.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:05 vm03 bash[23382]: audit 2026-03-10T07:40:03.503000+0000 mgr.y (mgr.24407) 570 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:40:05.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:05 vm03 bash[23382]: audit 2026-03-10T07:40:04.058245+0000 mon.a (mon.0) 3233 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-130", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:40:05.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:05 vm03 bash[23382]: cluster 2026-03-10T07:40:04.061116+0000 mon.a (mon.0) 3234 : cluster [DBG] osdmap e639: 8 total, 8 up, 8 in
2026-03-10T07:40:05.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:05 vm03 bash[23382]: audit 2026-03-10T07:40:04.063711+0000 mon.b (mon.1) 590 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-130"}]: dispatch
2026-03-10T07:40:05.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:05 vm03 bash[23382]: audit 2026-03-10T07:40:04.069594+0000 mon.a (mon.0) 3235 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-130"}]: dispatch
2026-03-10T07:40:05.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:05 vm03 bash[23382]: audit 2026-03-10T07:40:05.063522+0000 mon.a (mon.0) 3236 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-130"}]': finished
2026-03-10T07:40:05.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:05 vm03 bash[23382]: cluster 2026-03-10T07:40:05.068157+0000 mon.a (mon.0) 3237 : cluster [DBG] osdmap e640: 8 total, 8 up, 8 in
2026-03-10T07:40:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:05 vm03 bash[23382]: audit 2026-03-10T07:40:05.068511+0000 mon.a (mon.0) 3238 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-130", "mode": "writeback"}]: dispatch
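With the tier attached, the test points client I/O at the cache pool and switches it to writeback, which is what the set-overlay and cache-mode audit records above dispatch; as CLI commands this sequence would read:

    ceph osd tier set-overlay test-rados-api-vm00-59782-111 test-rados-api-vm00-59782-130
    ceph osd tier cache-mode test-rados-api-vm00-59782-130 writeback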
2026-03-10T07:40:05.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:05 vm03 bash[23382]: audit 2026-03-10T07:40:05.068837+0000 mon.b (mon.1) 591 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-130", "mode": "writeback"}]: dispatch
2026-03-10T07:40:06.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:06 vm00 bash[28005]: cluster 2026-03-10T07:40:04.742328+0000 mgr.y (mgr.24407) 571 : cluster [DBG] pgmap v992: 268 pgs: 13 unknown, 255 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:40:06.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:06 vm00 bash[28005]: cluster 2026-03-10T07:40:06.063792+0000 mon.a (mon.0) 3239 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:40:06.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:06 vm00 bash[28005]: audit 2026-03-10T07:40:06.067224+0000 mon.a (mon.0) 3240 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-130", "mode": "writeback"}]': finished
2026-03-10T07:40:06.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:06 vm00 bash[28005]: cluster 2026-03-10T07:40:06.078066+0000 mon.a (mon.0) 3241 : cluster [DBG] osdmap e641: 8 total, 8 up, 8 in
2026-03-10T07:40:06.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:06 vm00 bash[20701]: cluster 2026-03-10T07:40:04.742328+0000 mgr.y (mgr.24407) 571 : cluster [DBG] pgmap v992: 268 pgs: 13 unknown, 255 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:40:06.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:06 vm00 bash[20701]: cluster 2026-03-10T07:40:06.063792+0000 mon.a (mon.0) 3239 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:40:06.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:06 vm00 bash[20701]: audit 2026-03-10T07:40:06.067224+0000 mon.a (mon.0) 3240 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-130", "mode": "writeback"}]': finished
2026-03-10T07:40:06.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:06 vm00 bash[20701]: cluster 2026-03-10T07:40:06.078066+0000 mon.a (mon.0) 3241 : cluster [DBG] osdmap e641: 8 total, 8 up, 8 in
2026-03-10T07:40:06.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:06 vm03 bash[23382]: cluster 2026-03-10T07:40:04.742328+0000 mgr.y (mgr.24407) 571 : cluster [DBG] pgmap v992: 268 pgs: 13 unknown, 255 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:40:06.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:06 vm03 bash[23382]: cluster 2026-03-10T07:40:06.063792+0000 mon.a (mon.0) 3239 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:40:06.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:06 vm03 bash[23382]: audit 2026-03-10T07:40:06.067224+0000 mon.a (mon.0) 3240 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-130", "mode": "writeback"}]': finished
2026-03-10T07:40:06.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:06 vm03 bash[23382]: cluster 2026-03-10T07:40:06.078066+0000 mon.a (mon.0) 3241 : cluster [DBG] osdmap e641: 8 total, 8 up, 8 in
2026-03-10T07:40:07.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:07 vm00 bash[28005]: audit 2026-03-10T07:40:06.157840+0000 mon.a (mon.0) 3242 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:07.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:07 vm00 bash[28005]: audit 2026-03-10T07:40:06.158321+0000 mon.b (mon.1) 592 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
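The CACHE_POOL_NO_HIT_SET warning fires because the test enables writeback caching without ever configuring hit sets, so the OSDs have no way to track object temperature; for this suite the warning is expected and short-lived. On a production cache tier it would be addressed with pool settings along these lines (the values here are illustrative, not taken from this run):

    ceph osd pool set test-rados-api-vm00-59782-130 hit_set_type bloom
    ceph osd pool set test-rados-api-vm00-59782-130 hit_set_count 8
    ceph osd pool set test-rados-api-vm00-59782-130 hit_set_period 3600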
2026-03-10T07:40:07.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:07 vm00 bash[28005]: cluster 2026-03-10T07:40:06.788005+0000 mon.a (mon.0) 3243 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:40:07.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:07 vm00 bash[20701]: audit 2026-03-10T07:40:06.157840+0000 mon.a (mon.0) 3242 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:07.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:07 vm00 bash[20701]: audit 2026-03-10T07:40:06.158321+0000 mon.b (mon.1) 592 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:07.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:07 vm00 bash[20701]: cluster 2026-03-10T07:40:06.788005+0000 mon.a (mon.0) 3243 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:40:07.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:07 vm03 bash[23382]: audit 2026-03-10T07:40:06.157840+0000 mon.a (mon.0) 3242 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:07.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:07 vm03 bash[23382]: audit 2026-03-10T07:40:06.158321+0000 mon.b (mon.1) 592 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:07.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:07 vm03 bash[23382]: cluster 2026-03-10T07:40:06.788005+0000 mon.a (mon.0) 3243 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:40:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:08 vm03 bash[23382]: cluster 2026-03-10T07:40:06.742691+0000 mgr.y (mgr.24407) 572 : cluster [DBG] pgmap v995: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:40:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:08 vm03 bash[23382]: audit 2026-03-10T07:40:07.094096+0000 mon.a (mon.0) 3244 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:40:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:08 vm03 bash[23382]: cluster 2026-03-10T07:40:07.101086+0000 mon.a (mon.0) 3245 : cluster [DBG] osdmap e642: 8 total, 8 up, 8 in
2026-03-10T07:40:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:08 vm03 bash[23382]: audit 2026-03-10T07:40:07.101263+0000 mon.b (mon.1) 593 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-130"}]: dispatch
2026-03-10T07:40:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:08 vm03 bash[23382]: audit 2026-03-10T07:40:07.106313+0000 mon.a (mon.0) 3246 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-130"}]: dispatch
2026-03-10T07:40:08.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:08 vm00 bash[28005]: cluster 2026-03-10T07:40:06.742691+0000 mgr.y (mgr.24407) 572 : cluster [DBG] pgmap v995: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:40:08.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:08 vm00 bash[28005]: audit 2026-03-10T07:40:07.094096+0000 mon.a (mon.0) 3244 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:40:08.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:08 vm00 bash[28005]: cluster 2026-03-10T07:40:07.101086+0000 mon.a (mon.0) 3245 : cluster [DBG] osdmap e642: 8 total, 8 up, 8 in
2026-03-10T07:40:08.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:08 vm00 bash[28005]: audit 2026-03-10T07:40:07.101263+0000 mon.b (mon.1) 593 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-130"}]: dispatch
2026-03-10T07:40:08.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:08 vm00 bash[28005]: audit 2026-03-10T07:40:07.106313+0000 mon.a (mon.0) 3246 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-130"}]: dispatch
2026-03-10T07:40:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:08 vm00 bash[20701]: cluster 2026-03-10T07:40:06.742691+0000 mgr.y (mgr.24407) 572 : cluster [DBG] pgmap v995: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:40:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:08 vm00 bash[20701]: audit 2026-03-10T07:40:07.094096+0000 mon.a (mon.0) 3244 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:40:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:08 vm00 bash[20701]: cluster 2026-03-10T07:40:07.101086+0000 mon.a (mon.0) 3245 : cluster [DBG] osdmap e642: 8 total, 8 up, 8 in
2026-03-10T07:40:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:08 vm00 bash[20701]: audit 2026-03-10T07:40:07.101263+0000 mon.b (mon.1) 593 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-130"}]: dispatch
2026-03-10T07:40:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:08 vm00 bash[20701]: audit 2026-03-10T07:40:07.106313+0000 mon.a (mon.0) 3246 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-130"}]: dispatch
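Teardown mirrors setup in reverse: the overlay is detached first and only then is the tier removed, since a tier that still carries an overlay cannot be detached from its base pool. As CLI commands the two steps above would be:

    ceph osd tier remove-overlay test-rados-api-vm00-59782-111
    ceph osd tier remove test-rados-api-vm00-59782-111 test-rados-api-vm00-59782-130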
2026-03-10T07:40:09.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:09 vm03 bash[23382]: cluster 2026-03-10T07:40:08.094277+0000 mon.a (mon.0) 3247 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:40:09.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:09 vm03 bash[23382]: audit 2026-03-10T07:40:08.156388+0000 mon.a (mon.0) 3248 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-130"}]': finished
2026-03-10T07:40:09.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:09 vm03 bash[23382]: cluster 2026-03-10T07:40:08.158692+0000 mon.a (mon.0) 3249 : cluster [DBG] osdmap e643: 8 total, 8 up, 8 in
2026-03-10T07:40:09.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:09 vm00 bash[28005]: cluster 2026-03-10T07:40:08.094277+0000 mon.a (mon.0) 3247 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:40:09.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:09 vm00 bash[28005]: audit 2026-03-10T07:40:08.156388+0000 mon.a (mon.0) 3248 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-130"}]': finished
2026-03-10T07:40:09.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:09 vm00 bash[28005]: cluster 2026-03-10T07:40:08.158692+0000 mon.a (mon.0) 3249 : cluster [DBG] osdmap e643: 8 total, 8 up, 8 in
2026-03-10T07:40:09.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:09 vm00 bash[20701]: cluster 2026-03-10T07:40:08.094277+0000 mon.a (mon.0) 3247 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:40:09.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:09 vm00 bash[20701]: audit 2026-03-10T07:40:08.156388+0000 mon.a (mon.0) 3248 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-130"}]': finished
2026-03-10T07:40:09.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:09 vm00 bash[20701]: cluster 2026-03-10T07:40:08.158692+0000 mon.a (mon.0) 3249 : cluster [DBG] osdmap e643: 8 total, 8 up, 8 in
2026-03-10T07:40:10.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:10 vm03 bash[23382]: cluster 2026-03-10T07:40:08.743071+0000 mgr.y (mgr.24407) 573 : cluster [DBG] pgmap v998: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:40:10.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:10 vm03 bash[23382]: cluster 2026-03-10T07:40:09.211934+0000 mon.a (mon.0) 3250 : cluster [DBG] osdmap e644: 8 total, 8 up, 8 in
2026-03-10T07:40:10.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:10 vm03 bash[23382]: audit 2026-03-10T07:40:09.763722+0000 mon.a (mon.0) 3251 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:40:10.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:10 vm03 bash[23382]: audit 2026-03-10T07:40:09.766879+0000 mon.c (mon.2) 352 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:40:10.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:10 vm00 bash[28005]: cluster 2026-03-10T07:40:08.743071+0000 mgr.y (mgr.24407) 573 : cluster [DBG] pgmap v998: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:40:10.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:10 vm00 bash[28005]: cluster 2026-03-10T07:40:09.211934+0000 mon.a (mon.0) 3250 : cluster [DBG] osdmap e644: 8 total, 8 up, 8 in
2026-03-10T07:40:10.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:10 vm00 bash[28005]: audit 2026-03-10T07:40:09.763722+0000 mon.a (mon.0) 3251 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:40:10.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:10 vm00 bash[28005]: audit 2026-03-10T07:40:09.766879+0000 mon.c (mon.2) 352 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:40:10.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:10 vm00 bash[20701]: cluster 2026-03-10T07:40:08.743071+0000 mgr.y (mgr.24407) 573 : cluster [DBG] pgmap v998: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:40:10.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:10 vm00 bash[20701]: cluster 2026-03-10T07:40:09.211934+0000 mon.a (mon.0) 3250 : cluster [DBG] osdmap e644: 8 total, 8 up, 8 in
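The mgr.y beacon (audit 3251) and the "osd blocklist ls" dispatch are routine mgr housekeeping rather than test traffic: the active mgr periodically re-reads the OSD blocklist. The same query can be issued manually, for example:

    ceph osd blocklist ls --format json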
in 2026-03-10T07:40:10.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:10 vm00 bash[20701]: audit 2026-03-10T07:40:09.763722+0000 mon.a (mon.0) 3251 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:40:10.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:10 vm00 bash[20701]: audit 2026-03-10T07:40:09.763722+0000 mon.a (mon.0) 3251 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:40:10.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:10 vm00 bash[20701]: audit 2026-03-10T07:40:09.766879+0000 mon.c (mon.2) 352 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:40:10.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:10 vm00 bash[20701]: audit 2026-03-10T07:40:09.766879+0000 mon.c (mon.2) 352 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:40:11.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:40:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:40:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:40:11.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:11 vm00 bash[28005]: cluster 2026-03-10T07:40:10.228796+0000 mon.a (mon.0) 3252 : cluster [DBG] osdmap e645: 8 total, 8 up, 8 in 2026-03-10T07:40:11.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:11 vm00 bash[28005]: cluster 2026-03-10T07:40:10.228796+0000 mon.a (mon.0) 3252 : cluster [DBG] osdmap e645: 8 total, 8 up, 8 in 2026-03-10T07:40:11.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:11 vm00 bash[28005]: audit 2026-03-10T07:40:10.239891+0000 mon.b (mon.1) 594 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:11.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:11 vm00 bash[28005]: audit 2026-03-10T07:40:10.239891+0000 mon.b (mon.1) 594 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:11.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:11 vm00 bash[28005]: audit 2026-03-10T07:40:10.240720+0000 mon.a (mon.0) 3253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:11.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:11 vm00 bash[28005]: audit 2026-03-10T07:40:10.240720+0000 mon.a (mon.0) 3253 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:11.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:11 vm00 bash[20701]: cluster 2026-03-10T07:40:10.228796+0000 mon.a (mon.0) 3252 : cluster [DBG] osdmap e645: 8 total, 8 up, 8 in 2026-03-10T07:40:11.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:11 vm00 bash[20701]: cluster 2026-03-10T07:40:10.228796+0000 mon.a (mon.0) 3252 : cluster [DBG] osdmap e645: 8 total, 8 up, 8 in 2026-03-10T07:40:11.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:11 vm00 bash[20701]: audit 2026-03-10T07:40:10.239891+0000 mon.b (mon.1) 594 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:11.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:11 vm00 bash[20701]: audit 2026-03-10T07:40:10.239891+0000 mon.b (mon.1) 594 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:11.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:11 vm00 bash[20701]: audit 2026-03-10T07:40:10.240720+0000 mon.a (mon.0) 3253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:11.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:11 vm00 bash[20701]: audit 2026-03-10T07:40:10.240720+0000 mon.a (mon.0) 3253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:12.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:11 vm03 bash[23382]: cluster 2026-03-10T07:40:10.228796+0000 mon.a (mon.0) 3252 : cluster [DBG] osdmap e645: 8 total, 8 up, 8 in 2026-03-10T07:40:12.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:11 vm03 bash[23382]: cluster 2026-03-10T07:40:10.228796+0000 mon.a (mon.0) 3252 : cluster [DBG] osdmap e645: 8 total, 8 up, 8 in 2026-03-10T07:40:12.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:11 vm03 bash[23382]: audit 2026-03-10T07:40:10.239891+0000 mon.b (mon.1) 594 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:12.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:11 vm03 bash[23382]: audit 2026-03-10T07:40:10.239891+0000 mon.b (mon.1) 594 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:12.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:11 vm03 bash[23382]: audit 2026-03-10T07:40:10.240720+0000 mon.a (mon.0) 3253 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:12.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:11 vm03 bash[23382]: audit 2026-03-10T07:40:10.240720+0000 mon.a (mon.0) 3253 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:12 vm00 bash[28005]: cluster 2026-03-10T07:40:10.743441+0000 mgr.y (mgr.24407) 574 : cluster [DBG] pgmap v1001: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:40:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:12 vm00 bash[28005]: cluster 2026-03-10T07:40:10.743441+0000 mgr.y (mgr.24407) 574 : cluster [DBG] pgmap v1001: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:40:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:12 vm00 bash[28005]: audit 2026-03-10T07:40:11.358025+0000 mon.a (mon.0) 3254 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:40:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:12 vm00 bash[28005]: audit 2026-03-10T07:40:11.358025+0000 mon.a (mon.0) 3254 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:40:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:12 vm00 bash[28005]: cluster 2026-03-10T07:40:11.446585+0000 mon.a (mon.0) 3255 : cluster [DBG] osdmap e646: 8 total, 8 up, 8 in 2026-03-10T07:40:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:12 vm00 bash[28005]: cluster 2026-03-10T07:40:11.446585+0000 mon.a (mon.0) 3255 : cluster [DBG] osdmap e646: 8 total, 8 up, 8 in 2026-03-10T07:40:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:12 vm00 bash[28005]: audit 2026-03-10T07:40:11.513087+0000 mon.b (mon.1) 595 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:12 vm00 bash[28005]: audit 2026-03-10T07:40:11.513087+0000 mon.b (mon.1) 595 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:12 vm00 bash[28005]: audit 2026-03-10T07:40:11.513822+0000 mon.a (mon.0) 3256 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:12 vm00 bash[28005]: audit 2026-03-10T07:40:11.513822+0000 mon.a (mon.0) 3256 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:12 vm00 bash[28005]: cluster 2026-03-10T07:40:11.788671+0000 mon.a (mon.0) 3257 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:40:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:12 vm00 bash[28005]: cluster 2026-03-10T07:40:11.788671+0000 mon.a (mon.0) 3257 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:40:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:12 vm00 bash[28005]: audit 2026-03-10T07:40:12.367056+0000 mon.a (mon.0) 3258 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:12 vm00 bash[28005]: audit 2026-03-10T07:40:12.367056+0000 mon.a (mon.0) 3258 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:12 vm00 bash[28005]: cluster 2026-03-10T07:40:12.372066+0000 mon.a (mon.0) 3259 : cluster [DBG] osdmap e647: 8 total, 8 up, 8 in 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:12 vm00 bash[28005]: cluster 2026-03-10T07:40:12.372066+0000 mon.a (mon.0) 3259 : cluster [DBG] osdmap e647: 8 total, 8 up, 8 in 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:12 vm00 bash[28005]: audit 2026-03-10T07:40:12.374814+0000 mon.b (mon.1) 596 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]: dispatch 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:12 vm00 bash[28005]: audit 2026-03-10T07:40:12.374814+0000 mon.b (mon.1) 596 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]: dispatch 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:12 vm00 bash[28005]: audit 2026-03-10T07:40:12.375866+0000 mon.a (mon.0) 3260 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]: dispatch 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:12 vm00 bash[28005]: audit 2026-03-10T07:40:12.375866+0000 mon.a (mon.0) 3260 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]: dispatch 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:12 vm00 bash[20701]: cluster 2026-03-10T07:40:10.743441+0000 mgr.y (mgr.24407) 574 : cluster [DBG] pgmap v1001: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:12 vm00 bash[20701]: cluster 2026-03-10T07:40:10.743441+0000 mgr.y (mgr.24407) 574 : cluster [DBG] pgmap v1001: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:12 vm00 bash[20701]: audit 2026-03-10T07:40:11.358025+0000 mon.a (mon.0) 3254 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:12 vm00 bash[20701]: audit 2026-03-10T07:40:11.358025+0000 mon.a (mon.0) 3254 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:12 vm00 bash[20701]: cluster 2026-03-10T07:40:11.446585+0000 mon.a (mon.0) 3255 : cluster [DBG] osdmap e646: 8 total, 8 up, 8 in 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:12 vm00 bash[20701]: cluster 2026-03-10T07:40:11.446585+0000 mon.a (mon.0) 3255 : cluster [DBG] osdmap e646: 8 total, 8 up, 8 in 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:12 vm00 bash[20701]: audit 2026-03-10T07:40:11.513087+0000 mon.b (mon.1) 595 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:12 vm00 bash[20701]: audit 2026-03-10T07:40:11.513087+0000 mon.b (mon.1) 595 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:12 vm00 bash[20701]: audit 2026-03-10T07:40:11.513822+0000 mon.a (mon.0) 3256 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:12 vm00 bash[20701]: audit 2026-03-10T07:40:11.513822+0000 mon.a (mon.0) 3256 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:12 vm00 bash[20701]: cluster 2026-03-10T07:40:11.788671+0000 mon.a (mon.0) 3257 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:12 vm00 bash[20701]: cluster 2026-03-10T07:40:11.788671+0000 mon.a (mon.0) 3257 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:12 vm00 bash[20701]: audit 2026-03-10T07:40:12.367056+0000 mon.a (mon.0) 3258 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:12 vm00 bash[20701]: audit 2026-03-10T07:40:12.367056+0000 mon.a (mon.0) 3258 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:12 vm00 bash[20701]: cluster 2026-03-10T07:40:12.372066+0000 mon.a (mon.0) 3259 : cluster [DBG] osdmap e647: 8 total, 8 up, 8 in 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:12 vm00 bash[20701]: cluster 2026-03-10T07:40:12.372066+0000 mon.a (mon.0) 3259 : cluster [DBG] osdmap e647: 8 total, 8 up, 8 in 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:12 vm00 bash[20701]: audit 2026-03-10T07:40:12.374814+0000 mon.b (mon.1) 596 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]: dispatch 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:12 vm00 bash[20701]: audit 2026-03-10T07:40:12.374814+0000 mon.b (mon.1) 596 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]: dispatch 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:12 vm00 bash[20701]: audit 2026-03-10T07:40:12.375866+0000 mon.a (mon.0) 3260 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]: dispatch 2026-03-10T07:40:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:12 vm00 bash[20701]: audit 2026-03-10T07:40:12.375866+0000 mon.a (mon.0) 3260 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]: dispatch 2026-03-10T07:40:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:12 vm03 bash[23382]: cluster 2026-03-10T07:40:10.743441+0000 mgr.y (mgr.24407) 574 : cluster [DBG] pgmap v1001: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:40:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:12 vm03 bash[23382]: cluster 2026-03-10T07:40:10.743441+0000 mgr.y (mgr.24407) 574 : cluster [DBG] pgmap v1001: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:40:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:12 vm03 bash[23382]: audit 2026-03-10T07:40:11.358025+0000 mon.a (mon.0) 3254 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:40:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:12 vm03 bash[23382]: audit 2026-03-10T07:40:11.358025+0000 mon.a (mon.0) 3254 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:40:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:12 vm03 bash[23382]: cluster 2026-03-10T07:40:11.446585+0000 mon.a (mon.0) 3255 : cluster [DBG] osdmap e646: 8 total, 8 up, 8 in 2026-03-10T07:40:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:12 vm03 bash[23382]: cluster 2026-03-10T07:40:11.446585+0000 mon.a (mon.0) 3255 : cluster [DBG] osdmap e646: 8 total, 8 up, 8 in 2026-03-10T07:40:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:12 vm03 bash[23382]: audit 2026-03-10T07:40:11.513087+0000 mon.b (mon.1) 595 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:12 vm03 bash[23382]: audit 2026-03-10T07:40:11.513087+0000 mon.b (mon.1) 595 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:12 vm03 bash[23382]: audit 2026-03-10T07:40:11.513822+0000 mon.a (mon.0) 3256 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:12 vm03 bash[23382]: audit 2026-03-10T07:40:11.513822+0000 mon.a (mon.0) 3256 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:12 vm03 bash[23382]: cluster 2026-03-10T07:40:11.788671+0000 mon.a (mon.0) 3257 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:40:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:12 vm03 bash[23382]: cluster 2026-03-10T07:40:11.788671+0000 mon.a (mon.0) 3257 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:40:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:12 vm03 bash[23382]: audit 2026-03-10T07:40:12.367056+0000 mon.a (mon.0) 3258 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:40:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:12 vm03 bash[23382]: audit 2026-03-10T07:40:12.367056+0000 mon.a (mon.0) 3258 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:40:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:12 vm03 bash[23382]: cluster 2026-03-10T07:40:12.372066+0000 mon.a (mon.0) 3259 : cluster [DBG] osdmap e647: 8 total, 8 up, 8 in 2026-03-10T07:40:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:12 vm03 bash[23382]: cluster 2026-03-10T07:40:12.372066+0000 mon.a (mon.0) 3259 : cluster [DBG] osdmap e647: 8 total, 8 up, 8 in 2026-03-10T07:40:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:12 vm03 bash[23382]: audit 2026-03-10T07:40:12.374814+0000 mon.b (mon.1) 596 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]: dispatch 2026-03-10T07:40:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:12 vm03 bash[23382]: audit 2026-03-10T07:40:12.374814+0000 mon.b (mon.1) 596 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]: dispatch 2026-03-10T07:40:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:12 vm03 bash[23382]: audit 2026-03-10T07:40:12.375866+0000 mon.a (mon.0) 3260 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]: dispatch 2026-03-10T07:40:13.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:12 vm03 bash[23382]: audit 2026-03-10T07:40:12.375866+0000 mon.a (mon.0) 3260 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]: dispatch 2026-03-10T07:40:13.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:40:13 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:40:14.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:14 vm03 bash[23382]: cluster 2026-03-10T07:40:12.743737+0000 mgr.y (mgr.24407) 575 : cluster [DBG] pgmap v1004: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:40:14.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:14 vm03 bash[23382]: cluster 2026-03-10T07:40:12.743737+0000 mgr.y (mgr.24407) 575 : cluster [DBG] pgmap v1004: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:40:14.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:14 vm03 bash[23382]: audit 2026-03-10T07:40:13.369810+0000 mon.a (mon.0) 3261 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]': finished 2026-03-10T07:40:14.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:14 vm03 bash[23382]: audit 2026-03-10T07:40:13.369810+0000 mon.a (mon.0) 3261 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]': finished 2026-03-10T07:40:14.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:14 vm03 bash[23382]: audit 2026-03-10T07:40:13.377078+0000 mon.b (mon.1) 597 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-132", "mode": "writeback"}]: dispatch 2026-03-10T07:40:14.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:14 vm03 bash[23382]: audit 2026-03-10T07:40:13.377078+0000 mon.b (mon.1) 597 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-132", "mode": "writeback"}]: dispatch 2026-03-10T07:40:14.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:14 vm03 bash[23382]: cluster 2026-03-10T07:40:13.379011+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e648: 8 total, 8 up, 8 in 2026-03-10T07:40:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:14 vm03 bash[23382]: cluster 2026-03-10T07:40:13.379011+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e648: 8 total, 8 up, 8 in 2026-03-10T07:40:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:14 vm03 bash[23382]: audit 2026-03-10T07:40:13.379478+0000 mon.a (mon.0) 3263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-132", "mode": "writeback"}]: dispatch 2026-03-10T07:40:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:14 vm03 bash[23382]: audit 2026-03-10T07:40:13.379478+0000 mon.a (mon.0) 3263 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-132", "mode": "writeback"}]: dispatch 2026-03-10T07:40:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:14 vm03 bash[23382]: audit 2026-03-10T07:40:13.875619+0000 mon.c (mon.2) 353 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:40:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:14 vm03 bash[23382]: audit 2026-03-10T07:40:13.875619+0000 mon.c (mon.2) 353 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:40:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:14 vm03 bash[23382]: audit 2026-03-10T07:40:14.179251+0000 mon.a (mon.0) 3264 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:40:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:14 vm03 bash[23382]: audit 2026-03-10T07:40:14.179251+0000 mon.a (mon.0) 3264 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:40:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:14 vm03 bash[23382]: audit 2026-03-10T07:40:14.190106+0000 mon.a (mon.0) 3265 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:40:14.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:14 vm03 bash[23382]: audit 2026-03-10T07:40:14.190106+0000 mon.a (mon.0) 3265 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:40:14.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:14 vm00 bash[28005]: cluster 2026-03-10T07:40:12.743737+0000 mgr.y (mgr.24407) 575 : cluster [DBG] pgmap v1004: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:40:14.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:14 vm00 bash[28005]: cluster 2026-03-10T07:40:12.743737+0000 mgr.y (mgr.24407) 575 : cluster [DBG] pgmap v1004: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:40:14.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:14 vm00 bash[28005]: audit 2026-03-10T07:40:13.369810+0000 mon.a (mon.0) 3261 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]': finished 2026-03-10T07:40:14.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:14 vm00 bash[28005]: audit 2026-03-10T07:40:13.369810+0000 mon.a (mon.0) 3261 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]': finished 2026-03-10T07:40:14.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:14 vm00 bash[28005]: audit 2026-03-10T07:40:13.377078+0000 mon.b (mon.1) 597 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-132", "mode": "writeback"}]: dispatch 2026-03-10T07:40:14.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:14 vm00 bash[28005]: audit 2026-03-10T07:40:13.377078+0000 mon.b (mon.1) 597 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-132", "mode": "writeback"}]: dispatch 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:14 vm00 bash[28005]: cluster 2026-03-10T07:40:13.379011+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e648: 8 total, 8 up, 8 in 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:14 vm00 bash[28005]: cluster 2026-03-10T07:40:13.379011+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e648: 8 total, 8 up, 8 in 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:14 vm00 bash[28005]: audit 2026-03-10T07:40:13.379478+0000 mon.a (mon.0) 3263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-132", "mode": "writeback"}]: dispatch 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:14 vm00 bash[28005]: audit 2026-03-10T07:40:13.379478+0000 mon.a (mon.0) 3263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-132", "mode": "writeback"}]: dispatch 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:14 vm00 bash[28005]: audit 2026-03-10T07:40:13.875619+0000 mon.c (mon.2) 353 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:14 vm00 bash[28005]: audit 2026-03-10T07:40:13.875619+0000 mon.c (mon.2) 353 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:14 vm00 bash[28005]: audit 2026-03-10T07:40:14.179251+0000 mon.a (mon.0) 3264 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:14 vm00 bash[28005]: audit 2026-03-10T07:40:14.179251+0000 mon.a (mon.0) 3264 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:14 vm00 bash[28005]: audit 2026-03-10T07:40:14.190106+0000 mon.a (mon.0) 3265 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:14 vm00 bash[28005]: audit 2026-03-10T07:40:14.190106+0000 mon.a (mon.0) 3265 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:14 vm00 bash[20701]: cluster 2026-03-10T07:40:12.743737+0000 mgr.y (mgr.24407) 575 : cluster [DBG] pgmap v1004: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:14 vm00 bash[20701]: cluster 2026-03-10T07:40:12.743737+0000 mgr.y (mgr.24407) 575 : cluster [DBG] pgmap v1004: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:14 vm00 bash[20701]: audit 2026-03-10T07:40:13.369810+0000 mon.a (mon.0) 3261 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]': finished 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:14 vm00 bash[20701]: audit 2026-03-10T07:40:13.369810+0000 mon.a (mon.0) 3261 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]': finished 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:14 vm00 bash[20701]: audit 2026-03-10T07:40:13.377078+0000 mon.b (mon.1) 597 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-132", "mode": "writeback"}]: dispatch 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:14 vm00 bash[20701]: audit 2026-03-10T07:40:13.377078+0000 mon.b (mon.1) 597 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-132", "mode": "writeback"}]: dispatch 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:14 vm00 bash[20701]: cluster 2026-03-10T07:40:13.379011+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e648: 8 total, 8 up, 8 in 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:14 vm00 bash[20701]: cluster 2026-03-10T07:40:13.379011+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e648: 8 total, 8 up, 8 in 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:14 vm00 bash[20701]: audit 2026-03-10T07:40:13.379478+0000 mon.a (mon.0) 3263 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-132", "mode": "writeback"}]: dispatch 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:14 vm00 bash[20701]: audit 2026-03-10T07:40:13.379478+0000 mon.a (mon.0) 3263 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-132", "mode": "writeback"}]: dispatch 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:14 vm00 bash[20701]: audit 2026-03-10T07:40:13.875619+0000 mon.c (mon.2) 353 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:14 vm00 bash[20701]: audit 2026-03-10T07:40:13.875619+0000 mon.c (mon.2) 353 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:14 vm00 bash[20701]: audit 2026-03-10T07:40:14.179251+0000 mon.a (mon.0) 3264 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:14 vm00 bash[20701]: audit 2026-03-10T07:40:14.179251+0000 mon.a (mon.0) 3264 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:14 vm00 bash[20701]: audit 2026-03-10T07:40:14.190106+0000 mon.a (mon.0) 3265 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:40:14.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:14 vm00 bash[20701]: audit 2026-03-10T07:40:14.190106+0000 mon.a (mon.0) 3265 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:40:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:15 vm03 bash[23382]: audit 2026-03-10T07:40:13.511650+0000 mgr.y (mgr.24407) 576 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:40:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:15 vm03 bash[23382]: audit 2026-03-10T07:40:13.511650+0000 mgr.y (mgr.24407) 576 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:40:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:15 vm03 bash[23382]: cluster 2026-03-10T07:40:14.369990+0000 mon.a (mon.0) 3266 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:40:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:15 vm03 bash[23382]: cluster 2026-03-10T07:40:14.369990+0000 mon.a (mon.0) 3266 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:40:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:15 vm03 bash[23382]: audit 2026-03-10T07:40:14.372635+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-132", "mode": "writeback"}]': finished 2026-03-10T07:40:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:15 vm03 bash[23382]: audit 2026-03-10T07:40:14.372635+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-132", "mode": "writeback"}]': finished 2026-03-10T07:40:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:15 vm03 bash[23382]: cluster 2026-03-10T07:40:14.385454+0000 mon.a (mon.0) 3268 : cluster [DBG] osdmap e649: 8 total, 8 up, 8 in 2026-03-10T07:40:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:15 vm03 bash[23382]: cluster 2026-03-10T07:40:14.385454+0000 mon.a (mon.0) 3268 : cluster [DBG] osdmap e649: 8 total, 8 up, 8 in 2026-03-10T07:40:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:15 vm03 bash[23382]: audit 2026-03-10T07:40:14.502872+0000 mon.c (mon.2) 354 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:40:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:15 vm03 bash[23382]: audit 2026-03-10T07:40:14.502872+0000 mon.c (mon.2) 354 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:40:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:15 vm03 bash[23382]: audit 2026-03-10T07:40:14.503672+0000 mon.c (mon.2) 355 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:40:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:15 vm03 bash[23382]: audit 2026-03-10T07:40:14.503672+0000 mon.c (mon.2) 355 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:40:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:15 vm03 bash[23382]: audit 2026-03-10T07:40:14.509784+0000 mon.a (mon.0) 3269 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:40:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:15 vm03 bash[23382]: audit 2026-03-10T07:40:14.509784+0000 mon.a (mon.0) 3269 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:40:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:15 vm03 bash[23382]: cluster 2026-03-10T07:40:14.744639+0000 mgr.y (mgr.24407) 577 : cluster [DBG] pgmap v1007: 268 pgs: 8 unknown, 260 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T07:40:15.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:15 vm03 bash[23382]: cluster 2026-03-10T07:40:14.744639+0000 mgr.y (mgr.24407) 577 : cluster [DBG] pgmap v1007: 268 pgs: 8 unknown, 260 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T07:40:15.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:15 vm00 bash[28005]: audit 2026-03-10T07:40:13.511650+0000 mgr.y (mgr.24407) 576 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:40:15.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:15 vm00 bash[28005]: audit 2026-03-10T07:40:13.511650+0000 mgr.y (mgr.24407) 576 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:40:15.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:15 vm00 bash[28005]: cluster 2026-03-10T07:40:14.369990+0000 mon.a (mon.0) 3266 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:40:15.881 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:15 vm00 bash[28005]: cluster 2026-03-10T07:40:14.369990+0000 mon.a (mon.0) 3266 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:40:15.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:15 vm00 bash[28005]: audit 2026-03-10T07:40:14.372635+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-132", "mode": "writeback"}]': finished 2026-03-10T07:40:15.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:15 vm00 bash[28005]: audit 2026-03-10T07:40:14.372635+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-132", "mode": "writeback"}]': finished 2026-03-10T07:40:15.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:15 vm00 bash[28005]: cluster 2026-03-10T07:40:14.385454+0000 mon.a (mon.0) 3268 : cluster [DBG] osdmap e649: 8 total, 8 up, 8 in 2026-03-10T07:40:15.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:15 vm00 bash[28005]: cluster 2026-03-10T07:40:14.385454+0000 mon.a (mon.0) 3268 : cluster [DBG] osdmap e649: 8 total, 8 up, 8 in 2026-03-10T07:40:15.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:15 vm00 bash[28005]: audit 2026-03-10T07:40:14.502872+0000 mon.c (mon.2) 354 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:40:15.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:15 vm00 bash[28005]: audit 2026-03-10T07:40:14.502872+0000 mon.c (mon.2) 354 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:40:15.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:15 vm00 bash[28005]: audit 2026-03-10T07:40:14.503672+0000 mon.c (mon.2) 355 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:40:15.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:15 vm00 bash[28005]: audit 2026-03-10T07:40:14.503672+0000 mon.c (mon.2) 355 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:40:15.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:15 vm00 bash[28005]: audit 2026-03-10T07:40:14.509784+0000 mon.a (mon.0) 3269 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:40:15.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:15 vm00 bash[28005]: audit 2026-03-10T07:40:14.509784+0000 mon.a (mon.0) 3269 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:40:15.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:15 vm00 bash[28005]: cluster 2026-03-10T07:40:14.744639+0000 mgr.y (mgr.24407) 577 : cluster [DBG] pgmap v1007: 268 pgs: 8 unknown, 260 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T07:40:15.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:15 vm00 bash[28005]: cluster 2026-03-10T07:40:14.744639+0000 mgr.y (mgr.24407) 577 : cluster [DBG] pgmap v1007: 268 pgs: 8 unknown, 260 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T07:40:15.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:15 vm00 bash[20701]: audit 
2026-03-10T07:40:13.511650+0000 mgr.y (mgr.24407) 576 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:40:15.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:15 vm00 bash[20701]: audit 2026-03-10T07:40:13.511650+0000 mgr.y (mgr.24407) 576 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:40:15.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:15 vm00 bash[20701]: cluster 2026-03-10T07:40:14.369990+0000 mon.a (mon.0) 3266 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:40:15.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:15 vm00 bash[20701]: cluster 2026-03-10T07:40:14.369990+0000 mon.a (mon.0) 3266 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:40:15.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:15 vm00 bash[20701]: audit 2026-03-10T07:40:14.372635+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-132", "mode": "writeback"}]': finished 2026-03-10T07:40:15.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:15 vm00 bash[20701]: audit 2026-03-10T07:40:14.372635+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-132", "mode": "writeback"}]': finished 2026-03-10T07:40:15.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:15 vm00 bash[20701]: cluster 2026-03-10T07:40:14.385454+0000 mon.a (mon.0) 3268 : cluster [DBG] osdmap e649: 8 total, 8 up, 8 in 2026-03-10T07:40:15.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:15 vm00 bash[20701]: cluster 2026-03-10T07:40:14.385454+0000 mon.a (mon.0) 3268 : cluster [DBG] osdmap e649: 8 total, 8 up, 8 in 2026-03-10T07:40:15.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:15 vm00 bash[20701]: audit 2026-03-10T07:40:14.502872+0000 mon.c (mon.2) 354 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:40:15.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:15 vm00 bash[20701]: audit 2026-03-10T07:40:14.502872+0000 mon.c (mon.2) 354 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:40:15.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:15 vm00 bash[20701]: audit 2026-03-10T07:40:14.503672+0000 mon.c (mon.2) 355 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:40:15.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:15 vm00 bash[20701]: audit 2026-03-10T07:40:14.503672+0000 mon.c (mon.2) 355 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:40:15.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:15 vm00 bash[20701]: audit 2026-03-10T07:40:14.509784+0000 mon.a (mon.0) 3269 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:40:15.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:15 vm00 bash[20701]: audit 2026-03-10T07:40:14.509784+0000 mon.a (mon.0) 3269 : audit [INF] 
from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:40:15.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:15 vm00 bash[20701]: cluster 2026-03-10T07:40:14.744639+0000 mgr.y (mgr.24407) 577 : cluster [DBG] pgmap v1007: 268 pgs: 8 unknown, 260 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s
2026-03-10T07:40:16.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:16 vm03 bash[23382]: cluster 2026-03-10T07:40:15.396485+0000 mon.a (mon.0) 3270 : cluster [DBG] osdmap e650: 8 total, 8 up, 8 in
2026-03-10T07:40:16.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:16 vm00 bash[28005]: cluster 2026-03-10T07:40:15.396485+0000 mon.a (mon.0) 3270 : cluster [DBG] osdmap e650: 8 total, 8 up, 8 in
2026-03-10T07:40:16.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:16 vm00 bash[20701]: cluster 2026-03-10T07:40:15.396485+0000 mon.a (mon.0) 3270 : cluster [DBG] osdmap e650: 8 total, 8 up, 8 in
2026-03-10T07:40:17.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:17 vm03 bash[23382]: cluster 2026-03-10T07:40:16.417220+0000 mon.a (mon.0) 3271 : cluster [DBG] osdmap e651: 8 total, 8 up, 8 in
2026-03-10T07:40:17.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:17 vm03 bash[23382]: audit 2026-03-10T07:40:16.459295+0000 mon.a (mon.0) 3272 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:17.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:17 vm03 bash[23382]: audit 2026-03-10T07:40:16.459755+0000 mon.b (mon.1) 598 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:17.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:17 vm03 bash[23382]: cluster 2026-03-10T07:40:16.745013+0000 mgr.y (mgr.24407) 578 : cluster [DBG] pgmap v1010: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:40:17.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:17 vm03 bash[23382]: cluster 2026-03-10T07:40:16.789306+0000 mon.a (mon.0) 3273 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:40:17.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:17 vm00 bash[28005]: cluster 2026-03-10T07:40:16.417220+0000 mon.a (mon.0) 3271 : cluster [DBG] osdmap e651: 8 total, 8 up, 8 in
2026-03-10T07:40:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:17 vm00 bash[28005]: audit 2026-03-10T07:40:16.459295+0000 mon.a (mon.0) 3272 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:17 vm00 bash[28005]: audit 2026-03-10T07:40:16.459755+0000 mon.b (mon.1) 598 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:17 vm00 bash[28005]: cluster 2026-03-10T07:40:16.745013+0000 mgr.y (mgr.24407) 578 : cluster [DBG] pgmap v1010: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:40:17.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:17 vm00 bash[28005]: cluster 2026-03-10T07:40:16.789306+0000 mon.a (mon.0) 3273 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:40:17.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:17 vm00 bash[20701]: cluster 2026-03-10T07:40:16.417220+0000 mon.a (mon.0) 3271 : cluster [DBG] osdmap e651: 8 total, 8 up, 8 in
2026-03-10T07:40:17.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:17 vm00 bash[20701]: audit 2026-03-10T07:40:16.459295+0000 mon.a (mon.0) 3272 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:17.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:17 vm00 bash[20701]: audit 2026-03-10T07:40:16.459755+0000 mon.b (mon.1) 598 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:17.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:17 vm00 bash[20701]: cluster 2026-03-10T07:40:16.745013+0000 mgr.y (mgr.24407) 578 : cluster [DBG] pgmap v1010: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:40:17.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:17 vm00 bash[20701]: cluster 2026-03-10T07:40:16.789306+0000 mon.a (mon.0) 3273 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:40:18.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:18 vm03 bash[23382]: audit 2026-03-10T07:40:17.423297+0000 mon.a (mon.0) 3274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:40:18.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:18 vm03 bash[23382]: cluster 2026-03-10T07:40:17.435143+0000 mon.a (mon.0) 3275 : cluster [DBG] osdmap e652: 8 total, 8 up, 8 in
2026-03-10T07:40:18.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:18 vm03 bash[23382]: audit 2026-03-10T07:40:17.446600+0000 mon.a (mon.0) 3276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]: dispatch
2026-03-10T07:40:18.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:18 vm03 bash[23382]: audit 2026-03-10T07:40:17.447046+0000 mon.b (mon.1) 599 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]: dispatch
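The audit entries above show the rados API workunit cycling the cache-tier overlay on pool test-rados-api-vm00-59782-111 (cache pool test-rados-api-vm00-59782-132): each mon command is logged once at dispatch and again when it commits ('finished'). A minimal sketch of the same sequence driven through the ceph CLI from Python; it assumes an admin keyring on the host and that both pools already exist (the names are simply the ones from the log).

```python
import subprocess

def ceph(*args):
    # Run a ceph CLI command; raises CalledProcessError on failure.
    return subprocess.run(("ceph",) + args, check=True,
                          capture_output=True, text=True).stdout

base = "test-rados-api-vm00-59782-111"
cache = "test-rados-api-vm00-59782-132"

# Detach the overlay, then route client I/O through the cache pool again,
# mirroring the 'osd tier remove-overlay' / 'osd tier set-overlay' mon
# commands audited above.
ceph("osd", "tier", "remove-overlay", base)
ceph("osd", "tier", "set-overlay", base, cache)
```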
2026-03-10T07:40:18.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:18 vm00 bash[28005]: audit 2026-03-10T07:40:17.423297+0000 mon.a (mon.0) 3274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:40:18.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:18 vm00 bash[28005]: cluster 2026-03-10T07:40:17.435143+0000 mon.a (mon.0) 3275 : cluster [DBG] osdmap e652: 8 total, 8 up, 8 in
2026-03-10T07:40:18.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:18 vm00 bash[28005]: audit 2026-03-10T07:40:17.446600+0000 mon.a (mon.0) 3276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]: dispatch
2026-03-10T07:40:18.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:18 vm00 bash[28005]: audit 2026-03-10T07:40:17.447046+0000 mon.b (mon.1) 599 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]: dispatch
2026-03-10T07:40:18.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:18 vm00 bash[20701]: audit 2026-03-10T07:40:17.423297+0000 mon.a (mon.0) 3274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:40:18.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:18 vm00 bash[20701]: cluster 2026-03-10T07:40:17.435143+0000 mon.a (mon.0) 3275 : cluster [DBG] osdmap e652: 8 total, 8 up, 8 in
2026-03-10T07:40:18.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:18 vm00 bash[20701]: audit 2026-03-10T07:40:17.446600+0000 mon.a (mon.0) 3276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]: dispatch
2026-03-10T07:40:18.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:18 vm00 bash[20701]: audit 2026-03-10T07:40:17.447046+0000 mon.b (mon.1) 599 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]: dispatch
2026-03-10T07:40:19.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:19 vm03 bash[23382]: audit 2026-03-10T07:40:18.446552+0000 mon.a (mon.0) 3277 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]': finished
2026-03-10T07:40:19.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:19 vm03 bash[23382]: cluster 2026-03-10T07:40:18.454223+0000 mon.a (mon.0) 3278 : cluster [DBG] osdmap e653: 8 total, 8 up, 8 in
2026-03-10T07:40:19.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:19 vm03 bash[23382]: cluster 2026-03-10T07:40:18.745480+0000 mgr.y (mgr.24407) 579 : cluster [DBG] pgmap v1013: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:40:19.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:19 vm00 bash[28005]: audit 2026-03-10T07:40:18.446552+0000 mon.a (mon.0) 3277 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]': finished
2026-03-10T07:40:19.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:19 vm00 bash[28005]: cluster 2026-03-10T07:40:18.454223+0000 mon.a (mon.0) 3278 : cluster [DBG] osdmap e653: 8 total, 8 up, 8 in
2026-03-10T07:40:19.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:19 vm00 bash[28005]: cluster 2026-03-10T07:40:18.745480+0000 mgr.y (mgr.24407) 579 : cluster [DBG] pgmap v1013: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:40:19.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:19 vm00 bash[20701]: audit 2026-03-10T07:40:18.446552+0000 mon.a (mon.0) 3277 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-132"}]': finished
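The paired 'dispatch'/'finished' audit entries also make it easy to measure how long the mons take to commit each command (roughly a second per tier operation in this stream). A rough sketch; the regex is fitted only to the audit format seen here, and datetime.fromisoformat needs Python 3.11+ to accept the +0000 offset.

```python
import re
from datetime import datetime

# Matches mon audit entries such as
#   "audit 2026-03-10T07:40:17.446600+0000 mon.a (mon.0) 3276 : audit [INF]
#    ... cmd=[{...}]: dispatch"
AUDIT = re.compile(
    r"audit (?P<ts>\S+) mon\.\w+ \(mon\.\d+\) \d+ : audit \[INF\] "
    r".*cmd='?(?P<cmd>\[.*?\])'?: (?P<phase>dispatch|finished)")

def command_latency(lines):
    # Pair each dispatched command with its 'finished' entry and yield
    # the time the mon cluster took to commit it, in seconds.
    pending = {}
    for line in lines:
        m = AUDIT.search(line)
        if not m:
            continue
        ts = datetime.fromisoformat(m["ts"])  # Python 3.11+ for '+0000'
        if m["phase"] == "dispatch":
            pending.setdefault(m["cmd"], ts)  # keep the earliest dispatch
        elif m["cmd"] in pending:
            yield m["cmd"], (ts - pending.pop(m["cmd"])).total_seconds()
```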
2026-03-10T07:40:19.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:19 vm00 bash[20701]: cluster 2026-03-10T07:40:18.454223+0000 mon.a (mon.0) 3278 : cluster [DBG] osdmap e653: 8 total, 8 up, 8 in
2026-03-10T07:40:19.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:19 vm00 bash[20701]: cluster 2026-03-10T07:40:18.745480+0000 mgr.y (mgr.24407) 579 : cluster [DBG] pgmap v1013: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:40:20.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:20 vm03 bash[23382]: cluster 2026-03-10T07:40:19.463589+0000 mon.a (mon.0) 3279 : cluster [DBG] osdmap e654: 8 total, 8 up, 8 in
2026-03-10T07:40:20.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:20 vm03 bash[23382]: audit 2026-03-10T07:40:19.493663+0000 mon.a (mon.0) 3280 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:20.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:20 vm03 bash[23382]: audit 2026-03-10T07:40:19.493987+0000 mon.b (mon.1) 600 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:20.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:20 vm00 bash[28005]: cluster 2026-03-10T07:40:19.463589+0000 mon.a (mon.0) 3279 : cluster [DBG] osdmap e654: 8 total, 8 up, 8 in
2026-03-10T07:40:20.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:20 vm00 bash[28005]: audit 2026-03-10T07:40:19.493663+0000 mon.a (mon.0) 3280 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:20.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:20 vm00 bash[28005]: audit 2026-03-10T07:40:19.493987+0000 mon.b (mon.1) 600 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:20 vm00 bash[20701]: cluster 2026-03-10T07:40:19.463589+0000 mon.a (mon.0) 3279 : cluster [DBG] osdmap e654: 8 total, 8 up, 8 in
2026-03-10T07:40:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:20 vm00 bash[20701]: audit 2026-03-10T07:40:19.493663+0000 mon.a (mon.0) 3280 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:20.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:20 vm00 bash[20701]: audit 2026-03-10T07:40:19.493987+0000 mon.b (mon.1) 600 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:21.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:40:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:40:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:40:21.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:21 vm03 bash[23382]: audit 2026-03-10T07:40:20.468967+0000 mon.a (mon.0) 3281 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
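The mgr's pgmap digests interleaved with the audit trail ('pgmap v1013: 268 pgs: 268 active+clean; ...') are the quickest health signal while the test churns pools and snapshots. A small parser for those digests, again shaped only to the formatting seen in this log:

```python
import re

PGMAP = re.compile(r"pgmap v(\d+): (\d+) pgs: (.+?);")

def pg_states(line):
    # Return (version, total_pgs, {state: count}) for a mgr pgmap digest,
    # e.g. "pgmap v1007: 268 pgs: 8 unknown, 260 active+clean; ..."
    m = PGMAP.search(line)
    if not m:
        return None
    version, total, states = int(m[1]), int(m[2]), {}
    for part in m[3].split(", "):
        count, state = part.split(" ", 1)
        states[state] = int(count)
    return version, total, states

sample = ("cluster [DBG] pgmap v1007: 268 pgs: 8 unknown, "
          "260 active+clean; 455 KiB data, 1.0 GiB used")
print(pg_states(sample))  # (1007, 268, {'unknown': 8, 'active+clean': 260})
```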
2026-03-10T07:40:21.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:21 vm03 bash[23382]: cluster 2026-03-10T07:40:20.476409+0000 mon.a (mon.0) 3282 : cluster [DBG] osdmap e655: 8 total, 8 up, 8 in
2026-03-10T07:40:21.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:21 vm03 bash[23382]: audit 2026-03-10T07:40:20.476915+0000 mon.b (mon.1) 601 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132"}]: dispatch
2026-03-10T07:40:21.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:21 vm03 bash[23382]: audit 2026-03-10T07:40:20.477794+0000 mon.a (mon.0) 3283 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132"}]: dispatch
2026-03-10T07:40:21.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:21 vm03 bash[23382]: cluster 2026-03-10T07:40:20.745838+0000 mgr.y (mgr.24407) 580 : cluster [DBG] pgmap v1016: 268 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 265 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.2 KiB/s wr, 3 op/s
2026-03-10T07:40:21.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:21 vm00 bash[28005]: audit 2026-03-10T07:40:20.468967+0000 mon.a (mon.0) 3281 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:40:21.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:21 vm00 bash[28005]: cluster 2026-03-10T07:40:20.476409+0000 mon.a (mon.0) 3282 : cluster [DBG] osdmap e655: 8 total, 8 up, 8 in
2026-03-10T07:40:21.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:21 vm00 bash[28005]: audit 2026-03-10T07:40:20.476915+0000 mon.b (mon.1) 601 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132"}]: dispatch
2026-03-10T07:40:21.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:21 vm00 bash[28005]: audit 2026-03-10T07:40:20.477794+0000 mon.a (mon.0) 3283 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132"}]: dispatch
2026-03-10T07:40:21.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:21 vm00 bash[28005]: cluster 2026-03-10T07:40:20.745838+0000 mgr.y (mgr.24407) 580 : cluster [DBG] pgmap v1016: 268 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 265 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.2 KiB/s wr, 3 op/s
2026-03-10T07:40:21.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:21 vm00 bash[20701]: audit 2026-03-10T07:40:20.468967+0000 mon.a (mon.0) 3281 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:40:21.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:21 vm00 bash[20701]: cluster 2026-03-10T07:40:20.476409+0000 mon.a (mon.0) 3282 : cluster [DBG] osdmap e655: 8 total, 8 up, 8 in
2026-03-10T07:40:21.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:21 vm00 bash[20701]: audit 2026-03-10T07:40:20.476915+0000 mon.b (mon.1) 601 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132"}]: dispatch
2026-03-10T07:40:21.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:21 vm00 bash[20701]: audit 2026-03-10T07:40:20.477794+0000 mon.a (mon.0) 3283 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132"}]: dispatch
2026-03-10T07:40:21.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:21 vm00 bash[20701]: cluster 2026-03-10T07:40:20.745838+0000 mgr.y (mgr.24407) 580 : cluster [DBG] pgmap v1016: 268 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 265 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.2 KiB/s wr, 3 op/s
2026-03-10T07:40:22.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:22 vm00 bash[28005]: cluster 2026-03-10T07:40:21.470977+0000 mon.a (mon.0) 3284 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:40:22.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:22 vm00 bash[28005]: audit 2026-03-10T07:40:21.478302+0000 mon.a (mon.0) 3285 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132"}]': finished
2026-03-10T07:40:22.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:22 vm00 bash[28005]: cluster 2026-03-10T07:40:21.509562+0000 mon.a (mon.0) 3286 : cluster [DBG] osdmap e656: 8 total, 8 up, 8 in
2026-03-10T07:40:22.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:22 vm00 bash[20701]: cluster 2026-03-10T07:40:21.470977+0000 mon.a (mon.0) 3284 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:40:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:22 vm00 bash[20701]: audit 2026-03-10T07:40:21.478302+0000 mon.a (mon.0) 3285 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132"}]': finished
2026-03-10T07:40:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:22 vm00 bash[20701]: cluster 2026-03-10T07:40:21.509562+0000 mon.a (mon.0) 3286 : cluster [DBG] osdmap e656: 8 total, 8 up, 8 in
2026-03-10T07:40:23.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:22 vm03 bash[23382]: cluster 2026-03-10T07:40:21.470977+0000 mon.a (mon.0) 3284 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:40:23.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:22 vm03 bash[23382]: audit 2026-03-10T07:40:21.478302+0000 mon.a (mon.0) 3285 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-132"}]': finished
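The CACHE_POOL_NO_HIT_SET clearance follows the 'osd tier remove' that detached cache pool test-rados-api-vm00-59782-132: the mons raise that check while a cache pool is attached without hit_set parameters, so it comes and goes as the test attaches and removes tiers. For reference, a sketch of setting hit_set parameters on a cache pool; the values are illustrative, not taken from this run.

```python
import subprocess

def ceph(*args):
    # Thin wrapper over the ceph CLI; raises on nonzero exit.
    subprocess.run(("ceph",) + args, check=True)

cache = "test-rados-api-vm00-59782-132"

# A cache pool tracks recent object access in hit_sets; without these
# settings the mons report CACHE_POOL_NO_HIT_SET. Values are illustrative.
ceph("osd", "pool", "set", cache, "hit_set_type", "bloom")
ceph("osd", "pool", "set", cache, "hit_set_count", "8")
ceph("osd", "pool", "set", cache, "hit_set_period", "60")
```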
2026-03-10T07:40:23.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:22 vm03 bash[23382]: cluster 2026-03-10T07:40:21.509562+0000 mon.a (mon.0) 3286 : cluster [DBG] osdmap e656: 8 total, 8 up, 8 in
2026-03-10T07:40:23.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:23 vm00 bash[28005]: cluster 2026-03-10T07:40:22.488070+0000 mon.a (mon.0) 3287 : cluster [DBG] osdmap e657: 8 total, 8 up, 8 in
2026-03-10T07:40:23.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:23 vm00 bash[28005]: cluster 2026-03-10T07:40:22.746192+0000 mgr.y (mgr.24407) 581 : cluster [DBG] pgmap v1019: 236 pgs: 1 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 234 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T07:40:23.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:23 vm00 bash[28005]: cluster 2026-03-10T07:40:23.488381+0000 mon.a (mon.0) 3288 : cluster [DBG] osdmap e658: 8 total, 8 up, 8 in
2026-03-10T07:40:23.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:23 vm00 bash[28005]: audit 2026-03-10T07:40:23.495446+0000 mon.b (mon.1) 602 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-134","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:40:23.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:23 vm00 bash[28005]: audit 2026-03-10T07:40:23.511016+0000 mon.a (mon.0) 3289 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-134","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:40:23.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:23 vm00 bash[20701]: cluster 2026-03-10T07:40:22.488070+0000 mon.a (mon.0) 3287 : cluster [DBG] osdmap e657: 8 total, 8 up, 8 in
2026-03-10T07:40:23.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:23 vm00 bash[20701]: cluster 2026-03-10T07:40:22.746192+0000 mgr.y (mgr.24407) 581 : cluster [DBG] pgmap v1019: 236 pgs: 1 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 234 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T07:40:23.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:23 vm00 bash[20701]: cluster 2026-03-10T07:40:23.488381+0000 mon.a (mon.0) 3288 : cluster [DBG] osdmap e658: 8 total, 8 up, 8 in
2026-03-10T07:40:23.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:23 vm00 bash[20701]: audit 2026-03-10T07:40:23.495446+0000 mon.b (mon.1) 602 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-134","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:40:23.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:23 vm00 bash[20701]: audit 2026-03-10T07:40:23.511016+0000 mon.a (mon.0) 3289 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-134","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:40:24.013 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:40:23 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:40:24.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:23 vm03 bash[23382]: cluster 2026-03-10T07:40:22.488070+0000 mon.a (mon.0) 3287 : cluster [DBG] osdmap e657: 8 total, 8 up, 8 in
2026-03-10T07:40:24.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:23 vm03 bash[23382]: cluster 2026-03-10T07:40:22.746192+0000 mgr.y (mgr.24407) 581 : cluster [DBG] pgmap v1019: 236 pgs: 1 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 234 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T07:40:24.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:23 vm03 bash[23382]: cluster 2026-03-10T07:40:23.488381+0000 mon.a (mon.0) 3288 : cluster [DBG] osdmap e658: 8 total, 8 up, 8 in
2026-03-10T07:40:24.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:23 vm03 bash[23382]: audit 2026-03-10T07:40:23.495446+0000 mon.b (mon.1) 602 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-134","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:40:24.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:23 vm03 bash[23382]: audit 2026-03-10T07:40:23.511016+0000 mon.a (mon.0) 3289 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-134","app": "rados","yes_i_really_mean_it": true}]: dispatch
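The 'osd pool application enable' commands above are the counterpart of the earlier POOL_APP_NOT_ENABLED health updates: every pool is expected to declare an application, and the test tags its scratch pool test-rados-api-vm00-59782-134 with 'rados'. The CLI equivalent of the audited mon command, as a sketch:

```python
import subprocess

pool = "test-rados-api-vm00-59782-134"

# Equivalent of the audited mon command: tag the pool so the
# POOL_APP_NOT_ENABLED health check no longer counts it.
subprocess.run(
    ("ceph", "osd", "pool", "application", "enable", pool, "rados",
     "--yes-i-really-mean-it"),
    check=True)
```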
2026-03-10T07:40:24.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:24 vm00 bash[28005]: audit 2026-03-10T07:40:23.521907+0000 mgr.y (mgr.24407) 582 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:40:24.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:24 vm00 bash[28005]: audit 2026-03-10T07:40:24.489201+0000 mon.a (mon.0) 3290 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-134","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:40:24.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:24 vm00 bash[28005]: cluster 2026-03-10T07:40:24.499586+0000 mon.a (mon.0) 3291 : cluster [DBG] osdmap e659: 8 total, 8 up, 8 in
2026-03-10T07:40:24.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:24 vm00 bash[28005]: audit 2026-03-10T07:40:24.517852+0000 mon.b (mon.1) 603 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-134", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:40:24.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:24 vm00 bash[28005]: audit 2026-03-10T07:40:24.521599+0000 mon.a (mon.0) 3292 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-134", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:40:24.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:24 vm00 bash[20701]: audit 2026-03-10T07:40:23.521907+0000 mgr.y (mgr.24407) 582 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:40:24.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:24 vm00 bash[20701]: audit 2026-03-10T07:40:24.489201+0000 mon.a (mon.0) 3290 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-134","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:40:24.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:24 vm00 bash[20701]: cluster 2026-03-10T07:40:24.499586+0000 mon.a (mon.0) 3291 : cluster [DBG] osdmap e659: 8 total, 8 up, 8 in
2026-03-10T07:40:24.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:24 vm00 bash[20701]: audit 2026-03-10T07:40:24.517852+0000 mon.b (mon.1) 603 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-134", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:40:24.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:24 vm00 bash[20701]: audit 2026-03-10T07:40:24.521599+0000 mon.a (mon.0) 3292 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-134", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:40:25.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:24 vm03 bash[23382]: audit 2026-03-10T07:40:23.521907+0000 mgr.y (mgr.24407) 582 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:40:25.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:24 vm03 bash[23382]: audit 2026-03-10T07:40:24.489201+0000 mon.a (mon.0) 3290 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-134","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:40:25.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:24 vm03 bash[23382]: cluster 2026-03-10T07:40:24.499586+0000 mon.a (mon.0) 3291 : cluster [DBG] osdmap e659: 8 total, 8 up, 8 in
2026-03-10T07:40:25.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:24 vm03 bash[23382]: audit 2026-03-10T07:40:24.517852+0000 mon.b (mon.1) 603 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-134", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:40:25.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:24 vm03 bash[23382]: audit 2026-03-10T07:40:24.521599+0000 mon.a (mon.0) 3292 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-134", "force_nonempty": "--force-nonempty" }]: dispatch
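Audit entries 3290-3292 rebuild the tier with a different cache pool: test-rados-api-vm00-59782-134 is tagged with an application, then attached to the base pool with --force-nonempty, which tells the mon to accept a cache pool that already contains objects (normally refused, since pre-existing cache objects can shadow base-pool data). The matching CLI calls, sketched:

```python
import subprocess

def ceph(*args):
    subprocess.run(("ceph",) + args, check=True)

base = "test-rados-api-vm00-59782-111"
cache = "test-rados-api-vm00-59782-134"

# Attach a non-empty pool as a cache tier (refused without
# --force-nonempty) and route client I/O through it.
ceph("osd", "tier", "add", base, cache, "--force-nonempty")
ceph("osd", "tier", "set-overlay", base, cache)
```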
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:25 vm00 bash[28005]: cluster 2026-03-10T07:40:24.746751+0000 mgr.y (mgr.24407) 583 : cluster [DBG] pgmap v1022: 268 pgs: 2 unknown, 266 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:40:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:25 vm00 bash[28005]: cluster 2026-03-10T07:40:24.746751+0000 mgr.y (mgr.24407) 583 : cluster [DBG] pgmap v1022: 268 pgs: 2 unknown, 266 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:40:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:25 vm00 bash[28005]: audit 2026-03-10T07:40:24.773057+0000 mon.c (mon.2) 356 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:40:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:25 vm00 bash[28005]: audit 2026-03-10T07:40:24.773057+0000 mon.c (mon.2) 356 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:40:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:25 vm00 bash[28005]: audit 2026-03-10T07:40:25.492703+0000 mon.a (mon.0) 3293 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-134", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:40:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:25 vm00 bash[28005]: audit 2026-03-10T07:40:25.492703+0000 mon.a (mon.0) 3293 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-134", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:40:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:25 vm00 bash[28005]: audit 2026-03-10T07:40:25.498654+0000 mon.b (mon.1) 604 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-134"}]: dispatch 2026-03-10T07:40:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:25 vm00 bash[28005]: audit 2026-03-10T07:40:25.498654+0000 mon.b (mon.1) 604 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-134"}]: dispatch 2026-03-10T07:40:25.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:25 vm00 bash[28005]: cluster 2026-03-10T07:40:25.502226+0000 mon.a (mon.0) 3294 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in 2026-03-10T07:40:25.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:25 vm00 bash[28005]: cluster 2026-03-10T07:40:25.502226+0000 mon.a (mon.0) 3294 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in 2026-03-10T07:40:25.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:25 vm00 bash[28005]: audit 2026-03-10T07:40:25.503620+0000 mon.a (mon.0) 3295 : audit [INF] from='client.? 
2026-03-10T07:40:25.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:25 vm00 bash[28005]: audit 2026-03-10T07:40:25.503620+0000 mon.a (mon.0) 3295 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-134"}]: dispatch
2026-03-10T07:40:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:25 vm00 bash[20701]: cluster 2026-03-10T07:40:24.746751+0000 mgr.y (mgr.24407) 583 : cluster [DBG] pgmap v1022: 268 pgs: 2 unknown, 266 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:40:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:25 vm00 bash[20701]: audit 2026-03-10T07:40:24.773057+0000 mon.c (mon.2) 356 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:40:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:25 vm00 bash[20701]: audit 2026-03-10T07:40:25.492703+0000 mon.a (mon.0) 3293 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-134", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:40:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:25 vm00 bash[20701]: audit 2026-03-10T07:40:25.498654+0000 mon.b (mon.1) 604 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-134"}]: dispatch
2026-03-10T07:40:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:25 vm00 bash[20701]: cluster 2026-03-10T07:40:25.502226+0000 mon.a (mon.0) 3294 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in
2026-03-10T07:40:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:25 vm00 bash[20701]: audit 2026-03-10T07:40:25.503620+0000 mon.a (mon.0) 3295 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-134"}]: dispatch
2026-03-10T07:40:26.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:25 vm03 bash[23382]: cluster 2026-03-10T07:40:24.746751+0000 mgr.y (mgr.24407) 583 : cluster [DBG] pgmap v1022: 268 pgs: 2 unknown, 266 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:40:26.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:25 vm03 bash[23382]: audit 2026-03-10T07:40:24.773057+0000 mon.c (mon.2) 356 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:40:26.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:25 vm03 bash[23382]: audit 2026-03-10T07:40:25.492703+0000 mon.a (mon.0) 3293 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-134", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:40:26.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:25 vm03 bash[23382]: audit 2026-03-10T07:40:25.498654+0000 mon.b (mon.1) 604 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-134"}]: dispatch
2026-03-10T07:40:26.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:25 vm03 bash[23382]: cluster 2026-03-10T07:40:25.502226+0000 mon.a (mon.0) 3294 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in
2026-03-10T07:40:26.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:25 vm03 bash[23382]: audit 2026-03-10T07:40:25.503620+0000 mon.a (mon.0) 3295 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-134"}]: dispatch
2026-03-10T07:40:27.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:27 vm03 bash[23382]: audit 2026-03-10T07:40:26.495749+0000 mon.a (mon.0) 3296 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-134"}]': finished
2026-03-10T07:40:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:27 vm03 bash[23382]: audit 2026-03-10T07:40:26.501928+0000 mon.b (mon.1) 605 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-134", "mode": "writeback"}]: dispatch
2026-03-10T07:40:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:27 vm03 bash[23382]: cluster 2026-03-10T07:40:26.502535+0000 mon.a (mon.0) 3297 : cluster [DBG] osdmap e661: 8 total, 8 up, 8 in
2026-03-10T07:40:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:27 vm03 bash[23382]: audit 2026-03-10T07:40:26.504943+0000 mon.a (mon.0) 3298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-134", "mode": "writeback"}]: dispatch
2026-03-10T07:40:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:27 vm03 bash[23382]: cluster 2026-03-10T07:40:26.747134+0000 mgr.y (mgr.24407) 584 : cluster [DBG] pgmap v1025: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:40:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:27 vm03 bash[23382]: cluster 2026-03-10T07:40:26.791162+0000 mon.a (mon.0) 3299 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:40:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:27 vm03 bash[23382]: audit 2026-03-10T07:40:26.794126+0000 mon.a (mon.0) 3300 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-134", "mode": "writeback"}]': finished
2026-03-10T07:40:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:27 vm03 bash[23382]: cluster 2026-03-10T07:40:26.804024+0000 mon.a (mon.0) 3301 : cluster [DBG] osdmap e662: 8 total, 8 up, 8 in
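Next the overlay is set and the cache mode flipped to writeback (mon.a 3295 and 3298, osdmap e661), at which point mon.a raises CACHE_POOL_NO_HIT_SET because the new writeback tier has no hit_set configured. A sketch of the equivalent CLI, plus the hit_set parameters that would satisfy the health check (the parameter values below are illustrative, not taken from the log):

    ceph osd tier set-overlay test-rados-api-vm00-59782-111 test-rados-api-vm00-59782-134
    ceph osd tier cache-mode test-rados-api-vm00-59782-134 writeback
    # a writeback tier is expected to track object temperature via hit_sets;
    # without these settings the mons flag CACHE_POOL_NO_HIT_SET, as seen above:
    ceph osd pool set test-rados-api-vm00-59782-134 hit_set_type bloom
    ceph osd pool set test-rados-api-vm00-59782-134 hit_set_count 8
    ceph osd pool set test-rados-api-vm00-59782-134 hit_set_period 60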
2026-03-10T07:40:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:27 vm03 bash[23382]: audit 2026-03-10T07:40:26.861146+0000 mon.a (mon.0) 3302 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:27.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:27 vm03 bash[23382]: audit 2026-03-10T07:40:26.861767+0000 mon.b (mon.1) 606 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:27.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:27 vm00 bash[28005]: audit 2026-03-10T07:40:26.495749+0000 mon.a (mon.0) 3296 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-134"}]': finished
2026-03-10T07:40:27.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:27 vm00 bash[28005]: audit 2026-03-10T07:40:26.501928+0000 mon.b (mon.1) 605 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-134", "mode": "writeback"}]: dispatch
2026-03-10T07:40:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:27 vm00 bash[28005]: cluster 2026-03-10T07:40:26.502535+0000 mon.a (mon.0) 3297 : cluster [DBG] osdmap e661: 8 total, 8 up, 8 in
2026-03-10T07:40:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:27 vm00 bash[28005]: audit 2026-03-10T07:40:26.504943+0000 mon.a (mon.0) 3298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-134", "mode": "writeback"}]: dispatch
2026-03-10T07:40:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:27 vm00 bash[28005]: cluster 2026-03-10T07:40:26.747134+0000 mgr.y (mgr.24407) 584 : cluster [DBG] pgmap v1025: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:40:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:27 vm00 bash[28005]: cluster 2026-03-10T07:40:26.791162+0000 mon.a (mon.0) 3299 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:40:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:27 vm00 bash[28005]: audit 2026-03-10T07:40:26.794126+0000 mon.a (mon.0) 3300 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-134", "mode": "writeback"}]': finished
2026-03-10T07:40:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:27 vm00 bash[28005]: cluster 2026-03-10T07:40:26.804024+0000 mon.a (mon.0) 3301 : cluster [DBG] osdmap e662: 8 total, 8 up, 8 in
2026-03-10T07:40:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:27 vm00 bash[28005]: audit 2026-03-10T07:40:26.861146+0000 mon.a (mon.0) 3302 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:27 vm00 bash[28005]: audit 2026-03-10T07:40:26.861767+0000 mon.b (mon.1) 606 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:27 vm00 bash[20701]: audit 2026-03-10T07:40:26.495749+0000 mon.a (mon.0) 3296 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-134"}]': finished
2026-03-10T07:40:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:27 vm00 bash[20701]: audit 2026-03-10T07:40:26.501928+0000 mon.b (mon.1) 605 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-134", "mode": "writeback"}]: dispatch
2026-03-10T07:40:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:27 vm00 bash[20701]: cluster 2026-03-10T07:40:26.502535+0000 mon.a (mon.0) 3297 : cluster [DBG] osdmap e661: 8 total, 8 up, 8 in
2026-03-10T07:40:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:27 vm00 bash[20701]: audit 2026-03-10T07:40:26.504943+0000 mon.a (mon.0) 3298 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-134", "mode": "writeback"}]: dispatch
2026-03-10T07:40:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:27 vm00 bash[20701]: cluster 2026-03-10T07:40:26.747134+0000 mgr.y (mgr.24407) 584 : cluster [DBG] pgmap v1025: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:40:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:27 vm00 bash[20701]: cluster 2026-03-10T07:40:26.791162+0000 mon.a (mon.0) 3299 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:40:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:27 vm00 bash[20701]: audit 2026-03-10T07:40:26.794126+0000 mon.a (mon.0) 3300 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-134", "mode": "writeback"}]': finished
2026-03-10T07:40:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:27 vm00 bash[20701]: cluster 2026-03-10T07:40:26.804024+0000 mon.a (mon.0) 3301 : cluster [DBG] osdmap e662: 8 total, 8 up, 8 in
2026-03-10T07:40:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:27 vm00 bash[20701]: audit 2026-03-10T07:40:26.861146+0000 mon.a (mon.0) 3302 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:27 vm00 bash[20701]: audit 2026-03-10T07:40:26.861767+0000 mon.b (mon.1) 606 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:40:29.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:28 vm00 bash[28005]: audit 2026-03-10T07:40:27.797973+0000 mon.a (mon.0) 3303 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:40:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:28 vm00 bash[28005]: audit 2026-03-10T07:40:27.807410+0000 mon.b (mon.1) 607 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-134"}]: dispatch
2026-03-10T07:40:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:28 vm00 bash[28005]: cluster 2026-03-10T07:40:27.807727+0000 mon.a (mon.0) 3304 : cluster [DBG] osdmap e663: 8 total, 8 up, 8 in
2026-03-10T07:40:29.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:28 vm00 bash[28005]: audit 2026-03-10T07:40:27.808467+0000 mon.a (mon.0) 3305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-134"}]: dispatch
2026-03-10T07:40:29.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:28 vm00 bash[20701]: audit 2026-03-10T07:40:27.797973+0000 mon.a (mon.0) 3303 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:40:29.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:28 vm00 bash[20701]: audit 2026-03-10T07:40:27.807410+0000 mon.b (mon.1) 607 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-134"}]: dispatch
2026-03-10T07:40:29.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:28 vm00 bash[20701]: cluster 2026-03-10T07:40:27.807727+0000 mon.a (mon.0) 3304 : cluster [DBG] osdmap e663: 8 total, 8 up, 8 in
2026-03-10T07:40:29.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:28 vm00 bash[20701]: audit 2026-03-10T07:40:27.808467+0000 mon.a (mon.0) 3305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-134"}]: dispatch
2026-03-10T07:40:29.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:28 vm03 bash[23382]: audit 2026-03-10T07:40:27.797973+0000 mon.a (mon.0) 3303 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:40:29.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:28 vm03 bash[23382]: audit 2026-03-10T07:40:27.807410+0000 mon.b (mon.1) 607 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-134"}]: dispatch
2026-03-10T07:40:29.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:28 vm03 bash[23382]: cluster 2026-03-10T07:40:27.807727+0000 mon.a (mon.0) 3304 : cluster [DBG] osdmap e663: 8 total, 8 up, 8 in
2026-03-10T07:40:29.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:28 vm03 bash[23382]: audit 2026-03-10T07:40:27.808467+0000 mon.a (mon.0) 3305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-134"}]: dispatch
2026-03-10T07:40:30.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:29 vm00 bash[28005]: cluster 2026-03-10T07:40:28.747568+0000 mgr.y (mgr.24407) 585 : cluster [DBG] pgmap v1028: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:40:30.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:29 vm00 bash[28005]: cluster 2026-03-10T07:40:28.799010+0000 mon.a (mon.0) 3306 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:40:30.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:29 vm00 bash[28005]: audit 2026-03-10T07:40:28.806499+0000 mon.a (mon.0) 3307 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-134"}]': finished
2026-03-10T07:40:30.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:29 vm00 bash[28005]: cluster 2026-03-10T07:40:28.825702+0000 mon.a (mon.0) 3308 : cluster [DBG] osdmap e664: 8 total, 8 up, 8 in
2026-03-10T07:40:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:29 vm00 bash[20701]: cluster 2026-03-10T07:40:28.747568+0000 mgr.y (mgr.24407) 585 : cluster [DBG] pgmap v1028: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:40:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:29 vm00 bash[20701]: cluster 2026-03-10T07:40:28.799010+0000 mon.a (mon.0) 3306 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:40:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:29 vm00 bash[20701]: audit 2026-03-10T07:40:28.806499+0000 mon.a (mon.0) 3307 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-134"}]': finished
2026-03-10T07:40:30.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:29 vm00 bash[20701]: cluster 2026-03-10T07:40:28.825702+0000 mon.a (mon.0) 3308 : cluster [DBG] osdmap e664: 8 total, 8 up, 8 in
2026-03-10T07:40:30.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:29 vm03 bash[23382]: cluster 2026-03-10T07:40:28.747568+0000 mgr.y (mgr.24407) 585 : cluster [DBG] pgmap v1028: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:40:30.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:29 vm03 bash[23382]: cluster 2026-03-10T07:40:28.799010+0000 mon.a (mon.0) 3306 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:40:30.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:29 vm03 bash[23382]: audit 2026-03-10T07:40:28.806499+0000 mon.a (mon.0) 3307 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-134"}]': finished
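The tier is then detached in reverse order: remove-overlay (finished at mon.a 3303, osdmap e663) followed by tier remove (finished at 3307, e664), after which mon.a clears the CACHE_POOL_NO_HIT_SET check. CLI equivalent, same assumptions as above:

    ceph osd tier remove-overlay test-rados-api-vm00-59782-111
    ceph osd tier remove test-rados-api-vm00-59782-111 test-rados-api-vm00-59782-134
    ceph health detail    # CACHE_POOL_NO_HIT_SET should be gone once e664 commits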
2026-03-10T07:40:30.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:29 vm03 bash[23382]: cluster 2026-03-10T07:40:28.825702+0000 mon.a (mon.0) 3308 : cluster [DBG] osdmap e664: 8 total, 8 up, 8 in
2026-03-10T07:40:31.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:31 vm00 bash[28005]: cluster 2026-03-10T07:40:29.871538+0000 mon.a (mon.0) 3309 : cluster [DBG] osdmap e665: 8 total, 8 up, 8 in
2026-03-10T07:40:31.382 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:40:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:40:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:40:31.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:31 vm00 bash[20701]: cluster 2026-03-10T07:40:29.871538+0000 mon.a (mon.0) 3309 : cluster [DBG] osdmap e665: 8 total, 8 up, 8 in
2026-03-10T07:40:31.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:31 vm03 bash[23382]: cluster 2026-03-10T07:40:29.871538+0000 mon.a (mon.0) 3309 : cluster [DBG] osdmap e665: 8 total, 8 up, 8 in
2026-03-10T07:40:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:32 vm03 bash[23382]: cluster 2026-03-10T07:40:30.747982+0000 mgr.y (mgr.24407) 586 : cluster [DBG] pgmap v1031: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s
2026-03-10T07:40:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:32 vm03 bash[23382]: cluster 2026-03-10T07:40:31.060776+0000 mon.a (mon.0) 3310 : cluster [DBG] osdmap e666: 8 total, 8 up, 8 in
2026-03-10T07:40:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:32 vm03 bash[23382]: audit 2026-03-10T07:40:31.073118+0000 mon.a (mon.0) 3311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-136","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:40:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:32 vm03 bash[23382]: audit 2026-03-10T07:40:31.073864+0000 mon.b (mon.1) 608 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-136","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:40:32.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:32 vm00 bash[28005]: cluster 2026-03-10T07:40:30.747982+0000 mgr.y (mgr.24407) 586 : cluster [DBG] pgmap v1031: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s
2026-03-10T07:40:32.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:32 vm00 bash[28005]: cluster 2026-03-10T07:40:31.060776+0000 mon.a (mon.0) 3310 : cluster [DBG] osdmap e666: 8 total, 8 up, 8 in
2026-03-10T07:40:32.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:32 vm00 bash[28005]: audit 2026-03-10T07:40:31.073118+0000 mon.a (mon.0) 3311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-136","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:40:32.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:32 vm00 bash[28005]: audit 2026-03-10T07:40:31.073864+0000 mon.b (mon.1) 608 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-136","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:40:32.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:32 vm00 bash[20701]: cluster 2026-03-10T07:40:30.747982+0000 mgr.y (mgr.24407) 586 : cluster [DBG] pgmap v1031: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s
2026-03-10T07:40:32.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:32 vm00 bash[20701]: cluster 2026-03-10T07:40:31.060776+0000 mon.a (mon.0) 3310 : cluster [DBG] osdmap e666: 8 total, 8 up, 8 in
2026-03-10T07:40:32.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:32 vm00 bash[20701]: audit 2026-03-10T07:40:31.073118+0000 mon.a (mon.0) 3311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-136","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:40:32.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:32 vm00 bash[20701]: audit 2026-03-10T07:40:31.073864+0000 mon.b (mon.1) 608 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-136","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:40:33.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:33 vm00 bash[28005]: audit 2026-03-10T07:40:32.016994+0000 mon.a (mon.0) 3312 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-136","app": "rados","yes_i_really_mean_it": true}]': finished
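The same attach cycle then repeats against a fresh cache pool (test-rados-api-vm00-59782-136): application enable is dispatched and finished, and an osd tier add follows below. Note the consistent audit pattern throughout: each client command is logged as 'dispatch' on the monitor that received it (mon.b, with the client address) and on the leader (mon.a), then once more as 'finished' when the map change commits. One way to watch these channels live on a node with admin credentials (a sketch; channel names as in the entries above):

    ceph -W audit      # follow the audit channel these dispatch/finished entries come from
    ceph -W cluster    # follow osdmap/pgmap messages like the pgmap v1031 line above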
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:40:33.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:33 vm00 bash[28005]: cluster 2026-03-10T07:40:32.026567+0000 mon.a (mon.0) 3313 : cluster [DBG] osdmap e667: 8 total, 8 up, 8 in 2026-03-10T07:40:33.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:33 vm00 bash[28005]: cluster 2026-03-10T07:40:32.026567+0000 mon.a (mon.0) 3313 : cluster [DBG] osdmap e667: 8 total, 8 up, 8 in 2026-03-10T07:40:33.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:33 vm00 bash[28005]: audit 2026-03-10T07:40:32.028716+0000 mon.a (mon.0) 3314 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:33.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:33 vm00 bash[28005]: audit 2026-03-10T07:40:32.028716+0000 mon.a (mon.0) 3314 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:33.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:33 vm00 bash[28005]: audit 2026-03-10T07:40:32.028865+0000 mon.b (mon.1) 609 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:33.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:33 vm00 bash[28005]: audit 2026-03-10T07:40:32.028865+0000 mon.b (mon.1) 609 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:33.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:33 vm00 bash[20701]: audit 2026-03-10T07:40:32.016994+0000 mon.a (mon.0) 3312 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:40:33.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:33 vm00 bash[20701]: audit 2026-03-10T07:40:32.016994+0000 mon.a (mon.0) 3312 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:40:33.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:33 vm00 bash[20701]: cluster 2026-03-10T07:40:32.026567+0000 mon.a (mon.0) 3313 : cluster [DBG] osdmap e667: 8 total, 8 up, 8 in 2026-03-10T07:40:33.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:33 vm00 bash[20701]: cluster 2026-03-10T07:40:32.026567+0000 mon.a (mon.0) 3313 : cluster [DBG] osdmap e667: 8 total, 8 up, 8 in 2026-03-10T07:40:33.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:33 vm00 bash[20701]: audit 2026-03-10T07:40:32.028716+0000 mon.a (mon.0) 3314 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:33.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:33 vm00 bash[20701]: audit 2026-03-10T07:40:32.028716+0000 mon.a (mon.0) 3314 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:33.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:33 vm00 bash[20701]: audit 2026-03-10T07:40:32.028865+0000 mon.b (mon.1) 609 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:33.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:33 vm00 bash[20701]: audit 2026-03-10T07:40:32.028865+0000 mon.b (mon.1) 609 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:33.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:33 vm03 bash[23382]: audit 2026-03-10T07:40:32.016994+0000 mon.a (mon.0) 3312 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:40:33.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:33 vm03 bash[23382]: audit 2026-03-10T07:40:32.016994+0000 mon.a (mon.0) 3312 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:40:33.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:33 vm03 bash[23382]: cluster 2026-03-10T07:40:32.026567+0000 mon.a (mon.0) 3313 : cluster [DBG] osdmap e667: 8 total, 8 up, 8 in 2026-03-10T07:40:33.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:33 vm03 bash[23382]: cluster 2026-03-10T07:40:32.026567+0000 mon.a (mon.0) 3313 : cluster [DBG] osdmap e667: 8 total, 8 up, 8 in 2026-03-10T07:40:33.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:33 vm03 bash[23382]: audit 2026-03-10T07:40:32.028716+0000 mon.a (mon.0) 3314 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:33.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:33 vm03 bash[23382]: audit 2026-03-10T07:40:32.028716+0000 mon.a (mon.0) 3314 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:33.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:33 vm03 bash[23382]: audit 2026-03-10T07:40:32.028865+0000 mon.b (mon.1) 609 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:33.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:33 vm03 bash[23382]: audit 2026-03-10T07:40:32.028865+0000 mon.b (mon.1) 609 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:34.013 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:40:33 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:40:34.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:34 vm00 bash[28005]: cluster 2026-03-10T07:40:32.748322+0000 mgr.y (mgr.24407) 587 : cluster [DBG] pgmap v1034: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T07:40:34.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:34 vm00 bash[28005]: cluster 2026-03-10T07:40:32.748322+0000 mgr.y (mgr.24407) 587 : cluster [DBG] pgmap v1034: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T07:40:34.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:34 vm00 bash[28005]: audit 2026-03-10T07:40:33.026463+0000 mon.a (mon.0) 3315 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:40:34.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:34 vm00 bash[28005]: audit 2026-03-10T07:40:33.026463+0000 mon.a (mon.0) 3315 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:40:34.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:34 vm00 bash[28005]: audit 2026-03-10T07:40:33.033849+0000 mon.b (mon.1) 610 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:34.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:34 vm00 bash[28005]: audit 2026-03-10T07:40:33.033849+0000 mon.b (mon.1) 610 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:34.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:34 vm00 bash[28005]: cluster 2026-03-10T07:40:33.035459+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e668: 8 total, 8 up, 8 in 2026-03-10T07:40:34.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:34 vm00 bash[28005]: cluster 2026-03-10T07:40:33.035459+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e668: 8 total, 8 up, 8 in 2026-03-10T07:40:34.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:34 vm00 bash[28005]: audit 2026-03-10T07:40:33.036050+0000 mon.a (mon.0) 3317 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:34.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:34 vm00 bash[28005]: audit 2026-03-10T07:40:33.036050+0000 mon.a (mon.0) 3317 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:34.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:34 vm00 bash[20701]: cluster 2026-03-10T07:40:32.748322+0000 mgr.y (mgr.24407) 587 : cluster [DBG] pgmap v1034: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T07:40:34.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:34 vm00 bash[20701]: cluster 2026-03-10T07:40:32.748322+0000 mgr.y (mgr.24407) 587 : cluster [DBG] pgmap v1034: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T07:40:34.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:34 vm00 bash[20701]: audit 2026-03-10T07:40:33.026463+0000 mon.a (mon.0) 3315 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:40:34.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:34 vm00 bash[20701]: audit 2026-03-10T07:40:33.026463+0000 mon.a (mon.0) 3315 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:40:34.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:34 vm00 bash[20701]: audit 2026-03-10T07:40:33.033849+0000 mon.b (mon.1) 610 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:34.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:34 vm00 bash[20701]: audit 2026-03-10T07:40:33.033849+0000 mon.b (mon.1) 610 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:34.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:34 vm00 bash[20701]: cluster 2026-03-10T07:40:33.035459+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e668: 8 total, 8 up, 8 in 2026-03-10T07:40:34.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:34 vm00 bash[20701]: cluster 2026-03-10T07:40:33.035459+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e668: 8 total, 8 up, 8 in 2026-03-10T07:40:34.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:34 vm00 bash[20701]: audit 2026-03-10T07:40:33.036050+0000 mon.a (mon.0) 3317 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:34.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:34 vm00 bash[20701]: audit 2026-03-10T07:40:33.036050+0000 mon.a (mon.0) 3317 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:34.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:34 vm03 bash[23382]: cluster 2026-03-10T07:40:32.748322+0000 mgr.y (mgr.24407) 587 : cluster [DBG] pgmap v1034: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T07:40:34.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:34 vm03 bash[23382]: cluster 2026-03-10T07:40:32.748322+0000 mgr.y (mgr.24407) 587 : cluster [DBG] pgmap v1034: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T07:40:34.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:34 vm03 bash[23382]: audit 2026-03-10T07:40:33.026463+0000 mon.a (mon.0) 3315 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:40:34.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:34 vm03 bash[23382]: audit 2026-03-10T07:40:33.026463+0000 mon.a (mon.0) 3315 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:40:34.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:34 vm03 bash[23382]: audit 2026-03-10T07:40:33.033849+0000 mon.b (mon.1) 610 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:34.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:34 vm03 bash[23382]: audit 2026-03-10T07:40:33.033849+0000 mon.b (mon.1) 610 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:34.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:34 vm03 bash[23382]: cluster 2026-03-10T07:40:33.035459+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e668: 8 total, 8 up, 8 in 2026-03-10T07:40:34.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:34 vm03 bash[23382]: cluster 2026-03-10T07:40:33.035459+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e668: 8 total, 8 up, 8 in 2026-03-10T07:40:34.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:34 vm03 bash[23382]: audit 2026-03-10T07:40:33.036050+0000 mon.a (mon.0) 3317 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:34.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:34 vm03 bash[23382]: audit 2026-03-10T07:40:33.036050+0000 mon.a (mon.0) 3317 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:35 vm00 bash[28005]: audit 2026-03-10T07:40:33.529744+0000 mgr.y (mgr.24407) 588 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:40:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:35 vm00 bash[28005]: audit 2026-03-10T07:40:33.529744+0000 mgr.y (mgr.24407) 588 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:40:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:35 vm00 bash[28005]: audit 2026-03-10T07:40:34.080175+0000 mon.a (mon.0) 3318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-136"}]': finished 2026-03-10T07:40:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:35 vm00 bash[28005]: audit 2026-03-10T07:40:34.080175+0000 mon.a (mon.0) 3318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-136"}]': finished 2026-03-10T07:40:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:35 vm00 bash[28005]: cluster 2026-03-10T07:40:34.084308+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e669: 8 total, 8 up, 8 in 2026-03-10T07:40:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:35 vm00 bash[28005]: cluster 2026-03-10T07:40:34.084308+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e669: 8 total, 8 up, 8 in 2026-03-10T07:40:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:35 vm00 bash[28005]: audit 2026-03-10T07:40:34.086547+0000 mon.b (mon.1) 611 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-136", "mode": "writeback"}]: dispatch 2026-03-10T07:40:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:35 vm00 bash[28005]: audit 2026-03-10T07:40:34.086547+0000 mon.b (mon.1) 611 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-136", "mode": "writeback"}]: dispatch 2026-03-10T07:40:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:35 vm00 bash[28005]: audit 2026-03-10T07:40:34.090376+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-136", "mode": "writeback"}]: dispatch 2026-03-10T07:40:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:35 vm00 bash[28005]: audit 2026-03-10T07:40:34.090376+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-136", "mode": "writeback"}]: dispatch 2026-03-10T07:40:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:35 vm00 bash[28005]: cluster 2026-03-10T07:40:35.080291+0000 mon.a (mon.0) 3321 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:40:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:35 vm00 bash[28005]: cluster 2026-03-10T07:40:35.080291+0000 mon.a (mon.0) 3321 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:40:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:35 vm00 bash[28005]: audit 2026-03-10T07:40:35.083545+0000 mon.a (mon.0) 3322 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-136", "mode": "writeback"}]': finished 2026-03-10T07:40:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:35 vm00 bash[28005]: audit 2026-03-10T07:40:35.083545+0000 mon.a (mon.0) 3322 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-136", "mode": "writeback"}]': finished 2026-03-10T07:40:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:35 vm00 bash[28005]: cluster 2026-03-10T07:40:35.086448+0000 mon.a (mon.0) 3323 : cluster [DBG] osdmap e670: 8 total, 8 up, 8 in 2026-03-10T07:40:35.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:35 vm00 bash[28005]: cluster 2026-03-10T07:40:35.086448+0000 mon.a (mon.0) 3323 : cluster [DBG] osdmap e670: 8 total, 8 up, 8 in 2026-03-10T07:40:35.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:35 vm00 bash[20701]: audit 2026-03-10T07:40:33.529744+0000 mgr.y (mgr.24407) 588 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:40:35.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:35 vm00 bash[20701]: audit 2026-03-10T07:40:33.529744+0000 mgr.y (mgr.24407) 588 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:40:35.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:35 vm00 bash[20701]: audit 2026-03-10T07:40:34.080175+0000 mon.a (mon.0) 3318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-136"}]': finished 2026-03-10T07:40:35.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:35 vm00 bash[20701]: audit 2026-03-10T07:40:34.080175+0000 mon.a (mon.0) 3318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-136"}]': finished 2026-03-10T07:40:35.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:35 vm00 bash[20701]: cluster 2026-03-10T07:40:34.084308+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e669: 8 total, 8 up, 8 in 2026-03-10T07:40:35.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:35 vm00 bash[20701]: cluster 2026-03-10T07:40:34.084308+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e669: 8 total, 8 up, 8 in 2026-03-10T07:40:35.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:35 vm00 bash[20701]: audit 2026-03-10T07:40:34.086547+0000 mon.b (mon.1) 611 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-136", "mode": "writeback"}]: dispatch 2026-03-10T07:40:35.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:35 vm00 bash[20701]: audit 2026-03-10T07:40:34.086547+0000 mon.b (mon.1) 611 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-136", "mode": "writeback"}]: dispatch 2026-03-10T07:40:35.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:35 vm00 bash[20701]: audit 2026-03-10T07:40:34.090376+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-136", "mode": "writeback"}]: dispatch 2026-03-10T07:40:35.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:35 vm00 bash[20701]: audit 2026-03-10T07:40:34.090376+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-136", "mode": "writeback"}]: dispatch 2026-03-10T07:40:35.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:35 vm00 bash[20701]: cluster 2026-03-10T07:40:35.080291+0000 mon.a (mon.0) 3321 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:40:35.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:35 vm00 bash[20701]: cluster 2026-03-10T07:40:35.080291+0000 mon.a (mon.0) 3321 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:40:35.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:35 vm00 bash[20701]: audit 2026-03-10T07:40:35.083545+0000 mon.a (mon.0) 3322 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-136", "mode": "writeback"}]': finished 2026-03-10T07:40:35.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:35 vm00 bash[20701]: audit 2026-03-10T07:40:35.083545+0000 mon.a (mon.0) 3322 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-136", "mode": "writeback"}]': finished 2026-03-10T07:40:35.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:35 vm00 bash[20701]: cluster 2026-03-10T07:40:35.086448+0000 mon.a (mon.0) 3323 : cluster [DBG] osdmap e670: 8 total, 8 up, 8 in 2026-03-10T07:40:35.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:35 vm00 bash[20701]: cluster 2026-03-10T07:40:35.086448+0000 mon.a (mon.0) 3323 : cluster [DBG] osdmap e670: 8 total, 8 up, 8 in 2026-03-10T07:40:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:35 vm03 bash[23382]: audit 2026-03-10T07:40:33.529744+0000 mgr.y (mgr.24407) 588 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:40:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:35 vm03 bash[23382]: audit 2026-03-10T07:40:33.529744+0000 mgr.y (mgr.24407) 588 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:40:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:35 vm03 bash[23382]: audit 2026-03-10T07:40:34.080175+0000 mon.a (mon.0) 3318 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-136"}]': finished 2026-03-10T07:40:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:35 vm03 bash[23382]: audit 2026-03-10T07:40:34.080175+0000 mon.a (mon.0) 3318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-136"}]': finished 2026-03-10T07:40:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:35 vm03 bash[23382]: cluster 2026-03-10T07:40:34.084308+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e669: 8 total, 8 up, 8 in 2026-03-10T07:40:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:35 vm03 bash[23382]: cluster 2026-03-10T07:40:34.084308+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e669: 8 total, 8 up, 8 in 2026-03-10T07:40:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:35 vm03 bash[23382]: audit 2026-03-10T07:40:34.086547+0000 mon.b (mon.1) 611 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-136", "mode": "writeback"}]: dispatch 2026-03-10T07:40:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:35 vm03 bash[23382]: audit 2026-03-10T07:40:34.086547+0000 mon.b (mon.1) 611 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-136", "mode": "writeback"}]: dispatch 2026-03-10T07:40:35.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:35 vm03 bash[23382]: audit 2026-03-10T07:40:34.090376+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-136", "mode": "writeback"}]: dispatch 2026-03-10T07:40:35.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:35 vm03 bash[23382]: audit 2026-03-10T07:40:34.090376+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-136", "mode": "writeback"}]: dispatch 2026-03-10T07:40:35.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:35 vm03 bash[23382]: cluster 2026-03-10T07:40:35.080291+0000 mon.a (mon.0) 3321 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:40:35.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:35 vm03 bash[23382]: cluster 2026-03-10T07:40:35.080291+0000 mon.a (mon.0) 3321 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:40:35.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:35 vm03 bash[23382]: audit 2026-03-10T07:40:35.083545+0000 mon.a (mon.0) 3322 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-136", "mode": "writeback"}]': finished 2026-03-10T07:40:35.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:35 vm03 bash[23382]: audit 2026-03-10T07:40:35.083545+0000 mon.a (mon.0) 3322 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-136", "mode": "writeback"}]': finished 2026-03-10T07:40:35.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:35 vm03 bash[23382]: cluster 2026-03-10T07:40:35.086448+0000 mon.a (mon.0) 3323 : cluster [DBG] osdmap e670: 8 total, 8 up, 8 in 2026-03-10T07:40:35.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:35 vm03 bash[23382]: cluster 2026-03-10T07:40:35.086448+0000 mon.a (mon.0) 3323 : cluster [DBG] osdmap e670: 8 total, 8 up, 8 in 2026-03-10T07:40:36.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:36 vm00 bash[28005]: cluster 2026-03-10T07:40:34.748943+0000 mgr.y (mgr.24407) 589 : cluster [DBG] pgmap v1037: 268 pgs: 2 unknown, 266 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:40:36.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:36 vm00 bash[28005]: cluster 2026-03-10T07:40:34.748943+0000 mgr.y (mgr.24407) 589 : cluster [DBG] pgmap v1037: 268 pgs: 2 unknown, 266 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:40:36.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:36 vm00 bash[28005]: audit 2026-03-10T07:40:35.180622+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:36.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:36 vm00 bash[28005]: audit 2026-03-10T07:40:35.180622+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:36.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:36 vm00 bash[28005]: audit 2026-03-10T07:40:35.181120+0000 mon.b (mon.1) 612 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:36.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:36 vm00 bash[28005]: audit 2026-03-10T07:40:35.181120+0000 mon.b (mon.1) 612 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:36.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:36 vm00 bash[20701]: cluster 2026-03-10T07:40:34.748943+0000 mgr.y (mgr.24407) 589 : cluster [DBG] pgmap v1037: 268 pgs: 2 unknown, 266 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:40:36.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:36 vm00 bash[20701]: cluster 2026-03-10T07:40:34.748943+0000 mgr.y (mgr.24407) 589 : cluster [DBG] pgmap v1037: 268 pgs: 2 unknown, 266 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:40:36.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:36 vm00 bash[20701]: audit 2026-03-10T07:40:35.180622+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:36.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:36 vm00 bash[20701]: audit 2026-03-10T07:40:35.180622+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:36.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:36 vm00 bash[20701]: audit 2026-03-10T07:40:35.181120+0000 mon.b (mon.1) 612 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:36.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:36 vm00 bash[20701]: audit 2026-03-10T07:40:35.181120+0000 mon.b (mon.1) 612 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:36.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:36 vm03 bash[23382]: cluster 2026-03-10T07:40:34.748943+0000 mgr.y (mgr.24407) 589 : cluster [DBG] pgmap v1037: 268 pgs: 2 unknown, 266 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:40:36.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:36 vm03 bash[23382]: cluster 2026-03-10T07:40:34.748943+0000 mgr.y (mgr.24407) 589 : cluster [DBG] pgmap v1037: 268 pgs: 2 unknown, 266 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:40:36.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:36 vm03 bash[23382]: audit 2026-03-10T07:40:35.180622+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:36.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:36 vm03 bash[23382]: audit 2026-03-10T07:40:35.180622+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:36.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:36 vm03 bash[23382]: audit 2026-03-10T07:40:35.181120+0000 mon.b (mon.1) 612 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:36.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:36 vm03 bash[23382]: audit 2026-03-10T07:40:35.181120+0000 mon.b (mon.1) 612 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:37.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:37 vm03 bash[23382]: audit 2026-03-10T07:40:36.120716+0000 mon.a (mon.0) 3325 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:40:37.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:37 vm03 bash[23382]: audit 2026-03-10T07:40:36.120716+0000 mon.a (mon.0) 3325 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:40:37.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:37 vm03 bash[23382]: cluster 2026-03-10T07:40:36.126255+0000 mon.a (mon.0) 3326 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in 2026-03-10T07:40:37.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:37 vm03 bash[23382]: cluster 2026-03-10T07:40:36.126255+0000 mon.a (mon.0) 3326 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in 2026-03-10T07:40:37.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:37 vm03 bash[23382]: audit 2026-03-10T07:40:36.126683+0000 mon.b (mon.1) 613 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:37.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:37 vm03 bash[23382]: audit 2026-03-10T07:40:36.126683+0000 mon.b (mon.1) 613 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:37.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:37 vm03 bash[23382]: audit 2026-03-10T07:40:36.131644+0000 mon.a (mon.0) 3327 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:37.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:37 vm03 bash[23382]: audit 2026-03-10T07:40:36.131644+0000 mon.a (mon.0) 3327 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:37.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:37 vm00 bash[28005]: audit 2026-03-10T07:40:36.120716+0000 mon.a (mon.0) 3325 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:40:37.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:37 vm00 bash[28005]: audit 2026-03-10T07:40:36.120716+0000 mon.a (mon.0) 3325 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:40:37.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:37 vm00 bash[28005]: cluster 2026-03-10T07:40:36.126255+0000 mon.a (mon.0) 3326 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in 2026-03-10T07:40:37.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:37 vm00 bash[28005]: cluster 2026-03-10T07:40:36.126255+0000 mon.a (mon.0) 3326 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in 2026-03-10T07:40:37.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:37 vm00 bash[28005]: audit 2026-03-10T07:40:36.126683+0000 mon.b (mon.1) 613 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:37.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:37 vm00 bash[28005]: audit 2026-03-10T07:40:36.126683+0000 mon.b (mon.1) 613 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:37.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:37 vm00 bash[28005]: audit 2026-03-10T07:40:36.131644+0000 mon.a (mon.0) 3327 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:37.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:37 vm00 bash[28005]: audit 2026-03-10T07:40:36.131644+0000 mon.a (mon.0) 3327 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:37.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:37 vm00 bash[20701]: audit 2026-03-10T07:40:36.120716+0000 mon.a (mon.0) 3325 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:40:37.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:37 vm00 bash[20701]: audit 2026-03-10T07:40:36.120716+0000 mon.a (mon.0) 3325 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:40:37.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:37 vm00 bash[20701]: cluster 2026-03-10T07:40:36.126255+0000 mon.a (mon.0) 3326 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in 2026-03-10T07:40:37.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:37 vm00 bash[20701]: cluster 2026-03-10T07:40:36.126255+0000 mon.a (mon.0) 3326 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in 2026-03-10T07:40:37.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:37 vm00 bash[20701]: audit 2026-03-10T07:40:36.126683+0000 mon.b (mon.1) 613 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:37.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:37 vm00 bash[20701]: audit 2026-03-10T07:40:36.126683+0000 mon.b (mon.1) 613 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:37.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:37 vm00 bash[20701]: audit 2026-03-10T07:40:36.131644+0000 mon.a (mon.0) 3327 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:37.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:37 vm00 bash[20701]: audit 2026-03-10T07:40:36.131644+0000 mon.a (mon.0) 3327 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136"}]: dispatch 2026-03-10T07:40:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:38 vm03 bash[23382]: cluster 2026-03-10T07:40:36.749431+0000 mgr.y (mgr.24407) 590 : cluster [DBG] pgmap v1040: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:40:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:38 vm03 bash[23382]: cluster 2026-03-10T07:40:36.749431+0000 mgr.y (mgr.24407) 590 : cluster [DBG] pgmap v1040: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:40:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:38 vm03 bash[23382]: cluster 2026-03-10T07:40:37.120829+0000 mon.a (mon.0) 3328 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:40:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:38 vm03 bash[23382]: cluster 2026-03-10T07:40:37.120829+0000 mon.a (mon.0) 3328 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:40:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:38 vm03 bash[23382]: audit 2026-03-10T07:40:37.181034+0000 mon.a (mon.0) 3329 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136"}]': finished 2026-03-10T07:40:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:38 vm03 bash[23382]: audit 2026-03-10T07:40:37.181034+0000 mon.a (mon.0) 3329 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136"}]': finished 2026-03-10T07:40:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:38 vm03 bash[23382]: cluster 2026-03-10T07:40:37.185645+0000 mon.a (mon.0) 3330 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-10T07:40:38.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:38 vm03 bash[23382]: cluster 2026-03-10T07:40:37.185645+0000 mon.a (mon.0) 3330 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-10T07:40:38.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:38 vm00 bash[28005]: cluster 2026-03-10T07:40:36.749431+0000 mgr.y (mgr.24407) 590 : cluster [DBG] pgmap v1040: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:40:38.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:38 vm00 bash[28005]: cluster 2026-03-10T07:40:36.749431+0000 mgr.y (mgr.24407) 590 : cluster [DBG] pgmap v1040: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:40:38.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:38 vm00 bash[28005]: cluster 2026-03-10T07:40:37.120829+0000 mon.a (mon.0) 3328 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:40:38.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:38 vm00 bash[28005]: cluster 2026-03-10T07:40:37.120829+0000 mon.a (mon.0) 3328 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:40:38.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:38 vm00 bash[28005]: audit 
2026-03-10T07:40:37.181034+0000 mon.a (mon.0) 3329 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136"}]': finished 2026-03-10T07:40:38.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:38 vm00 bash[28005]: audit 2026-03-10T07:40:37.181034+0000 mon.a (mon.0) 3329 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136"}]': finished 2026-03-10T07:40:38.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:38 vm00 bash[28005]: cluster 2026-03-10T07:40:37.185645+0000 mon.a (mon.0) 3330 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-10T07:40:38.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:38 vm00 bash[28005]: cluster 2026-03-10T07:40:37.185645+0000 mon.a (mon.0) 3330 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-10T07:40:38.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:38 vm00 bash[20701]: cluster 2026-03-10T07:40:36.749431+0000 mgr.y (mgr.24407) 590 : cluster [DBG] pgmap v1040: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:40:38.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:38 vm00 bash[20701]: cluster 2026-03-10T07:40:36.749431+0000 mgr.y (mgr.24407) 590 : cluster [DBG] pgmap v1040: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:40:38.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:38 vm00 bash[20701]: cluster 2026-03-10T07:40:37.120829+0000 mon.a (mon.0) 3328 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:40:38.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:38 vm00 bash[20701]: cluster 2026-03-10T07:40:37.120829+0000 mon.a (mon.0) 3328 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:40:38.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:38 vm00 bash[20701]: audit 2026-03-10T07:40:37.181034+0000 mon.a (mon.0) 3329 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136"}]': finished 2026-03-10T07:40:38.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:38 vm00 bash[20701]: audit 2026-03-10T07:40:37.181034+0000 mon.a (mon.0) 3329 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-136"}]': finished 2026-03-10T07:40:38.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:38 vm00 bash[20701]: cluster 2026-03-10T07:40:37.185645+0000 mon.a (mon.0) 3330 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-10T07:40:38.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:38 vm00 bash[20701]: cluster 2026-03-10T07:40:37.185645+0000 mon.a (mon.0) 3330 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-10T07:40:39.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:39 vm03 bash[23382]: cluster 2026-03-10T07:40:38.213938+0000 mon.a (mon.0) 3331 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-10T07:40:39.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:39 vm03 bash[23382]: cluster 2026-03-10T07:40:38.213938+0000 mon.a (mon.0) 3331 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-10T07:40:39.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:39 vm03 bash[23382]: cluster 2026-03-10T07:40:39.215879+0000 mon.a (mon.0) 3332 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-10T07:40:39.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:39 vm03 bash[23382]: cluster 2026-03-10T07:40:39.215879+0000 mon.a (mon.0) 3332 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-10T07:40:39.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:39 vm03 bash[23382]: audit 2026-03-10T07:40:39.223345+0000 mon.b (mon.1) 614 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:39.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:39 vm03 bash[23382]: audit 2026-03-10T07:40:39.223345+0000 mon.b (mon.1) 614 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:39.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:39 vm03 bash[23382]: audit 2026-03-10T07:40:39.224215+0000 mon.a (mon.0) 3333 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:39.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:39 vm03 bash[23382]: audit 2026-03-10T07:40:39.224215+0000 mon.a (mon.0) 3333 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:39.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:39 vm00 bash[28005]: cluster 2026-03-10T07:40:38.213938+0000 mon.a (mon.0) 3331 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-10T07:40:39.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:39 vm00 bash[28005]: cluster 2026-03-10T07:40:38.213938+0000 mon.a (mon.0) 3331 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-10T07:40:39.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:39 vm00 bash[28005]: cluster 2026-03-10T07:40:39.215879+0000 mon.a (mon.0) 3332 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-10T07:40:39.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:39 vm00 bash[28005]: cluster 2026-03-10T07:40:39.215879+0000 mon.a (mon.0) 3332 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-10T07:40:39.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:39 vm00 bash[28005]: audit 2026-03-10T07:40:39.223345+0000 mon.b (mon.1) 614 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:39.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:39 vm00 bash[28005]: audit 2026-03-10T07:40:39.223345+0000 mon.b (mon.1) 614 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:39.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:39 vm00 bash[28005]: audit 2026-03-10T07:40:39.224215+0000 mon.a (mon.0) 3333 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:39.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:39 vm00 bash[28005]: audit 2026-03-10T07:40:39.224215+0000 mon.a (mon.0) 3333 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:39.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:39 vm00 bash[20701]: cluster 2026-03-10T07:40:38.213938+0000 mon.a (mon.0) 3331 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-10T07:40:39.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:39 vm00 bash[20701]: cluster 2026-03-10T07:40:38.213938+0000 mon.a (mon.0) 3331 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-10T07:40:39.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:39 vm00 bash[20701]: cluster 2026-03-10T07:40:39.215879+0000 mon.a (mon.0) 3332 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-10T07:40:39.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:39 vm00 bash[20701]: cluster 2026-03-10T07:40:39.215879+0000 mon.a (mon.0) 3332 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-10T07:40:39.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:39 vm00 bash[20701]: audit 2026-03-10T07:40:39.223345+0000 mon.b (mon.1) 614 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:39.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:39 vm00 bash[20701]: audit 2026-03-10T07:40:39.223345+0000 mon.b (mon.1) 614 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:39.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:39 vm00 bash[20701]: audit 2026-03-10T07:40:39.224215+0000 mon.a (mon.0) 3333 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:39.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:39 vm00 bash[20701]: audit 2026-03-10T07:40:39.224215+0000 mon.a (mon.0) 3333 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:40.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:40 vm03 bash[23382]: cluster 2026-03-10T07:40:38.749950+0000 mgr.y (mgr.24407) 591 : cluster [DBG] pgmap v1043: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:40:40.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:40 vm03 bash[23382]: cluster 2026-03-10T07:40:38.749950+0000 mgr.y (mgr.24407) 591 : cluster [DBG] pgmap v1043: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:40:40.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:40 vm03 bash[23382]: audit 2026-03-10T07:40:39.779118+0000 mon.c (mon.2) 357 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:40:40.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:40 vm03 bash[23382]: audit 2026-03-10T07:40:39.779118+0000 mon.c (mon.2) 357 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:40:40.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:40 vm03 bash[23382]: audit 2026-03-10T07:40:40.215853+0000 mon.a (mon.0) 3334 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-138","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:40:40.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:40 vm03 bash[23382]: audit 2026-03-10T07:40:40.215853+0000 mon.a (mon.0) 3334 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-138","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:40:40.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:40 vm03 bash[23382]: cluster 2026-03-10T07:40:40.219375+0000 mon.a (mon.0) 3335 : cluster [DBG] osdmap e675: 8 total, 8 up, 8 in 2026-03-10T07:40:40.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:40 vm03 bash[23382]: cluster 2026-03-10T07:40:40.219375+0000 mon.a (mon.0) 3335 : cluster [DBG] osdmap e675: 8 total, 8 up, 8 in 2026-03-10T07:40:40.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:40 vm03 bash[23382]: audit 2026-03-10T07:40:40.222181+0000 mon.b (mon.1) 615 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:40.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:40 vm03 bash[23382]: audit 2026-03-10T07:40:40.222181+0000 mon.b (mon.1) 615 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:40.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:40 vm03 bash[23382]: audit 2026-03-10T07:40:40.226456+0000 mon.a (mon.0) 3336 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:40.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:40 vm03 bash[23382]: audit 2026-03-10T07:40:40.226456+0000 mon.a (mon.0) 3336 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:40.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:40 vm00 bash[28005]: cluster 2026-03-10T07:40:38.749950+0000 mgr.y (mgr.24407) 591 : cluster [DBG] pgmap v1043: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:40:40.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:40 vm00 bash[28005]: cluster 2026-03-10T07:40:38.749950+0000 mgr.y (mgr.24407) 591 : cluster [DBG] pgmap v1043: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:40:40.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:40 vm00 bash[28005]: audit 2026-03-10T07:40:39.779118+0000 mon.c (mon.2) 357 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:40:40.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:40 vm00 bash[28005]: audit 2026-03-10T07:40:39.779118+0000 mon.c (mon.2) 357 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:40:40.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:40 vm00 bash[28005]: audit 2026-03-10T07:40:40.215853+0000 mon.a (mon.0) 3334 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-138","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:40:40.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:40 vm00 bash[28005]: audit 2026-03-10T07:40:40.215853+0000 mon.a (mon.0) 3334 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-138","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:40:40.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:40 vm00 bash[28005]: cluster 2026-03-10T07:40:40.219375+0000 mon.a (mon.0) 3335 : cluster [DBG] osdmap e675: 8 total, 8 up, 8 in 2026-03-10T07:40:40.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:40 vm00 bash[28005]: cluster 2026-03-10T07:40:40.219375+0000 mon.a (mon.0) 3335 : cluster [DBG] osdmap e675: 8 total, 8 up, 8 in 2026-03-10T07:40:40.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:40 vm00 bash[28005]: audit 2026-03-10T07:40:40.222181+0000 mon.b (mon.1) 615 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:40.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:40 vm00 bash[28005]: audit 2026-03-10T07:40:40.222181+0000 mon.b (mon.1) 615 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:40.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:40 vm00 bash[28005]: audit 2026-03-10T07:40:40.226456+0000 mon.a (mon.0) 3336 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:40.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:40 vm00 bash[28005]: audit 2026-03-10T07:40:40.226456+0000 mon.a (mon.0) 3336 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:40.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:40 vm00 bash[20701]: cluster 2026-03-10T07:40:38.749950+0000 mgr.y (mgr.24407) 591 : cluster [DBG] pgmap v1043: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:40:40.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:40 vm00 bash[20701]: cluster 2026-03-10T07:40:38.749950+0000 mgr.y (mgr.24407) 591 : cluster [DBG] pgmap v1043: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:40:40.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:40 vm00 bash[20701]: audit 2026-03-10T07:40:39.779118+0000 mon.c (mon.2) 357 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:40:40.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:40 vm00 bash[20701]: audit 2026-03-10T07:40:39.779118+0000 mon.c (mon.2) 357 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:40:40.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:40 vm00 bash[20701]: audit 2026-03-10T07:40:40.215853+0000 mon.a (mon.0) 3334 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-138","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:40:40.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:40 vm00 bash[20701]: audit 2026-03-10T07:40:40.215853+0000 mon.a (mon.0) 3334 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-138","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:40:40.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:40 vm00 bash[20701]: cluster 2026-03-10T07:40:40.219375+0000 mon.a (mon.0) 3335 : cluster [DBG] osdmap e675: 8 total, 8 up, 8 in 2026-03-10T07:40:40.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:40 vm00 bash[20701]: cluster 2026-03-10T07:40:40.219375+0000 mon.a (mon.0) 3335 : cluster [DBG] osdmap e675: 8 total, 8 up, 8 in 2026-03-10T07:40:40.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:40 vm00 bash[20701]: audit 2026-03-10T07:40:40.222181+0000 mon.b (mon.1) 615 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:40.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:40 vm00 bash[20701]: audit 2026-03-10T07:40:40.222181+0000 mon.b (mon.1) 615 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:40.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:40 vm00 bash[20701]: audit 2026-03-10T07:40:40.226456+0000 mon.a (mon.0) 3336 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:40.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:40 vm00 bash[20701]: audit 2026-03-10T07:40:40.226456+0000 mon.a (mon.0) 3336 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:40:41.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:40:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:40:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:40:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:42 vm03 bash[23382]: cluster 2026-03-10T07:40:40.750354+0000 mgr.y (mgr.24407) 592 : cluster [DBG] pgmap v1046: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-10T07:40:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:42 vm03 bash[23382]: cluster 2026-03-10T07:40:40.750354+0000 mgr.y (mgr.24407) 592 : cluster [DBG] pgmap v1046: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-10T07:40:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:42 vm03 bash[23382]: audit 2026-03-10T07:40:41.219104+0000 mon.a (mon.0) 3337 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:40:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:42 vm03 bash[23382]: audit 2026-03-10T07:40:41.219104+0000 mon.a (mon.0) 3337 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:40:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:42 vm03 bash[23382]: audit 2026-03-10T07:40:41.224353+0000 mon.b (mon.1) 616 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:40:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:42 vm03 bash[23382]: audit 2026-03-10T07:40:41.224353+0000 mon.b (mon.1) 616 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:40:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:42 vm03 bash[23382]: cluster 2026-03-10T07:40:41.228981+0000 mon.a (mon.0) 3338 : cluster [DBG] osdmap e676: 8 total, 8 up, 8 in 2026-03-10T07:40:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:42 vm03 bash[23382]: cluster 2026-03-10T07:40:41.228981+0000 mon.a (mon.0) 3338 : cluster [DBG] osdmap e676: 8 total, 8 up, 8 in 2026-03-10T07:40:42.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:42 vm03 bash[23382]: audit 2026-03-10T07:40:41.229804+0000 mon.a (mon.0) 3339 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:40:42.514 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:42 vm03 bash[23382]: audit 2026-03-10T07:40:41.229804+0000 mon.a (mon.0) 3339 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:40:42.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:42 vm00 bash[28005]: cluster 2026-03-10T07:40:40.750354+0000 mgr.y (mgr.24407) 592 : cluster [DBG] pgmap v1046: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-10T07:40:42.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:42 vm00 bash[28005]: cluster 2026-03-10T07:40:40.750354+0000 mgr.y (mgr.24407) 592 : cluster [DBG] pgmap v1046: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-10T07:40:42.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:42 vm00 bash[28005]: audit 2026-03-10T07:40:41.219104+0000 mon.a (mon.0) 3337 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:40:42.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:42 vm00 bash[28005]: audit 2026-03-10T07:40:41.219104+0000 mon.a (mon.0) 3337 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:40:42.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:42 vm00 bash[28005]: audit 2026-03-10T07:40:41.224353+0000 mon.b (mon.1) 616 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:40:42.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:42 vm00 bash[28005]: audit 2026-03-10T07:40:41.224353+0000 mon.b (mon.1) 616 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:40:42.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:42 vm00 bash[28005]: cluster 2026-03-10T07:40:41.228981+0000 mon.a (mon.0) 3338 : cluster [DBG] osdmap e676: 8 total, 8 up, 8 in 2026-03-10T07:40:42.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:42 vm00 bash[28005]: cluster 2026-03-10T07:40:41.228981+0000 mon.a (mon.0) 3338 : cluster [DBG] osdmap e676: 8 total, 8 up, 8 in 2026-03-10T07:40:42.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:42 vm00 bash[28005]: audit 2026-03-10T07:40:41.229804+0000 mon.a (mon.0) 3339 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:40:42.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:42 vm00 bash[28005]: audit 2026-03-10T07:40:41.229804+0000 mon.a (mon.0) 3339 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:40:42.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:42 vm00 bash[20701]: cluster 2026-03-10T07:40:40.750354+0000 mgr.y (mgr.24407) 592 : cluster [DBG] pgmap v1046: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-10T07:40:42.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:42 vm00 bash[20701]: cluster 2026-03-10T07:40:40.750354+0000 mgr.y (mgr.24407) 592 : cluster [DBG] pgmap v1046: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-10T07:40:42.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:42 vm00 bash[20701]: audit 2026-03-10T07:40:41.219104+0000 mon.a (mon.0) 3337 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:40:42.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:42 vm00 bash[20701]: audit 2026-03-10T07:40:41.219104+0000 mon.a (mon.0) 3337 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:40:42.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:42 vm00 bash[20701]: audit 2026-03-10T07:40:41.224353+0000 mon.b (mon.1) 616 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:40:42.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:42 vm00 bash[20701]: audit 2026-03-10T07:40:41.224353+0000 mon.b (mon.1) 616 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:40:42.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:42 vm00 bash[20701]: cluster 2026-03-10T07:40:41.228981+0000 mon.a (mon.0) 3338 : cluster [DBG] osdmap e676: 8 total, 8 up, 8 in 2026-03-10T07:40:42.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:42 vm00 bash[20701]: cluster 2026-03-10T07:40:41.228981+0000 mon.a (mon.0) 3338 : cluster [DBG] osdmap e676: 8 total, 8 up, 8 in 2026-03-10T07:40:42.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:42 vm00 bash[20701]: audit 2026-03-10T07:40:41.229804+0000 mon.a (mon.0) 3339 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:40:42.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:42 vm00 bash[20701]: audit 2026-03-10T07:40:41.229804+0000 mon.a (mon.0) 3339 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:40:43.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:43 vm00 bash[28005]: audit 2026-03-10T07:40:42.222204+0000 mon.a (mon.0) 3340 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:40:43.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:43 vm00 bash[28005]: audit 2026-03-10T07:40:42.222204+0000 mon.a (mon.0) 3340 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:40:43.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:43 vm00 bash[28005]: audit 2026-03-10T07:40:42.227978+0000 mon.b (mon.1) 617 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:40:43.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:43 vm00 bash[28005]: audit 2026-03-10T07:40:42.227978+0000 mon.b (mon.1) 617 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:40:43.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:43 vm00 bash[28005]: cluster 2026-03-10T07:40:42.228453+0000 mon.a (mon.0) 3341 : cluster [DBG] osdmap e677: 8 total, 8 up, 8 in 2026-03-10T07:40:43.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:43 vm00 bash[28005]: cluster 2026-03-10T07:40:42.228453+0000 mon.a (mon.0) 3341 : cluster [DBG] osdmap e677: 8 total, 8 up, 8 in 2026-03-10T07:40:43.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:43 vm00 bash[28005]: audit 2026-03-10T07:40:42.230598+0000 mon.a (mon.0) 3342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:40:43.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:43 vm00 bash[28005]: audit 2026-03-10T07:40:42.230598+0000 mon.a (mon.0) 3342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:40:43.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:43 vm00 bash[20701]: audit 2026-03-10T07:40:42.222204+0000 mon.a (mon.0) 3340 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:40:43.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:43 vm00 bash[20701]: audit 2026-03-10T07:40:42.222204+0000 mon.a (mon.0) 3340 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:40:43.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:43 vm00 bash[20701]: audit 2026-03-10T07:40:42.227978+0000 mon.b (mon.1) 617 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:40:43.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:43 vm00 bash[20701]: audit 2026-03-10T07:40:42.227978+0000 mon.b (mon.1) 617 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:40:43.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:43 vm00 bash[20701]: cluster 2026-03-10T07:40:42.228453+0000 mon.a (mon.0) 3341 : cluster [DBG] osdmap e677: 8 total, 8 up, 8 in 2026-03-10T07:40:43.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:43 vm00 bash[20701]: cluster 2026-03-10T07:40:42.228453+0000 mon.a (mon.0) 3341 : cluster [DBG] osdmap e677: 8 total, 8 up, 8 in 2026-03-10T07:40:43.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:43 vm00 bash[20701]: audit 2026-03-10T07:40:42.230598+0000 mon.a (mon.0) 3342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:40:43.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:43 vm00 bash[20701]: audit 2026-03-10T07:40:42.230598+0000 mon.a (mon.0) 3342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:40:43.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:40:43 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:40:43.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:43 vm03 bash[23382]: audit 2026-03-10T07:40:42.222204+0000 mon.a (mon.0) 3340 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:40:43.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:43 vm03 bash[23382]: audit 2026-03-10T07:40:42.222204+0000 mon.a (mon.0) 3340 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:40:43.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:43 vm03 bash[23382]: audit 2026-03-10T07:40:42.227978+0000 mon.b (mon.1) 617 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:40:43.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:43 vm03 bash[23382]: audit 2026-03-10T07:40:42.227978+0000 mon.b (mon.1) 617 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:40:43.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:43 vm03 bash[23382]: cluster 2026-03-10T07:40:42.228453+0000 mon.a (mon.0) 3341 : cluster [DBG] osdmap e677: 8 total, 8 up, 8 in 2026-03-10T07:40:43.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:43 vm03 bash[23382]: cluster 2026-03-10T07:40:42.228453+0000 mon.a (mon.0) 3341 : cluster [DBG] osdmap e677: 8 total, 8 up, 8 in 2026-03-10T07:40:43.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:43 vm03 bash[23382]: audit 2026-03-10T07:40:42.230598+0000 mon.a (mon.0) 3342 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:40:43.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:43 vm03 bash[23382]: audit 2026-03-10T07:40:42.230598+0000 mon.a (mon.0) 3342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:40:44.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:44 vm00 bash[28005]: cluster 2026-03-10T07:40:42.750737+0000 mgr.y (mgr.24407) 593 : cluster [DBG] pgmap v1049: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-10T07:40:44.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:44 vm00 bash[28005]: cluster 2026-03-10T07:40:42.750737+0000 mgr.y (mgr.24407) 593 : cluster [DBG] pgmap v1049: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-10T07:40:44.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:44 vm00 bash[28005]: audit 2026-03-10T07:40:43.315073+0000 mon.a (mon.0) 3343 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:40:44.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:44 vm00 bash[28005]: audit 2026-03-10T07:40:43.315073+0000 mon.a (mon.0) 3343 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:40:44.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:44 vm00 bash[28005]: cluster 2026-03-10T07:40:43.317799+0000 mon.a (mon.0) 3344 : cluster [DBG] osdmap e678: 8 total, 8 up, 8 in 2026-03-10T07:40:44.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:44 vm00 bash[28005]: cluster 2026-03-10T07:40:43.317799+0000 mon.a (mon.0) 3344 : cluster [DBG] osdmap e678: 8 total, 8 up, 8 in 2026-03-10T07:40:44.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:44 vm00 bash[28005]: audit 2026-03-10T07:40:43.320442+0000 mon.b (mon.1) 618 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:40:44.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:44 vm00 bash[28005]: audit 2026-03-10T07:40:43.320442+0000 mon.b (mon.1) 618 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:40:44.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:44 vm00 bash[28005]: audit 2026-03-10T07:40:43.345777+0000 mon.a (mon.0) 3345 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:40:44.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:44 vm00 bash[28005]: audit 2026-03-10T07:40:43.345777+0000 mon.a (mon.0) 3345 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:40:44.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:44 vm00 bash[20701]: cluster 2026-03-10T07:40:42.750737+0000 mgr.y (mgr.24407) 593 : cluster [DBG] pgmap v1049: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-10T07:40:44.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:44 vm00 bash[20701]: cluster 2026-03-10T07:40:42.750737+0000 mgr.y (mgr.24407) 593 : cluster [DBG] pgmap v1049: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-10T07:40:44.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:44 vm00 bash[20701]: audit 2026-03-10T07:40:43.315073+0000 mon.a (mon.0) 3343 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:40:44.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:44 vm00 bash[20701]: audit 2026-03-10T07:40:43.315073+0000 mon.a (mon.0) 3343 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:40:44.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:44 vm00 bash[20701]: cluster 2026-03-10T07:40:43.317799+0000 mon.a (mon.0) 3344 : cluster [DBG] osdmap e678: 8 total, 8 up, 8 in 2026-03-10T07:40:44.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:44 vm00 bash[20701]: cluster 2026-03-10T07:40:43.317799+0000 mon.a (mon.0) 3344 : cluster [DBG] osdmap e678: 8 total, 8 up, 8 in 2026-03-10T07:40:44.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:44 vm00 bash[20701]: audit 2026-03-10T07:40:43.320442+0000 mon.b (mon.1) 618 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:40:44.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:44 vm00 bash[20701]: audit 2026-03-10T07:40:43.320442+0000 mon.b (mon.1) 618 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:40:44.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:44 vm00 bash[20701]: audit 2026-03-10T07:40:43.345777+0000 mon.a (mon.0) 3345 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:40:44.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:44 vm00 bash[20701]: audit 2026-03-10T07:40:43.345777+0000 mon.a (mon.0) 3345 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:40:44.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:44 vm03 bash[23382]: cluster 2026-03-10T07:40:42.750737+0000 mgr.y (mgr.24407) 593 : cluster [DBG] pgmap v1049: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-10T07:40:44.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:44 vm03 bash[23382]: cluster 2026-03-10T07:40:42.750737+0000 mgr.y (mgr.24407) 593 : cluster [DBG] pgmap v1049: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-10T07:40:44.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:44 vm03 bash[23382]: audit 2026-03-10T07:40:43.315073+0000 mon.a (mon.0) 3343 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:40:44.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:44 vm03 bash[23382]: audit 2026-03-10T07:40:43.315073+0000 mon.a (mon.0) 3343 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:40:44.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:44 vm03 bash[23382]: cluster 2026-03-10T07:40:43.317799+0000 mon.a (mon.0) 3344 : cluster [DBG] osdmap e678: 8 total, 8 up, 8 in 2026-03-10T07:40:44.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:44 vm03 bash[23382]: cluster 2026-03-10T07:40:43.317799+0000 mon.a (mon.0) 3344 : cluster [DBG] osdmap e678: 8 total, 8 up, 8 in 2026-03-10T07:40:44.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:44 vm03 bash[23382]: audit 2026-03-10T07:40:43.320442+0000 mon.b (mon.1) 618 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:40:44.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:44 vm03 bash[23382]: audit 2026-03-10T07:40:43.320442+0000 mon.b (mon.1) 618 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:40:44.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:44 vm03 bash[23382]: audit 2026-03-10T07:40:43.345777+0000 mon.a (mon.0) 3345 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:40:44.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:44 vm03 bash[23382]: audit 2026-03-10T07:40:43.345777+0000 mon.a (mon.0) 3345 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T07:40:45.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:45 vm00 bash[28005]: audit 2026-03-10T07:40:43.540322+0000 mgr.y (mgr.24407) 594 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:40:45.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:45 vm00 bash[28005]: audit 2026-03-10T07:40:43.540322+0000 mgr.y (mgr.24407) 594 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:40:45.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:45 vm00 bash[28005]: audit 2026-03-10T07:40:44.344301+0000 mon.a (mon.0) 3346 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T07:40:45.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:45 vm00 bash[28005]: audit 2026-03-10T07:40:44.344301+0000 mon.a (mon.0) 3346 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T07:40:45.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:45 vm00 bash[28005]: cluster 2026-03-10T07:40:44.363132+0000 mon.a (mon.0) 3347 : cluster [DBG] osdmap e679: 8 total, 8 up, 8 in 2026-03-10T07:40:45.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:45 vm00 bash[28005]: cluster 2026-03-10T07:40:44.363132+0000 mon.a (mon.0) 3347 : cluster [DBG] osdmap e679: 8 total, 8 up, 8 in 2026-03-10T07:40:45.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:45 vm00 bash[20701]: audit 2026-03-10T07:40:43.540322+0000 mgr.y (mgr.24407) 594 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:40:45.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:45 vm00 bash[20701]: audit 2026-03-10T07:40:43.540322+0000 mgr.y (mgr.24407) 594 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:40:45.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:45 vm00 bash[20701]: audit 2026-03-10T07:40:44.344301+0000 mon.a (mon.0) 3346 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T07:40:45.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:45 vm00 bash[20701]: audit 2026-03-10T07:40:44.344301+0000 mon.a (mon.0) 3346 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T07:40:45.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:45 vm00 bash[20701]: cluster 2026-03-10T07:40:44.363132+0000 mon.a (mon.0) 3347 : cluster [DBG] osdmap e679: 8 total, 8 up, 8 in 2026-03-10T07:40:45.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:45 vm00 bash[20701]: cluster 2026-03-10T07:40:44.363132+0000 mon.a (mon.0) 3347 : cluster [DBG] osdmap e679: 8 total, 8 up, 8 in 2026-03-10T07:40:45.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:45 vm03 bash[23382]: audit 2026-03-10T07:40:43.540322+0000 mgr.y (mgr.24407) 594 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:40:45.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:45 vm03 bash[23382]: audit 2026-03-10T07:40:43.540322+0000 mgr.y (mgr.24407) 594 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:40:45.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:45 vm03 bash[23382]: audit 2026-03-10T07:40:44.344301+0000 mon.a (mon.0) 3346 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T07:40:45.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:45 vm03 bash[23382]: audit 2026-03-10T07:40:44.344301+0000 mon.a (mon.0) 3346 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T07:40:45.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:45 vm03 bash[23382]: cluster 2026-03-10T07:40:44.363132+0000 mon.a (mon.0) 3347 : cluster [DBG] osdmap e679: 8 total, 8 up, 8 in 2026-03-10T07:40:45.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:45 vm03 bash[23382]: cluster 2026-03-10T07:40:44.363132+0000 mon.a (mon.0) 3347 : cluster [DBG] osdmap e679: 8 total, 8 up, 8 in 2026-03-10T07:40:46.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:46 vm00 bash[28005]: cluster 2026-03-10T07:40:44.751354+0000 mgr.y (mgr.24407) 595 : cluster [DBG] pgmap v1052: 268 pgs: 5 creating+peering, 263 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:40:46.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:46 vm00 bash[28005]: cluster 2026-03-10T07:40:44.751354+0000 mgr.y (mgr.24407) 595 : cluster [DBG] pgmap v1052: 268 pgs: 5 creating+peering, 263 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:40:46.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:46 vm00 bash[28005]: audit 2026-03-10T07:40:45.373655+0000 mon.a (mon.0) 3348 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:46.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:46 vm00 bash[28005]: audit 2026-03-10T07:40:45.373655+0000 mon.a (mon.0) 3348 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:46.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:46 vm00 bash[28005]: audit 2026-03-10T07:40:45.374288+0000 mon.a (mon.0) 3349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138"}]: dispatch 2026-03-10T07:40:46.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:46 vm00 bash[28005]: audit 2026-03-10T07:40:45.374288+0000 mon.a (mon.0) 3349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138"}]: dispatch 2026-03-10T07:40:46.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:46 vm00 bash[28005]: audit 2026-03-10T07:40:45.374460+0000 mon.b (mon.1) 619 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:46.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:46 vm00 bash[28005]: audit 2026-03-10T07:40:45.374460+0000 mon.b (mon.1) 619 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:46.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:46 vm00 bash[28005]: audit 2026-03-10T07:40:45.375251+0000 mon.b (mon.1) 620 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138"}]: dispatch 2026-03-10T07:40:46.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:46 vm00 bash[28005]: audit 2026-03-10T07:40:45.375251+0000 mon.b (mon.1) 620 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138"}]: dispatch 2026-03-10T07:40:46.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:46 vm00 bash[20701]: cluster 2026-03-10T07:40:44.751354+0000 mgr.y (mgr.24407) 595 : cluster [DBG] pgmap v1052: 268 pgs: 5 creating+peering, 263 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:40:46.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:46 vm00 bash[20701]: cluster 2026-03-10T07:40:44.751354+0000 mgr.y (mgr.24407) 595 : cluster [DBG] pgmap v1052: 268 pgs: 5 creating+peering, 263 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:40:46.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:46 vm00 bash[20701]: audit 2026-03-10T07:40:45.373655+0000 mon.a (mon.0) 3348 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:46.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:46 vm00 bash[20701]: audit 2026-03-10T07:40:45.373655+0000 mon.a (mon.0) 3348 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:46.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:46 vm00 bash[20701]: audit 2026-03-10T07:40:45.374288+0000 mon.a (mon.0) 3349 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138"}]: dispatch 2026-03-10T07:40:46.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:46 vm00 bash[20701]: audit 2026-03-10T07:40:45.374288+0000 mon.a (mon.0) 3349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138"}]: dispatch 2026-03-10T07:40:46.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:46 vm00 bash[20701]: audit 2026-03-10T07:40:45.374460+0000 mon.b (mon.1) 619 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:46.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:46 vm00 bash[20701]: audit 2026-03-10T07:40:45.374460+0000 mon.b (mon.1) 619 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:46.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:46 vm00 bash[20701]: audit 2026-03-10T07:40:45.375251+0000 mon.b (mon.1) 620 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138"}]: dispatch 2026-03-10T07:40:46.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:46 vm00 bash[20701]: audit 2026-03-10T07:40:45.375251+0000 mon.b (mon.1) 620 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138"}]: dispatch 2026-03-10T07:40:46.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:46 vm03 bash[23382]: cluster 2026-03-10T07:40:44.751354+0000 mgr.y (mgr.24407) 595 : cluster [DBG] pgmap v1052: 268 pgs: 5 creating+peering, 263 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:40:46.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:46 vm03 bash[23382]: cluster 2026-03-10T07:40:44.751354+0000 mgr.y (mgr.24407) 595 : cluster [DBG] pgmap v1052: 268 pgs: 5 creating+peering, 263 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:40:46.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:46 vm03 bash[23382]: audit 2026-03-10T07:40:45.373655+0000 mon.a (mon.0) 3348 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:46.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:46 vm03 bash[23382]: audit 2026-03-10T07:40:45.373655+0000 mon.a (mon.0) 3348 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:46.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:46 vm03 bash[23382]: audit 2026-03-10T07:40:45.374288+0000 mon.a (mon.0) 3349 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138"}]: dispatch 2026-03-10T07:40:46.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:46 vm03 bash[23382]: audit 2026-03-10T07:40:45.374288+0000 mon.a (mon.0) 3349 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138"}]: dispatch 2026-03-10T07:40:46.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:46 vm03 bash[23382]: audit 2026-03-10T07:40:45.374460+0000 mon.b (mon.1) 619 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:46.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:46 vm03 bash[23382]: audit 2026-03-10T07:40:45.374460+0000 mon.b (mon.1) 619 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:40:46.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:46 vm03 bash[23382]: audit 2026-03-10T07:40:45.375251+0000 mon.b (mon.1) 620 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138"}]: dispatch 2026-03-10T07:40:46.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:46 vm03 bash[23382]: audit 2026-03-10T07:40:45.375251+0000 mon.b (mon.1) 620 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138"}]: dispatch 2026-03-10T07:40:47.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:47 vm03 bash[23382]: audit 2026-03-10T07:40:46.357673+0000 mon.a (mon.0) 3350 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138"}]': finished 2026-03-10T07:40:47.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:47 vm03 bash[23382]: audit 2026-03-10T07:40:46.357673+0000 mon.a (mon.0) 3350 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138"}]': finished 2026-03-10T07:40:47.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:47 vm03 bash[23382]: cluster 2026-03-10T07:40:46.365965+0000 mon.a (mon.0) 3351 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in 2026-03-10T07:40:47.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:47 vm03 bash[23382]: cluster 2026-03-10T07:40:46.365965+0000 mon.a (mon.0) 3351 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in 2026-03-10T07:40:47.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:47 vm00 bash[28005]: audit 2026-03-10T07:40:46.357673+0000 mon.a (mon.0) 3350 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138"}]': finished 2026-03-10T07:40:47.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:47 vm00 bash[28005]: audit 2026-03-10T07:40:46.357673+0000 mon.a (mon.0) 3350 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138"}]': finished 2026-03-10T07:40:47.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:47 vm00 bash[28005]: cluster 2026-03-10T07:40:46.365965+0000 mon.a (mon.0) 3351 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in 2026-03-10T07:40:47.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:47 vm00 bash[28005]: cluster 2026-03-10T07:40:46.365965+0000 mon.a (mon.0) 3351 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in 2026-03-10T07:40:47.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:47 vm00 bash[20701]: audit 2026-03-10T07:40:46.357673+0000 mon.a (mon.0) 3350 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138"}]': finished 2026-03-10T07:40:47.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:47 vm00 bash[20701]: audit 2026-03-10T07:40:46.357673+0000 mon.a (mon.0) 3350 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-138"}]': finished 2026-03-10T07:40:47.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:47 vm00 bash[20701]: cluster 2026-03-10T07:40:46.365965+0000 mon.a (mon.0) 3351 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in 2026-03-10T07:40:47.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:47 vm00 bash[20701]: cluster 2026-03-10T07:40:46.365965+0000 mon.a (mon.0) 3351 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in 2026-03-10T07:40:48.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:48 vm03 bash[23382]: cluster 2026-03-10T07:40:46.751684+0000 mgr.y (mgr.24407) 596 : cluster [DBG] pgmap v1054: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T07:40:48.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:48 vm03 bash[23382]: cluster 2026-03-10T07:40:46.751684+0000 mgr.y (mgr.24407) 596 : cluster [DBG] pgmap v1054: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T07:40:48.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:48 vm03 bash[23382]: cluster 2026-03-10T07:40:47.377672+0000 mon.a (mon.0) 3352 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:40:48.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:48 vm03 bash[23382]: cluster 2026-03-10T07:40:47.377672+0000 mon.a (mon.0) 3352 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:40:48.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:48 vm03 bash[23382]: cluster 2026-03-10T07:40:47.401629+0000 mon.a (mon.0) 3353 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-10T07:40:48.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:48 vm03 bash[23382]: cluster 2026-03-10T07:40:47.401629+0000 mon.a (mon.0) 3353 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-10T07:40:48.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:48 vm00 bash[28005]: cluster 2026-03-10T07:40:46.751684+0000 mgr.y (mgr.24407) 596 : cluster [DBG] pgmap v1054: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T07:40:48.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:48 vm00 
bash[28005]: cluster 2026-03-10T07:40:46.751684+0000 mgr.y (mgr.24407) 596 : cluster [DBG] pgmap v1054: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T07:40:48.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:48 vm00 bash[28005]: cluster 2026-03-10T07:40:47.377672+0000 mon.a (mon.0) 3352 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:40:48.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:48 vm00 bash[28005]: cluster 2026-03-10T07:40:47.377672+0000 mon.a (mon.0) 3352 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:40:48.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:48 vm00 bash[28005]: cluster 2026-03-10T07:40:47.401629+0000 mon.a (mon.0) 3353 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-10T07:40:48.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:48 vm00 bash[28005]: cluster 2026-03-10T07:40:47.401629+0000 mon.a (mon.0) 3353 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-10T07:40:48.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:48 vm00 bash[20701]: cluster 2026-03-10T07:40:46.751684+0000 mgr.y (mgr.24407) 596 : cluster [DBG] pgmap v1054: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T07:40:48.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:48 vm00 bash[20701]: cluster 2026-03-10T07:40:46.751684+0000 mgr.y (mgr.24407) 596 : cluster [DBG] pgmap v1054: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T07:40:48.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:48 vm00 bash[20701]: cluster 2026-03-10T07:40:47.377672+0000 mon.a (mon.0) 3352 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:40:48.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:48 vm00 bash[20701]: cluster 2026-03-10T07:40:47.377672+0000 mon.a (mon.0) 3352 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:40:48.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:48 vm00 bash[20701]: cluster 2026-03-10T07:40:47.401629+0000 mon.a (mon.0) 3353 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-10T07:40:48.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:48 vm00 bash[20701]: cluster 2026-03-10T07:40:47.401629+0000 mon.a (mon.0) 3353 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-10T07:40:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:49 vm03 bash[23382]: cluster 2026-03-10T07:40:48.401965+0000 mon.a (mon.0) 3354 : cluster [DBG] osdmap e682: 8 total, 8 up, 8 in 2026-03-10T07:40:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:49 vm03 bash[23382]: cluster 2026-03-10T07:40:48.401965+0000 mon.a (mon.0) 3354 : cluster [DBG] osdmap e682: 8 total, 8 up, 8 in 2026-03-10T07:40:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:49 vm03 bash[23382]: audit 2026-03-10T07:40:48.403754+0000 mon.b (mon.1) 621 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:49 vm03 bash[23382]: audit 2026-03-10T07:40:48.403754+0000 mon.b (mon.1) 621 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:49 vm03 bash[23382]: audit 2026-03-10T07:40:48.406799+0000 mon.a (mon.0) 3355 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:49 vm03 bash[23382]: audit 2026-03-10T07:40:48.406799+0000 mon.a (mon.0) 3355 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:40:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:49 vm03 bash[23382]: cluster 2026-03-10T07:40:48.752003+0000 mgr.y (mgr.24407) 597 : cluster [DBG] pgmap v1057: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T07:40:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:49 vm03 bash[23382]: cluster 2026-03-10T07:40:48.752003+0000 mgr.y (mgr.24407) 597 : cluster [DBG] pgmap v1057: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T07:40:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:49 vm03 bash[23382]: audit 2026-03-10T07:40:49.392975+0000 mon.a (mon.0) 3356 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-140","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:40:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:49 vm03 bash[23382]: audit 2026-03-10T07:40:49.392975+0000 mon.a (mon.0) 3356 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-140","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:40:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:49 vm03 bash[23382]: cluster 2026-03-10T07:40:49.398622+0000 mon.a (mon.0) 3357 : cluster [DBG] osdmap e683: 8 total, 8 up, 8 in 2026-03-10T07:40:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:49 vm03 bash[23382]: cluster 2026-03-10T07:40:49.398622+0000 mon.a (mon.0) 3357 : cluster [DBG] osdmap e683: 8 total, 8 up, 8 in 2026-03-10T07:40:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:49 vm03 bash[23382]: audit 2026-03-10T07:40:49.400255+0000 mon.b (mon.1) 622 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:40:49.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:49 vm03 bash[23382]: audit 2026-03-10T07:40:49.404243+0000 mon.a (mon.0) 3358 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:40:49.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:49 vm00 bash[20701]: cluster 2026-03-10T07:40:48.401965+0000 mon.a (mon.0) 3354 : cluster [DBG] osdmap e682: 8 total, 8 up, 8 in
2026-03-10T07:40:49.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:49 vm00 bash[20701]: audit 2026-03-10T07:40:48.403754+0000 mon.b (mon.1) 621 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-140","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:40:49.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:49 vm00 bash[20701]: audit 2026-03-10T07:40:48.406799+0000 mon.a (mon.0) 3355 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-140","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:40:49.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:49 vm00 bash[20701]: cluster 2026-03-10T07:40:48.752003+0000 mgr.y (mgr.24407) 597 : cluster [DBG] pgmap v1057: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T07:40:49.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:49 vm00 bash[20701]: audit 2026-03-10T07:40:49.392975+0000 mon.a (mon.0) 3356 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-140","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:40:49.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:49 vm00 bash[20701]: cluster 2026-03-10T07:40:49.398622+0000 mon.a (mon.0) 3357 : cluster [DBG] osdmap e683: 8 total, 8 up, 8 in
2026-03-10T07:40:49.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:49 vm00 bash[20701]: audit 2026-03-10T07:40:49.400255+0000 mon.b (mon.1) 622 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:40:49.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:49 vm00 bash[20701]: audit 2026-03-10T07:40:49.404243+0000 mon.a (mon.0) 3358 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:40:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:49 vm00 bash[28005]: cluster 2026-03-10T07:40:48.401965+0000 mon.a (mon.0) 3354 : cluster [DBG] osdmap e682: 8 total, 8 up, 8 in
2026-03-10T07:40:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:49 vm00 bash[28005]: audit 2026-03-10T07:40:48.403754+0000 mon.b (mon.1) 621 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-140","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:40:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:49 vm00 bash[28005]: audit 2026-03-10T07:40:48.406799+0000 mon.a (mon.0) 3355 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-140","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:40:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:49 vm00 bash[28005]: cluster 2026-03-10T07:40:48.752003+0000 mgr.y (mgr.24407) 597 : cluster [DBG] pgmap v1057: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T07:40:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:49 vm00 bash[28005]: audit 2026-03-10T07:40:49.392975+0000 mon.a (mon.0) 3356 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-140","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:40:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:49 vm00 bash[28005]: cluster 2026-03-10T07:40:49.398622+0000 mon.a (mon.0) 3357 : cluster [DBG] osdmap e683: 8 total, 8 up, 8 in
2026-03-10T07:40:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:49 vm00 bash[28005]: audit 2026-03-10T07:40:49.400255+0000 mon.b (mon.1) 622 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:40:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:49 vm00 bash[28005]: audit 2026-03-10T07:40:49.404243+0000 mon.a (mon.0) 3358 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:40:51.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:40:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:40:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:40:51.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:51 vm03 bash[23382]: audit 2026-03-10T07:40:50.396433+0000 mon.a (mon.0) 3359 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:40:51.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:51 vm03 bash[23382]: cluster 2026-03-10T07:40:50.400063+0000 mon.a (mon.0) 3360 : cluster [DBG] osdmap e684: 8 total, 8 up, 8 in
2026-03-10T07:40:51.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:51 vm03 bash[23382]: audit 2026-03-10T07:40:50.403409+0000 mon.b (mon.1) 623 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_count","val": "3"}]: dispatch
2026-03-10T07:40:51.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:51 vm03 bash[23382]: audit 2026-03-10T07:40:50.404523+0000 mon.a (mon.0) 3361 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_count","val": "3"}]: dispatch
2026-03-10T07:40:51.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:51 vm03 bash[23382]: cluster 2026-03-10T07:40:50.752333+0000 mgr.y (mgr.24407) 598 : cluster [DBG] pgmap v1060: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:40:51.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:51 vm00 bash[28005]: audit 2026-03-10T07:40:50.396433+0000 mon.a (mon.0) 3359 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:40:51.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:51 vm00 bash[28005]: cluster 2026-03-10T07:40:50.400063+0000 mon.a (mon.0) 3360 : cluster [DBG] osdmap e684: 8 total, 8 up, 8 in
2026-03-10T07:40:51.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:51 vm00 bash[28005]: audit 2026-03-10T07:40:50.403409+0000 mon.b (mon.1) 623 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_count","val": "3"}]: dispatch
2026-03-10T07:40:51.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:51 vm00 bash[28005]: audit 2026-03-10T07:40:50.404523+0000 mon.a (mon.0) 3361 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_count","val": "3"}]: dispatch
2026-03-10T07:40:51.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:51 vm00 bash[28005]: cluster 2026-03-10T07:40:50.752333+0000 mgr.y (mgr.24407) 598 : cluster [DBG] pgmap v1060: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:40:51.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:51 vm00 bash[20701]: audit 2026-03-10T07:40:50.396433+0000 mon.a (mon.0) 3359 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:40:51.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:51 vm00 bash[20701]: cluster 2026-03-10T07:40:50.400063+0000 mon.a (mon.0) 3360 : cluster [DBG] osdmap e684: 8 total, 8 up, 8 in
2026-03-10T07:40:51.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:51 vm00 bash[20701]: audit 2026-03-10T07:40:50.403409+0000 mon.b (mon.1) 623 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_count","val": "3"}]: dispatch
2026-03-10T07:40:51.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:51 vm00 bash[20701]: audit 2026-03-10T07:40:50.404523+0000 mon.a (mon.0) 3361 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_count","val": "3"}]: dispatch
2026-03-10T07:40:51.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:51 vm00 bash[20701]: cluster 2026-03-10T07:40:50.752333+0000 mgr.y (mgr.24407) 598 : cluster [DBG] pgmap v1060: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:40:52.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:52 vm03 bash[23382]: audit 2026-03-10T07:40:51.399010+0000 mon.a (mon.0) 3362 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_count","val": "3"}]': finished
2026-03-10T07:40:52.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:52 vm03 bash[23382]: cluster 2026-03-10T07:40:51.402667+0000 mon.a (mon.0) 3363 : cluster [DBG] osdmap e685: 8 total, 8 up, 8 in
2026-03-10T07:40:52.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:52 vm03 bash[23382]: audit 2026-03-10T07:40:51.411059+0000 mon.b (mon.1) 624 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_period","val": "3"}]: dispatch
2026-03-10T07:40:52.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:52 vm03 bash[23382]: audit 2026-03-10T07:40:51.426900+0000 mon.a (mon.0) 3364 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_period","val": "3"}]: dispatch
2026-03-10T07:40:52.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:52 vm00 bash[28005]: audit 2026-03-10T07:40:51.399010+0000 mon.a (mon.0) 3362 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_count","val": "3"}]': finished
2026-03-10T07:40:52.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:52 vm00 bash[28005]: cluster 2026-03-10T07:40:51.402667+0000 mon.a (mon.0) 3363 : cluster [DBG] osdmap e685: 8 total, 8 up, 8 in
2026-03-10T07:40:52.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:52 vm00 bash[28005]: audit 2026-03-10T07:40:51.411059+0000 mon.b (mon.1) 624 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_period","val": "3"}]: dispatch
2026-03-10T07:40:52.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:52 vm00 bash[28005]: audit 2026-03-10T07:40:51.426900+0000 mon.a (mon.0) 3364 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_period","val": "3"}]: dispatch
2026-03-10T07:40:52.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:52 vm00 bash[20701]: audit 2026-03-10T07:40:51.399010+0000 mon.a (mon.0) 3362 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_count","val": "3"}]': finished
2026-03-10T07:40:52.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:52 vm00 bash[20701]: cluster 2026-03-10T07:40:51.402667+0000 mon.a (mon.0) 3363 : cluster [DBG] osdmap e685: 8 total, 8 up, 8 in
2026-03-10T07:40:52.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:52 vm00 bash[20701]: audit 2026-03-10T07:40:51.411059+0000 mon.b (mon.1) 624 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_period","val": "3"}]: dispatch
2026-03-10T07:40:52.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:52 vm00 bash[20701]: audit 2026-03-10T07:40:51.426900+0000 mon.a (mon.0) 3364 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_period","val": "3"}]: dispatch
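The audit records above trace the cache-tier setup phase of the rados API test: pool test-rados-api-vm00-59782-140 is attached as a tier of test-rados-api-vm00-59782-111 and tagged with the rados application, and each mon command appears once as 'dispatch' and again as 'finished' when the map update commits (osdmap e682 -> e683 above). The test client issues these programmatically (hence from='client.? '); as a rough sketch only, the equivalent ceph CLI sequence would be:
  ceph osd tier add test-rados-api-vm00-59782-111 test-rados-api-vm00-59782-140 --force-nonempty
  ceph osd pool application enable test-rados-api-vm00-59782-140 rados --yes-i-really-mean-it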
2026-03-10T07:40:53.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:40:53 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:40:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:53 vm03 bash[23382]: audit 2026-03-10T07:40:52.414572+0000 mon.a (mon.0) 3365 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_period","val": "3"}]': finished
2026-03-10T07:40:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:53 vm03 bash[23382]: audit 2026-03-10T07:40:52.424359+0000 mon.b (mon.1) 625 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T07:40:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:53 vm03 bash[23382]: cluster 2026-03-10T07:40:52.424790+0000 mon.a (mon.0) 3366 : cluster [DBG] osdmap e686: 8 total, 8 up, 8 in
2026-03-10T07:40:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:53 vm03 bash[23382]: audit 2026-03-10T07:40:52.427202+0000 mon.a (mon.0) 3367 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T07:40:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:53 vm03 bash[23382]: cluster 2026-03-10T07:40:52.752671+0000 mgr.y (mgr.24407) 599 : cluster [DBG] pgmap v1063: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:40:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:53 vm03 bash[23382]: audit 2026-03-10T07:40:53.428958+0000 mon.a (mon.0) 3368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_type","val": "bloom"}]': finished
2026-03-10T07:40:53.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:53 vm03 bash[23382]: cluster 2026-03-10T07:40:53.431442+0000 mon.a (mon.0) 3369 : cluster [DBG] osdmap e687: 8 total, 8 up, 8 in
2026-03-10T07:40:53.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:53 vm00 bash[28005]: audit 2026-03-10T07:40:52.414572+0000 mon.a (mon.0) 3365 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_period","val": "3"}]': finished
2026-03-10T07:40:53.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:53 vm00 bash[28005]: audit 2026-03-10T07:40:52.424359+0000 mon.b (mon.1) 625 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T07:40:53.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:53 vm00 bash[28005]: cluster 2026-03-10T07:40:52.424790+0000 mon.a (mon.0) 3366 : cluster [DBG] osdmap e686: 8 total, 8 up, 8 in
2026-03-10T07:40:53.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:53 vm00 bash[28005]: audit 2026-03-10T07:40:52.427202+0000 mon.a (mon.0) 3367 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T07:40:53.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:53 vm00 bash[28005]: cluster 2026-03-10T07:40:52.752671+0000 mgr.y (mgr.24407) 599 : cluster [DBG] pgmap v1063: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:40:53.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:53 vm00 bash[28005]: audit 2026-03-10T07:40:53.428958+0000 mon.a (mon.0) 3368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_type","val": "bloom"}]': finished
2026-03-10T07:40:53.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:53 vm00 bash[28005]: cluster 2026-03-10T07:40:53.431442+0000 mon.a (mon.0) 3369 : cluster [DBG] osdmap e687: 8 total, 8 up, 8 in
2026-03-10T07:40:53.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:53 vm00 bash[20701]: audit 2026-03-10T07:40:52.414572+0000 mon.a (mon.0) 3365 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_period","val": "3"}]': finished
2026-03-10T07:40:53.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:53 vm00 bash[20701]: audit 2026-03-10T07:40:52.424359+0000 mon.b (mon.1) 625 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T07:40:53.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:53 vm00 bash[20701]: cluster 2026-03-10T07:40:52.424790+0000 mon.a (mon.0) 3366 : cluster [DBG] osdmap e686: 8 total, 8 up, 8 in
2026-03-10T07:40:53.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:53 vm00 bash[20701]: audit 2026-03-10T07:40:52.427202+0000 mon.a (mon.0) 3367 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T07:40:53.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:53 vm00 bash[20701]: cluster 2026-03-10T07:40:52.752671+0000 mgr.y (mgr.24407) 599 : cluster [DBG] pgmap v1063: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:40:53.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:53 vm00 bash[20701]: audit 2026-03-10T07:40:53.428958+0000 mon.a (mon.0) 3368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_type","val": "bloom"}]': finished
2026-03-10T07:40:53.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:53 vm00 bash[20701]: cluster 2026-03-10T07:40:53.431442+0000 mon.a (mon.0) 3369 : cluster [DBG] osdmap e687: 8 total, 8 up, 8 in
2026-03-10T07:40:54.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:54 vm03 bash[23382]: audit 2026-03-10T07:40:53.434813+0000 mon.b (mon.1) 626 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_fpp","val": ".01"}]: dispatch
2026-03-10T07:40:54.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:54 vm03 bash[23382]: audit 2026-03-10T07:40:53.444824+0000 mon.a (mon.0) 3370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_fpp","val": ".01"}]: dispatch
2026-03-10T07:40:54.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:54 vm03 bash[23382]: audit 2026-03-10T07:40:53.545416+0000 mgr.y (mgr.24407) 600 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:40:54.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:54 vm03 bash[23382]: audit 2026-03-10T07:40:54.444943+0000 mon.a (mon.0) 3371 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_fpp","val": ".01"}]': finished
2026-03-10T07:40:54.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:54 vm00 bash[28005]: audit 2026-03-10T07:40:53.434813+0000 mon.b (mon.1) 626 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_fpp","val": ".01"}]: dispatch
2026-03-10T07:40:54.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:54 vm00 bash[28005]: audit 2026-03-10T07:40:53.444824+0000 mon.a (mon.0) 3370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_fpp","val": ".01"}]: dispatch
2026-03-10T07:40:54.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:54 vm00 bash[28005]: audit 2026-03-10T07:40:53.545416+0000 mgr.y (mgr.24407) 600 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:40:54.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:54 vm00 bash[28005]: audit 2026-03-10T07:40:54.444943+0000 mon.a (mon.0) 3371 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_fpp","val": ".01"}]': finished
2026-03-10T07:40:54.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:54 vm00 bash[20701]: audit 2026-03-10T07:40:53.434813+0000 mon.b (mon.1) 626 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_fpp","val": ".01"}]: dispatch
2026-03-10T07:40:54.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:54 vm00 bash[20701]: audit 2026-03-10T07:40:53.444824+0000 mon.a (mon.0) 3370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_fpp","val": ".01"}]: dispatch
2026-03-10T07:40:54.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:54 vm00 bash[20701]: audit 2026-03-10T07:40:53.545416+0000 mgr.y (mgr.24407) 600 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:40:54.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:54 vm00 bash[20701]: audit 2026-03-10T07:40:54.444943+0000 mon.a (mon.0) 3371 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-140","var": "hit_set_fpp","val": ".01"}]': finished
2026-03-10T07:40:55.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:55 vm03 bash[23382]: cluster 2026-03-10T07:40:54.452951+0000 mon.a (mon.0) 3372 : cluster [DBG] osdmap e688: 8 total, 8 up, 8 in
2026-03-10T07:40:55.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:55 vm03 bash[23382]: cluster 2026-03-10T07:40:54.753305+0000 mgr.y (mgr.24407) 601 : cluster [DBG] pgmap v1066: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 0 op/s
2026-03-10T07:40:55.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:55 vm03 bash[23382]: audit 2026-03-10T07:40:54.786253+0000 mon.c (mon.2) 358 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:40:55.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:55 vm00 bash[28005]: cluster 2026-03-10T07:40:54.452951+0000 mon.a (mon.0) 3372 : cluster [DBG] osdmap e688: 8 total, 8 up, 8 in
2026-03-10T07:40:55.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:55 vm00 bash[28005]: cluster 2026-03-10T07:40:54.753305+0000 mgr.y (mgr.24407) 601 : cluster [DBG] pgmap v1066: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 0 op/s
2026-03-10T07:40:55.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:55 vm00 bash[28005]: audit 2026-03-10T07:40:54.786253+0000 mon.c (mon.2) 358 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:40:55.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:55 vm00 bash[20701]: cluster 2026-03-10T07:40:54.452951+0000 mon.a (mon.0) 3372 : cluster [DBG] osdmap e688: 8 total, 8 up, 8 in
2026-03-10T07:40:55.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:55 vm00 bash[20701]: cluster 2026-03-10T07:40:54.753305+0000 mgr.y (mgr.24407) 601 : cluster [DBG] pgmap v1066: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 0 op/s
2026-03-10T07:40:55.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:55 vm00 bash[20701]: audit 2026-03-10T07:40:54.786253+0000 mon.c (mon.2) 358 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:40:57.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:56 vm00 bash[28005]: cluster 2026-03-10T07:40:56.795856+0000 mon.a (mon.0) 3373 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:40:57.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:56 vm00 bash[20701]: cluster 2026-03-10T07:40:56.795856+0000 mon.a (mon.0) 3373 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:40:57.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:56 vm03 bash[23382]: cluster 2026-03-10T07:40:56.795856+0000 mon.a (mon.0) 3373 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:40:58.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:57 vm00 bash[28005]: cluster 2026-03-10T07:40:56.753811+0000 mgr.y (mgr.24407) 602 : cluster [DBG] pgmap v1067: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 957 B/s rd, 1.5 KiB/s wr, 1 op/s
2026-03-10T07:40:58.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:57 vm00 bash[20701]: cluster 2026-03-10T07:40:56.753811+0000 mgr.y (mgr.24407) 602 : cluster [DBG] pgmap v1067: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 957 B/s rd, 1.5 KiB/s wr, 1 op/s
2026-03-10T07:40:58.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:57 vm03 bash[23382]: cluster 2026-03-10T07:40:56.753811+0000 mgr.y (mgr.24407) 602 : cluster [DBG] pgmap v1067: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 957 B/s rd, 1.5 KiB/s wr, 1 op/s
2026-03-10T07:41:00.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:40:59 vm03 bash[23382]: cluster 2026-03-10T07:40:58.754205+0000 mgr.y (mgr.24407) 603 : cluster [DBG] pgmap v1068: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 808 B/s rd, 1.3 KiB/s wr, 0 op/s
2026-03-10T07:41:00.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:40:59 vm00 bash[28005]: cluster 2026-03-10T07:40:58.754205+0000 mgr.y (mgr.24407) 603 : cluster [DBG] pgmap v1068: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 808 B/s rd, 1.3 KiB/s wr, 0 op/s
2026-03-10T07:41:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:40:59 vm00 bash[20701]: cluster 2026-03-10T07:40:58.754205+0000 mgr.y (mgr.24407) 603 : cluster [DBG] pgmap v1068: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 808 B/s rd, 1.3 KiB/s wr, 0 op/s
2026-03-10T07:41:01.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:41:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:41:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:41:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:01 vm03 bash[23382]: cluster 2026-03-10T07:41:00.754931+0000 mgr.y (mgr.24407) 604 : cluster [DBG] pgmap v1069: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.1 KiB/s wr, 2 op/s
2026-03-10T07:41:02.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:01 vm00 bash[28005]: cluster 2026-03-10T07:41:00.754931+0000 mgr.y (mgr.24407) 604 : cluster [DBG] pgmap v1069: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.1 KiB/s wr, 2 op/s
2026-03-10T07:41:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:01 vm00 bash[20701]: cluster 2026-03-10T07:41:00.754931+0000 mgr.y (mgr.24407) 604 : cluster [DBG] pgmap v1069: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.1 KiB/s wr, 2 op/s
2026-03-10T07:41:03.929 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:41:03 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:41:04.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:03 vm03 bash[23382]: cluster 2026-03-10T07:41:02.755258+0000 mgr.y (mgr.24407) 605 : cluster [DBG] pgmap v1070: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 5.3 KiB/s wr, 1 op/s
2026-03-10T07:41:04.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:03 vm00 bash[28005]: cluster 2026-03-10T07:41:02.755258+0000 mgr.y (mgr.24407) 605 : cluster [DBG] pgmap v1070: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 5.3 KiB/s wr, 1 op/s
2026-03-10T07:41:04.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:03 vm00 bash[20701]: cluster 2026-03-10T07:41:02.755258+0000 mgr.y (mgr.24407) 605 : cluster [DBG] pgmap v1070: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 5.3 KiB/s wr, 1 op/s
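The hit_set_* pool options dispatched and finished above configure HitSet tracking on the cache pool, and each change commits in its own osdmap epoch (e684 through e688 in the records). A sketch of the same configuration via the CLI, with the values taken directly from the audit records:
  ceph osd pool set test-rados-api-vm00-59782-140 hit_set_count 3
  ceph osd pool set test-rados-api-vm00-59782-140 hit_set_period 3
  ceph osd pool set test-rados-api-vm00-59782-140 hit_set_type bloom
  ceph osd pool set test-rados-api-vm00-59782-140 hit_set_fpp .01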
2026-03-10T07:41:05.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:04 vm03 bash[23382]: audit 2026-03-10T07:41:03.547695+0000 mgr.y (mgr.24407) 606 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:41:05.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:04 vm00 bash[28005]: audit 2026-03-10T07:41:03.547695+0000 mgr.y (mgr.24407) 606 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:41:05.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:04 vm00 bash[20701]: audit 2026-03-10T07:41:03.547695+0000 mgr.y (mgr.24407) 606 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:41:06.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:06 vm00 bash[28005]: cluster 2026-03-10T07:41:04.756005+0000 mgr.y (mgr.24407) 607 : cluster [DBG] pgmap v1071: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 995 B/s rd, 8.8 KiB/s wr, 2 op/s
2026-03-10T07:41:06.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:06 vm00 bash[20701]: cluster 2026-03-10T07:41:04.756005+0000 mgr.y (mgr.24407) 607 : cluster [DBG] pgmap v1071: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 995 B/s rd, 8.8 KiB/s wr, 2 op/s
2026-03-10T07:41:06.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:06 vm03 bash[23382]: cluster 2026-03-10T07:41:04.756005+0000 mgr.y (mgr.24407) 607 : cluster [DBG] pgmap v1071: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 995 B/s rd, 8.8 KiB/s wr, 2 op/s
2026-03-10T07:41:07.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:07 vm00 bash[28005]: audit 2026-03-10T07:41:06.576424+0000 mon.a (mon.0) 3374 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:41:07.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:07 vm00 bash[28005]: audit 2026-03-10T07:41:06.577273+0000 mon.a (mon.0) 3375 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140"}]: dispatch
2026-03-10T07:41:07.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:07 vm00 bash[28005]: audit 2026-03-10T07:41:06.577430+0000 mon.b (mon.1) 627 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:41:07.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:07 vm00 bash[28005]: audit 2026-03-10T07:41:06.578448+0000 mon.b (mon.1) 628 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140"}]: dispatch
2026-03-10T07:41:07.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:07 vm00 bash[20701]: audit 2026-03-10T07:41:06.576424+0000 mon.a (mon.0) 3374 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:41:07.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:07 vm00 bash[20701]: audit 2026-03-10T07:41:06.577273+0000 mon.a (mon.0) 3375 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140"}]: dispatch 2026-03-10T07:41:07.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:07 vm00 bash[20701]: audit 2026-03-10T07:41:06.577273+0000 mon.a (mon.0) 3375 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140"}]: dispatch 2026-03-10T07:41:07.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:07 vm00 bash[20701]: audit 2026-03-10T07:41:06.577430+0000 mon.b (mon.1) 627 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:41:07.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:07 vm00 bash[20701]: audit 2026-03-10T07:41:06.577430+0000 mon.b (mon.1) 627 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:41:07.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:07 vm00 bash[20701]: audit 2026-03-10T07:41:06.578448+0000 mon.b (mon.1) 628 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140"}]: dispatch 2026-03-10T07:41:07.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:07 vm00 bash[20701]: audit 2026-03-10T07:41:06.578448+0000 mon.b (mon.1) 628 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140"}]: dispatch 2026-03-10T07:41:07.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:07 vm03 bash[23382]: audit 2026-03-10T07:41:06.576424+0000 mon.a (mon.0) 3374 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:41:07.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:07 vm03 bash[23382]: audit 2026-03-10T07:41:06.576424+0000 mon.a (mon.0) 3374 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:41:07.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:07 vm03 bash[23382]: audit 2026-03-10T07:41:06.577273+0000 mon.a (mon.0) 3375 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140"}]: dispatch 2026-03-10T07:41:07.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:07 vm03 bash[23382]: audit 2026-03-10T07:41:06.577273+0000 mon.a (mon.0) 3375 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140"}]: dispatch 2026-03-10T07:41:07.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:07 vm03 bash[23382]: audit 2026-03-10T07:41:06.577430+0000 mon.b (mon.1) 627 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:41:07.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:07 vm03 bash[23382]: audit 2026-03-10T07:41:06.577430+0000 mon.b (mon.1) 627 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:41:07.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:07 vm03 bash[23382]: audit 2026-03-10T07:41:06.578448+0000 mon.b (mon.1) 628 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140"}]: dispatch 2026-03-10T07:41:07.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:07 vm03 bash[23382]: audit 2026-03-10T07:41:06.578448+0000 mon.b (mon.1) 628 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140"}]: dispatch 2026-03-10T07:41:08.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:08 vm00 bash[28005]: cluster 2026-03-10T07:41:06.756550+0000 mgr.y (mgr.24407) 608 : cluster [DBG] pgmap v1072: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.9 KiB/s wr, 2 op/s 2026-03-10T07:41:08.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:08 vm00 bash[28005]: cluster 2026-03-10T07:41:06.756550+0000 mgr.y (mgr.24407) 608 : cluster [DBG] pgmap v1072: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.9 KiB/s wr, 2 op/s 2026-03-10T07:41:08.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:08 vm00 bash[28005]: audit 2026-03-10T07:41:07.078810+0000 mon.a (mon.0) 3376 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140"}]': finished 2026-03-10T07:41:08.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:08 vm00 bash[28005]: audit 2026-03-10T07:41:07.078810+0000 mon.a (mon.0) 3376 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140"}]': finished 2026-03-10T07:41:08.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:08 vm00 bash[28005]: cluster 2026-03-10T07:41:07.089638+0000 mon.a (mon.0) 3377 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-10T07:41:08.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:08 vm00 bash[28005]: cluster 2026-03-10T07:41:07.089638+0000 mon.a (mon.0) 3377 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-10T07:41:08.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:08 vm00 bash[20701]: cluster 2026-03-10T07:41:06.756550+0000 mgr.y (mgr.24407) 608 : cluster [DBG] pgmap v1072: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.9 KiB/s wr, 2 op/s 2026-03-10T07:41:08.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:08 vm00 bash[20701]: cluster 2026-03-10T07:41:06.756550+0000 mgr.y (mgr.24407) 608 : cluster [DBG] pgmap v1072: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.9 KiB/s wr, 2 op/s 2026-03-10T07:41:08.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:08 vm00 bash[20701]: audit 2026-03-10T07:41:07.078810+0000 mon.a (mon.0) 3376 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140"}]': finished 2026-03-10T07:41:08.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:08 vm00 bash[20701]: audit 2026-03-10T07:41:07.078810+0000 mon.a (mon.0) 3376 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140"}]': finished 2026-03-10T07:41:08.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:08 vm00 bash[20701]: cluster 2026-03-10T07:41:07.089638+0000 mon.a (mon.0) 3377 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-10T07:41:08.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:08 vm00 bash[20701]: cluster 2026-03-10T07:41:07.089638+0000 mon.a (mon.0) 3377 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-10T07:41:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:08 vm03 bash[23382]: cluster 2026-03-10T07:41:06.756550+0000 mgr.y (mgr.24407) 608 : cluster [DBG] pgmap v1072: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.9 KiB/s wr, 2 op/s 2026-03-10T07:41:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:08 vm03 bash[23382]: cluster 2026-03-10T07:41:06.756550+0000 mgr.y (mgr.24407) 608 : cluster [DBG] pgmap v1072: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.9 KiB/s wr, 2 op/s 2026-03-10T07:41:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:08 vm03 bash[23382]: audit 2026-03-10T07:41:07.078810+0000 mon.a (mon.0) 3376 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140"}]': finished 2026-03-10T07:41:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:08 vm03 bash[23382]: audit 2026-03-10T07:41:07.078810+0000 mon.a (mon.0) 3376 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-140"}]': finished 2026-03-10T07:41:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:08 vm03 bash[23382]: cluster 2026-03-10T07:41:07.089638+0000 mon.a (mon.0) 3377 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-10T07:41:08.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:08 vm03 bash[23382]: cluster 2026-03-10T07:41:07.089638+0000 mon.a (mon.0) 3377 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-10T07:41:09.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:09 vm00 bash[28005]: cluster 2026-03-10T07:41:08.115968+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-10T07:41:09.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:09 vm00 bash[28005]: cluster 2026-03-10T07:41:08.115968+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-10T07:41:09.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:09 vm00 bash[20701]: cluster 2026-03-10T07:41:08.115968+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-10T07:41:09.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:09 vm00 bash[20701]: cluster 2026-03-10T07:41:08.115968+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-10T07:41:09.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:09 vm03 bash[23382]: cluster 2026-03-10T07:41:08.115968+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-10T07:41:09.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:09 vm03 bash[23382]: cluster 2026-03-10T07:41:08.115968+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-10T07:41:10.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:10 vm03 bash[23382]: cluster 2026-03-10T07:41:08.756821+0000 mgr.y (mgr.24407) 609 : cluster [DBG] pgmap v1075: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T07:41:10.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:10 vm03 bash[23382]: cluster 2026-03-10T07:41:08.756821+0000 mgr.y (mgr.24407) 609 : cluster [DBG] pgmap v1075: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T07:41:10.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:10 vm03 bash[23382]: cluster 2026-03-10T07:41:09.133777+0000 mon.a (mon.0) 3379 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-10T07:41:10.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:10 vm03 bash[23382]: cluster 2026-03-10T07:41:09.133777+0000 mon.a (mon.0) 3379 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-10T07:41:10.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:10 vm03 bash[23382]: audit 2026-03-10T07:41:09.142008+0000 mon.a (mon.0) 3380 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:10.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:10 vm03 bash[23382]: audit 2026-03-10T07:41:09.142008+0000 mon.a (mon.0) 3380 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:10.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:10 vm03 bash[23382]: audit 2026-03-10T07:41:09.142754+0000 mon.b (mon.1) 629 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:10.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:10 vm03 bash[23382]: audit 2026-03-10T07:41:09.142754+0000 mon.b (mon.1) 629 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:10.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:10 vm03 bash[23382]: audit 2026-03-10T07:41:09.792497+0000 mon.c (mon.2) 359 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:41:10.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:10 vm03 bash[23382]: audit 2026-03-10T07:41:09.792497+0000 mon.c (mon.2) 359 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:41:10.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:10 vm00 bash[28005]: cluster 2026-03-10T07:41:08.756821+0000 mgr.y (mgr.24407) 609 : cluster [DBG] pgmap v1075: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T07:41:10.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:10 vm00 bash[28005]: cluster 2026-03-10T07:41:08.756821+0000 mgr.y (mgr.24407) 609 : cluster [DBG] pgmap v1075: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T07:41:10.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:10 vm00 bash[28005]: cluster 2026-03-10T07:41:09.133777+0000 mon.a (mon.0) 3379 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-10T07:41:10.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:10 vm00 bash[28005]: cluster 2026-03-10T07:41:09.133777+0000 mon.a (mon.0) 3379 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-10T07:41:10.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:10 vm00 bash[28005]: audit 2026-03-10T07:41:09.142008+0000 mon.a (mon.0) 3380 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:10.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:10 vm00 bash[28005]: audit 2026-03-10T07:41:09.142008+0000 mon.a (mon.0) 3380 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:10.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:10 vm00 bash[28005]: audit 2026-03-10T07:41:09.142754+0000 mon.b (mon.1) 629 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:10.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:10 vm00 bash[28005]: audit 2026-03-10T07:41:09.142754+0000 mon.b (mon.1) 629 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:10.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:10 vm00 bash[28005]: audit 2026-03-10T07:41:09.792497+0000 mon.c (mon.2) 359 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:41:10.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:10 vm00 bash[28005]: audit 2026-03-10T07:41:09.792497+0000 mon.c (mon.2) 359 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:41:10.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:10 vm00 bash[20701]: cluster 2026-03-10T07:41:08.756821+0000 mgr.y (mgr.24407) 609 : cluster [DBG] pgmap v1075: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T07:41:10.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:10 vm00 bash[20701]: cluster 2026-03-10T07:41:08.756821+0000 mgr.y (mgr.24407) 609 : cluster [DBG] pgmap v1075: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T07:41:10.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:10 vm00 bash[20701]: cluster 2026-03-10T07:41:09.133777+0000 mon.a (mon.0) 3379 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-10T07:41:10.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:10 vm00 bash[20701]: cluster 2026-03-10T07:41:09.133777+0000 mon.a (mon.0) 3379 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-10T07:41:10.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:10 vm00 bash[20701]: audit 2026-03-10T07:41:09.142008+0000 mon.a (mon.0) 3380 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:10.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:10 vm00 bash[20701]: audit 2026-03-10T07:41:09.142008+0000 mon.a (mon.0) 3380 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:10.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:10 vm00 bash[20701]: audit 2026-03-10T07:41:09.142754+0000 mon.b (mon.1) 629 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:10.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:10 vm00 bash[20701]: audit 2026-03-10T07:41:09.142754+0000 mon.b (mon.1) 629 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:10.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:10 vm00 bash[20701]: audit 2026-03-10T07:41:09.792497+0000 mon.c (mon.2) 359 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:41:10.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:10 vm00 bash[20701]: audit 2026-03-10T07:41:09.792497+0000 mon.c (mon.2) 359 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:41:11.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:41:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:41:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:41:11.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:11 vm00 bash[20701]: audit 2026-03-10T07:41:10.122933+0000 mon.a (mon.0) 3381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-142","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:41:11.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:11 vm00 bash[20701]: audit 2026-03-10T07:41:10.122933+0000 mon.a (mon.0) 3381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-142","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:41:11.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:11 vm00 bash[20701]: cluster 2026-03-10T07:41:10.131188+0000 mon.a (mon.0) 3382 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-10T07:41:11.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:11 vm00 bash[20701]: cluster 2026-03-10T07:41:10.131188+0000 mon.a (mon.0) 3382 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-10T07:41:11.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:11 vm00 bash[20701]: audit 2026-03-10T07:41:10.226477+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:11.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:11 vm00 bash[20701]: audit 2026-03-10T07:41:10.226477+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:11.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:11 vm00 bash[20701]: audit 2026-03-10T07:41:10.226742+0000 mon.b (mon.1) 630 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:11.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:11 vm00 bash[20701]: audit 2026-03-10T07:41:10.226742+0000 mon.b (mon.1) 630 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:11 vm00 bash[28005]: audit 2026-03-10T07:41:10.122933+0000 mon.a (mon.0) 3381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-142","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:41:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:11 vm00 bash[28005]: audit 2026-03-10T07:41:10.122933+0000 mon.a (mon.0) 3381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-142","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:41:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:11 vm00 bash[28005]: cluster 2026-03-10T07:41:10.131188+0000 mon.a (mon.0) 3382 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-10T07:41:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:11 vm00 bash[28005]: cluster 2026-03-10T07:41:10.131188+0000 mon.a (mon.0) 3382 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-10T07:41:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:11 vm00 bash[28005]: audit 2026-03-10T07:41:10.226477+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:11 vm00 bash[28005]: audit 2026-03-10T07:41:10.226477+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:11 vm00 bash[28005]: audit 2026-03-10T07:41:10.226742+0000 mon.b (mon.1) 630 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:11 vm00 bash[28005]: audit 2026-03-10T07:41:10.226742+0000 mon.b (mon.1) 630 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:11 vm03 bash[23382]: audit 2026-03-10T07:41:10.122933+0000 mon.a (mon.0) 3381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-142","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:41:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:11 vm03 bash[23382]: audit 2026-03-10T07:41:10.122933+0000 mon.a (mon.0) 3381 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-142","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:41:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:11 vm03 bash[23382]: cluster 2026-03-10T07:41:10.131188+0000 mon.a (mon.0) 3382 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-10T07:41:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:11 vm03 bash[23382]: cluster 2026-03-10T07:41:10.131188+0000 mon.a (mon.0) 3382 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-10T07:41:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:11 vm03 bash[23382]: audit 2026-03-10T07:41:10.226477+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:11 vm03 bash[23382]: audit 2026-03-10T07:41:10.226477+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:11 vm03 bash[23382]: audit 2026-03-10T07:41:10.226742+0000 mon.b (mon.1) 630 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:11 vm03 bash[23382]: audit 2026-03-10T07:41:10.226742+0000 mon.b (mon.1) 630 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:12.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:12 vm03 bash[23382]: cluster 2026-03-10T07:41:10.757154+0000 mgr.y (mgr.24407) 610 : cluster [DBG] pgmap v1078: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:41:12.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:12 vm03 bash[23382]: cluster 2026-03-10T07:41:10.757154+0000 mgr.y (mgr.24407) 610 : cluster [DBG] pgmap v1078: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:41:12.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:12 vm03 bash[23382]: audit 2026-03-10T07:41:11.204104+0000 mon.a (mon.0) 3384 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:41:12.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:12 vm03 bash[23382]: audit 2026-03-10T07:41:11.204104+0000 mon.a (mon.0) 3384 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:41:12.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:12 vm03 bash[23382]: audit 2026-03-10T07:41:11.209175+0000 mon.b (mon.1) 631 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:12.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:12 vm03 bash[23382]: audit 2026-03-10T07:41:11.209175+0000 mon.b (mon.1) 631 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:12.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:12 vm03 bash[23382]: cluster 2026-03-10T07:41:11.209899+0000 mon.a (mon.0) 3385 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-10T07:41:12.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:12 vm03 bash[23382]: cluster 2026-03-10T07:41:11.209899+0000 mon.a (mon.0) 3385 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-10T07:41:12.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:12 vm03 bash[23382]: audit 2026-03-10T07:41:11.213252+0000 mon.a (mon.0) 3386 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:12.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:12 vm03 bash[23382]: audit 2026-03-10T07:41:11.213252+0000 mon.a (mon.0) 3386 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:12.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:12 vm03 bash[23382]: audit 2026-03-10T07:41:12.209153+0000 mon.a (mon.0) 3387 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-142"}]': finished 2026-03-10T07:41:12.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:12 vm03 bash[23382]: audit 2026-03-10T07:41:12.209153+0000 mon.a (mon.0) 3387 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-142"}]': finished 2026-03-10T07:41:12.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:12 vm03 bash[23382]: cluster 2026-03-10T07:41:12.215819+0000 mon.a (mon.0) 3388 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-10T07:41:12.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:12 vm03 bash[23382]: cluster 2026-03-10T07:41:12.215819+0000 mon.a (mon.0) 3388 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:12 vm00 bash[28005]: cluster 2026-03-10T07:41:10.757154+0000 mgr.y (mgr.24407) 610 : cluster [DBG] pgmap v1078: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:12 vm00 bash[28005]: cluster 2026-03-10T07:41:10.757154+0000 mgr.y (mgr.24407) 610 : cluster [DBG] pgmap v1078: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:12 vm00 bash[28005]: audit 2026-03-10T07:41:11.204104+0000 mon.a (mon.0) 3384 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:12 vm00 bash[28005]: audit 2026-03-10T07:41:11.204104+0000 mon.a (mon.0) 3384 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:12 vm00 bash[28005]: audit 2026-03-10T07:41:11.209175+0000 mon.b (mon.1) 631 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:12 vm00 bash[28005]: audit 2026-03-10T07:41:11.209175+0000 mon.b (mon.1) 631 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:12 vm00 bash[28005]: cluster 2026-03-10T07:41:11.209899+0000 mon.a (mon.0) 3385 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:12 vm00 bash[28005]: cluster 2026-03-10T07:41:11.209899+0000 mon.a (mon.0) 3385 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:12 vm00 bash[28005]: audit 2026-03-10T07:41:11.213252+0000 mon.a (mon.0) 3386 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:12 vm00 bash[28005]: audit 2026-03-10T07:41:11.213252+0000 mon.a (mon.0) 3386 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:12 vm00 bash[28005]: audit 2026-03-10T07:41:12.209153+0000 mon.a (mon.0) 3387 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-142"}]': finished 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:12 vm00 bash[28005]: audit 2026-03-10T07:41:12.209153+0000 mon.a (mon.0) 3387 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-142"}]': finished 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:12 vm00 bash[28005]: cluster 2026-03-10T07:41:12.215819+0000 mon.a (mon.0) 3388 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:12 vm00 bash[28005]: cluster 2026-03-10T07:41:12.215819+0000 mon.a (mon.0) 3388 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:12 vm00 bash[20701]: cluster 2026-03-10T07:41:10.757154+0000 mgr.y (mgr.24407) 610 : cluster [DBG] pgmap v1078: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:12 vm00 bash[20701]: cluster 2026-03-10T07:41:10.757154+0000 mgr.y (mgr.24407) 610 : cluster [DBG] pgmap v1078: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:12 vm00 bash[20701]: audit 2026-03-10T07:41:11.204104+0000 mon.a (mon.0) 3384 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:12 vm00 bash[20701]: audit 2026-03-10T07:41:11.204104+0000 mon.a (mon.0) 3384 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:12 vm00 bash[20701]: audit 2026-03-10T07:41:11.209175+0000 mon.b (mon.1) 631 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:12 vm00 bash[20701]: audit 2026-03-10T07:41:11.209175+0000 mon.b (mon.1) 631 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:12 vm00 bash[20701]: cluster 2026-03-10T07:41:11.209899+0000 mon.a (mon.0) 3385 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:12 vm00 bash[20701]: cluster 2026-03-10T07:41:11.209899+0000 mon.a (mon.0) 3385 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:12 vm00 bash[20701]: audit 2026-03-10T07:41:11.213252+0000 mon.a (mon.0) 3386 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:12 vm00 bash[20701]: audit 2026-03-10T07:41:11.213252+0000 mon.a (mon.0) 3386 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:12 vm00 bash[20701]: audit 2026-03-10T07:41:12.209153+0000 mon.a (mon.0) 3387 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-142"}]': finished 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:12 vm00 bash[20701]: audit 2026-03-10T07:41:12.209153+0000 mon.a (mon.0) 3387 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-142"}]': finished 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:12 vm00 bash[20701]: cluster 2026-03-10T07:41:12.215819+0000 mon.a (mon.0) 3388 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-10T07:41:12.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:12 vm00 bash[20701]: cluster 2026-03-10T07:41:12.215819+0000 mon.a (mon.0) 3388 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-10T07:41:13.554 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:13 vm03 bash[23382]: audit 2026-03-10T07:41:12.214874+0000 mon.b (mon.1) 632 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-142", "mode": "writeback"}]: dispatch 2026-03-10T07:41:13.554 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:13 vm03 bash[23382]: audit 2026-03-10T07:41:12.214874+0000 mon.b (mon.1) 632 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-142", "mode": "writeback"}]: dispatch 2026-03-10T07:41:13.554 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:13 vm03 bash[23382]: audit 2026-03-10T07:41:12.219466+0000 mon.a (mon.0) 3389 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-142", "mode": "writeback"}]: dispatch 2026-03-10T07:41:13.554 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:13 vm03 bash[23382]: audit 2026-03-10T07:41:12.219466+0000 mon.a (mon.0) 3389 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-142", "mode": "writeback"}]: dispatch 2026-03-10T07:41:13.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:13 vm00 bash[28005]: audit 2026-03-10T07:41:12.214874+0000 mon.b (mon.1) 632 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-142", "mode": "writeback"}]: dispatch 2026-03-10T07:41:13.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:13 vm00 bash[28005]: audit 2026-03-10T07:41:12.214874+0000 mon.b (mon.1) 632 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-142", "mode": "writeback"}]: dispatch 2026-03-10T07:41:13.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:13 vm00 bash[28005]: audit 2026-03-10T07:41:12.219466+0000 mon.a (mon.0) 3389 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-142", "mode": "writeback"}]: dispatch 2026-03-10T07:41:13.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:13 vm00 bash[28005]: audit 2026-03-10T07:41:12.219466+0000 mon.a (mon.0) 3389 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-142", "mode": "writeback"}]: dispatch 2026-03-10T07:41:13.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:13 vm00 bash[20701]: audit 2026-03-10T07:41:12.214874+0000 mon.b (mon.1) 632 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-142", "mode": "writeback"}]: dispatch 2026-03-10T07:41:13.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:13 vm00 bash[20701]: audit 2026-03-10T07:41:12.214874+0000 mon.b (mon.1) 632 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-142", "mode": "writeback"}]: dispatch 2026-03-10T07:41:13.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:13 vm00 bash[20701]: audit 2026-03-10T07:41:12.219466+0000 mon.a (mon.0) 3389 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-142", "mode": "writeback"}]: dispatch 2026-03-10T07:41:13.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:13 vm00 bash[20701]: audit 2026-03-10T07:41:12.219466+0000 mon.a (mon.0) 3389 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-142", "mode": "writeback"}]: dispatch 2026-03-10T07:41:14.013 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:41:13 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:41:14.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:14 vm00 bash[20701]: cluster 2026-03-10T07:41:12.757516+0000 mgr.y (mgr.24407) 611 : cluster [DBG] pgmap v1081: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:41:14.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:14 vm00 bash[20701]: cluster 2026-03-10T07:41:12.757516+0000 mgr.y (mgr.24407) 611 : cluster [DBG] pgmap v1081: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:41:14.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:14 vm00 bash[20701]: cluster 2026-03-10T07:41:13.209233+0000 mon.a (mon.0) 3390 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:41:14.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:14 vm00 bash[20701]: cluster 2026-03-10T07:41:13.209233+0000 mon.a (mon.0) 3390 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:41:14.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:14 vm00 bash[20701]: audit 2026-03-10T07:41:13.287752+0000 mon.a (mon.0) 3391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-142", "mode": "writeback"}]': finished 2026-03-10T07:41:14.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:14 vm00 bash[20701]: audit 2026-03-10T07:41:13.287752+0000 mon.a (mon.0) 3391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-142", "mode": "writeback"}]': finished 2026-03-10T07:41:14.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:14 vm00 bash[20701]: cluster 2026-03-10T07:41:13.290553+0000 mon.a (mon.0) 3392 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in 2026-03-10T07:41:14.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:14 vm00 bash[20701]: cluster 2026-03-10T07:41:13.290553+0000 mon.a (mon.0) 3392 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in 2026-03-10T07:41:14.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:14 vm00 bash[20701]: audit 2026-03-10T07:41:13.293130+0000 mon.b (mon.1) 633 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:41:14.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:14 vm00 bash[20701]: audit 2026-03-10T07:41:13.293130+0000 mon.b (mon.1) 633 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:41:14.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:14 vm00 bash[20701]: audit 2026-03-10T07:41:13.294863+0000 mon.a (mon.0) 3393 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T07:41:14.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:14 vm00 bash[20701]: audit 2026-03-10T07:41:14.295991+0000 mon.a (mon.0) 3394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_count","val": "2"}]': finished
2026-03-10T07:41:14.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:14 vm00 bash[20701]: cluster 2026-03-10T07:41:14.300980+0000 mon.a (mon.0) 3395 : cluster [DBG] osdmap e696: 8 total, 8 up, 8 in
2026-03-10T07:41:14.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:14 vm00 bash[28005]: cluster 2026-03-10T07:41:12.757516+0000 mgr.y (mgr.24407) 611 : cluster [DBG] pgmap v1081: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:41:14.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:14 vm00 bash[28005]: cluster 2026-03-10T07:41:13.209233+0000 mon.a (mon.0) 3390 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:41:14.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:14 vm00 bash[28005]: audit 2026-03-10T07:41:13.287752+0000 mon.a (mon.0) 3391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-142", "mode": "writeback"}]': finished
2026-03-10T07:41:14.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:14 vm00 bash[28005]: cluster 2026-03-10T07:41:13.290553+0000 mon.a (mon.0) 3392 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in
2026-03-10T07:41:14.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:14 vm00 bash[28005]: audit 2026-03-10T07:41:13.293130+0000 mon.b (mon.1) 633 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T07:41:14.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:14 vm00 bash[28005]: audit 2026-03-10T07:41:13.294863+0000 mon.a (mon.0) 3393 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T07:41:14.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:14 vm00 bash[28005]: audit 2026-03-10T07:41:14.295991+0000 mon.a (mon.0) 3394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_count","val": "2"}]': finished
2026-03-10T07:41:14.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:14 vm00 bash[28005]: cluster 2026-03-10T07:41:14.300980+0000 mon.a (mon.0) 3395 : cluster [DBG] osdmap e696: 8 total, 8 up, 8 in
2026-03-10T07:41:14.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:14 vm03 bash[23382]: cluster 2026-03-10T07:41:12.757516+0000 mgr.y (mgr.24407) 611 : cluster [DBG] pgmap v1081: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:41:14.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:14 vm03 bash[23382]: cluster 2026-03-10T07:41:13.209233+0000 mon.a (mon.0) 3390 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:41:14.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:14 vm03 bash[23382]: audit 2026-03-10T07:41:13.287752+0000 mon.a (mon.0) 3391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-142", "mode": "writeback"}]': finished
2026-03-10T07:41:14.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:14 vm03 bash[23382]: cluster 2026-03-10T07:41:13.290553+0000 mon.a (mon.0) 3392 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in
2026-03-10T07:41:14.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:14 vm03 bash[23382]: audit 2026-03-10T07:41:13.293130+0000 mon.b (mon.1) 633 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T07:41:14.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:14 vm03 bash[23382]: audit 2026-03-10T07:41:13.294863+0000 mon.a (mon.0) 3393 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T07:41:14.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:14 vm03 bash[23382]: audit 2026-03-10T07:41:14.295991+0000 mon.a (mon.0) 3394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_count","val": "2"}]': finished
2026-03-10T07:41:14.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:14 vm03 bash[23382]: cluster 2026-03-10T07:41:14.300980+0000 mon.a (mon.0) 3395 : cluster [DBG] osdmap e696: 8 total, 8 up, 8 in
2026-03-10T07:41:15.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:15 vm00 bash[28005]: audit 2026-03-10T07:41:13.557886+0000 mgr.y (mgr.24407) 612 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:41:15.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:15 vm00 bash[28005]: audit 2026-03-10T07:41:14.301594+0000 mon.b (mon.1) 634 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T07:41:15.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:15 vm00 bash[28005]: audit 2026-03-10T07:41:14.306663+0000 mon.a (mon.0) 3396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T07:41:15.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:15 vm00 bash[28005]: audit 2026-03-10T07:41:14.553896+0000 mon.c (mon.2) 360 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:41:15.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:15 vm00 bash[28005]: audit 2026-03-10T07:41:14.834063+0000 mon.a (mon.0) 3397 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:41:15.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:15 vm00 bash[28005]: audit 2026-03-10T07:41:14.843034+0000 mon.a (mon.0) 3398 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:41:15.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:15 vm00 bash[28005]: audit 2026-03-10T07:41:15.136192+0000 mon.c (mon.2) 361 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:41:15.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:15 vm00 bash[28005]: audit 2026-03-10T07:41:15.136868+0000 mon.c (mon.2) 362 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:41:15.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:15 vm00 bash[28005]: audit 2026-03-10T07:41:15.142834+0000 mon.a (mon.0) 3399 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:41:15.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:15 vm00 bash[20701]: audit 2026-03-10T07:41:13.557886+0000 mgr.y (mgr.24407) 612 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:41:15.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:15 vm00 bash[20701]: audit 2026-03-10T07:41:14.301594+0000 mon.b (mon.1) 634 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T07:41:15.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:15 vm00 bash[20701]: audit 2026-03-10T07:41:14.306663+0000 mon.a (mon.0) 3396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T07:41:15.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:15 vm00 bash[20701]: audit 2026-03-10T07:41:14.553896+0000 mon.c (mon.2) 360 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:41:15.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:15 vm00 bash[20701]: audit 2026-03-10T07:41:14.834063+0000 mon.a (mon.0) 3397 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:41:15.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:15 vm00 bash[20701]: audit 2026-03-10T07:41:14.843034+0000 mon.a (mon.0) 3398 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:41:15.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:15 vm00 bash[20701]: audit 2026-03-10T07:41:15.136192+0000 mon.c (mon.2) 361 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:41:15.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:15 vm00 bash[20701]: audit 2026-03-10T07:41:15.136868+0000 mon.c (mon.2) 362 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:41:15.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:15 vm00 bash[20701]: audit 2026-03-10T07:41:15.142834+0000 mon.a (mon.0) 3399 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:41:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:15 vm03 bash[23382]: audit 2026-03-10T07:41:13.557886+0000 mgr.y (mgr.24407) 612 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:41:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:15 vm03 bash[23382]: audit 2026-03-10T07:41:14.301594+0000 mon.b (mon.1) 634 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T07:41:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:15 vm03 bash[23382]: audit 2026-03-10T07:41:14.306663+0000 mon.a (mon.0) 3396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T07:41:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:15 vm03 bash[23382]: audit 2026-03-10T07:41:14.553896+0000 mon.c (mon.2) 360 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:41:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:15 vm03 bash[23382]: audit 2026-03-10T07:41:14.834063+0000 mon.a (mon.0) 3397 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:41:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:15 vm03 bash[23382]: audit 2026-03-10T07:41:14.843034+0000 mon.a (mon.0) 3398 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:41:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:15 vm03 bash[23382]: audit 2026-03-10T07:41:15.136192+0000 mon.c (mon.2) 361 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:41:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:15 vm03 bash[23382]: audit 2026-03-10T07:41:15.136868+0000 mon.c (mon.2) 362 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:41:15.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:15 vm03 bash[23382]: audit 2026-03-10T07:41:15.142834+0000 mon.a (mon.0) 3399 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:41:16.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:16 vm00 bash[28005]: cluster 2026-03-10T07:41:14.758132+0000 mgr.y (mgr.24407) 613 : cluster [DBG] pgmap v1084: 268 pgs: 8 creating+peering, 260 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 5.0 KiB/s wr, 4 op/s
2026-03-10T07:41:16.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:16 vm00 bash[28005]: audit 2026-03-10T07:41:15.312943+0000 mon.a (mon.0) 3400 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_period","val": "600"}]': finished
2026-03-10T07:41:16.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:16 vm00 bash[28005]: cluster 2026-03-10T07:41:15.317420+0000 mon.a (mon.0) 3401 : cluster [DBG] osdmap e697: 8 total, 8 up, 8 in
2026-03-10T07:41:16.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:16 vm00 bash[28005]: audit 2026-03-10T07:41:15.321712+0000 mon.b (mon.1) 635 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T07:41:16.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:16 vm00 bash[28005]: audit 2026-03-10T07:41:15.321773+0000 mon.a (mon.0) 3402 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T07:41:16.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:16 vm00 bash[28005]: cluster 2026-03-10T07:41:16.313171+0000 mon.a (mon.0) 3403 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:41:16.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:16 vm00 bash[28005]: audit 2026-03-10T07:41:16.317257+0000 mon.a (mon.0) 3404 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_type","val": "bloom"}]': finished
2026-03-10T07:41:16.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:16 vm00 bash[28005]: audit 2026-03-10T07:41:16.322223+0000 mon.b (mon.1) 636 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch
2026-03-10T07:41:16.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:16 vm00 bash[28005]: cluster 2026-03-10T07:41:16.327179+0000 mon.a (mon.0) 3405 : cluster [DBG] osdmap e698: 8 total, 8 up, 8 in
2026-03-10T07:41:16.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:16 vm00 bash[20701]: cluster 2026-03-10T07:41:14.758132+0000 mgr.y (mgr.24407) 613 : cluster [DBG] pgmap v1084: 268 pgs: 8 creating+peering, 260 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 5.0 KiB/s wr, 4 op/s
2026-03-10T07:41:16.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:16 vm00 bash[20701]: audit 2026-03-10T07:41:15.312943+0000 mon.a (mon.0) 3400 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_period","val": "600"}]': finished
2026-03-10T07:41:16.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:16 vm00 bash[20701]: cluster 2026-03-10T07:41:15.317420+0000 mon.a (mon.0) 3401 : cluster [DBG] osdmap e697: 8 total, 8 up, 8 in
2026-03-10T07:41:16.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:16 vm00 bash[20701]: audit 2026-03-10T07:41:15.321712+0000 mon.b (mon.1) 635 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T07:41:16.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:16 vm00 bash[20701]: audit 2026-03-10T07:41:15.321773+0000 mon.a (mon.0) 3402 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T07:41:16.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:16 vm00 bash[20701]: cluster 2026-03-10T07:41:16.313171+0000 mon.a (mon.0) 3403 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:41:16.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:16 vm00 bash[20701]: audit 2026-03-10T07:41:16.317257+0000 mon.a (mon.0) 3404 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_type","val": "bloom"}]': finished
2026-03-10T07:41:16.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:16 vm00 bash[20701]: audit 2026-03-10T07:41:16.322223+0000 mon.b (mon.1) 636 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch
2026-03-10T07:41:16.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:16 vm00 bash[20701]: cluster 2026-03-10T07:41:16.327179+0000 mon.a (mon.0) 3405 : cluster [DBG] osdmap e698: 8 total, 8 up, 8 in
2026-03-10T07:41:16.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:16 vm03 bash[23382]: cluster 2026-03-10T07:41:14.758132+0000 mgr.y (mgr.24407) 613 : cluster [DBG] pgmap v1084: 268 pgs: 8 creating+peering, 260 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 5.0 KiB/s wr, 4 op/s
2026-03-10T07:41:16.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:16 vm03 bash[23382]: audit 2026-03-10T07:41:15.312943+0000 mon.a (mon.0) 3400 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_period","val": "600"}]': finished
2026-03-10T07:41:16.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:16 vm03 bash[23382]: cluster 2026-03-10T07:41:15.317420+0000 mon.a (mon.0) 3401 : cluster [DBG] osdmap e697: 8 total, 8 up, 8 in
2026-03-10T07:41:16.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:16 vm03 bash[23382]: audit 2026-03-10T07:41:15.321712+0000 mon.b (mon.1) 635 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T07:41:16.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:16 vm03 bash[23382]: audit 2026-03-10T07:41:15.321773+0000 mon.a (mon.0) 3402 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T07:41:16.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:16 vm03 bash[23382]: cluster 2026-03-10T07:41:16.313171+0000 mon.a (mon.0) 3403 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:41:16.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:16 vm03 bash[23382]: audit 2026-03-10T07:41:16.317257+0000 mon.a (mon.0) 3404 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_type","val": "bloom"}]': finished
2026-03-10T07:41:16.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:16 vm03 bash[23382]: audit 2026-03-10T07:41:16.322223+0000 mon.b (mon.1) 636 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch
2026-03-10T07:41:16.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:16 vm03 bash[23382]: cluster 2026-03-10T07:41:16.327179+0000 mon.a (mon.0) 3405 : cluster [DBG] osdmap e698: 8 total, 8 up, 8 in
2026-03-10T07:41:17.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:17 vm00 bash[28005]: audit 2026-03-10T07:41:16.328817+0000 mon.a (mon.0) 3406 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch
2026-03-10T07:41:17.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:17 vm00 bash[28005]: audit 2026-03-10T07:41:17.321210+0000 mon.a (mon.0) 3407 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "min_read_recency_for_promote","val": "1"}]': finished
2026-03-10T07:41:17.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:17 vm00 bash[28005]: cluster 2026-03-10T07:41:17.335501+0000 mon.a (mon.0) 3408 : cluster [DBG] osdmap e699: 8 total, 8 up, 8 in
2026-03-10T07:41:17.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:17 vm00 bash[20701]: audit 2026-03-10T07:41:16.328817+0000 mon.a (mon.0) 3406 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch
2026-03-10T07:41:17.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:17 vm00 bash[20701]: audit 2026-03-10T07:41:17.321210+0000 mon.a (mon.0) 3407 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "min_read_recency_for_promote","val": "1"}]': finished
2026-03-10T07:41:17.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:17 vm00 bash[20701]: cluster 2026-03-10T07:41:17.335501+0000 mon.a (mon.0) 3408 : cluster [DBG] osdmap e699: 8 total, 8 up, 8 in
2026-03-10T07:41:17.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:17 vm03 bash[23382]: audit 2026-03-10T07:41:16.328817+0000 mon.a (mon.0) 3406 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch
2026-03-10T07:41:17.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:17 vm03 bash[23382]: audit 2026-03-10T07:41:17.321210+0000 mon.a (mon.0) 3407 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "min_read_recency_for_promote","val": "1"}]': finished
2026-03-10T07:41:17.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:17 vm03 bash[23382]: cluster 2026-03-10T07:41:17.335501+0000 mon.a (mon.0) 3408 : cluster [DBG] osdmap e699: 8 total, 8 up, 8 in
2026-03-10T07:41:18.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:18 vm00 bash[28005]: cluster 2026-03-10T07:41:16.758468+0000 mgr.y (mgr.24407) 614 : cluster [DBG] pgmap v1087: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s
2026-03-10T07:41:18.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:18 vm00 bash[28005]: audit 2026-03-10T07:41:17.330747+0000 mon.b (mon.1) 637 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch
2026-03-10T07:41:18.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:18 vm00 bash[28005]: audit 2026-03-10T07:41:17.337033+0000 mon.a (mon.0) 3409 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch
2026-03-10T07:41:18.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:18 vm00 bash[28005]: audit 2026-03-10T07:41:18.324273+0000 mon.a (mon.0) 3410 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished
2026-03-10T07:41:18.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:18 vm00 bash[28005]: cluster 2026-03-10T07:41:18.326971+0000 mon.a (mon.0) 3411 : cluster [DBG] osdmap e700: 8 total, 8 up, 8 in
2026-03-10T07:41:18.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:18 vm00 bash[28005]: audit 2026-03-10T07:41:18.331983+0000 mon.a (mon.0) 3412 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_search_last_n","val": "1"}]: dispatch
2026-03-10T07:41:18.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:18 vm00 bash[28005]: audit 2026-03-10T07:41:18.333106+0000 mon.b (mon.1) 638 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_search_last_n","val": "1"}]: dispatch
2026-03-10T07:41:18.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:18 vm00 bash[20701]: cluster 2026-03-10T07:41:16.758468+0000 mgr.y (mgr.24407) 614 : cluster [DBG] pgmap v1087: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s
2026-03-10T07:41:18.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:18 vm00 bash[20701]: audit 2026-03-10T07:41:17.330747+0000 mon.b (mon.1) 637 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch
2026-03-10T07:41:18.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:18 vm00 bash[20701]: audit 2026-03-10T07:41:17.337033+0000 mon.a (mon.0) 3409 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch
2026-03-10T07:41:18.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:18 vm00 bash[20701]: audit 2026-03-10T07:41:18.324273+0000 mon.a (mon.0) 3410 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished
2026-03-10T07:41:18.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:18 vm00 bash[20701]: cluster 2026-03-10T07:41:18.326971+0000 mon.a (mon.0) 3411 : cluster [DBG] osdmap e700: 8 total, 8 up, 8 in
2026-03-10T07:41:18.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:18 vm00 bash[20701]: audit 2026-03-10T07:41:18.331983+0000 mon.a (mon.0) 3412 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_search_last_n","val": "1"}]: dispatch
2026-03-10T07:41:18.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:18 vm00 bash[20701]: audit 2026-03-10T07:41:18.333106+0000 mon.b (mon.1) 638 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_search_last_n","val": "1"}]: dispatch
2026-03-10T07:41:18.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:18 vm03 bash[23382]: cluster 2026-03-10T07:41:16.758468+0000 mgr.y (mgr.24407) 614 : cluster [DBG] pgmap v1087: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s
2026-03-10T07:41:18.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:18 vm03 bash[23382]: audit 2026-03-10T07:41:17.330747+0000 mon.b (mon.1) 637 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch
2026-03-10T07:41:18.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:18 vm03 bash[23382]: audit 2026-03-10T07:41:17.337033+0000 mon.a (mon.0) 3409 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch
2026-03-10T07:41:18.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:18 vm03 bash[23382]: audit 2026-03-10T07:41:18.324273+0000 mon.a (mon.0) 3410 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished
2026-03-10T07:41:18.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:18 vm03 bash[23382]: cluster 2026-03-10T07:41:18.326971+0000 mon.a (mon.0) 3411 : cluster [DBG] osdmap e700: 8 total, 8 up, 8 in
2026-03-10T07:41:18.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:18 vm03 bash[23382]: audit 2026-03-10T07:41:18.331983+0000 mon.a (mon.0) 3412 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_search_last_n","val": "1"}]: dispatch
2026-03-10T07:41:18.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:18 vm03 bash[23382]: audit 2026-03-10T07:41:18.333106+0000 mon.b (mon.1) 638 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_search_last_n","val": "1"}]: dispatch
2026-03-10T07:41:20.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:20 vm00 bash[28005]: cluster 2026-03-10T07:41:18.758824+0000 mgr.y (mgr.24407) 615 : cluster [DBG] pgmap v1090: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:41:20.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:20 vm00 bash[28005]: audit 2026-03-10T07:41:19.343185+0000 mon.a (mon.0) 3413 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_search_last_n","val": "1"}]': finished
2026-03-10T07:41:20.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:20 vm00 bash[28005]: cluster 2026-03-10T07:41:19.348612+0000 mon.a (mon.0) 3414 : cluster [DBG] osdmap e701: 8 total, 8 up, 8 in
2026-03-10T07:41:20.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:20 vm00 bash[28005]: audit 2026-03-10T07:41:19.382650+0000 mon.a (mon.0) 3415 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:41:20.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:20 vm00 bash[28005]: audit 2026-03-10T07:41:19.383746+0000 mon.b (mon.1) 639 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:41:20.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:20 vm00 bash[20701]: cluster 2026-03-10T07:41:18.758824+0000 mgr.y (mgr.24407) 615 : cluster [DBG] pgmap v1090: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:41:20.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:20 vm00 bash[20701]: audit 2026-03-10T07:41:19.343185+0000 mon.a (mon.0) 3413 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_search_last_n","val": "1"}]': finished
2026-03-10T07:41:20.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:20 vm00 bash[20701]: cluster 2026-03-10T07:41:19.348612+0000 mon.a (mon.0) 3414 : cluster [DBG] osdmap e701: 8 total, 8 up, 8 in
2026-03-10T07:41:20.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:20 vm00 bash[20701]: audit 2026-03-10T07:41:19.382650+0000 mon.a (mon.0) 3415 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:41:20.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:20 vm00 bash[20701]: audit 2026-03-10T07:41:19.383746+0000 mon.b (mon.1) 639 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:41:20.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:20 vm03 bash[23382]: cluster 2026-03-10T07:41:18.758824+0000 mgr.y (mgr.24407) 615 : cluster [DBG] pgmap v1090: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:41:20.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:20 vm03 bash[23382]: audit 2026-03-10T07:41:19.343185+0000 mon.a (mon.0) 3413 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-142","var": "hit_set_search_last_n","val": "1"}]': finished
2026-03-10T07:41:20.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:20 vm03 bash[23382]: cluster 2026-03-10T07:41:19.348612+0000 mon.a (mon.0) 3414 : cluster [DBG] osdmap e701: 8 total, 8 up, 8 in
2026-03-10T07:41:20.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:20 vm03 bash[23382]: audit 2026-03-10T07:41:19.382650+0000 mon.a (mon.0) 3415 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:41:20.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:20 vm03 bash[23382]: audit 2026-03-10T07:41:19.383746+0000 mon.b (mon.1) 639 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:41:21.371 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:41:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:41:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:41:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:21 vm00 bash[28005]: audit 2026-03-10T07:41:20.355259+0000 mon.a (mon.0) 3416 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:41:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:21 vm00 bash[28005]: audit 2026-03-10T07:41:20.362220+0000 mon.b (mon.1) 640 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch
2026-03-10T07:41:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:21 vm00 bash[28005]: cluster 2026-03-10T07:41:20.363488+0000 mon.a (mon.0) 3417 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in
2026-03-10T07:41:21.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:21 vm00 bash[28005]: audit 2026-03-10T07:41:20.364083+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch
2026-03-10T07:41:21.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:21 vm00 bash[20701]: audit 2026-03-10T07:41:20.355259+0000 mon.a (mon.0) 3416 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:41:21.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:21 vm00 bash[20701]: audit 2026-03-10T07:41:20.362220+0000 mon.b (mon.1) 640 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch
2026-03-10T07:41:21.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:21 vm00 bash[20701]: audit 2026-03-10T07:41:20.362220+0000 mon.b (mon.1) 640 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:21.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:21 vm00 bash[20701]: cluster 2026-03-10T07:41:20.363488+0000 mon.a (mon.0) 3417 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-10T07:41:21.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:21 vm00 bash[20701]: cluster 2026-03-10T07:41:20.363488+0000 mon.a (mon.0) 3417 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-10T07:41:21.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:21 vm00 bash[20701]: audit 2026-03-10T07:41:20.364083+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:21.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:21 vm00 bash[20701]: audit 2026-03-10T07:41:20.364083+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:21.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:21 vm03 bash[23382]: audit 2026-03-10T07:41:20.355259+0000 mon.a (mon.0) 3416 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:41:21.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:21 vm03 bash[23382]: audit 2026-03-10T07:41:20.355259+0000 mon.a (mon.0) 3416 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:41:21.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:21 vm03 bash[23382]: audit 2026-03-10T07:41:20.362220+0000 mon.b (mon.1) 640 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:21.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:21 vm03 bash[23382]: audit 2026-03-10T07:41:20.362220+0000 mon.b (mon.1) 640 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:21.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:21 vm03 bash[23382]: cluster 2026-03-10T07:41:20.363488+0000 mon.a (mon.0) 3417 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-10T07:41:21.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:21 vm03 bash[23382]: cluster 2026-03-10T07:41:20.363488+0000 mon.a (mon.0) 3417 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-10T07:41:21.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:21 vm03 bash[23382]: audit 2026-03-10T07:41:20.364083+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:21.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:21 vm03 bash[23382]: audit 2026-03-10T07:41:20.364083+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:22.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:22 vm03 bash[23382]: cluster 2026-03-10T07:41:20.759145+0000 mgr.y (mgr.24407) 616 : cluster [DBG] pgmap v1093: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:41:22.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:22 vm03 bash[23382]: cluster 2026-03-10T07:41:20.759145+0000 mgr.y (mgr.24407) 616 : cluster [DBG] pgmap v1093: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:41:22.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:22 vm03 bash[23382]: audit 2026-03-10T07:41:21.358523+0000 mon.a (mon.0) 3419 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]': finished 2026-03-10T07:41:22.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:22 vm03 bash[23382]: audit 2026-03-10T07:41:21.358523+0000 mon.a (mon.0) 3419 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]': finished 2026-03-10T07:41:22.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:22 vm03 bash[23382]: cluster 2026-03-10T07:41:21.365897+0000 mon.a (mon.0) 3420 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-10T07:41:22.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:22 vm03 bash[23382]: cluster 2026-03-10T07:41:21.365897+0000 mon.a (mon.0) 3420 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-10T07:41:22.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:22 vm03 bash[23382]: audit 2026-03-10T07:41:21.392619+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:41:22.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:22 vm03 bash[23382]: audit 2026-03-10T07:41:21.392619+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:41:22.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:22 vm03 bash[23382]: audit 2026-03-10T07:41:21.393448+0000 mon.a (mon.0) 3422 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:22.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:22 vm03 bash[23382]: audit 2026-03-10T07:41:21.393448+0000 mon.a (mon.0) 3422 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:22.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:22 vm03 bash[23382]: audit 2026-03-10T07:41:21.393830+0000 mon.b (mon.1) 641 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:41:22.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:22 vm03 bash[23382]: audit 2026-03-10T07:41:21.393830+0000 mon.b (mon.1) 641 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:41:22.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:22 vm03 bash[23382]: audit 2026-03-10T07:41:21.394638+0000 mon.b (mon.1) 642 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:22.764 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:22 vm03 bash[23382]: audit 2026-03-10T07:41:21.394638+0000 mon.b (mon.1) 642 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:22.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:22 vm00 bash[28005]: cluster 2026-03-10T07:41:20.759145+0000 mgr.y (mgr.24407) 616 : cluster [DBG] pgmap v1093: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:41:22.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:22 vm00 bash[28005]: cluster 2026-03-10T07:41:20.759145+0000 mgr.y (mgr.24407) 616 : cluster [DBG] pgmap v1093: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:41:22.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:22 vm00 bash[28005]: audit 2026-03-10T07:41:21.358523+0000 mon.a (mon.0) 3419 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]': finished 2026-03-10T07:41:22.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:22 vm00 bash[28005]: audit 2026-03-10T07:41:21.358523+0000 mon.a (mon.0) 3419 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]': finished 2026-03-10T07:41:22.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:22 vm00 bash[28005]: cluster 2026-03-10T07:41:21.365897+0000 mon.a (mon.0) 3420 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-10T07:41:22.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:22 vm00 bash[28005]: cluster 2026-03-10T07:41:21.365897+0000 mon.a (mon.0) 3420 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:22 vm00 bash[28005]: audit 2026-03-10T07:41:21.392619+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:22 vm00 bash[28005]: audit 2026-03-10T07:41:21.392619+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:22 vm00 bash[28005]: audit 2026-03-10T07:41:21.393448+0000 mon.a (mon.0) 3422 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:22 vm00 bash[28005]: audit 2026-03-10T07:41:21.393448+0000 mon.a (mon.0) 3422 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:22 vm00 bash[28005]: audit 2026-03-10T07:41:21.393830+0000 mon.b (mon.1) 641 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:22 vm00 bash[28005]: audit 2026-03-10T07:41:21.393830+0000 mon.b (mon.1) 641 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:22 vm00 bash[28005]: audit 2026-03-10T07:41:21.394638+0000 mon.b (mon.1) 642 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:22 vm00 bash[28005]: audit 2026-03-10T07:41:21.394638+0000 mon.b (mon.1) 642 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:22 vm00 bash[20701]: cluster 2026-03-10T07:41:20.759145+0000 mgr.y (mgr.24407) 616 : cluster [DBG] pgmap v1093: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:22 vm00 bash[20701]: cluster 2026-03-10T07:41:20.759145+0000 mgr.y (mgr.24407) 616 : cluster [DBG] pgmap v1093: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:22 vm00 bash[20701]: audit 2026-03-10T07:41:21.358523+0000 mon.a (mon.0) 3419 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]': finished 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:22 vm00 bash[20701]: audit 2026-03-10T07:41:21.358523+0000 mon.a (mon.0) 3419 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]': finished 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:22 vm00 bash[20701]: cluster 2026-03-10T07:41:21.365897+0000 mon.a (mon.0) 3420 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:22 vm00 bash[20701]: cluster 2026-03-10T07:41:21.365897+0000 mon.a (mon.0) 3420 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:22 vm00 bash[20701]: audit 2026-03-10T07:41:21.392619+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:22 vm00 bash[20701]: audit 2026-03-10T07:41:21.392619+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:22 vm00 bash[20701]: audit 2026-03-10T07:41:21.393448+0000 mon.a (mon.0) 3422 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:22 vm00 bash[20701]: audit 2026-03-10T07:41:21.393448+0000 mon.a (mon.0) 3422 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:22 vm00 bash[20701]: audit 2026-03-10T07:41:21.393830+0000 mon.b (mon.1) 641 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:22 vm00 bash[20701]: audit 2026-03-10T07:41:21.393830+0000 mon.b (mon.1) 641 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:22 vm00 bash[20701]: audit 2026-03-10T07:41:21.394638+0000 mon.b (mon.1) 642 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:22 vm00 bash[20701]: audit 2026-03-10T07:41:21.394638+0000 mon.b (mon.1) 642 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-142"}]: dispatch 2026-03-10T07:41:23.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:23 vm03 bash[23382]: cluster 2026-03-10T07:41:22.393964+0000 mon.a (mon.0) 3423 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-10T07:41:23.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:23 vm03 bash[23382]: cluster 2026-03-10T07:41:22.393964+0000 mon.a (mon.0) 3423 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-10T07:41:23.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:23 vm03 bash[23382]: cluster 2026-03-10T07:41:22.759667+0000 mgr.y (mgr.24407) 617 : cluster [DBG] pgmap v1096: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:41:23.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:23 vm03 bash[23382]: cluster 2026-03-10T07:41:22.759667+0000 mgr.y (mgr.24407) 617 : cluster [DBG] pgmap v1096: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:41:23.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:23 vm03 bash[23382]: cluster 2026-03-10T07:41:23.385600+0000 mon.a (mon.0) 3424 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-10T07:41:23.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:23 vm03 bash[23382]: cluster 2026-03-10T07:41:23.385600+0000 mon.a (mon.0) 3424 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-10T07:41:23.763 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:41:23 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:41:23.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:23 vm00 bash[28005]: cluster 2026-03-10T07:41:22.393964+0000 mon.a (mon.0) 3423 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-10T07:41:23.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:23 vm00 bash[28005]: cluster 2026-03-10T07:41:22.393964+0000 mon.a (mon.0) 3423 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-10T07:41:23.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:23 vm00 bash[28005]: cluster 2026-03-10T07:41:22.759667+0000 mgr.y (mgr.24407) 617 : cluster [DBG] pgmap v1096: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:41:23.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:23 vm00 bash[28005]: cluster 2026-03-10T07:41:22.759667+0000 mgr.y (mgr.24407) 617 : cluster [DBG] pgmap v1096: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:41:23.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:23 vm00 bash[28005]: cluster 2026-03-10T07:41:23.385600+0000 mon.a (mon.0) 3424 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-10T07:41:23.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:23 vm00 bash[28005]: cluster 2026-03-10T07:41:23.385600+0000 mon.a (mon.0) 3424 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-10T07:41:23.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:23 vm00 bash[20701]: cluster 2026-03-10T07:41:22.393964+0000 mon.a (mon.0) 3423 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-10T07:41:23.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:23 vm00 bash[20701]: cluster 2026-03-10T07:41:22.393964+0000 mon.a (mon.0) 3423 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 
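
The audit payloads in this window record one cache-tier reconfiguration cycle driven by client.admin against the base pool test-rados-api-vm00-59782-111: the previous tier pool (test-rados-api-vm00-59782-142) has its overlay removed and is detached, then a fresh pool (test-rados-api-vm00-59782-144) is tagged with the rados application, attached as a tier with --force-nonempty, set as the overlay, and switched to readproxy cache mode. Each record shows up several times because every monitor journal relays the same cluster/audit entries, and the capture appears to echo each journal line twice; within a cycle, each command is logged as "dispatch" on the monitor that received it and as "finished" on the leader mon.a once it commits. Each cmd=[{"prefix": ...}] JSON payload corresponds to a ceph CLI invocation; a minimal shell sketch of the sequence recorded here (pool names copied from this run; assumes the test client's admin keyring is in place, as it is for the workunit issuing these commands) would be:

    # detach the previous cache tier from the base pool
    ceph osd tier remove-overlay test-rados-api-vm00-59782-111
    ceph osd tier remove test-rados-api-vm00-59782-111 test-rados-api-vm00-59782-142
    # prepare the replacement tier pool and attach it
    ceph osd pool application enable test-rados-api-vm00-59782-144 rados --yes-i-really-mean-it
    ceph osd tier add test-rados-api-vm00-59782-111 test-rados-api-vm00-59782-144 --force-nonempty
    ceph osd tier set-overlay test-rados-api-vm00-59782-111 test-rados-api-vm00-59782-144
    ceph osd tier cache-mode test-rados-api-vm00-59782-144 readproxy

Because the cache mode is set before any hit_set_type/hit_set_count/hit_set_period values are configured on the new tier pool, the monitor briefly raises the health warning "1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)", visible in the records at 07:41:27 below; that warning is the expected transient for this ordering of commands.
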
2026-03-10T07:41:23.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:23 vm00 bash[20701]: cluster 2026-03-10T07:41:22.759667+0000 mgr.y (mgr.24407) 617 : cluster [DBG] pgmap v1096: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:41:23.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:23 vm00 bash[20701]: cluster 2026-03-10T07:41:22.759667+0000 mgr.y (mgr.24407) 617 : cluster [DBG] pgmap v1096: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T07:41:23.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:23 vm00 bash[20701]: cluster 2026-03-10T07:41:23.385600+0000 mon.a (mon.0) 3424 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-10T07:41:23.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:23 vm00 bash[20701]: cluster 2026-03-10T07:41:23.385600+0000 mon.a (mon.0) 3424 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-10T07:41:24.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:24 vm03 bash[23382]: audit 2026-03-10T07:41:23.406254+0000 mon.b (mon.1) 643 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:24.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:24 vm03 bash[23382]: audit 2026-03-10T07:41:23.406254+0000 mon.b (mon.1) 643 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:24.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:24 vm03 bash[23382]: audit 2026-03-10T07:41:23.408336+0000 mon.a (mon.0) 3425 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:24.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:24 vm03 bash[23382]: audit 2026-03-10T07:41:23.408336+0000 mon.a (mon.0) 3425 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:24.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:24 vm03 bash[23382]: audit 2026-03-10T07:41:23.568298+0000 mgr.y (mgr.24407) 618 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:41:24.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:24 vm03 bash[23382]: audit 2026-03-10T07:41:23.568298+0000 mgr.y (mgr.24407) 618 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:41:24.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:24 vm03 bash[23382]: audit 2026-03-10T07:41:24.385861+0000 mon.a (mon.0) 3426 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:41:24.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:24 vm03 bash[23382]: audit 2026-03-10T07:41:24.385861+0000 mon.a (mon.0) 3426 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:41:24.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:24 vm03 bash[23382]: cluster 2026-03-10T07:41:24.409232+0000 mon.a (mon.0) 3427 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-10T07:41:24.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:24 vm03 bash[23382]: cluster 2026-03-10T07:41:24.409232+0000 mon.a (mon.0) 3427 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-10T07:41:24.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:24 vm00 bash[28005]: audit 2026-03-10T07:41:23.406254+0000 mon.b (mon.1) 643 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:24.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:24 vm00 bash[28005]: audit 2026-03-10T07:41:23.406254+0000 mon.b (mon.1) 643 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:24.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:24 vm00 bash[28005]: audit 2026-03-10T07:41:23.408336+0000 mon.a (mon.0) 3425 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:24.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:24 vm00 bash[28005]: audit 2026-03-10T07:41:23.408336+0000 mon.a (mon.0) 3425 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:24.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:24 vm00 bash[28005]: audit 2026-03-10T07:41:23.568298+0000 mgr.y (mgr.24407) 618 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:41:24.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:24 vm00 bash[28005]: audit 2026-03-10T07:41:23.568298+0000 mgr.y (mgr.24407) 618 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:41:24.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:24 vm00 bash[28005]: audit 2026-03-10T07:41:24.385861+0000 mon.a (mon.0) 3426 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:41:24.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:24 vm00 bash[28005]: audit 2026-03-10T07:41:24.385861+0000 mon.a (mon.0) 3426 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:41:24.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:24 vm00 bash[28005]: cluster 2026-03-10T07:41:24.409232+0000 mon.a (mon.0) 3427 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-10T07:41:24.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:24 vm00 bash[28005]: cluster 2026-03-10T07:41:24.409232+0000 mon.a (mon.0) 3427 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-10T07:41:24.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:24 vm00 bash[20701]: audit 2026-03-10T07:41:23.406254+0000 mon.b (mon.1) 643 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:24.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:24 vm00 bash[20701]: audit 2026-03-10T07:41:23.406254+0000 mon.b (mon.1) 643 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:24.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:24 vm00 bash[20701]: audit 2026-03-10T07:41:23.408336+0000 mon.a (mon.0) 3425 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:24.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:24 vm00 bash[20701]: audit 2026-03-10T07:41:23.408336+0000 mon.a (mon.0) 3425 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:41:24.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:24 vm00 bash[20701]: audit 2026-03-10T07:41:23.568298+0000 mgr.y (mgr.24407) 618 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:41:24.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:24 vm00 bash[20701]: audit 2026-03-10T07:41:23.568298+0000 mgr.y (mgr.24407) 618 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:41:24.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:24 vm00 bash[20701]: audit 2026-03-10T07:41:24.385861+0000 mon.a (mon.0) 3426 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:41:24.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:24 vm00 bash[20701]: audit 2026-03-10T07:41:24.385861+0000 mon.a (mon.0) 3426 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:41:24.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:24 vm00 bash[20701]: cluster 2026-03-10T07:41:24.409232+0000 mon.a (mon.0) 3427 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-10T07:41:24.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:24 vm00 bash[20701]: cluster 2026-03-10T07:41:24.409232+0000 mon.a (mon.0) 3427 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-10T07:41:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:25 vm03 bash[23382]: audit 2026-03-10T07:41:24.424875+0000 mon.b (mon.1) 644 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:25 vm03 bash[23382]: audit 2026-03-10T07:41:24.424875+0000 mon.b (mon.1) 644 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:25 vm03 bash[23382]: audit 2026-03-10T07:41:24.450288+0000 mon.a (mon.0) 3428 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:25 vm03 bash[23382]: audit 2026-03-10T07:41:24.450288+0000 mon.a (mon.0) 3428 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:25 vm03 bash[23382]: cluster 2026-03-10T07:41:24.760416+0000 mgr.y (mgr.24407) 619 : cluster [DBG] pgmap v1099: 268 pgs: 4 unknown, 264 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T07:41:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:25 vm03 bash[23382]: cluster 2026-03-10T07:41:24.760416+0000 mgr.y (mgr.24407) 619 : cluster [DBG] pgmap v1099: 268 pgs: 4 unknown, 264 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T07:41:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:25 vm03 bash[23382]: audit 2026-03-10T07:41:24.798361+0000 mon.c (mon.2) 363 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:41:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:25 vm03 bash[23382]: audit 2026-03-10T07:41:24.798361+0000 mon.c (mon.2) 363 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:41:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:25 vm03 bash[23382]: audit 2026-03-10T07:41:25.389634+0000 mon.a (mon.0) 3429 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:41:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:25 vm03 bash[23382]: audit 2026-03-10T07:41:25.389634+0000 mon.a (mon.0) 3429 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:41:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:25 vm03 bash[23382]: cluster 2026-03-10T07:41:25.393271+0000 mon.a (mon.0) 3430 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-10T07:41:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:25 vm03 bash[23382]: cluster 2026-03-10T07:41:25.393271+0000 mon.a (mon.0) 3430 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-10T07:41:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:25 vm03 bash[23382]: audit 2026-03-10T07:41:25.398506+0000 mon.a (mon.0) 3431 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-144"}]: dispatch 2026-03-10T07:41:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:25 vm03 bash[23382]: audit 2026-03-10T07:41:25.398506+0000 mon.a (mon.0) 3431 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-144"}]: dispatch 2026-03-10T07:41:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:25 vm03 bash[23382]: audit 2026-03-10T07:41:25.399494+0000 mon.b (mon.1) 645 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-144"}]: dispatch 2026-03-10T07:41:25.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:25 vm03 bash[23382]: audit 2026-03-10T07:41:25.399494+0000 mon.b (mon.1) 645 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-144"}]: dispatch 2026-03-10T07:41:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:25 vm00 bash[28005]: audit 2026-03-10T07:41:24.424875+0000 mon.b (mon.1) 644 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:25 vm00 bash[28005]: audit 2026-03-10T07:41:24.424875+0000 mon.b (mon.1) 644 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:25 vm00 bash[28005]: audit 2026-03-10T07:41:24.450288+0000 mon.a (mon.0) 3428 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:25 vm00 bash[28005]: audit 2026-03-10T07:41:24.450288+0000 mon.a (mon.0) 3428 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:25 vm00 bash[28005]: cluster 2026-03-10T07:41:24.760416+0000 mgr.y (mgr.24407) 619 : cluster [DBG] pgmap v1099: 268 pgs: 4 unknown, 264 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:25 vm00 bash[28005]: cluster 2026-03-10T07:41:24.760416+0000 mgr.y (mgr.24407) 619 : cluster [DBG] pgmap v1099: 268 pgs: 4 unknown, 264 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:25 vm00 bash[28005]: audit 2026-03-10T07:41:24.798361+0000 mon.c (mon.2) 363 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:25 vm00 bash[28005]: audit 2026-03-10T07:41:24.798361+0000 mon.c (mon.2) 363 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:25 vm00 bash[28005]: audit 2026-03-10T07:41:25.389634+0000 mon.a (mon.0) 3429 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:25 vm00 bash[28005]: audit 2026-03-10T07:41:25.389634+0000 mon.a (mon.0) 3429 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:25 vm00 bash[28005]: cluster 2026-03-10T07:41:25.393271+0000 mon.a (mon.0) 3430 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:25 vm00 bash[28005]: cluster 2026-03-10T07:41:25.393271+0000 mon.a (mon.0) 3430 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:25 vm00 bash[28005]: audit 2026-03-10T07:41:25.398506+0000 mon.a (mon.0) 3431 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-144"}]: dispatch 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:25 vm00 bash[28005]: audit 2026-03-10T07:41:25.398506+0000 mon.a (mon.0) 3431 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-144"}]: dispatch 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:25 vm00 bash[28005]: audit 2026-03-10T07:41:25.399494+0000 mon.b (mon.1) 645 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-144"}]: dispatch 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:25 vm00 bash[28005]: audit 2026-03-10T07:41:25.399494+0000 mon.b (mon.1) 645 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-144"}]: dispatch 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:25 vm00 bash[20701]: audit 2026-03-10T07:41:24.424875+0000 mon.b (mon.1) 644 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:25 vm00 bash[20701]: audit 2026-03-10T07:41:24.424875+0000 mon.b (mon.1) 644 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:25 vm00 bash[20701]: audit 2026-03-10T07:41:24.450288+0000 mon.a (mon.0) 3428 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:25 vm00 bash[20701]: audit 2026-03-10T07:41:24.450288+0000 mon.a (mon.0) 3428 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:25 vm00 bash[20701]: cluster 2026-03-10T07:41:24.760416+0000 mgr.y (mgr.24407) 619 : cluster [DBG] pgmap v1099: 268 pgs: 4 unknown, 264 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:25 vm00 bash[20701]: cluster 2026-03-10T07:41:24.760416+0000 mgr.y (mgr.24407) 619 : cluster [DBG] pgmap v1099: 268 pgs: 4 unknown, 264 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:25 vm00 bash[20701]: audit 2026-03-10T07:41:24.798361+0000 mon.c (mon.2) 363 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:25 vm00 bash[20701]: audit 2026-03-10T07:41:24.798361+0000 mon.c (mon.2) 363 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:25 vm00 bash[20701]: audit 2026-03-10T07:41:25.389634+0000 mon.a (mon.0) 3429 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:25 vm00 bash[20701]: audit 2026-03-10T07:41:25.389634+0000 mon.a (mon.0) 3429 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:25 vm00 bash[20701]: cluster 2026-03-10T07:41:25.393271+0000 mon.a (mon.0) 3430 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:25 vm00 bash[20701]: cluster 2026-03-10T07:41:25.393271+0000 mon.a (mon.0) 3430 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:25 vm00 bash[20701]: audit 2026-03-10T07:41:25.398506+0000 mon.a (mon.0) 3431 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-144"}]: dispatch 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:25 vm00 bash[20701]: audit 2026-03-10T07:41:25.398506+0000 mon.a (mon.0) 3431 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-144"}]: dispatch 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:25 vm00 bash[20701]: audit 2026-03-10T07:41:25.399494+0000 mon.b (mon.1) 645 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-144"}]: dispatch 2026-03-10T07:41:25.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:25 vm00 bash[20701]: audit 2026-03-10T07:41:25.399494+0000 mon.b (mon.1) 645 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-144"}]: dispatch 2026-03-10T07:41:27.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:27 vm03 bash[23382]: audit 2026-03-10T07:41:26.392634+0000 mon.a (mon.0) 3432 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-144"}]': finished 2026-03-10T07:41:27.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:27 vm03 bash[23382]: audit 2026-03-10T07:41:26.392634+0000 mon.a (mon.0) 3432 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-144"}]': finished 2026-03-10T07:41:27.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:27 vm03 bash[23382]: cluster 2026-03-10T07:41:26.395345+0000 mon.a (mon.0) 3433 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-10T07:41:27.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:27 vm03 bash[23382]: cluster 2026-03-10T07:41:26.395345+0000 mon.a (mon.0) 3433 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-10T07:41:27.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:27 vm03 bash[23382]: audit 2026-03-10T07:41:26.397780+0000 mon.b (mon.1) 646 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-144", "mode": "readproxy"}]: dispatch 2026-03-10T07:41:27.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:27 vm03 bash[23382]: audit 2026-03-10T07:41:26.397780+0000 mon.b (mon.1) 646 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-144", "mode": "readproxy"}]: dispatch 2026-03-10T07:41:27.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:27 vm03 bash[23382]: audit 2026-03-10T07:41:26.401992+0000 mon.a (mon.0) 3434 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-144", "mode": "readproxy"}]: dispatch 2026-03-10T07:41:27.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:27 vm03 bash[23382]: audit 2026-03-10T07:41:26.401992+0000 mon.a (mon.0) 3434 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-144", "mode": "readproxy"}]: dispatch 2026-03-10T07:41:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:27 vm00 bash[28005]: audit 2026-03-10T07:41:26.392634+0000 mon.a (mon.0) 3432 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-144"}]': finished 2026-03-10T07:41:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:27 vm00 bash[28005]: audit 2026-03-10T07:41:26.392634+0000 mon.a (mon.0) 3432 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-144"}]': finished 2026-03-10T07:41:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:27 vm00 bash[28005]: cluster 2026-03-10T07:41:26.395345+0000 mon.a (mon.0) 3433 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-10T07:41:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:27 vm00 bash[28005]: cluster 2026-03-10T07:41:26.395345+0000 mon.a (mon.0) 3433 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-10T07:41:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:27 vm00 bash[28005]: audit 2026-03-10T07:41:26.397780+0000 mon.b (mon.1) 646 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-144", "mode": "readproxy"}]: dispatch 2026-03-10T07:41:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:27 vm00 bash[28005]: audit 2026-03-10T07:41:26.397780+0000 mon.b (mon.1) 646 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-144", "mode": "readproxy"}]: dispatch 2026-03-10T07:41:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:27 vm00 bash[28005]: audit 2026-03-10T07:41:26.401992+0000 mon.a (mon.0) 3434 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-144", "mode": "readproxy"}]: dispatch 2026-03-10T07:41:27.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:27 vm00 bash[28005]: audit 2026-03-10T07:41:26.401992+0000 mon.a (mon.0) 3434 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-144", "mode": "readproxy"}]: dispatch 2026-03-10T07:41:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:27 vm00 bash[20701]: audit 2026-03-10T07:41:26.392634+0000 mon.a (mon.0) 3432 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-144"}]': finished 2026-03-10T07:41:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:27 vm00 bash[20701]: audit 2026-03-10T07:41:26.392634+0000 mon.a (mon.0) 3432 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-144"}]': finished 2026-03-10T07:41:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:27 vm00 bash[20701]: cluster 2026-03-10T07:41:26.395345+0000 mon.a (mon.0) 3433 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-10T07:41:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:27 vm00 bash[20701]: cluster 2026-03-10T07:41:26.395345+0000 mon.a (mon.0) 3433 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-10T07:41:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:27 vm00 bash[20701]: audit 2026-03-10T07:41:26.397780+0000 mon.b (mon.1) 646 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-144", "mode": "readproxy"}]: dispatch 2026-03-10T07:41:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:27 vm00 bash[20701]: audit 2026-03-10T07:41:26.397780+0000 mon.b (mon.1) 646 : audit [INF] from='client.? 
2026-03-10T07:41:27.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:27 vm00 bash[20701]: audit 2026-03-10T07:41:26.401992+0000 mon.a (mon.0) 3434 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-144", "mode": "readproxy"}]: dispatch
2026-03-10T07:41:28.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:28 vm03 bash[23382]: cluster 2026-03-10T07:41:26.760760+0000 mgr.y (mgr.24407) 620 : cluster [DBG] pgmap v1102: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:41:28.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:28 vm03 bash[23382]: cluster 2026-03-10T07:41:27.393201+0000 mon.a (mon.0) 3435 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:41:28.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:28 vm03 bash[23382]: audit 2026-03-10T07:41:27.397770+0000 mon.a (mon.0) 3436 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-144", "mode": "readproxy"}]': finished
2026-03-10T07:41:28.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:28 vm03 bash[23382]: cluster 2026-03-10T07:41:27.408026+0000 mon.a (mon.0) 3437 : cluster [DBG] osdmap e709: 8 total, 8 up, 8 in
2026-03-10T07:41:28.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:28 vm00 bash[28005]: cluster 2026-03-10T07:41:26.760760+0000 mgr.y (mgr.24407) 620 : cluster [DBG] pgmap v1102: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:41:28.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:28 vm00 bash[28005]: cluster 2026-03-10T07:41:27.393201+0000 mon.a (mon.0) 3435 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:41:28.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:28 vm00 bash[28005]: audit 2026-03-10T07:41:27.397770+0000 mon.a (mon.0) 3436 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-144", "mode": "readproxy"}]': finished
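The CACHE_POOL_NO_HIT_SET warning above (entry 3435) fires because the newly attached cache pool has no HitSet configured, so the OSDs cannot track how recently objects were accessed; this RADOS API test case simply does not set one. On a production cache tier one would normally configure hit sets before enabling a cache mode; a sketch with illustrative bloom-filter values (the pool name comes from the log, the numbers do not):

    import subprocess

    def ceph(*args):
        # Thin wrapper over the ceph CLI; check=True raises on a non-zero exit.
        subprocess.run(["ceph", *args], check=True)

    cache = "test-rados-api-vm00-59782-144"
    ceph("osd", "pool", "set", cache, "hit_set_type", "bloom")  # one bloom filter per interval
    ceph("osd", "pool", "set", cache, "hit_set_count", "8")     # keep 8 intervals of history
    ceph("osd", "pool", "set", cache, "hit_set_period", "60")   # 60 s per interval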
2026-03-10T07:41:28.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:28 vm00 bash[28005]: cluster 2026-03-10T07:41:27.408026+0000 mon.a (mon.0) 3437 : cluster [DBG] osdmap e709: 8 total, 8 up, 8 in
2026-03-10T07:41:28.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:28 vm00 bash[20701]: cluster 2026-03-10T07:41:26.760760+0000 mgr.y (mgr.24407) 620 : cluster [DBG] pgmap v1102: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:41:28.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:28 vm00 bash[20701]: cluster 2026-03-10T07:41:27.393201+0000 mon.a (mon.0) 3435 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T07:41:28.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:28 vm00 bash[20701]: audit 2026-03-10T07:41:27.397770+0000 mon.a (mon.0) 3436 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-144", "mode": "readproxy"}]': finished
2026-03-10T07:41:28.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:28 vm00 bash[20701]: cluster 2026-03-10T07:41:27.408026+0000 mon.a (mon.0) 3437 : cluster [DBG] osdmap e709: 8 total, 8 up, 8 in
2026-03-10T07:41:29.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:29 vm03 bash[23382]: cluster 2026-03-10T07:41:28.761148+0000 mgr.y (mgr.24407) 621 : cluster [DBG] pgmap v1104: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 234 B/s wr, 1 op/s
2026-03-10T07:41:29.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:29 vm00 bash[28005]: cluster 2026-03-10T07:41:28.761148+0000 mgr.y (mgr.24407) 621 : cluster [DBG] pgmap v1104: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 234 B/s wr, 1 op/s
2026-03-10T07:41:29.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:29 vm00 bash[20701]: cluster 2026-03-10T07:41:28.761148+0000 mgr.y (mgr.24407) 621 : cluster [DBG] pgmap v1104: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 234 B/s wr, 1 op/s
2026-03-10T07:41:31.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:41:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:41:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:41:32.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:31 vm00 bash[20701]: cluster 2026-03-10T07:41:30.761901+0000 mgr.y (mgr.24407) 622 : cluster [DBG] pgmap v1105: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1 op/s
2026-03-10T07:41:32.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:31 vm00 bash[28005]: cluster 2026-03-10T07:41:30.761901+0000 mgr.y (mgr.24407) 622 : cluster [DBG] pgmap v1105: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1 op/s
2026-03-10T07:41:32.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:31 vm03 bash[23382]: cluster 2026-03-10T07:41:30.761901+0000 mgr.y (mgr.24407) 622 : cluster [DBG] pgmap v1105: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1 op/s
2026-03-10T07:41:33.513 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:41:33 vm03 bash[51371]: logger=sqlstore.transactions t=2026-03-10T07:41:33.088407714Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
2026-03-10T07:41:33.847 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:41:33 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:41:34.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:33 vm00 bash[20701]: cluster 2026-03-10T07:41:32.762223+0000 mgr.y (mgr.24407) 623 : cluster [DBG] pgmap v1106: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 833 B/s rd, 0 op/s
2026-03-10T07:41:34.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:33 vm00 bash[28005]: cluster 2026-03-10T07:41:32.762223+0000 mgr.y (mgr.24407) 623 : cluster [DBG] pgmap v1106: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 833 B/s rd, 0 op/s
2026-03-10T07:41:34.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:33 vm03 bash[23382]: cluster 2026-03-10T07:41:32.762223+0000 mgr.y (mgr.24407) 623 : cluster [DBG] pgmap v1106: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 833 B/s rd, 0 op/s
2026-03-10T07:41:35.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:34 vm00 bash[28005]: audit 2026-03-10T07:41:33.573210+0000 mgr.y (mgr.24407) 624 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:41:35.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:34 vm00 bash[20701]: audit 2026-03-10T07:41:33.573210+0000 mgr.y (mgr.24407) 624 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:41:35.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:34 vm03 bash[23382]: audit 2026-03-10T07:41:33.573210+0000 mgr.y (mgr.24407) 624 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:41:36.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:35 vm00 bash[20701]: cluster 2026-03-10T07:41:34.762777+0000 mgr.y (mgr.24407) 625 : cluster [DBG] pgmap v1107: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 734 B/s rd, 0 op/s
2026-03-10T07:41:36.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:35 vm00 bash[28005]: cluster 2026-03-10T07:41:34.762777+0000 mgr.y (mgr.24407) 625 : cluster [DBG] pgmap v1107: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 734 B/s rd, 0 op/s
2026-03-10T07:41:36.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:35 vm03 bash[23382]: cluster 2026-03-10T07:41:34.762777+0000 mgr.y (mgr.24407) 625 : cluster [DBG] pgmap v1107: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 734 B/s rd, 0 op/s
2026-03-10T07:41:38.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:37 vm03 bash[23382]: cluster 2026-03-10T07:41:36.763367+0000 mgr.y (mgr.24407) 626 : cluster [DBG] pgmap v1108: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T07:41:38.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:37 vm03 bash[23382]: audit 2026-03-10T07:41:37.515275+0000 mon.a (mon.0) 3438 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:41:38.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:37 vm03 bash[23382]: audit 2026-03-10T07:41:37.516129+0000 mon.b (mon.1) 647 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:41:38.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:37 vm00 bash[20701]: cluster 2026-03-10T07:41:36.763367+0000 mgr.y (mgr.24407) 626 : cluster [DBG] pgmap v1108: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T07:41:38.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:37 vm00 bash[20701]: audit 2026-03-10T07:41:37.515275+0000 mon.a (mon.0) 3438 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:41:38.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:37 vm00 bash[20701]: audit 2026-03-10T07:41:37.516129+0000 mon.b (mon.1) 647 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:41:38.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:37 vm00 bash[28005]: cluster 2026-03-10T07:41:36.763367+0000 mgr.y (mgr.24407) 626 : cluster [DBG] pgmap v1108: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T07:41:38.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:37 vm00 bash[28005]: audit 2026-03-10T07:41:37.515275+0000 mon.a (mon.0) 3438 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:41:38.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:37 vm00 bash[28005]: audit 2026-03-10T07:41:37.516129+0000 mon.b (mon.1) 647 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:41:39.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:38 vm03 bash[23382]: audit 2026-03-10T07:41:37.869030+0000 mon.a (mon.0) 3439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:41:39.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:38 vm03 bash[23382]: audit 2026-03-10T07:41:37.874014+0000 mon.b (mon.1) 648 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144"}]: dispatch
2026-03-10T07:41:39.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:38 vm03 bash[23382]: cluster 2026-03-10T07:41:37.877134+0000 mon.a (mon.0) 3440 : cluster [DBG] osdmap e710: 8 total, 8 up, 8 in
2026-03-10T07:41:39.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:38 vm03 bash[23382]: audit 2026-03-10T07:41:37.878389+0000 mon.a (mon.0) 3441 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144"}]: dispatch
2026-03-10T07:41:39.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:38 vm03 bash[23382]: cluster 2026-03-10T07:41:38.869191+0000 mon.a (mon.0) 3442 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:41:39.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:38 vm03 bash[23382]: audit 2026-03-10T07:41:38.872194+0000 mon.a (mon.0) 3443 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144"}]': finished
2026-03-10T07:41:39.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:38 vm03 bash[23382]: cluster 2026-03-10T07:41:38.877685+0000 mon.a (mon.0) 3444 : cluster [DBG] osdmap e711: 8 total, 8 up, 8 in
2026-03-10T07:41:39.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:38 vm00 bash[28005]: audit 2026-03-10T07:41:37.869030+0000 mon.a (mon.0) 3439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:41:39.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:38 vm00 bash[28005]: audit 2026-03-10T07:41:37.874014+0000 mon.b (mon.1) 648 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144"}]: dispatch
2026-03-10T07:41:39.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:38 vm00 bash[28005]: cluster 2026-03-10T07:41:37.877134+0000 mon.a (mon.0) 3440 : cluster [DBG] osdmap e710: 8 total, 8 up, 8 in
2026-03-10T07:41:39.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:38 vm00 bash[28005]: audit 2026-03-10T07:41:37.878389+0000 mon.a (mon.0) 3441 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144"}]: dispatch
2026-03-10T07:41:39.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:38 vm00 bash[28005]: cluster 2026-03-10T07:41:38.869191+0000 mon.a (mon.0) 3442 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:41:39.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:38 vm00 bash[28005]: audit 2026-03-10T07:41:38.872194+0000 mon.a (mon.0) 3443 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144"}]': finished
2026-03-10T07:41:39.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:38 vm00 bash[28005]: cluster 2026-03-10T07:41:38.877685+0000 mon.a (mon.0) 3444 : cluster [DBG] osdmap e711: 8 total, 8 up, 8 in
2026-03-10T07:41:39.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:38 vm00 bash[20701]: audit 2026-03-10T07:41:37.869030+0000 mon.a (mon.0) 3439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:41:39.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:38 vm00 bash[20701]: audit 2026-03-10T07:41:37.874014+0000 mon.b (mon.1) 648 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144"}]: dispatch
2026-03-10T07:41:39.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:38 vm00 bash[20701]: cluster 2026-03-10T07:41:37.877134+0000 mon.a (mon.0) 3440 : cluster [DBG] osdmap e710: 8 total, 8 up, 8 in
2026-03-10T07:41:39.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:38 vm00 bash[20701]: audit 2026-03-10T07:41:37.878389+0000 mon.a (mon.0) 3441 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144"}]: dispatch
2026-03-10T07:41:39.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:38 vm00 bash[20701]: cluster 2026-03-10T07:41:38.869191+0000 mon.a (mon.0) 3442 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T07:41:39.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:38 vm00 bash[20701]: audit 2026-03-10T07:41:38.872194+0000 mon.a (mon.0) 3443 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144"}]': finished
2026-03-10T07:41:39.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:38 vm00 bash[20701]: cluster 2026-03-10T07:41:38.877685+0000 mon.a (mon.0) 3444 : cluster [DBG] osdmap e711: 8 total, 8 up, 8 in
2026-03-10T07:41:40.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:39 vm03 bash[23382]: cluster 2026-03-10T07:41:38.763698+0000 mgr.y (mgr.24407) 627 : cluster [DBG] pgmap v1110: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T07:41:40.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:39 vm03 bash[23382]: audit 2026-03-10T07:41:38.908902+0000 mon.b (mon.1) 649 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:41:40.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:39 vm03 bash[23382]: audit 2026-03-10T07:41:38.914656+0000 mon.a (mon.0) 3445 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
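The teardown above mirrors the attach sequence: 'osd tier remove-overlay' finishes (entries 3438/3439), 'osd tier remove' detaches the tier (3441/3443), and mon.a then clears CACHE_POOL_NO_HIT_SET (3442) since no cache pool is left lacking hit sets. The same steps as a sketch, under the same assumptions as the earlier snippet:

    import subprocess

    def ceph(*args):
        # Thin wrapper over the ceph CLI; check=True raises on a non-zero exit.
        subprocess.run(["ceph", *args], check=True)

    base = "test-rados-api-vm00-59782-111"
    cache = "test-rados-api-vm00-59782-144"
    ceph("osd", "tier", "remove-overlay", base)  # stop routing client I/O through the cache
    ceph("osd", "tier", "remove", base, cache)   # detach the tier relationship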
2026-03-10T07:41:40.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:39 vm03 bash[23382]: audit 2026-03-10T07:41:38.915341+0000 mon.a (mon.0) 3446 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144"}]: dispatch
2026-03-10T07:41:40.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:39 vm03 bash[23382]: audit 2026-03-10T07:41:38.916777+0000 mon.b (mon.1) 650 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144"}]: dispatch
2026-03-10T07:41:40.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:39 vm03 bash[23382]: audit 2026-03-10T07:41:39.805667+0000 mon.c (mon.2) 364 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:41:40.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:39 vm00 bash[28005]: cluster 2026-03-10T07:41:38.763698+0000 mgr.y (mgr.24407) 627 : cluster [DBG] pgmap v1110: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T07:41:40.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:39 vm00 bash[28005]: audit 2026-03-10T07:41:38.908902+0000 mon.b (mon.1) 649 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:41:40.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:39 vm00 bash[28005]: audit 2026-03-10T07:41:38.914656+0000 mon.a (mon.0) 3445 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:41:40.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:39 vm00 bash[28005]: audit 2026-03-10T07:41:38.915341+0000 mon.a (mon.0) 3446 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144"}]: dispatch
2026-03-10T07:41:40.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:39 vm00 bash[28005]: audit 2026-03-10T07:41:38.916777+0000 mon.b (mon.1) 650 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144"}]: dispatch
2026-03-10T07:41:40.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:39 vm00 bash[28005]: audit 2026-03-10T07:41:39.805667+0000 mon.c (mon.2) 364 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:41:40.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:39 vm00 bash[20701]: cluster 2026-03-10T07:41:38.763698+0000 mgr.y (mgr.24407) 627 : cluster [DBG] pgmap v1110: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T07:41:40.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:39 vm00 bash[20701]: audit 2026-03-10T07:41:38.908902+0000 mon.b (mon.1) 649 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:41:40.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:39 vm00 bash[20701]: audit 2026-03-10T07:41:38.914656+0000 mon.a (mon.0) 3445 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:41:40.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:39 vm00 bash[20701]: audit 2026-03-10T07:41:38.915341+0000 mon.a (mon.0) 3446 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144"}]: dispatch
2026-03-10T07:41:40.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:39 vm00 bash[20701]: audit 2026-03-10T07:41:38.916777+0000 mon.b (mon.1) 650 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-144"}]: dispatch
2026-03-10T07:41:40.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:39 vm00 bash[20701]: audit 2026-03-10T07:41:39.805667+0000 mon.c (mon.2) 364 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:41:41.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:40 vm03 bash[23382]: cluster 2026-03-10T07:41:39.914454+0000 mon.a (mon.0) 3447 : cluster [DBG] osdmap e712: 8 total, 8 up, 8 in
2026-03-10T07:41:41.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:40 vm00 bash[28005]: cluster 2026-03-10T07:41:39.914454+0000 mon.a (mon.0) 3447 : cluster [DBG] osdmap e712: 8 total, 8 up, 8 in
2026-03-10T07:41:41.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:41:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:41:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:41:41.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:40 vm00 bash[20701]: cluster 2026-03-10T07:41:39.914454+0000 mon.a (mon.0) 3447 : cluster [DBG] osdmap e712: 8 total, 8 up, 8 in
2026-03-10T07:41:42.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:41 vm03 bash[23382]: cluster 2026-03-10T07:41:40.764022+0000 mgr.y (mgr.24407) 628 : cluster [DBG] pgmap v1113: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-10T07:41:42.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:41 vm03 bash[23382]: cluster 2026-03-10T07:41:40.955221+0000 mon.a (mon.0) 3448 : cluster [DBG] osdmap e713: 8 total, 8 up, 8 in
2026-03-10T07:41:42.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:41 vm03 bash[23382]: audit 2026-03-10T07:41:40.963149+0000 mon.b (mon.1) 651 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-146","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:41:42.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:41 vm03 bash[23382]: audit 2026-03-10T07:41:40.963492+0000 mon.a (mon.0) 3449 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-146","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:41:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:41 vm00 bash[28005]: cluster 2026-03-10T07:41:40.764022+0000 mgr.y (mgr.24407) 628 : cluster [DBG] pgmap v1113: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-10T07:41:42.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:41 vm00 bash[28005]: cluster 2026-03-10T07:41:40.955221+0000 mon.a (mon.0) 3448 : cluster [DBG] osdmap e713: 8 total, 8 up, 8 in
2026-03-10T07:41:42.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:41 vm00 bash[28005]: audit 2026-03-10T07:41:40.963149+0000 mon.b (mon.1) 651 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-146","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:41:42.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:41 vm00 bash[28005]: audit 2026-03-10T07:41:40.963492+0000 mon.a (mon.0) 3449 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-146","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:41:42.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:41 vm00 bash[20701]: cluster 2026-03-10T07:41:40.764022+0000 mgr.y (mgr.24407) 628 : cluster [DBG] pgmap v1113: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-10T07:41:42.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:41 vm00 bash[20701]: cluster 2026-03-10T07:41:40.955221+0000 mon.a (mon.0) 3448 : cluster [DBG] osdmap e713: 8 total, 8 up, 8 in
2026-03-10T07:41:42.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:41 vm00 bash[20701]: audit 2026-03-10T07:41:40.963149+0000 mon.b (mon.1) 651 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-146","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:41:42.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:41 vm00 bash[20701]: audit 2026-03-10T07:41:40.963492+0000 mon.a (mon.0) 3449 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-146","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:41:43.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:42 vm03 bash[23382]: audit 2026-03-10T07:41:41.943393+0000 mon.a (mon.0) 3450 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-146","app": "rados","yes_i_really_mean_it": true}]': finished
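With the application tag on pool test-rados-api-vm00-59782-146 just finished (entry 3450), the entries below repeat the attach cycle with that fresh cache pool: 'osd tier add ... --force-nonempty' (3452/3453), presumably forced because the pool already holds test objects, followed by 'osd tier set-overlay' (3455). Sketched under the same assumptions as the earlier snippets:

    import subprocess

    def ceph(*args):
        # Thin wrapper over the ceph CLI; check=True raises on a non-zero exit.
        subprocess.run(["ceph", *args], check=True)

    base = "test-rados-api-vm00-59782-111"
    cache = "test-rados-api-vm00-59782-146"
    # --yes-i-really-mean-it mirrors the "yes_i_really_mean_it": true in the audit payload.
    ceph("osd", "pool", "application", "enable", cache, "rados", "--yes-i-really-mean-it")
    ceph("osd", "tier", "add", base, cache, "--force-nonempty")  # attach despite existing data
    ceph("osd", "tier", "set-overlay", base, cache)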
2026-03-10T07:41:43.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:42 vm03 bash[23382]: cluster 2026-03-10T07:41:41.960048+0000 mon.a (mon.0) 3451 : cluster [DBG] osdmap e714: 8 total, 8 up, 8 in
2026-03-10T07:41:43.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:42 vm03 bash[23382]: audit 2026-03-10T07:41:42.007348+0000 mon.a (mon.0) 3452 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:41:43.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:42 vm03 bash[23382]: audit 2026-03-10T07:41:42.008564+0000 mon.b (mon.1) 652 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:41:43.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:42 vm00 bash[28005]: audit 2026-03-10T07:41:41.943393+0000 mon.a (mon.0) 3450 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-146","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:41:43.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:42 vm00 bash[28005]: cluster 2026-03-10T07:41:41.960048+0000 mon.a (mon.0) 3451 : cluster [DBG] osdmap e714: 8 total, 8 up, 8 in
2026-03-10T07:41:43.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:42 vm00 bash[28005]: audit 2026-03-10T07:41:42.007348+0000 mon.a (mon.0) 3452 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:41:43.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:42 vm00 bash[28005]: audit 2026-03-10T07:41:42.008564+0000 mon.b (mon.1) 652 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:41:43.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:42 vm00 bash[20701]: audit 2026-03-10T07:41:41.943393+0000 mon.a (mon.0) 3450 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-146","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:41:43.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:42 vm00 bash[20701]: cluster 2026-03-10T07:41:41.960048+0000 mon.a (mon.0) 3451 : cluster [DBG] osdmap e714: 8 total, 8 up, 8 in
2026-03-10T07:41:43.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:42 vm00 bash[20701]: audit 2026-03-10T07:41:42.007348+0000 mon.a (mon.0) 3452 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:41:43.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:42 vm00 bash[20701]: audit 2026-03-10T07:41:42.008564+0000 mon.b (mon.1) 652 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T07:41:43.993 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:41:43 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:41:44.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:43 vm03 bash[23382]: cluster 2026-03-10T07:41:42.764370+0000 mgr.y (mgr.24407) 629 : cluster [DBG] pgmap v1116: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:41:44.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:43 vm03 bash[23382]: audit 2026-03-10T07:41:42.980096+0000 mon.a (mon.0) 3453 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T07:41:44.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:43 vm03 bash[23382]: cluster 2026-03-10T07:41:42.983708+0000 mon.a (mon.0) 3454 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in
2026-03-10T07:41:44.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:43 vm03 bash[23382]: audit 2026-03-10T07:41:42.993708+0000 mon.a (mon.0) 3455 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-146"}]: dispatch
2026-03-10T07:41:44.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:43 vm03 bash[23382]: audit 2026-03-10T07:41:42.994689+0000 mon.b (mon.1) 653 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-146"}]: dispatch 2026-03-10T07:41:44.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:44 vm03 bash[23382]: audit 2026-03-10T07:41:42.994689+0000 mon.b (mon.1) 653 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-146"}]: dispatch 2026-03-10T07:41:44.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:43 vm00 bash[28005]: cluster 2026-03-10T07:41:42.764370+0000 mgr.y (mgr.24407) 629 : cluster [DBG] pgmap v1116: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:41:44.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:43 vm00 bash[28005]: cluster 2026-03-10T07:41:42.764370+0000 mgr.y (mgr.24407) 629 : cluster [DBG] pgmap v1116: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:41:44.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:43 vm00 bash[28005]: audit 2026-03-10T07:41:42.980096+0000 mon.a (mon.0) 3453 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:41:44.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:44 vm00 bash[28005]: audit 2026-03-10T07:41:42.980096+0000 mon.a (mon.0) 3453 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:41:44.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:44 vm00 bash[28005]: cluster 2026-03-10T07:41:42.983708+0000 mon.a (mon.0) 3454 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-10T07:41:44.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:44 vm00 bash[28005]: cluster 2026-03-10T07:41:42.983708+0000 mon.a (mon.0) 3454 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-10T07:41:44.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:44 vm00 bash[28005]: audit 2026-03-10T07:41:42.993708+0000 mon.a (mon.0) 3455 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-146"}]: dispatch 2026-03-10T07:41:44.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:44 vm00 bash[28005]: audit 2026-03-10T07:41:42.993708+0000 mon.a (mon.0) 3455 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-146"}]: dispatch 2026-03-10T07:41:44.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:44 vm00 bash[28005]: audit 2026-03-10T07:41:42.994689+0000 mon.b (mon.1) 653 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-146"}]: dispatch 2026-03-10T07:41:44.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:44 vm00 bash[28005]: audit 2026-03-10T07:41:42.994689+0000 mon.b (mon.1) 653 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-146"}]: dispatch 2026-03-10T07:41:44.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:43 vm00 bash[20701]: cluster 2026-03-10T07:41:42.764370+0000 mgr.y (mgr.24407) 629 : cluster [DBG] pgmap v1116: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:41:44.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:43 vm00 bash[20701]: cluster 2026-03-10T07:41:42.764370+0000 mgr.y (mgr.24407) 629 : cluster [DBG] pgmap v1116: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:41:44.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:43 vm00 bash[20701]: audit 2026-03-10T07:41:42.980096+0000 mon.a (mon.0) 3453 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:41:44.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:43 vm00 bash[20701]: audit 2026-03-10T07:41:42.980096+0000 mon.a (mon.0) 3453 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:41:44.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:43 vm00 bash[20701]: cluster 2026-03-10T07:41:42.983708+0000 mon.a (mon.0) 3454 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-10T07:41:44.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:43 vm00 bash[20701]: cluster 2026-03-10T07:41:42.983708+0000 mon.a (mon.0) 3454 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-10T07:41:44.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:43 vm00 bash[20701]: audit 2026-03-10T07:41:42.993708+0000 mon.a (mon.0) 3455 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-146"}]: dispatch 2026-03-10T07:41:44.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:43 vm00 bash[20701]: audit 2026-03-10T07:41:42.993708+0000 mon.a (mon.0) 3455 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-146"}]: dispatch 2026-03-10T07:41:44.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:43 vm00 bash[20701]: audit 2026-03-10T07:41:42.994689+0000 mon.b (mon.1) 653 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-146"}]: dispatch 2026-03-10T07:41:44.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:43 vm00 bash[20701]: audit 2026-03-10T07:41:42.994689+0000 mon.b (mon.1) 653 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-146"}]: dispatch 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:45 vm00 bash[28005]: audit 2026-03-10T07:41:43.579992+0000 mgr.y (mgr.24407) 630 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:45 vm00 bash[28005]: audit 2026-03-10T07:41:43.579992+0000 mgr.y (mgr.24407) 630 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:45 vm00 bash[28005]: audit 2026-03-10T07:41:43.983201+0000 mon.a (mon.0) 3456 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-146"}]': finished 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:45 vm00 bash[28005]: audit 2026-03-10T07:41:43.983201+0000 mon.a (mon.0) 3456 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-146"}]': finished 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:45 vm00 bash[28005]: cluster 2026-03-10T07:41:43.986208+0000 mon.a (mon.0) 3457 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:45 vm00 bash[28005]: cluster 2026-03-10T07:41:43.986208+0000 mon.a (mon.0) 3457 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:45 vm00 bash[28005]: audit 2026-03-10T07:41:43.988833+0000 mon.b (mon.1) 654 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-146", "mode": "writeback"}]: dispatch 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:45 vm00 bash[28005]: audit 2026-03-10T07:41:43.988833+0000 mon.b (mon.1) 654 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-146", "mode": "writeback"}]: dispatch 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:45 vm00 bash[28005]: audit 2026-03-10T07:41:43.995411+0000 mon.a (mon.0) 3458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-146", "mode": "writeback"}]: dispatch 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:45 vm00 bash[28005]: audit 2026-03-10T07:41:43.995411+0000 mon.a (mon.0) 3458 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-146", "mode": "writeback"}]: dispatch 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:45 vm00 bash[28005]: cluster 2026-03-10T07:41:44.983529+0000 mon.a (mon.0) 3459 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:45 vm00 bash[28005]: cluster 2026-03-10T07:41:44.983529+0000 mon.a (mon.0) 3459 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:45 vm00 bash[20701]: audit 2026-03-10T07:41:43.579992+0000 mgr.y (mgr.24407) 630 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:45 vm00 bash[20701]: audit 2026-03-10T07:41:43.579992+0000 mgr.y (mgr.24407) 630 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:45 vm00 bash[20701]: audit 2026-03-10T07:41:43.983201+0000 mon.a (mon.0) 3456 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-146"}]': finished 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:45 vm00 bash[20701]: audit 2026-03-10T07:41:43.983201+0000 mon.a (mon.0) 3456 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-146"}]': finished 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:45 vm00 bash[20701]: cluster 2026-03-10T07:41:43.986208+0000 mon.a (mon.0) 3457 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:45 vm00 bash[20701]: cluster 2026-03-10T07:41:43.986208+0000 mon.a (mon.0) 3457 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:45 vm00 bash[20701]: audit 2026-03-10T07:41:43.988833+0000 mon.b (mon.1) 654 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-146", "mode": "writeback"}]: dispatch 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:45 vm00 bash[20701]: audit 2026-03-10T07:41:43.988833+0000 mon.b (mon.1) 654 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-146", "mode": "writeback"}]: dispatch 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:45 vm00 bash[20701]: audit 2026-03-10T07:41:43.995411+0000 mon.a (mon.0) 3458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-146", "mode": "writeback"}]: dispatch 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:45 vm00 bash[20701]: audit 2026-03-10T07:41:43.995411+0000 mon.a (mon.0) 3458 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-146", "mode": "writeback"}]: dispatch 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:45 vm00 bash[20701]: cluster 2026-03-10T07:41:44.983529+0000 mon.a (mon.0) 3459 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:41:45.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:45 vm00 bash[20701]: cluster 2026-03-10T07:41:44.983529+0000 mon.a (mon.0) 3459 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:41:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:45 vm03 bash[23382]: audit 2026-03-10T07:41:43.579992+0000 mgr.y (mgr.24407) 630 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:41:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:45 vm03 bash[23382]: audit 2026-03-10T07:41:43.579992+0000 mgr.y (mgr.24407) 630 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:41:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:45 vm03 bash[23382]: audit 2026-03-10T07:41:43.983201+0000 mon.a (mon.0) 3456 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-146"}]': finished 2026-03-10T07:41:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:45 vm03 bash[23382]: audit 2026-03-10T07:41:43.983201+0000 mon.a (mon.0) 3456 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm00-59782-111", "overlaypool": "test-rados-api-vm00-59782-146"}]': finished 2026-03-10T07:41:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:45 vm03 bash[23382]: cluster 2026-03-10T07:41:43.986208+0000 mon.a (mon.0) 3457 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-10T07:41:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:45 vm03 bash[23382]: cluster 2026-03-10T07:41:43.986208+0000 mon.a (mon.0) 3457 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-10T07:41:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:45 vm03 bash[23382]: audit 2026-03-10T07:41:43.988833+0000 mon.b (mon.1) 654 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-146", "mode": "writeback"}]: dispatch 2026-03-10T07:41:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:45 vm03 bash[23382]: audit 2026-03-10T07:41:43.988833+0000 mon.b (mon.1) 654 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-146", "mode": "writeback"}]: dispatch 2026-03-10T07:41:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:45 vm03 bash[23382]: audit 2026-03-10T07:41:43.995411+0000 mon.a (mon.0) 3458 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-146", "mode": "writeback"}]: dispatch 2026-03-10T07:41:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:45 vm03 bash[23382]: audit 2026-03-10T07:41:43.995411+0000 mon.a (mon.0) 3458 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-146", "mode": "writeback"}]: dispatch 2026-03-10T07:41:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:45 vm03 bash[23382]: cluster 2026-03-10T07:41:44.983529+0000 mon.a (mon.0) 3459 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:41:45.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:45 vm03 bash[23382]: cluster 2026-03-10T07:41:44.983529+0000 mon.a (mon.0) 3459 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T07:41:46.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:46 vm00 bash[28005]: cluster 2026-03-10T07:41:44.765183+0000 mgr.y (mgr.24407) 631 : cluster [DBG] pgmap v1119: 268 pgs: 4 unknown, 264 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s wr, 0 op/s 2026-03-10T07:41:46.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:46 vm00 bash[28005]: cluster 2026-03-10T07:41:44.765183+0000 mgr.y (mgr.24407) 631 : cluster [DBG] pgmap v1119: 268 pgs: 4 unknown, 264 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s wr, 0 op/s 2026-03-10T07:41:46.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:46 vm00 bash[28005]: audit 2026-03-10T07:41:45.008715+0000 mon.a (mon.0) 3460 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-146", "mode": "writeback"}]': finished 2026-03-10T07:41:46.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:46 vm00 bash[28005]: audit 2026-03-10T07:41:45.008715+0000 mon.a (mon.0) 3460 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-146", "mode": "writeback"}]': finished 2026-03-10T07:41:46.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:46 vm00 bash[28005]: cluster 2026-03-10T07:41:45.015945+0000 mon.a (mon.0) 3461 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-10T07:41:46.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:46 vm00 bash[28005]: cluster 2026-03-10T07:41:45.015945+0000 mon.a (mon.0) 3461 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-10T07:41:46.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:46 vm00 bash[28005]: audit 2026-03-10T07:41:45.067444+0000 mon.a (mon.0) 3462 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:41:46.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:46 vm00 bash[28005]: audit 2026-03-10T07:41:45.067444+0000 mon.a (mon.0) 3462 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:41:46.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:46 vm00 bash[28005]: audit 2026-03-10T07:41:45.068725+0000 mon.b (mon.1) 655 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:41:46.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:46 vm00 bash[28005]: audit 2026-03-10T07:41:45.068725+0000 mon.b (mon.1) 655 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:41:46.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:46 vm00 bash[20701]: cluster 2026-03-10T07:41:44.765183+0000 mgr.y (mgr.24407) 631 : cluster [DBG] pgmap v1119: 268 pgs: 4 unknown, 264 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s wr, 0 op/s 2026-03-10T07:41:46.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:46 vm00 bash[20701]: cluster 2026-03-10T07:41:44.765183+0000 mgr.y (mgr.24407) 631 : cluster [DBG] pgmap v1119: 268 pgs: 4 unknown, 264 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s wr, 0 op/s 2026-03-10T07:41:46.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:46 vm00 bash[20701]: audit 2026-03-10T07:41:45.008715+0000 mon.a (mon.0) 3460 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-146", "mode": "writeback"}]': finished 2026-03-10T07:41:46.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:46 vm00 bash[20701]: audit 2026-03-10T07:41:45.008715+0000 mon.a (mon.0) 3460 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-146", "mode": "writeback"}]': finished 2026-03-10T07:41:46.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:46 vm00 bash[20701]: cluster 2026-03-10T07:41:45.015945+0000 mon.a (mon.0) 3461 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-10T07:41:46.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:46 vm00 bash[20701]: cluster 2026-03-10T07:41:45.015945+0000 mon.a (mon.0) 3461 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-10T07:41:46.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:46 vm00 bash[20701]: audit 2026-03-10T07:41:45.067444+0000 mon.a (mon.0) 3462 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:41:46.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:46 vm00 bash[20701]: audit 2026-03-10T07:41:45.067444+0000 mon.a (mon.0) 3462 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:41:46.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:46 vm00 bash[20701]: audit 2026-03-10T07:41:45.068725+0000 mon.b (mon.1) 655 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:41:46.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:46 vm00 bash[20701]: audit 2026-03-10T07:41:45.068725+0000 mon.b (mon.1) 655 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:41:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:46 vm03 bash[23382]: cluster 2026-03-10T07:41:44.765183+0000 mgr.y (mgr.24407) 631 : cluster [DBG] pgmap v1119: 268 pgs: 4 unknown, 264 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s wr, 0 op/s 2026-03-10T07:41:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:46 vm03 bash[23382]: cluster 2026-03-10T07:41:44.765183+0000 mgr.y (mgr.24407) 631 : cluster [DBG] pgmap v1119: 268 pgs: 4 unknown, 264 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s wr, 0 op/s 2026-03-10T07:41:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:46 vm03 bash[23382]: audit 2026-03-10T07:41:45.008715+0000 mon.a (mon.0) 3460 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-146", "mode": "writeback"}]': finished 2026-03-10T07:41:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:46 vm03 bash[23382]: audit 2026-03-10T07:41:45.008715+0000 mon.a (mon.0) 3460 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm00-59782-146", "mode": "writeback"}]': finished 2026-03-10T07:41:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:46 vm03 bash[23382]: cluster 2026-03-10T07:41:45.015945+0000 mon.a (mon.0) 3461 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-10T07:41:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:46 vm03 bash[23382]: cluster 2026-03-10T07:41:45.015945+0000 mon.a (mon.0) 3461 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-10T07:41:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:46 vm03 bash[23382]: audit 2026-03-10T07:41:45.067444+0000 mon.a (mon.0) 3462 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:41:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:46 vm03 bash[23382]: audit 2026-03-10T07:41:45.067444+0000 mon.a (mon.0) 3462 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:41:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:46 vm03 bash[23382]: audit 2026-03-10T07:41:45.068725+0000 mon.b (mon.1) 655 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:41:46.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:46 vm03 bash[23382]: audit 2026-03-10T07:41:45.068725+0000 mon.b (mon.1) 655 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:47 vm00 bash[28005]: audit 2026-03-10T07:41:46.066116+0000 mon.a (mon.0) 3463 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:47 vm00 bash[28005]: audit 2026-03-10T07:41:46.066116+0000 mon.a (mon.0) 3463 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:47 vm00 bash[28005]: cluster 2026-03-10T07:41:46.068963+0000 mon.a (mon.0) 3464 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:47 vm00 bash[28005]: cluster 2026-03-10T07:41:46.068963+0000 mon.a (mon.0) 3464 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:47 vm00 bash[28005]: audit 2026-03-10T07:41:46.072022+0000 mon.b (mon.1) 656 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:47 vm00 bash[28005]: audit 2026-03-10T07:41:46.072022+0000 mon.b (mon.1) 656 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:47 vm00 bash[28005]: audit 2026-03-10T07:41:46.074124+0000 mon.a (mon.0) 3465 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:47 vm00 bash[28005]: audit 2026-03-10T07:41:46.074124+0000 mon.a (mon.0) 3465 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:47 vm00 bash[28005]: audit 2026-03-10T07:41:47.069449+0000 mon.a (mon.0) 3466 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:47 vm00 bash[28005]: audit 2026-03-10T07:41:47.069449+0000 mon.a (mon.0) 3466 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:47 vm00 bash[28005]: audit 2026-03-10T07:41:47.075301+0000 mon.b (mon.1) 657 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:47 vm00 bash[28005]: audit 2026-03-10T07:41:47.075301+0000 mon.b (mon.1) 657 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:47 vm00 bash[28005]: cluster 2026-03-10T07:41:47.080953+0000 mon.a (mon.0) 3467 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:47 vm00 bash[28005]: cluster 2026-03-10T07:41:47.080953+0000 mon.a (mon.0) 3467 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:47 vm00 bash[20701]: audit 2026-03-10T07:41:46.066116+0000 mon.a (mon.0) 3463 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:47 vm00 bash[20701]: audit 2026-03-10T07:41:46.066116+0000 mon.a (mon.0) 3463 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:47 vm00 bash[20701]: cluster 2026-03-10T07:41:46.068963+0000 mon.a (mon.0) 3464 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:47 vm00 bash[20701]: cluster 2026-03-10T07:41:46.068963+0000 mon.a (mon.0) 3464 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:47 vm00 bash[20701]: audit 2026-03-10T07:41:46.072022+0000 mon.b (mon.1) 656 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:47 vm00 bash[20701]: audit 2026-03-10T07:41:46.072022+0000 mon.b (mon.1) 656 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:47 vm00 bash[20701]: audit 2026-03-10T07:41:46.074124+0000 mon.a (mon.0) 3465 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:47 vm00 bash[20701]: audit 2026-03-10T07:41:46.074124+0000 mon.a (mon.0) 3465 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:47 vm00 bash[20701]: audit 2026-03-10T07:41:47.069449+0000 mon.a (mon.0) 3466 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:47 vm00 bash[20701]: audit 2026-03-10T07:41:47.069449+0000 mon.a (mon.0) 3466 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:47 vm00 bash[20701]: audit 2026-03-10T07:41:47.075301+0000 mon.b (mon.1) 657 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:47 vm00 bash[20701]: audit 2026-03-10T07:41:47.075301+0000 mon.b (mon.1) 657 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:47 vm00 bash[20701]: cluster 2026-03-10T07:41:47.080953+0000 mon.a (mon.0) 3467 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-10T07:41:47.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:47 vm00 bash[20701]: cluster 2026-03-10T07:41:47.080953+0000 mon.a (mon.0) 3467 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-10T07:41:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:47 vm03 bash[23382]: audit 2026-03-10T07:41:46.066116+0000 mon.a (mon.0) 3463 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:41:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:47 vm03 bash[23382]: audit 2026-03-10T07:41:46.066116+0000 mon.a (mon.0) 3463 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_count","val": "2"}]': finished 2026-03-10T07:41:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:47 vm03 bash[23382]: cluster 2026-03-10T07:41:46.068963+0000 mon.a (mon.0) 3464 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-10T07:41:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:47 vm03 bash[23382]: cluster 2026-03-10T07:41:46.068963+0000 mon.a (mon.0) 3464 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-10T07:41:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:47 vm03 bash[23382]: audit 2026-03-10T07:41:46.072022+0000 mon.b (mon.1) 656 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:41:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:47 vm03 bash[23382]: audit 2026-03-10T07:41:46.072022+0000 mon.b (mon.1) 656 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:41:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:47 vm03 bash[23382]: audit 2026-03-10T07:41:46.074124+0000 mon.a (mon.0) 3465 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:41:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:47 vm03 bash[23382]: audit 2026-03-10T07:41:46.074124+0000 mon.a (mon.0) 3465 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T07:41:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:47 vm03 bash[23382]: audit 2026-03-10T07:41:47.069449+0000 mon.a (mon.0) 3466 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:41:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:47 vm03 bash[23382]: audit 2026-03-10T07:41:47.069449+0000 mon.a (mon.0) 3466 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_period","val": "600"}]': finished 2026-03-10T07:41:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:47 vm03 bash[23382]: audit 2026-03-10T07:41:47.075301+0000 mon.b (mon.1) 657 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:41:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:47 vm03 bash[23382]: audit 2026-03-10T07:41:47.075301+0000 mon.b (mon.1) 657 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:41:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:47 vm03 bash[23382]: cluster 2026-03-10T07:41:47.080953+0000 mon.a (mon.0) 3467 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-10T07:41:47.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:47 vm03 bash[23382]: cluster 2026-03-10T07:41:47.080953+0000 mon.a (mon.0) 3467 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:48 vm00 bash[28005]: cluster 2026-03-10T07:41:46.765597+0000 mgr.y (mgr.24407) 632 : cluster [DBG] pgmap v1122: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:48 vm00 bash[28005]: cluster 2026-03-10T07:41:46.765597+0000 mgr.y (mgr.24407) 632 : cluster [DBG] pgmap v1122: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:48 vm00 bash[28005]: audit 2026-03-10T07:41:47.082621+0000 mon.a (mon.0) 3468 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:48 vm00 bash[28005]: audit 2026-03-10T07:41:47.082621+0000 mon.a (mon.0) 3468 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:48 vm00 bash[28005]: cluster 2026-03-10T07:41:48.069705+0000 mon.a (mon.0) 3469 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:48 vm00 bash[28005]: cluster 2026-03-10T07:41:48.069705+0000 mon.a (mon.0) 3469 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:48 vm00 bash[28005]: audit 2026-03-10T07:41:48.072886+0000 mon.a (mon.0) 3470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:48 vm00 bash[28005]: audit 2026-03-10T07:41:48.072886+0000 mon.a (mon.0) 3470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:48 vm00 bash[28005]: audit 2026-03-10T07:41:48.079685+0000 mon.b (mon.1) 658 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:48 vm00 bash[28005]: audit 2026-03-10T07:41:48.079685+0000 mon.b (mon.1) 658 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:48 vm00 bash[28005]: cluster 2026-03-10T07:41:48.080553+0000 mon.a (mon.0) 3471 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:48 vm00 bash[28005]: cluster 2026-03-10T07:41:48.080553+0000 mon.a (mon.0) 3471 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:48 vm00 bash[28005]: audit 2026-03-10T07:41:48.081349+0000 mon.a (mon.0) 3472 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:48 vm00 bash[28005]: audit 2026-03-10T07:41:48.081349+0000 mon.a (mon.0) 3472 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:48 vm00 bash[20701]: cluster 2026-03-10T07:41:46.765597+0000 mgr.y (mgr.24407) 632 : cluster [DBG] pgmap v1122: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:48 vm00 bash[20701]: cluster 2026-03-10T07:41:46.765597+0000 mgr.y (mgr.24407) 632 : cluster [DBG] pgmap v1122: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:48 vm00 bash[20701]: audit 2026-03-10T07:41:47.082621+0000 mon.a (mon.0) 3468 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:48 vm00 bash[20701]: audit 2026-03-10T07:41:47.082621+0000 mon.a (mon.0) 3468 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:48 vm00 bash[20701]: cluster 2026-03-10T07:41:48.069705+0000 mon.a (mon.0) 3469 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:48 vm00 bash[20701]: cluster 2026-03-10T07:41:48.069705+0000 mon.a (mon.0) 3469 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:48 vm00 bash[20701]: audit 2026-03-10T07:41:48.072886+0000 mon.a (mon.0) 3470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:48 vm00 bash[20701]: audit 2026-03-10T07:41:48.072886+0000 mon.a (mon.0) 3470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:48 vm00 bash[20701]: audit 2026-03-10T07:41:48.079685+0000 mon.b (mon.1) 658 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:48 vm00 bash[20701]: audit 2026-03-10T07:41:48.079685+0000 mon.b (mon.1) 658 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:48 vm00 bash[20701]: cluster 2026-03-10T07:41:48.080553+0000 mon.a (mon.0) 3471 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:48 vm00 bash[20701]: cluster 2026-03-10T07:41:48.080553+0000 mon.a (mon.0) 3471 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:48 vm00 bash[20701]: audit 2026-03-10T07:41:48.081349+0000 mon.a (mon.0) 3472 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:41:48.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:48 vm00 bash[20701]: audit 2026-03-10T07:41:48.081349+0000 mon.a (mon.0) 3472 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:41:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:48 vm03 bash[23382]: cluster 2026-03-10T07:41:46.765597+0000 mgr.y (mgr.24407) 632 : cluster [DBG] pgmap v1122: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:41:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:48 vm03 bash[23382]: cluster 2026-03-10T07:41:46.765597+0000 mgr.y (mgr.24407) 632 : cluster [DBG] pgmap v1122: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T07:41:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:48 vm03 bash[23382]: audit 2026-03-10T07:41:47.082621+0000 mon.a (mon.0) 3468 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:41:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:48 vm03 bash[23382]: audit 2026-03-10T07:41:47.082621+0000 mon.a (mon.0) 3468 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T07:41:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:48 vm03 bash[23382]: cluster 2026-03-10T07:41:48.069705+0000 mon.a (mon.0) 3469 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:41:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:48 vm03 bash[23382]: cluster 2026-03-10T07:41:48.069705+0000 mon.a (mon.0) 3469 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T07:41:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:48 vm03 bash[23382]: audit 2026-03-10T07:41:48.072886+0000 mon.a (mon.0) 3470 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:41:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:48 vm03 bash[23382]: audit 2026-03-10T07:41:48.072886+0000 mon.a (mon.0) 3470 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T07:41:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:48 vm03 bash[23382]: audit 2026-03-10T07:41:48.079685+0000 mon.b (mon.1) 658 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:41:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:48 vm03 bash[23382]: audit 2026-03-10T07:41:48.079685+0000 mon.b (mon.1) 658 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:41:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:48 vm03 bash[23382]: cluster 2026-03-10T07:41:48.080553+0000 mon.a (mon.0) 3471 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-10T07:41:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:48 vm03 bash[23382]: cluster 2026-03-10T07:41:48.080553+0000 mon.a (mon.0) 3471 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-10T07:41:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:48 vm03 bash[23382]: audit 2026-03-10T07:41:48.081349+0000 mon.a (mon.0) 3472 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:41:48.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:48 vm03 bash[23382]: audit 2026-03-10T07:41:48.081349+0000 mon.a (mon.0) 3472 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T07:41:50.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:50 vm00 bash[28005]: cluster 2026-03-10T07:41:48.765984+0000 mgr.y (mgr.24407) 633 : cluster [DBG] pgmap v1125: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:41:50.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:50 vm00 bash[28005]: cluster 2026-03-10T07:41:48.765984+0000 mgr.y (mgr.24407) 633 : cluster [DBG] pgmap v1125: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:41:50.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:50 vm00 bash[28005]: audit 2026-03-10T07:41:49.076070+0000 mon.a (mon.0) 3473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T07:41:50.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:50 vm00 bash[28005]: audit 2026-03-10T07:41:49.076070+0000 mon.a (mon.0) 3473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T07:41:50.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:50 vm00 bash[28005]: audit 2026-03-10T07:41:49.081647+0000 mon.b (mon.1) 659 : audit [INF] from='client.? 
2026-03-10T07:41:50.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:50 vm00 bash[28005]: cluster 2026-03-10T07:41:49.082661+0000 mon.a (mon.0) 3474 : cluster [DBG] osdmap e721: 8 total, 8 up, 8 in
2026-03-10T07:41:50.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:50 vm00 bash[28005]: audit 2026-03-10T07:41:49.086324+0000 mon.a (mon.0) 3475 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "target_max_objects","val": "1"}]: dispatch
2026-03-10T07:41:50.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:50 vm00 bash[20701]: cluster 2026-03-10T07:41:48.765984+0000 mgr.y (mgr.24407) 633 : cluster [DBG] pgmap v1125: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:41:50.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:50 vm00 bash[20701]: audit 2026-03-10T07:41:49.076070+0000 mon.a (mon.0) 3473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "min_read_recency_for_promote","val": "1"}]': finished
2026-03-10T07:41:50.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:50 vm00 bash[20701]: audit 2026-03-10T07:41:49.081647+0000 mon.b (mon.1) 659 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "target_max_objects","val": "1"}]: dispatch
2026-03-10T07:41:50.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:50 vm00 bash[20701]: cluster 2026-03-10T07:41:49.082661+0000 mon.a (mon.0) 3474 : cluster [DBG] osdmap e721: 8 total, 8 up, 8 in
2026-03-10T07:41:50.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:50 vm00 bash[20701]: audit 2026-03-10T07:41:49.086324+0000 mon.a (mon.0) 3475 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "target_max_objects","val": "1"}]: dispatch
2026-03-10T07:41:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:50 vm03 bash[23382]: cluster 2026-03-10T07:41:48.765984+0000 mgr.y (mgr.24407) 633 : cluster [DBG] pgmap v1125: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:41:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:50 vm03 bash[23382]: audit 2026-03-10T07:41:49.076070+0000 mon.a (mon.0) 3473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "min_read_recency_for_promote","val": "1"}]': finished
2026-03-10T07:41:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:50 vm03 bash[23382]: audit 2026-03-10T07:41:49.081647+0000 mon.b (mon.1) 659 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "target_max_objects","val": "1"}]: dispatch
2026-03-10T07:41:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:50 vm03 bash[23382]: cluster 2026-03-10T07:41:49.082661+0000 mon.a (mon.0) 3474 : cluster [DBG] osdmap e721: 8 total, 8 up, 8 in
2026-03-10T07:41:50.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:50 vm03 bash[23382]: audit 2026-03-10T07:41:49.086324+0000 mon.a (mon.0) 3475 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "target_max_objects","val": "1"}]: dispatch
2026-03-10T07:41:51.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:51 vm00 bash[28005]: audit 2026-03-10T07:41:50.089497+0000 mon.a (mon.0) 3476 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "target_max_objects","val": "1"}]': finished
2026-03-10T07:41:51.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:51 vm00 bash[28005]: cluster 2026-03-10T07:41:50.099786+0000 mon.a (mon.0) 3477 : cluster [DBG] osdmap e722: 8 total, 8 up, 8 in
2026-03-10T07:41:51.382 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:41:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:41:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:41:51.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:51 vm00 bash[20701]: audit 2026-03-10T07:41:50.089497+0000 mon.a (mon.0) 3476 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "target_max_objects","val": "1"}]': finished
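The mgr.y journal entry shows Prometheus (at 192.168.123.103) scraping the mgr's /metrics endpoint and receiving a 503, which the prometheus mgr module typically returns while it has no metrics ready to serve (for example, while the active mgr is still warming up). A quick manual probe, assuming the module's default port 9283 (the port is not visible in this log), might look like:

    curl -sS -o /dev/null -w '%{http_code}\n' http://vm00:9283/metrics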
2026-03-10T07:41:51.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:51 vm00 bash[20701]: cluster 2026-03-10T07:41:50.099786+0000 mon.a (mon.0) 3477 : cluster [DBG] osdmap e722: 8 total, 8 up, 8 in
2026-03-10T07:41:51.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:51 vm03 bash[23382]: audit 2026-03-10T07:41:50.089497+0000 mon.a (mon.0) 3476 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-146","var": "target_max_objects","val": "1"}]': finished
2026-03-10T07:41:51.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:51 vm03 bash[23382]: cluster 2026-03-10T07:41:50.099786+0000 mon.a (mon.0) 3477 : cluster [DBG] osdmap e722: 8 total, 8 up, 8 in
2026-03-10T07:41:52.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:52 vm00 bash[28005]: cluster 2026-03-10T07:41:50.766336+0000 mgr.y (mgr.24407) 634 : cluster [DBG] pgmap v1128: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
2026-03-10T07:41:52.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:52 vm00 bash[28005]: cluster 2026-03-10T07:41:51.089124+0000 mon.a (mon.0) 3478 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL)
2026-03-10T07:41:52.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:52 vm00 bash[20701]: cluster 2026-03-10T07:41:50.766336+0000 mgr.y (mgr.24407) 634 : cluster [DBG] pgmap v1128: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
2026-03-10T07:41:52.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:52 vm00 bash[20701]: cluster 2026-03-10T07:41:51.089124+0000 mon.a (mon.0) 3478 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL)
2026-03-10T07:41:52.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:52 vm03 bash[23382]: cluster 2026-03-10T07:41:50.766336+0000 mgr.y (mgr.24407) 634 : cluster [DBG] pgmap v1128: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
2026-03-10T07:41:52.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:52 vm03 bash[23382]: cluster 2026-03-10T07:41:51.089124+0000 mon.a (mon.0) 3478 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL)
2026-03-10T07:41:54.012 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:41:53 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:41:54.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:54 vm00 bash[28005]: cluster 2026-03-10T07:41:52.766680+0000 mgr.y (mgr.24407) 635 : cluster [DBG] pgmap v1129: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T07:41:54.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:54 vm00 bash[20701]: cluster 2026-03-10T07:41:52.766680+0000 mgr.y (mgr.24407) 635 : cluster [DBG] pgmap v1129: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T07:41:54.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:54 vm03 bash[23382]: cluster 2026-03-10T07:41:52.766680+0000 mgr.y (mgr.24407) 635 : cluster [DBG] pgmap v1129: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T07:41:55.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:55 vm03 bash[23382]: audit 2026-03-10T07:41:53.590668+0000 mgr.y (mgr.24407) 636 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
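target_max_objects was just set to 1 on the cache pool, so the very next health evaluation flags CACHE_POOL_NEAR_FULL; that is the behavior this test exercises (the warning is in the job's log-ignorelist). To inspect the same condition interactively one might run, for example:

    ceph health detail    # names the CACHE_POOL_NEAR_FULL check and the offending pool
    ceph df               # per-pool usage and object counts to compare against target_max_objects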
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:41:55.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:55 vm03 bash[23382]: audit 2026-03-10T07:41:53.590668+0000 mgr.y (mgr.24407) 636 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:41:55.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:55 vm03 bash[23382]: audit 2026-03-10T07:41:54.823372+0000 mon.a (mon.0) 3479 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:41:55.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:55 vm03 bash[23382]: audit 2026-03-10T07:41:54.823372+0000 mon.a (mon.0) 3479 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:41:55.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:55 vm03 bash[23382]: audit 2026-03-10T07:41:54.824723+0000 mon.c (mon.2) 365 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:41:55.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:55 vm03 bash[23382]: audit 2026-03-10T07:41:54.824723+0000 mon.c (mon.2) 365 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:41:55.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:55 vm00 bash[28005]: audit 2026-03-10T07:41:53.590668+0000 mgr.y (mgr.24407) 636 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:41:55.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:55 vm00 bash[28005]: audit 2026-03-10T07:41:53.590668+0000 mgr.y (mgr.24407) 636 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:41:55.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:55 vm00 bash[28005]: audit 2026-03-10T07:41:54.823372+0000 mon.a (mon.0) 3479 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:41:55.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:55 vm00 bash[28005]: audit 2026-03-10T07:41:54.823372+0000 mon.a (mon.0) 3479 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:41:55.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:55 vm00 bash[28005]: audit 2026-03-10T07:41:54.824723+0000 mon.c (mon.2) 365 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:41:55.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:55 vm00 bash[28005]: audit 2026-03-10T07:41:54.824723+0000 mon.c (mon.2) 365 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:41:55.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:55 vm00 bash[20701]: audit 2026-03-10T07:41:53.590668+0000 mgr.y (mgr.24407) 636 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:41:55.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:55 vm00 bash[20701]: audit 2026-03-10T07:41:53.590668+0000 mgr.y (mgr.24407) 636 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:41:55.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:55 vm00 
bash[20701]: audit 2026-03-10T07:41:54.823372+0000 mon.a (mon.0) 3479 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:41:55.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:55 vm00 bash[20701]: audit 2026-03-10T07:41:54.823372+0000 mon.a (mon.0) 3479 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:41:55.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:55 vm00 bash[20701]: audit 2026-03-10T07:41:54.824723+0000 mon.c (mon.2) 365 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:41:55.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:55 vm00 bash[20701]: audit 2026-03-10T07:41:54.824723+0000 mon.c (mon.2) 365 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:41:56.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:56 vm03 bash[23382]: cluster 2026-03-10T07:41:54.767224+0000 mgr.y (mgr.24407) 637 : cluster [DBG] pgmap v1130: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T07:41:56.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:56 vm03 bash[23382]: cluster 2026-03-10T07:41:54.767224+0000 mgr.y (mgr.24407) 637 : cluster [DBG] pgmap v1130: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T07:41:56.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:56 vm00 bash[28005]: cluster 2026-03-10T07:41:54.767224+0000 mgr.y (mgr.24407) 637 : cluster [DBG] pgmap v1130: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T07:41:56.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:56 vm00 bash[28005]: cluster 2026-03-10T07:41:54.767224+0000 mgr.y (mgr.24407) 637 : cluster [DBG] pgmap v1130: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T07:41:56.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:56 vm00 bash[20701]: cluster 2026-03-10T07:41:54.767224+0000 mgr.y (mgr.24407) 637 : cluster [DBG] pgmap v1130: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T07:41:56.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:56 vm00 bash[20701]: cluster 2026-03-10T07:41:54.767224+0000 mgr.y (mgr.24407) 637 : cluster [DBG] pgmap v1130: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T07:41:58.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:58 vm03 bash[23382]: cluster 2026-03-10T07:41:56.767671+0000 mgr.y (mgr.24407) 638 : cluster [DBG] pgmap v1131: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T07:41:58.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:41:58 vm03 bash[23382]: cluster 2026-03-10T07:41:56.767671+0000 mgr.y (mgr.24407) 638 : cluster [DBG] pgmap v1131: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T07:41:58.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:41:58 vm00 bash[28005]: cluster 2026-03-10T07:41:56.767671+0000 mgr.y (mgr.24407) 638 : cluster [DBG] pgmap v1131: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB 
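The mgr polls the monitors periodically on behalf of its modules; the 'osd blocklist ls' dispatch from mgr.y above is part of that housekeeping, not test traffic. The same query can be run by hand:

    ceph osd blocklist ls    # list client addresses currently blocklisted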
2026-03-10T07:41:58.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:41:58 vm00 bash[20701]: cluster 2026-03-10T07:41:56.767671+0000 mgr.y (mgr.24407) 638 : cluster [DBG] pgmap v1131: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T07:42:01.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:00 vm03 bash[23382]: cluster 2026-03-10T07:41:58.768068+0000 mgr.y (mgr.24407) 639 : cluster [DBG] pgmap v1132: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T07:42:01.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:00 vm03 bash[23382]: audit 2026-03-10T07:42:00.102330+0000 mon.a (mon.0) 3480 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:01.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:00 vm03 bash[23382]: audit 2026-03-10T07:42:00.103636+0000 mon.b (mon.1) 660 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:01.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:00 vm00 bash[20701]: cluster 2026-03-10T07:41:58.768068+0000 mgr.y (mgr.24407) 639 : cluster [DBG] pgmap v1132: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T07:42:01.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:00 vm00 bash[20701]: audit 2026-03-10T07:42:00.102330+0000 mon.a (mon.0) 3480 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:01.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:00 vm00 bash[20701]: audit 2026-03-10T07:42:00.103636+0000 mon.b (mon.1) 660 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:01.079 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:00 vm00 bash[28005]: cluster 2026-03-10T07:41:58.768068+0000 mgr.y (mgr.24407) 639 : cluster [DBG] pgmap v1132: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T07:42:01.079 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:00 vm00 bash[28005]: audit 2026-03-10T07:42:00.102330+0000 mon.a (mon.0) 3480 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:01.079 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:00 vm00 bash[28005]: audit 2026-03-10T07:42:00.103636+0000 mon.b (mon.1) 660 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:01.382 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:42:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:42:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:42:02.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:01 vm00 bash[28005]: audit 2026-03-10T07:42:00.388709+0000 mon.a (mon.0) 3481 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:42:02.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:01 vm00 bash[28005]: cluster 2026-03-10T07:42:00.464342+0000 mon.a (mon.0) 3482 : cluster [DBG] osdmap e723: 8 total, 8 up, 8 in
2026-03-10T07:42:02.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:01 vm00 bash[28005]: audit 2026-03-10T07:42:00.476028+0000 mon.b (mon.1) 661 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146"}]: dispatch
2026-03-10T07:42:02.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:01 vm00 bash[28005]: audit 2026-03-10T07:42:00.487060+0000 mon.a (mon.0) 3483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146"}]: dispatch
2026-03-10T07:42:02.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:01 vm00 bash[28005]: cluster 2026-03-10T07:42:00.768487+0000 mgr.y (mgr.24407) 640 : cluster [DBG] pgmap v1134: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:42:02.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:01 vm00 bash[28005]: audit 2026-03-10T07:42:01.391967+0000 mon.a (mon.0) 3484 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146"}]': finished
2026-03-10T07:42:02.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:01 vm00 bash[28005]: cluster 2026-03-10T07:42:01.395808+0000 mon.a (mon.0) 3485 : cluster [DBG] osdmap e724: 8 total, 8 up, 8 in
2026-03-10T07:42:02.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:01 vm00 bash[28005]: audit 2026-03-10T07:42:01.427542+0000 mon.a (mon.0) 3486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:02.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:01 vm00 bash[28005]: audit 2026-03-10T07:42:01.428135+0000 mon.a (mon.0) 3487 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146"}]: dispatch
2026-03-10T07:42:02.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:01 vm00 bash[28005]: audit 2026-03-10T07:42:01.429020+0000 mon.b (mon.1) 662 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
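These records show the workunit tearing down the cache tier: the overlay is removed from the base pool first, then the tier pool is detached. The CLI form of the same two steps (pool names taken from the log) would be:

    ceph osd tier remove-overlay test-rados-api-vm00-59782-111
    ceph osd tier remove test-rados-api-vm00-59782-111 test-rados-api-vm00-59782-146

Each step bumps the osdmap epoch, which is why e723 and e724 appear around the 'finished' records.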
2026-03-10T07:42:02.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:01 vm00 bash[28005]: audit 2026-03-10T07:42:01.429747+0000 mon.b (mon.1) 663 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146"}]: dispatch
2026-03-10T07:42:02.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:01 vm00 bash[20701]: audit 2026-03-10T07:42:00.388709+0000 mon.a (mon.0) 3481 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:42:02.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:01 vm00 bash[20701]: cluster 2026-03-10T07:42:00.464342+0000 mon.a (mon.0) 3482 : cluster [DBG] osdmap e723: 8 total, 8 up, 8 in
2026-03-10T07:42:02.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:01 vm00 bash[20701]: audit 2026-03-10T07:42:00.476028+0000 mon.b (mon.1) 661 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146"}]: dispatch
2026-03-10T07:42:02.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:01 vm00 bash[20701]: audit 2026-03-10T07:42:00.487060+0000 mon.a (mon.0) 3483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146"}]: dispatch
2026-03-10T07:42:02.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:01 vm00 bash[20701]: cluster 2026-03-10T07:42:00.768487+0000 mgr.y (mgr.24407) 640 : cluster [DBG] pgmap v1134: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:42:02.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:01 vm00 bash[20701]: audit 2026-03-10T07:42:01.391967+0000 mon.a (mon.0) 3484 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146"}]': finished
2026-03-10T07:42:02.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:01 vm00 bash[20701]: cluster 2026-03-10T07:42:01.395808+0000 mon.a (mon.0) 3485 : cluster [DBG] osdmap e724: 8 total, 8 up, 8 in
2026-03-10T07:42:02.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:01 vm00 bash[20701]: audit 2026-03-10T07:42:01.427542+0000 mon.a (mon.0) 3486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:02.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:01 vm00 bash[20701]: audit 2026-03-10T07:42:01.428135+0000 mon.a (mon.0) 3487 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146"}]: dispatch
2026-03-10T07:42:02.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:01 vm00 bash[20701]: audit 2026-03-10T07:42:01.429020+0000 mon.b (mon.1) 662 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:02.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:01 vm00 bash[20701]: audit 2026-03-10T07:42:01.429747+0000 mon.b (mon.1) 663 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146"}]: dispatch
2026-03-10T07:42:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:01 vm03 bash[23382]: audit 2026-03-10T07:42:00.388709+0000 mon.a (mon.0) 3481 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:42:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:01 vm03 bash[23382]: cluster 2026-03-10T07:42:00.464342+0000 mon.a (mon.0) 3482 : cluster [DBG] osdmap e723: 8 total, 8 up, 8 in
2026-03-10T07:42:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:01 vm03 bash[23382]: audit 2026-03-10T07:42:00.476028+0000 mon.b (mon.1) 661 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146"}]: dispatch
2026-03-10T07:42:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:01 vm03 bash[23382]: audit 2026-03-10T07:42:00.487060+0000 mon.a (mon.0) 3483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146"}]: dispatch
2026-03-10T07:42:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:01 vm03 bash[23382]: cluster 2026-03-10T07:42:00.768487+0000 mgr.y (mgr.24407) 640 : cluster [DBG] pgmap v1134: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T07:42:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:01 vm03 bash[23382]: audit 2026-03-10T07:42:01.391967+0000 mon.a (mon.0) 3484 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146"}]': finished
2026-03-10T07:42:02.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:01 vm03 bash[23382]: cluster 2026-03-10T07:42:01.395808+0000 mon.a (mon.0) 3485 : cluster [DBG] osdmap e724: 8 total, 8 up, 8 in
2026-03-10T07:42:02.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:01 vm03 bash[23382]: audit 2026-03-10T07:42:01.427542+0000 mon.a (mon.0) 3486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:02.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:01 vm03 bash[23382]: audit 2026-03-10T07:42:01.428135+0000 mon.a (mon.0) 3487 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146"}]: dispatch
2026-03-10T07:42:02.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:01 vm03 bash[23382]: audit 2026-03-10T07:42:01.429020+0000 mon.b (mon.1) 662 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:02.264 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:01 vm03 bash[23382]: audit 2026-03-10T07:42:01.429747+0000 mon.b (mon.1) 663 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-146"}]: dispatch
2026-03-10T07:42:03.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:03 vm00 bash[28005]: cluster 2026-03-10T07:42:02.399546+0000 mon.a (mon.0) 3488 : cluster [DBG] osdmap e725: 8 total, 8 up, 8 in
2026-03-10T07:42:03.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:03 vm00 bash[20701]: cluster 2026-03-10T07:42:02.399546+0000 mon.a (mon.0) 3488 : cluster [DBG] osdmap e725: 8 total, 8 up, 8 in
2026-03-10T07:42:04.012 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:42:03 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:42:04.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:03 vm03 bash[23382]: cluster 2026-03-10T07:42:02.399546+0000 mon.a (mon.0) 3488 : cluster [DBG] osdmap e725: 8 total, 8 up, 8 in
2026-03-10T07:42:05.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:04 vm03 bash[23382]: cluster 2026-03-10T07:42:02.768882+0000 mgr.y (mgr.24407) 641 : cluster [DBG] pgmap v1137: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:42:05.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:04 vm03 bash[23382]: cluster 2026-03-10T07:42:03.395854+0000 mon.a (mon.0) 3489 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size)
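The ceph.iscsi.iscsi.a container keeps logging 'there is no tcmu-runner data available'; in this job the iscsi daemon is deployed but no targets or backstores appear to be configured, so the message looks like expected noise rather than a failure. A hedged way to confirm the daemon itself is running on a cephadm cluster:

    ceph orch ps | grep iscsi    # deployed daemons and their current status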
2026-03-10T07:42:05.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:04 vm03 bash[23382]: audit 2026-03-10T07:42:03.600436+0000 mgr.y (mgr.24407) 642 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:42:05.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:04 vm03 bash[23382]: cluster 2026-03-10T07:42:03.623106+0000 mon.a (mon.0) 3490 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in
2026-03-10T07:42:05.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:04 vm03 bash[23382]: audit 2026-03-10T07:42:03.628475+0000 mon.b (mon.1) 664 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-148","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:42:05.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:04 vm03 bash[23382]: audit 2026-03-10T07:42:03.634839+0000 mon.a (mon.0) 3491 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-148","app": "rados","yes_i_really_mean_it": true}]: dispatch
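Before re-attaching a cache tier, the test enables an application tag on the freshly created pool (pool 148); the JSON flag yes_i_really_mean_it maps to the CLI switch of the same name. Together with the 'osd tier add ... --force-nonempty' dispatched further down, the CLI equivalent of this step (pool names from the log) would be:

    ceph osd pool application enable test-rados-api-vm00-59782-148 rados --yes-i-really-mean-it
    ceph osd tier add test-rados-api-vm00-59782-111 test-rados-api-vm00-59782-148 --force-nonempty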
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:04 vm00 bash[28005]: cluster 2026-03-10T07:42:02.768882+0000 mgr.y (mgr.24407) 641 : cluster [DBG] pgmap v1137: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:04 vm00 bash[28005]: cluster 2026-03-10T07:42:02.768882+0000 mgr.y (mgr.24407) 641 : cluster [DBG] pgmap v1137: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:04 vm00 bash[28005]: cluster 2026-03-10T07:42:03.395854+0000 mon.a (mon.0) 3489 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:04 vm00 bash[28005]: cluster 2026-03-10T07:42:03.395854+0000 mon.a (mon.0) 3489 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:04 vm00 bash[28005]: audit 2026-03-10T07:42:03.600436+0000 mgr.y (mgr.24407) 642 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:04 vm00 bash[28005]: audit 2026-03-10T07:42:03.600436+0000 mgr.y (mgr.24407) 642 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:04 vm00 bash[28005]: cluster 2026-03-10T07:42:03.623106+0000 mon.a (mon.0) 3490 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:04 vm00 bash[28005]: cluster 2026-03-10T07:42:03.623106+0000 mon.a (mon.0) 3490 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:04 vm00 bash[28005]: audit 2026-03-10T07:42:03.628475+0000 mon.b (mon.1) 664 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:04 vm00 bash[28005]: audit 2026-03-10T07:42:03.628475+0000 mon.b (mon.1) 664 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:04 vm00 bash[28005]: audit 2026-03-10T07:42:03.634839+0000 mon.a (mon.0) 3491 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:04 vm00 bash[28005]: audit 2026-03-10T07:42:03.634839+0000 mon.a (mon.0) 3491 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:04 vm00 bash[20701]: cluster 2026-03-10T07:42:02.768882+0000 mgr.y (mgr.24407) 641 : cluster [DBG] pgmap v1137: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:04 vm00 bash[20701]: cluster 2026-03-10T07:42:02.768882+0000 mgr.y (mgr.24407) 641 : cluster [DBG] pgmap v1137: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:04 vm00 bash[20701]: cluster 2026-03-10T07:42:03.395854+0000 mon.a (mon.0) 3489 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:04 vm00 bash[20701]: cluster 2026-03-10T07:42:03.395854+0000 mon.a (mon.0) 3489 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:04 vm00 bash[20701]: audit 2026-03-10T07:42:03.600436+0000 mgr.y (mgr.24407) 642 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:04 vm00 bash[20701]: audit 2026-03-10T07:42:03.600436+0000 mgr.y (mgr.24407) 642 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:04 vm00 bash[20701]: cluster 2026-03-10T07:42:03.623106+0000 mon.a (mon.0) 3490 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:04 vm00 bash[20701]: cluster 2026-03-10T07:42:03.623106+0000 mon.a (mon.0) 3490 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:04 vm00 bash[20701]: audit 2026-03-10T07:42:03.628475+0000 mon.b (mon.1) 664 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:04 vm00 bash[20701]: audit 2026-03-10T07:42:03.628475+0000 mon.b (mon.1) 664 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:04 vm00 bash[20701]: audit 2026-03-10T07:42:03.634839+0000 mon.a (mon.0) 3491 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:42:05.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:04 vm00 bash[20701]: audit 2026-03-10T07:42:03.634839+0000 mon.a (mon.0) 3491 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:42:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:05 vm03 bash[23382]: audit 2026-03-10T07:42:04.618998+0000 mon.a (mon.0) 3492 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:42:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:05 vm03 bash[23382]: audit 2026-03-10T07:42:04.618998+0000 mon.a (mon.0) 3492 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:42:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:05 vm03 bash[23382]: cluster 2026-03-10T07:42:04.624349+0000 mon.a (mon.0) 3493 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-10T07:42:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:05 vm03 bash[23382]: cluster 2026-03-10T07:42:04.624349+0000 mon.a (mon.0) 3493 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-10T07:42:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:05 vm03 bash[23382]: audit 2026-03-10T07:42:04.671455+0000 mon.a (mon.0) 3494 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:42:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:05 vm03 bash[23382]: audit 2026-03-10T07:42:04.671455+0000 mon.a (mon.0) 3494 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:42:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:05 vm03 bash[23382]: audit 2026-03-10T07:42:04.673000+0000 mon.b (mon.1) 665 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:42:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:05 vm03 bash[23382]: audit 2026-03-10T07:42:04.673000+0000 mon.b (mon.1) 665 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:42:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:05 vm03 bash[23382]: cluster 2026-03-10T07:42:04.769608+0000 mgr.y (mgr.24407) 643 : cluster [DBG] pgmap v1140: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T07:42:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:05 vm03 bash[23382]: cluster 2026-03-10T07:42:04.769608+0000 mgr.y (mgr.24407) 643 : cluster [DBG] pgmap v1140: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T07:42:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:05 vm03 bash[23382]: audit 2026-03-10T07:42:05.623434+0000 mon.a (mon.0) 3495 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:42:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:05 vm03 bash[23382]: audit 2026-03-10T07:42:05.623434+0000 mon.a (mon.0) 3495 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:42:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:05 vm03 bash[23382]: cluster 2026-03-10T07:42:05.627692+0000 mon.a (mon.0) 3496 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-10T07:42:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:05 vm03 bash[23382]: cluster 2026-03-10T07:42:05.627692+0000 mon.a (mon.0) 3496 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-10T07:42:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:05 vm03 bash[23382]: audit 2026-03-10T07:42:05.648107+0000 mon.b (mon.1) 666 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148"}]: dispatch 2026-03-10T07:42:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:05 vm03 bash[23382]: audit 2026-03-10T07:42:05.648107+0000 mon.b (mon.1) 666 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148"}]: dispatch 2026-03-10T07:42:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:05 vm03 bash[23382]: audit 2026-03-10T07:42:05.650634+0000 mon.a (mon.0) 3497 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148"}]: dispatch 2026-03-10T07:42:06.013 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:05 vm03 bash[23382]: audit 2026-03-10T07:42:05.650634+0000 mon.a (mon.0) 3497 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148"}]: dispatch 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:05 vm00 bash[28005]: audit 2026-03-10T07:42:04.618998+0000 mon.a (mon.0) 3492 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:05 vm00 bash[28005]: audit 2026-03-10T07:42:04.618998+0000 mon.a (mon.0) 3492 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:05 vm00 bash[28005]: cluster 2026-03-10T07:42:04.624349+0000 mon.a (mon.0) 3493 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:05 vm00 bash[28005]: cluster 2026-03-10T07:42:04.624349+0000 mon.a (mon.0) 3493 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:05 vm00 bash[28005]: audit 2026-03-10T07:42:04.671455+0000 mon.a (mon.0) 3494 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:05 vm00 bash[28005]: audit 2026-03-10T07:42:04.671455+0000 mon.a (mon.0) 3494 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:05 vm00 bash[28005]: audit 2026-03-10T07:42:04.673000+0000 mon.b (mon.1) 665 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:05 vm00 bash[28005]: audit 2026-03-10T07:42:04.673000+0000 mon.b (mon.1) 665 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:05 vm00 bash[28005]: cluster 2026-03-10T07:42:04.769608+0000 mgr.y (mgr.24407) 643 : cluster [DBG] pgmap v1140: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:05 vm00 bash[28005]: cluster 2026-03-10T07:42:04.769608+0000 mgr.y (mgr.24407) 643 : cluster [DBG] pgmap v1140: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:05 vm00 bash[28005]: audit 2026-03-10T07:42:05.623434+0000 mon.a (mon.0) 3495 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:05 vm00 bash[28005]: audit 2026-03-10T07:42:05.623434+0000 mon.a (mon.0) 3495 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:05 vm00 bash[28005]: cluster 2026-03-10T07:42:05.627692+0000 mon.a (mon.0) 3496 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:05 vm00 bash[28005]: cluster 2026-03-10T07:42:05.627692+0000 mon.a (mon.0) 3496 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:05 vm00 bash[28005]: audit 2026-03-10T07:42:05.648107+0000 mon.b (mon.1) 666 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148"}]: dispatch 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:05 vm00 bash[28005]: audit 2026-03-10T07:42:05.648107+0000 mon.b (mon.1) 666 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148"}]: dispatch 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:05 vm00 bash[28005]: audit 2026-03-10T07:42:05.650634+0000 mon.a (mon.0) 3497 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148"}]: dispatch 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:05 vm00 bash[28005]: audit 2026-03-10T07:42:05.650634+0000 mon.a (mon.0) 3497 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148"}]: dispatch 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:05 vm00 bash[20701]: audit 2026-03-10T07:42:04.618998+0000 mon.a (mon.0) 3492 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:05 vm00 bash[20701]: audit 2026-03-10T07:42:04.618998+0000 mon.a (mon.0) 3492 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:05 vm00 bash[20701]: cluster 2026-03-10T07:42:04.624349+0000 mon.a (mon.0) 3493 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:05 vm00 bash[20701]: cluster 2026-03-10T07:42:04.624349+0000 mon.a (mon.0) 3493 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:05 vm00 bash[20701]: audit 2026-03-10T07:42:04.671455+0000 mon.a (mon.0) 3494 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:05 vm00 bash[20701]: audit 2026-03-10T07:42:04.671455+0000 mon.a (mon.0) 3494 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:05 vm00 bash[20701]: audit 2026-03-10T07:42:04.673000+0000 mon.b (mon.1) 665 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:05 vm00 bash[20701]: audit 2026-03-10T07:42:04.673000+0000 mon.b (mon.1) 665 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:05 vm00 bash[20701]: cluster 2026-03-10T07:42:04.769608+0000 mgr.y (mgr.24407) 643 : cluster [DBG] pgmap v1140: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:05 vm00 bash[20701]: cluster 2026-03-10T07:42:04.769608+0000 mgr.y (mgr.24407) 643 : cluster [DBG] pgmap v1140: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:05 vm00 bash[20701]: audit 2026-03-10T07:42:05.623434+0000 mon.a (mon.0) 3495 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:05 vm00 bash[20701]: audit 2026-03-10T07:42:05.623434+0000 mon.a (mon.0) 3495 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:05 vm00 bash[20701]: cluster 2026-03-10T07:42:05.627692+0000 mon.a (mon.0) 3496 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:05 vm00 bash[20701]: cluster 2026-03-10T07:42:05.627692+0000 mon.a (mon.0) 3496 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:05 vm00 bash[20701]: audit 2026-03-10T07:42:05.648107+0000 mon.b (mon.1) 666 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148"}]: dispatch 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:05 vm00 bash[20701]: audit 2026-03-10T07:42:05.648107+0000 mon.b (mon.1) 666 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148"}]: dispatch 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:05 vm00 bash[20701]: audit 2026-03-10T07:42:05.650634+0000 mon.a (mon.0) 3497 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148"}]: dispatch 2026-03-10T07:42:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:05 vm00 bash[20701]: audit 2026-03-10T07:42:05.650634+0000 mon.a (mon.0) 3497 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148"}]: dispatch 2026-03-10T07:42:08.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:07 vm00 bash[28005]: cluster 2026-03-10T07:42:06.770012+0000 mgr.y (mgr.24407) 644 : cluster [DBG] pgmap v1142: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 468 B/s wr, 1 op/s 2026-03-10T07:42:08.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:07 vm00 bash[28005]: cluster 2026-03-10T07:42:06.770012+0000 mgr.y (mgr.24407) 644 : cluster [DBG] pgmap v1142: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 468 B/s wr, 1 op/s 2026-03-10T07:42:08.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:07 vm00 bash[28005]: audit 2026-03-10T07:42:06.783756+0000 mon.a (mon.0) 3498 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148"}]': finished 2026-03-10T07:42:08.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:07 vm00 bash[28005]: audit 2026-03-10T07:42:06.783756+0000 mon.a (mon.0) 3498 : audit [INF] from='client.? 
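The audit pairs above trace the usual mon command path for the cache-tier calls made by rados_api_tests: the peon mon.b logs the dispatch with the client's address and forwards it, the leader mon.a logs its own dispatch, and once the map change commits it logs a matching 'finished' followed by a new osdmap epoch. A minimal python-rados sketch of the same command sequence, assuming a reachable cluster and the default admin conffile/keyring paths (the pool names are taken from the audit records above; nothing here is teuthology-specific):

import json
import rados

# Assumption: default conffile location; adjust for a real cluster.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

def mon_cmd(**kwargs):
    # mon_command() takes a JSON-encoded mon command plus an input buffer
    # and returns (retcode, output bytes, status string) -- the same JSON
    # bodies that appear after cmd= in the audit records above.
    ret, out, status = cluster.mon_command(json.dumps(kwargs), b'')
    print(kwargs['prefix'], '->', ret, status)

base = 'test-rados-api-vm00-59782-111'   # base pool from the log
tier = 'test-rados-api-vm00-59782-148'   # tier pool from the log

# audited as mon.a (mon.0) 3491 / 3492 (dispatch, then finished)
mon_cmd(prefix='osd pool application enable', pool=tier, app='rados',
        yes_i_really_mean_it=True)
# audited as mon.a (mon.0) 3494 / 3495
mon_cmd(prefix='osd tier add', pool=base, tierpool=tier,
        force_nonempty='--force-nonempty')
# audited as mon.a (mon.0) 3497 / 3498
mon_cmd(prefix='osd tier remove', pool=base, tierpool=tier)

cluster.shutdown()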
2026-03-10T07:42:08.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:07 vm00 bash[28005]: cluster 2026-03-10T07:42:06.883607+0000 mon.a (mon.0) 3499 : cluster [DBG] osdmap e729: 8 total, 8 up, 8 in
2026-03-10T07:42:08.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:07 vm00 bash[28005]: audit 2026-03-10T07:42:07.079557+0000 mon.a (mon.0) 3500 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:08.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:07 vm00 bash[28005]: audit 2026-03-10T07:42:07.080567+0000 mon.b (mon.1) 667 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:08.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:07 vm00 bash[28005]: audit 2026-03-10T07:42:07.080917+0000 mon.a (mon.0) 3501 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148"}]: dispatch
2026-03-10T07:42:08.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:07 vm00 bash[28005]: audit 2026-03-10T07:42:07.082207+0000 mon.b (mon.1) 668 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148"}]: dispatch
2026-03-10T07:42:08.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:07 vm00 bash[20701]: cluster 2026-03-10T07:42:06.770012+0000 mgr.y (mgr.24407) 644 : cluster [DBG] pgmap v1142: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 468 B/s wr, 1 op/s
2026-03-10T07:42:08.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:07 vm00 bash[20701]: audit 2026-03-10T07:42:06.783756+0000 mon.a (mon.0) 3498 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148"}]': finished
2026-03-10T07:42:08.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:07 vm00 bash[20701]: cluster 2026-03-10T07:42:06.883607+0000 mon.a (mon.0) 3499 : cluster [DBG] osdmap e729: 8 total, 8 up, 8 in
2026-03-10T07:42:08.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:07 vm00 bash[20701]: audit 2026-03-10T07:42:07.079557+0000 mon.a (mon.0) 3500 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:08.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:07 vm00 bash[20701]: audit 2026-03-10T07:42:07.080567+0000 mon.b (mon.1) 667 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:08.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:07 vm00 bash[20701]: audit 2026-03-10T07:42:07.080917+0000 mon.a (mon.0) 3501 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148"}]: dispatch
2026-03-10T07:42:08.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:07 vm00 bash[20701]: audit 2026-03-10T07:42:07.082207+0000 mon.b (mon.1) 668 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148"}]: dispatch
2026-03-10T07:42:08.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:07 vm03 bash[23382]: cluster 2026-03-10T07:42:06.770012+0000 mgr.y (mgr.24407) 644 : cluster [DBG] pgmap v1142: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 468 B/s wr, 1 op/s
2026-03-10T07:42:08.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:07 vm03 bash[23382]: audit 2026-03-10T07:42:06.783756+0000 mon.a (mon.0) 3498 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148"}]': finished
2026-03-10T07:42:08.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:07 vm03 bash[23382]: cluster 2026-03-10T07:42:06.883607+0000 mon.a (mon.0) 3499 : cluster [DBG] osdmap e729: 8 total, 8 up, 8 in
2026-03-10T07:42:08.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:07 vm03 bash[23382]: audit 2026-03-10T07:42:07.079557+0000 mon.a (mon.0) 3500 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:08.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:07 vm03 bash[23382]: audit 2026-03-10T07:42:07.080567+0000 mon.b (mon.1) 667 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:08.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:07 vm03 bash[23382]: audit 2026-03-10T07:42:07.080917+0000 mon.a (mon.0) 3501 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148"}]: dispatch
2026-03-10T07:42:08.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:07 vm03 bash[23382]: audit 2026-03-10T07:42:07.082207+0000 mon.b (mon.1) 668 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-148"}]: dispatch
2026-03-10T07:42:09.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:08 vm00 bash[28005]: cluster 2026-03-10T07:42:07.813120+0000 mon.a (mon.0) 3502 : cluster [DBG] osdmap e730: 8 total, 8 up, 8 in
2026-03-10T07:42:09.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:08 vm00 bash[20701]: cluster 2026-03-10T07:42:07.813120+0000 mon.a (mon.0) 3502 : cluster [DBG] osdmap e730: 8 total, 8 up, 8 in
2026-03-10T07:42:09.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:08 vm03 bash[23382]: cluster 2026-03-10T07:42:07.813120+0000 mon.a (mon.0) 3502 : cluster [DBG] osdmap e730: 8 total, 8 up, 8 in
2026-03-10T07:42:10.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:09 vm00 bash[28005]: cluster 2026-03-10T07:42:08.770336+0000 mgr.y (mgr.24407) 645 : cluster [DBG] pgmap v1145: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 246 B/s wr, 1 op/s
2026-03-10T07:42:10.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:09 vm00 bash[28005]: cluster 2026-03-10T07:42:08.819868+0000 mon.a (mon.0) 3503 : cluster [DBG] osdmap e731: 8 total, 8 up, 8 in
2026-03-10T07:42:10.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:09 vm00 bash[28005]: audit 2026-03-10T07:42:08.823110+0000 mon.a (mon.0) 3504 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-150","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:42:10.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:09 vm00 bash[28005]: audit 2026-03-10T07:42:08.823512+0000 mon.b (mon.1) 669 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-150","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:42:10.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:09 vm00 bash[20701]: cluster 2026-03-10T07:42:08.770336+0000 mgr.y (mgr.24407) 645 : cluster [DBG] pgmap v1145: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 246 B/s wr, 1 op/s
2026-03-10T07:42:10.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:09 vm00 bash[20701]: cluster 2026-03-10T07:42:08.819868+0000 mon.a (mon.0) 3503 : cluster [DBG] osdmap e731: 8 total, 8 up, 8 in
2026-03-10T07:42:10.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:09 vm00 bash[20701]: audit 2026-03-10T07:42:08.823110+0000 mon.a (mon.0) 3504 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-150","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:42:10.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:09 vm00 bash[20701]: audit 2026-03-10T07:42:08.823512+0000 mon.b (mon.1) 669 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-150","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:42:10.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:09 vm03 bash[23382]: cluster 2026-03-10T07:42:08.770336+0000 mgr.y (mgr.24407) 645 : cluster [DBG] pgmap v1145: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 246 B/s wr, 1 op/s
2026-03-10T07:42:10.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:09 vm03 bash[23382]: cluster 2026-03-10T07:42:08.819868+0000 mon.a (mon.0) 3503 : cluster [DBG] osdmap e731: 8 total, 8 up, 8 in
2026-03-10T07:42:10.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:09 vm03 bash[23382]: audit 2026-03-10T07:42:08.823110+0000 mon.a (mon.0) 3504 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-150","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:42:10.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:09 vm03 bash[23382]: audit 2026-03-10T07:42:08.823512+0000 mon.b (mon.1) 669 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-150","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:11 vm00 bash[28005]: audit 2026-03-10T07:42:09.816031+0000 mon.a (mon.0) 3505 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-150","app": "rados","yes_i_really_mean_it": true}]': finished
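Every line captured here nests three layers: the teuthology capture timestamp, the journald unit the dispatcher tailed (journalctl@ceph.mon.<id>.<host>), and the embedded cluster/audit record with its own UTC timestamp, reporting entity, and per-entity sequence number. A small parsing sketch for pulling those layers apart; the regex is an assumption modeled on the line shape seen in this log, not a teuthology interface:

import re

# One regex per captured line: teuthology timestamp, journald unit, journal
# timestamp/host/pid, then the embedded cluster/audit record.
LINE_RE = re.compile(
    r'^(?P<capture_ts>\S+) INFO:journalctl@(?P<unit>\S+?)\.stdout:'
    r'(?P<journal_ts>\w+ +\d+ \d\d:\d\d:\d\d) (?P<host>\S+) '
    r'bash\[(?P<pid>\d+)\]: (?P<channel>cluster|audit) (?P<record>.*)$'
)

sample = ('2026-03-10T07:42:09.132 INFO:journalctl@ceph.mon.c.vm00.stdout:'
          'Mar 10 07:42:08 vm00 bash[28005]: cluster '
          '2026-03-10T07:42:07.813120+0000 mon.a (mon.0) 3502 : '
          'cluster [DBG] osdmap e730: 8 total, 8 up, 8 in')

m = LINE_RE.match(sample)
if m:
    # prints: ceph.mon.c.vm00 cluster -> 2026-03-10T07:42:07.813120+0000 mon.a ...
    print(m.group('unit'), m.group('channel'), '->', m.group('record'))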
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:11 vm00 bash[28005]: cluster 2026-03-10T07:42:09.819550+0000 mon.a (mon.0) 3506 : cluster [DBG] osdmap e732: 8 total, 8 up, 8 in 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:11 vm00 bash[28005]: cluster 2026-03-10T07:42:09.819550+0000 mon.a (mon.0) 3506 : cluster [DBG] osdmap e732: 8 total, 8 up, 8 in 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:11 vm00 bash[28005]: audit 2026-03-10T07:42:09.847913+0000 mon.a (mon.0) 3507 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:11 vm00 bash[28005]: audit 2026-03-10T07:42:09.847913+0000 mon.a (mon.0) 3507 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:11 vm00 bash[28005]: audit 2026-03-10T07:42:09.850177+0000 mon.c (mon.2) 366 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:11 vm00 bash[28005]: audit 2026-03-10T07:42:09.850177+0000 mon.c (mon.2) 366 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:11 vm00 bash[28005]: audit 2026-03-10T07:42:09.888411+0000 mon.a (mon.0) 3508 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:11 vm00 bash[28005]: audit 2026-03-10T07:42:09.888411+0000 mon.a (mon.0) 3508 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:11 vm00 bash[28005]: audit 2026-03-10T07:42:09.889286+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-150"}]: dispatch 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:11 vm00 bash[28005]: audit 2026-03-10T07:42:09.889286+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-150"}]: dispatch 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:11 vm00 bash[28005]: audit 2026-03-10T07:42:09.889990+0000 mon.b (mon.1) 670 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:11 vm00 bash[28005]: audit 2026-03-10T07:42:09.889990+0000 mon.b (mon.1) 670 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:11 vm00 bash[28005]: audit 2026-03-10T07:42:09.890921+0000 mon.b (mon.1) 671 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-150"}]: dispatch 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:11 vm00 bash[28005]: audit 2026-03-10T07:42:09.890921+0000 mon.b (mon.1) 671 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-150"}]: dispatch 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:42:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:42:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:11 vm00 bash[20701]: audit 2026-03-10T07:42:09.816031+0000 mon.a (mon.0) 3505 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:11 vm00 bash[20701]: audit 2026-03-10T07:42:09.816031+0000 mon.a (mon.0) 3505 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:11 vm00 bash[20701]: cluster 2026-03-10T07:42:09.819550+0000 mon.a (mon.0) 3506 : cluster [DBG] osdmap e732: 8 total, 8 up, 8 in 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:11 vm00 bash[20701]: cluster 2026-03-10T07:42:09.819550+0000 mon.a (mon.0) 3506 : cluster [DBG] osdmap e732: 8 total, 8 up, 8 in 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:11 vm00 bash[20701]: audit 2026-03-10T07:42:09.847913+0000 mon.a (mon.0) 3507 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:11 vm00 bash[20701]: audit 2026-03-10T07:42:09.847913+0000 mon.a (mon.0) 3507 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:11 vm00 bash[20701]: audit 2026-03-10T07:42:09.850177+0000 mon.c (mon.2) 366 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:42:11.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:11 vm00 bash[20701]: audit 2026-03-10T07:42:09.850177+0000 mon.c (mon.2) 366 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:42:11.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:11 vm00 bash[20701]: audit 2026-03-10T07:42:09.888411+0000 mon.a (mon.0) 3508 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:42:11.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:11 vm00 bash[20701]: audit 2026-03-10T07:42:09.888411+0000 mon.a (mon.0) 3508 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:42:11.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:11 vm00 bash[20701]: audit 2026-03-10T07:42:09.889286+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-150"}]: dispatch 2026-03-10T07:42:11.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:11 vm00 bash[20701]: audit 2026-03-10T07:42:09.889286+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-150"}]: dispatch 2026-03-10T07:42:11.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:11 vm00 bash[20701]: audit 2026-03-10T07:42:09.889990+0000 mon.b (mon.1) 670 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:42:11.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:11 vm00 bash[20701]: audit 2026-03-10T07:42:09.889990+0000 mon.b (mon.1) 670 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:42:11.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:11 vm00 bash[20701]: audit 2026-03-10T07:42:09.890921+0000 mon.b (mon.1) 671 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-150"}]: dispatch 2026-03-10T07:42:11.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:11 vm00 bash[20701]: audit 2026-03-10T07:42:09.890921+0000 mon.b (mon.1) 671 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-150"}]: dispatch 2026-03-10T07:42:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:11 vm03 bash[23382]: audit 2026-03-10T07:42:09.816031+0000 mon.a (mon.0) 3505 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:42:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:11 vm03 bash[23382]: audit 2026-03-10T07:42:09.816031+0000 mon.a (mon.0) 3505 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T07:42:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:11 vm03 bash[23382]: cluster 2026-03-10T07:42:09.819550+0000 mon.a (mon.0) 3506 : cluster [DBG] osdmap e732: 8 total, 8 up, 8 in 2026-03-10T07:42:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:11 vm03 bash[23382]: cluster 2026-03-10T07:42:09.819550+0000 mon.a (mon.0) 3506 : cluster [DBG] osdmap e732: 8 total, 8 up, 8 in 2026-03-10T07:42:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:11 vm03 bash[23382]: audit 2026-03-10T07:42:09.847913+0000 mon.a (mon.0) 3507 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:42:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:11 vm03 bash[23382]: audit 2026-03-10T07:42:09.847913+0000 mon.a (mon.0) 3507 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:42:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:11 vm03 bash[23382]: audit 2026-03-10T07:42:09.850177+0000 mon.c (mon.2) 366 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:42:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:11 vm03 bash[23382]: audit 2026-03-10T07:42:09.850177+0000 mon.c (mon.2) 366 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:42:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:11 vm03 bash[23382]: audit 2026-03-10T07:42:09.888411+0000 mon.a (mon.0) 3508 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:42:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:11 vm03 bash[23382]: audit 2026-03-10T07:42:09.888411+0000 mon.a (mon.0) 3508 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:42:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:11 vm03 bash[23382]: audit 2026-03-10T07:42:09.889286+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-150"}]: dispatch 2026-03-10T07:42:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:11 vm03 bash[23382]: audit 2026-03-10T07:42:09.889286+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-150"}]: dispatch 2026-03-10T07:42:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:11 vm03 bash[23382]: audit 2026-03-10T07:42:09.889990+0000 mon.b (mon.1) 670 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:42:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:11 vm03 bash[23382]: audit 2026-03-10T07:42:09.889990+0000 mon.b (mon.1) 670 : audit [INF] from='client.? 
2026-03-10T07:42:11.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:11 vm03 bash[23382]: audit 2026-03-10T07:42:09.890921+0000 mon.b (mon.1) 671 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-150"}]: dispatch
2026-03-10T07:42:12.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:12 vm03 bash[23382]: cluster 2026-03-10T07:42:10.770700+0000 mgr.y (mgr.24407) 646 : cluster [DBG] pgmap v1148: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:42:12.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:12 vm03 bash[23382]: cluster 2026-03-10T07:42:11.101571+0000 mon.a (mon.0) 3510 : cluster [DBG] osdmap e733: 8 total, 8 up, 8 in
2026-03-10T07:42:12.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:12 vm03 bash[23382]: cluster 2026-03-10T07:42:12.101725+0000 mon.a (mon.0) 3511 : cluster [DBG] osdmap e734: 8 total, 8 up, 8 in
2026-03-10T07:42:12.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:12 vm03 bash[23382]: audit 2026-03-10T07:42:12.115123+0000 mon.a (mon.0) 3512 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-152","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:42:12.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:12 vm03 bash[23382]: audit 2026-03-10T07:42:12.116320+0000 mon.b (mon.1) 672 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-152","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:42:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:12 vm00 bash[28005]: cluster 2026-03-10T07:42:10.770700+0000 mgr.y (mgr.24407) 646 : cluster [DBG] pgmap v1148: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:42:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:12 vm00 bash[28005]: cluster 2026-03-10T07:42:11.101571+0000 mon.a (mon.0) 3510 : cluster [DBG] osdmap e733: 8 total, 8 up, 8 in
2026-03-10T07:42:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:12 vm00 bash[28005]: cluster 2026-03-10T07:42:12.101725+0000 mon.a (mon.0) 3511 : cluster [DBG] osdmap e734: 8 total, 8 up, 8 in
2026-03-10T07:42:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:12 vm00 bash[28005]: audit 2026-03-10T07:42:12.115123+0000 mon.a (mon.0) 3512 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-152","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:42:12.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:12 vm00 bash[28005]: audit 2026-03-10T07:42:12.116320+0000 mon.b (mon.1) 672 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-152","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:42:12.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:12 vm00 bash[20701]: cluster 2026-03-10T07:42:10.770700+0000 mgr.y (mgr.24407) 646 : cluster [DBG] pgmap v1148: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:42:12.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:12 vm00 bash[20701]: cluster 2026-03-10T07:42:11.101571+0000 mon.a (mon.0) 3510 : cluster [DBG] osdmap e733: 8 total, 8 up, 8 in
2026-03-10T07:42:12.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:12 vm00 bash[20701]: cluster 2026-03-10T07:42:12.101725+0000 mon.a (mon.0) 3511 : cluster [DBG] osdmap e734: 8 total, 8 up, 8 in
2026-03-10T07:42:12.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:12 vm00 bash[20701]: audit 2026-03-10T07:42:12.115123+0000 mon.a (mon.0) 3512 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-152","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:42:12.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:12 vm00 bash[20701]: audit 2026-03-10T07:42:12.116320+0000 mon.b (mon.1) 672 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-152","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:42:14.012 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:42:13 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:42:14.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:14 vm00 bash[28005]: cluster 2026-03-10T07:42:12.771117+0000 mgr.y (mgr.24407) 647 : cluster [DBG] pgmap v1151: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:42:14.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:14 vm00 bash[28005]: audit 2026-03-10T07:42:13.100804+0000 mon.a (mon.0) 3513 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-152","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:42:14.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:14 vm00 bash[28005]: cluster 2026-03-10T07:42:13.103408+0000 mon.a (mon.0) 3514 : cluster [DBG] osdmap e735: 8 total, 8 up, 8 in
2026-03-10T07:42:14.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:14 vm00 bash[28005]: audit 2026-03-10T07:42:13.168881+0000 mon.a (mon.0) 3515 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:14.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:14 vm00 bash[28005]: audit 2026-03-10T07:42:13.169390+0000 mon.a (mon.0) 3516 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-152"}]: dispatch
2026-03-10T07:42:14.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:14 vm00 bash[28005]: audit 2026-03-10T07:42:13.170562+0000 mon.b (mon.1) 673 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
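NOTE: every monitor relays the cluster and audit log channels, so each record above repeats once per mon journal (mon.b on vm03, mon.a and mon.c on vm00); only the journalctl prefix differs. To follow the same channels live on a running cluster, the standard CLI entry points are:

    ceph -w            # stream the cluster log
    ceph -W audit      # stream a specific channel, here the audit log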
2026-03-10T07:42:14.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:14 vm00 bash[28005]: audit 2026-03-10T07:42:13.171125+0000 mon.b (mon.1) 674 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-152"}]: dispatch
2026-03-10T07:42:14.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:14 vm00 bash[28005]: cluster 2026-03-10T07:42:13.241444+0000 mon.a (mon.0) 3517 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:42:14.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:14 vm00 bash[20701]: cluster 2026-03-10T07:42:12.771117+0000 mgr.y (mgr.24407) 647 : cluster [DBG] pgmap v1151: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:42:14.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:14 vm00 bash[20701]: audit 2026-03-10T07:42:13.100804+0000 mon.a (mon.0) 3513 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-152","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:42:14.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:14 vm00 bash[20701]: cluster 2026-03-10T07:42:13.103408+0000 mon.a (mon.0) 3514 : cluster [DBG] osdmap e735: 8 total, 8 up, 8 in
2026-03-10T07:42:14.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:14 vm00 bash[20701]: audit 2026-03-10T07:42:13.168881+0000 mon.a (mon.0) 3515 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:14.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:14 vm00 bash[20701]: audit 2026-03-10T07:42:13.169390+0000 mon.a (mon.0) 3516 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-152"}]: dispatch
2026-03-10T07:42:14.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:14 vm00 bash[20701]: audit 2026-03-10T07:42:13.170562+0000 mon.b (mon.1) 673 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:14.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:14 vm00 bash[20701]: audit 2026-03-10T07:42:13.171125+0000 mon.b (mon.1) 674 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-152"}]: dispatch
2026-03-10T07:42:14.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:14 vm00 bash[20701]: cluster 2026-03-10T07:42:13.241444+0000 mon.a (mon.0) 3517 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:42:14.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:14 vm03 bash[23382]: cluster 2026-03-10T07:42:12.771117+0000 mgr.y (mgr.24407) 647 : cluster [DBG] pgmap v1151: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T07:42:14.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:14 vm03 bash[23382]: audit 2026-03-10T07:42:13.100804+0000 mon.a (mon.0) 3513 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-152","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:42:14.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:14 vm03 bash[23382]: cluster 2026-03-10T07:42:13.103408+0000 mon.a (mon.0) 3514 : cluster [DBG] osdmap e735: 8 total, 8 up, 8 in
2026-03-10T07:42:14.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:14 vm03 bash[23382]: audit 2026-03-10T07:42:13.168881+0000 mon.a (mon.0) 3515 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:14.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:14 vm03 bash[23382]: audit 2026-03-10T07:42:13.169390+0000 mon.a (mon.0) 3516 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-152"}]: dispatch
2026-03-10T07:42:14.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:14 vm03 bash[23382]: audit 2026-03-10T07:42:13.170562+0000 mon.b (mon.1) 673 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:14.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:14 vm03 bash[23382]: audit 2026-03-10T07:42:13.171125+0000 mon.b (mon.1) 674 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-152"}]: dispatch
2026-03-10T07:42:14.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:14 vm03 bash[23382]: cluster 2026-03-10T07:42:13.241444+0000 mon.a (mon.0) 3517 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:42:15.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:15 vm00 bash[20701]: audit 2026-03-10T07:42:13.605309+0000 mgr.y (mgr.24407) 648 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:42:15.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:15 vm00 bash[20701]: cluster 2026-03-10T07:42:14.108102+0000 mon.a (mon.0) 3518 : cluster [DBG] osdmap e736: 8 total, 8 up, 8 in
2026-03-10T07:42:15.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:15 vm00 bash[20701]: cluster 2026-03-10T07:42:15.120469+0000 mon.a (mon.0) 3519 : cluster [DBG] osdmap e737: 8 total, 8 up, 8 in
2026-03-10T07:42:15.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:15 vm00 bash[20701]: audit 2026-03-10T07:42:15.123720+0000 mon.a (mon.0) 3520 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-154","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:42:15.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:15 vm00 bash[20701]: audit 2026-03-10T07:42:15.123922+0000 mon.b (mon.1) 675 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-154","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:42:15.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:15 vm00 bash[28005]: audit 2026-03-10T07:42:13.605309+0000 mgr.y (mgr.24407) 648 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:42:15.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:15 vm00 bash[28005]: cluster 2026-03-10T07:42:14.108102+0000 mon.a (mon.0) 3518 : cluster [DBG] osdmap e736: 8 total, 8 up, 8 in
2026-03-10T07:42:15.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:15 vm00 bash[28005]: cluster 2026-03-10T07:42:15.120469+0000 mon.a (mon.0) 3519 : cluster [DBG] osdmap e737: 8 total, 8 up, 8 in
2026-03-10T07:42:15.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:15 vm00 bash[28005]: audit 2026-03-10T07:42:15.123720+0000 mon.a (mon.0) 3520 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-154","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:42:15.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:15 vm00 bash[28005]: audit 2026-03-10T07:42:15.123922+0000 mon.b (mon.1) 675 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-154","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:42:15.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:15 vm03 bash[23382]: audit 2026-03-10T07:42:13.605309+0000 mgr.y (mgr.24407) 648 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:42:15.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:15 vm03 bash[23382]: cluster 2026-03-10T07:42:14.108102+0000 mon.a (mon.0) 3518 : cluster [DBG] osdmap e736: 8 total, 8 up, 8 in
2026-03-10T07:42:15.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:15 vm03 bash[23382]: cluster 2026-03-10T07:42:15.120469+0000 mon.a (mon.0) 3519 : cluster [DBG] osdmap e737: 8 total, 8 up, 8 in
2026-03-10T07:42:15.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:15 vm03 bash[23382]: audit 2026-03-10T07:42:15.123720+0000 mon.a (mon.0) 3520 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-154","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:42:15.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:15 vm03 bash[23382]: audit 2026-03-10T07:42:15.123922+0000 mon.b (mon.1) 675 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-154","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:42:16.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:16 vm03 bash[23382]: cluster 2026-03-10T07:42:14.771861+0000 mgr.y (mgr.24407) 649 : cluster [DBG] pgmap v1154: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 4 op/s
2026-03-10T07:42:16.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:16 vm03 bash[23382]: audit 2026-03-10T07:42:15.182369+0000 mon.c (mon.2) 367 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:42:16.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:16 vm03 bash[23382]: audit 2026-03-10T07:42:15.533032+0000 mon.c (mon.2) 368 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:42:16.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:16 vm03 bash[23382]: audit 2026-03-10T07:42:15.533786+0000 mon.c (mon.2) 369 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:42:16.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:16 vm03 bash[23382]: audit 2026-03-10T07:42:15.561428+0000 mon.a (mon.0) 3521 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:42:16.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:16 vm03 bash[23382]: audit 2026-03-10T07:42:16.134618+0000 mon.a (mon.0) 3522 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-154","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:42:16.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:16 vm00 bash[28005]: cluster 2026-03-10T07:42:14.771861+0000 mgr.y (mgr.24407) 649 : cluster [DBG] pgmap v1154: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 4 op/s
2026-03-10T07:42:16.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:16 vm00 bash[28005]: audit 2026-03-10T07:42:15.182369+0000 mon.c (mon.2) 367 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:42:16.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:16 vm00 bash[28005]: audit 2026-03-10T07:42:15.533032+0000 mon.c (mon.2) 368 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:42:16.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:16 vm00 bash[28005]: audit 2026-03-10T07:42:15.533786+0000 mon.c (mon.2) 369 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:42:16.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:16 vm00 bash[28005]: audit 2026-03-10T07:42:15.561428+0000 mon.a (mon.0) 3521 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:42:16.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:16 vm00 bash[28005]: audit 2026-03-10T07:42:16.134618+0000 mon.a (mon.0) 3522 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-154","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T07:42:16.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:16 vm00 bash[20701]: cluster 2026-03-10T07:42:14.771861+0000 mgr.y (mgr.24407) 649 : cluster [DBG] pgmap v1154: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 4 op/s
2026-03-10T07:42:16.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:16 vm00 bash[20701]: audit 2026-03-10T07:42:15.182369+0000 mon.c (mon.2) 367 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:42:16.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:16 vm00 bash[20701]: audit 2026-03-10T07:42:15.533032+0000 mon.c (mon.2) 368 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:42:16.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:16 vm00 bash[20701]: audit 2026-03-10T07:42:15.533786+0000 mon.c (mon.2) 369 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:42:16.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:16 vm00 bash[20701]: audit 2026-03-10T07:42:15.561428+0000 mon.a (mon.0) 3521 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:42:16.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:16 vm00 bash[20701]: audit 2026-03-10T07:42:16.134618+0000 mon.a (mon.0) 3522 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm00-59782-154","app": "rados","yes_i_really_mean_it": true}]': finished
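NOTE: the 'config generate-minimal-conf' and 'auth get client.admin' calls from mgr.y look like the cephadm mgr module refreshing the minimal ceph.conf and the admin keyring it distributes to managed hosts. The same data can be produced by hand:

    ceph config generate-minimal-conf    # minimal conf pointing at the current mons
    ceph auth get client.admin           # admin keyring, as distributed to hosts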
2026-03-10T07:42:17.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:17 vm03 bash[23382]: cluster 2026-03-10T07:42:16.143348+0000 mon.a (mon.0) 3523 : cluster [DBG] osdmap e738: 8 total, 8 up, 8 in
2026-03-10T07:42:17.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:17 vm03 bash[23382]: audit 2026-03-10T07:42:16.151037+0000 mon.b (mon.1) 676 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-111","var": "dedup_tier","val": "test-rados-api-vm00-59782-154"}]: dispatch
2026-03-10T07:42:17.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:17 vm03 bash[23382]: audit 2026-03-10T07:42:16.151445+0000 mon.a (mon.0) 3524 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-111","var": "dedup_tier","val": "test-rados-api-vm00-59782-154"}]: dispatch
2026-03-10T07:42:17.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:17 vm03 bash[23382]: audit 2026-03-10T07:42:16.201349+0000 mon.a (mon.0) 3525 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:17.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:17 vm03 bash[23382]: audit 2026-03-10T07:42:16.202076+0000 mon.a (mon.0) 3526 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-154"}]: dispatch
2026-03-10T07:42:17.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:17 vm03 bash[23382]: audit 2026-03-10T07:42:16.202819+0000 mon.b (mon.1) 677 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:17.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:17 vm03 bash[23382]: audit 2026-03-10T07:42:16.203677+0000 mon.b (mon.1) 678 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-154"}]: dispatch
2026-03-10T07:42:17.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:17 vm00 bash[28005]: cluster 2026-03-10T07:42:16.143348+0000 mon.a (mon.0) 3523 : cluster [DBG] osdmap e738: 8 total, 8 up, 8 in
2026-03-10T07:42:17.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:17 vm00 bash[28005]: audit 2026-03-10T07:42:16.151037+0000 mon.b (mon.1) 676 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-111","var": "dedup_tier","val": "test-rados-api-vm00-59782-154"}]: dispatch
2026-03-10T07:42:17.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:17 vm00 bash[28005]: audit 2026-03-10T07:42:16.151445+0000 mon.a (mon.0) 3524 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-111","var": "dedup_tier","val": "test-rados-api-vm00-59782-154"}]: dispatch
2026-03-10T07:42:17.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:17 vm00 bash[28005]: audit 2026-03-10T07:42:16.201349+0000 mon.a (mon.0) 3525 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:17.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:17 vm00 bash[28005]: audit 2026-03-10T07:42:16.202076+0000 mon.a (mon.0) 3526 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-154"}]: dispatch
2026-03-10T07:42:17.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:17 vm00 bash[28005]: audit 2026-03-10T07:42:16.202819+0000 mon.b (mon.1) 677 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:17.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:17 vm00 bash[28005]: audit 2026-03-10T07:42:16.203677+0000 mon.b (mon.1) 678 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-154"}]: dispatch
2026-03-10T07:42:17.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:17 vm00 bash[20701]: cluster 2026-03-10T07:42:16.143348+0000 mon.a (mon.0) 3523 : cluster [DBG] osdmap e738: 8 total, 8 up, 8 in
2026-03-10T07:42:17.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:17 vm00 bash[20701]: audit 2026-03-10T07:42:16.151037+0000 mon.b (mon.1) 676 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-111","var": "dedup_tier","val": "test-rados-api-vm00-59782-154"}]: dispatch
2026-03-10T07:42:17.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:17 vm00 bash[20701]: audit 2026-03-10T07:42:16.151445+0000 mon.a (mon.0) 3524 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm00-59782-111","var": "dedup_tier","val": "test-rados-api-vm00-59782-154"}]: dispatch
2026-03-10T07:42:17.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:17 vm00 bash[20701]: audit 2026-03-10T07:42:16.201349+0000 mon.a (mon.0) 3525 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:17.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:17 vm00 bash[20701]: audit 2026-03-10T07:42:16.202076+0000 mon.a (mon.0) 3526 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-154"}]: dispatch
2026-03-10T07:42:17.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:17 vm00 bash[20701]: audit 2026-03-10T07:42:16.202819+0000 mon.b (mon.1) 677 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:17.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:17 vm00 bash[20701]: audit 2026-03-10T07:42:16.203677+0000 mon.b (mon.1) 678 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm00-59782-111", "tierpool": "test-rados-api-vm00-59782-154"}]: dispatch
2026-03-10T07:42:18.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:18 vm03 bash[23382]: cluster 2026-03-10T07:42:16.772210+0000 mgr.y (mgr.24407) 650 : cluster [DBG] pgmap v1157: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 4 op/s
2026-03-10T07:42:18.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:18 vm03 bash[23382]: cluster 2026-03-10T07:42:17.188529+0000 mon.a (mon.0) 3527 : cluster [DBG] osdmap e739: 8 total, 8 up, 8 in
2026-03-10T07:42:18.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:18 vm00 bash[28005]: cluster 2026-03-10T07:42:16.772210+0000 mgr.y (mgr.24407) 650 : cluster [DBG] pgmap v1157: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 4 op/s
2026-03-10T07:42:18.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:18 vm00 bash[28005]: cluster 2026-03-10T07:42:17.188529+0000 mon.a (mon.0) 3527 : cluster [DBG] osdmap e739: 8 total, 8 up, 8 in
2026-03-10T07:42:18.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:18 vm00 bash[20701]: cluster 2026-03-10T07:42:16.772210+0000 mgr.y (mgr.24407) 650 : cluster [DBG] pgmap v1157: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 4 op/s
2026-03-10T07:42:18.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:18 vm00 bash[20701]: cluster 2026-03-10T07:42:17.188529+0000 mon.a (mon.0) 3527 : cluster [DBG] osdmap e739: 8 total, 8 up, 8 in
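NOTE: 'osd pool set ... dedup_tier' wires a chunk pool to a base pool for RADOS deduplication; the test sets it and immediately tears the tiering back down. The CLI form of the audited command, with the test's generated pool names:

    ceph osd pool set test-rados-api-vm00-59782-111 dedup_tier test-rados-api-vm00-59782-154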
2026-03-10T07:42:19.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:19 vm03 bash[23382]: cluster 2026-03-10T07:42:18.169922+0000 mon.a (mon.0) 3528 : cluster [DBG] osdmap e740: 8 total, 8 up, 8 in
2026-03-10T07:42:19.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:19 vm03 bash[23382]: audit 2026-03-10T07:42:18.171063+0000 mon.a (mon.0) 3529 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:19.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:19 vm03 bash[23382]: audit 2026-03-10T07:42:18.172565+0000 mon.b (mon.1) 679 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:19.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:19 vm03 bash[23382]: audit 2026-03-10T07:42:19.170600+0000 mon.a (mon.0) 3530 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:42:19.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:19 vm03 bash[23382]: cluster 2026-03-10T07:42:19.175592+0000 mon.a (mon.0) 3531 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in
2026-03-10T07:42:19.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:19 vm00 bash[28005]: cluster 2026-03-10T07:42:18.169922+0000 mon.a (mon.0) 3528 : cluster [DBG] osdmap e740: 8 total, 8 up, 8 in
2026-03-10T07:42:19.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:19 vm00 bash[28005]: audit 2026-03-10T07:42:18.171063+0000 mon.a (mon.0) 3529 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]: dispatch
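NOTE: removing the erasure-code profile is the last piece of per-test cleanup; the profile was created for the EC base pool of this test case. The equivalent CLI, with the test's generated profile name:

    ceph osd erasure-code-profile rm testprofile-test-rados-api-vm00-59782-111
    ceph osd erasure-code-profile ls    # verify it is gone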
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:42:19.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:19 vm00 bash[28005]: audit 2026-03-10T07:42:18.171063+0000 mon.a (mon.0) 3529 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:42:19.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:19 vm00 bash[28005]: audit 2026-03-10T07:42:18.172565+0000 mon.b (mon.1) 679 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:42:19.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:19 vm00 bash[28005]: audit 2026-03-10T07:42:18.172565+0000 mon.b (mon.1) 679 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:42:19.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:19 vm00 bash[28005]: audit 2026-03-10T07:42:19.170600+0000 mon.a (mon.0) 3530 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:42:19.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:19 vm00 bash[28005]: audit 2026-03-10T07:42:19.170600+0000 mon.a (mon.0) 3530 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:42:19.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:19 vm00 bash[28005]: cluster 2026-03-10T07:42:19.175592+0000 mon.a (mon.0) 3531 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in 2026-03-10T07:42:19.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:19 vm00 bash[28005]: cluster 2026-03-10T07:42:19.175592+0000 mon.a (mon.0) 3531 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in 2026-03-10T07:42:19.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:19 vm00 bash[20701]: cluster 2026-03-10T07:42:18.169922+0000 mon.a (mon.0) 3528 : cluster [DBG] osdmap e740: 8 total, 8 up, 8 in 2026-03-10T07:42:19.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:19 vm00 bash[20701]: cluster 2026-03-10T07:42:18.169922+0000 mon.a (mon.0) 3528 : cluster [DBG] osdmap e740: 8 total, 8 up, 8 in 2026-03-10T07:42:19.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:19 vm00 bash[20701]: audit 2026-03-10T07:42:18.171063+0000 mon.a (mon.0) 3529 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:42:19.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:19 vm00 bash[20701]: audit 2026-03-10T07:42:18.171063+0000 mon.a (mon.0) 3529 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:42:19.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:19 vm00 bash[20701]: audit 2026-03-10T07:42:18.172565+0000 mon.b (mon.1) 679 : audit [INF] from='client.? 
192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:42:19.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:19 vm00 bash[20701]: audit 2026-03-10T07:42:18.172565+0000 mon.b (mon.1) 679 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]: dispatch 2026-03-10T07:42:19.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:19 vm00 bash[20701]: audit 2026-03-10T07:42:19.170600+0000 mon.a (mon.0) 3530 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:42:19.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:19 vm00 bash[20701]: audit 2026-03-10T07:42:19.170600+0000 mon.a (mon.0) 3530 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm00-59782-111"}]': finished 2026-03-10T07:42:19.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:19 vm00 bash[20701]: cluster 2026-03-10T07:42:19.175592+0000 mon.a (mon.0) 3531 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in 2026-03-10T07:42:19.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:19 vm00 bash[20701]: cluster 2026-03-10T07:42:19.175592+0000 mon.a (mon.0) 3531 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.TryFlush (7602 ms) 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.FailedFlush 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.FailedFlush (13123 ms) 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Flush 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Flush (8279 ms) 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.FlushSnap 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.FlushSnap (13276 ms) 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.FlushTryFlushRaces 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.FlushTryFlushRaces (7384 ms) 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.TryFlushReadRace 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.TryFlushReadRace (8342 ms) 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.HitSetRead 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: hmm, no HitSet yet 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: ok, hit_set contains 329:602f83fe:::foo:head 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.HitSetRead (9188 ms) 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.HitSetTrim 2026-03-10T07:42:20.210 
INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773128455,0 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: first is 1773128455 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773128455,0 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773128455,0 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773128455,1773128457,1773128458,0 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773128455,1773128457,1773128458,0 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773128455,1773128457,1773128458,0 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773128455,1773128457,1773128458,1773128460,1773128461,0 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773128455,1773128457,1773128458,1773128460,1773128461,0 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773128455,1773128457,1773128458,1773128460,1773128461,0 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773128455,1773128457,1773128458,1773128460,1773128461,1773128463,1773128464,0 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773128455,1773128457,1773128458,1773128460,1773128461,1773128463,1773128464,0 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773128455,1773128457,1773128458,1773128460,1773128461,1773128463,1773128464,0 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: got ls 1773128458,1773128460,1773128461,1773128463,1773128464,1773128466,1773128467,0 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: first now 1773128458, trimmed 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.HitSetTrim (20714 ms) 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.PromoteOn2ndRead 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: foo0 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: verifying foo0 is eventually promoted 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.PromoteOn2ndRead (14278 ms) 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.ProxyRead 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.ProxyRead (17525 ms) 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.CachePin 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.CachePin (22484 ms) 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.SetRedirectRead 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.SetRedirectRead (5410 ms) 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.SetChunkRead 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.SetChunkRead (3314 ms) 2026-03-10T07:42:20.210 
INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.ManifestPromoteRead 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.ManifestPromoteRead (2982 ms) 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.TrySetDedupTier 2026-03-10T07:42:20.210 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.TrySetDedupTier (3080 ms) 2026-03-10T07:42:20.211 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [----------] 22 tests from LibRadosTwoPoolsECPP (235774 ms total) 2026-03-10T07:42:20.211 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: 2026-03-10T07:42:20.211 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [----------] Global test environment tear-down 2026-03-10T07:42:20.211 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [==========] 77 tests from 4 test suites ran. (856081 ms total) 2026-03-10T07:42:20.211 INFO:tasks.workunit.client.0.vm00.stdout: api_tier_pp: [ PASSED ] 77 tests. 2026-03-10T07:42:20.211 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:42:20.211 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59674 2026-03-10T07:42:20.211 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59674 2026-03-10T07:42:20.211 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:42:20.211 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59863 2026-03-10T07:42:20.211 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59863 2026-03-10T07:42:20.211 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:42:20.211 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60262 2026-03-10T07:42:20.211 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60262 2026-03-10T07:42:20.211 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:42:20.211 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60024 2026-03-10T07:42:20.211 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60024 2026-03-10T07:42:20.211 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:42:20.211 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59967 2026-03-10T07:42:20.211 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59967 2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59754 2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59754 2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60004 2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60004 2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60469 2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60469 2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60488 2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60488 2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59769 2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59769 
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}"
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60354
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60354
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}"
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59640
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59640
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}"
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59709
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59709
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}"
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60166
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60166
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}"
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59632
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59632
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}"
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59648
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59648
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}"
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60567
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60567
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}"
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59951
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59951
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}"
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60599
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60599
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}"
2026-03-10T07:42:20.212 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59996
2026-03-10T07:42:20.213 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59996
2026-03-10T07:42:20.213 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}"
2026-03-10T07:42:20.213 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60437
2026-03-10T07:42:20.213 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60437
2026-03-10T07:42:20.213 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}"
2026-03-10T07:42:20.213 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60645
2026-03-10T07:42:20.213 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60645
2026-03-10T07:42:20.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:20 vm03 bash[23382]: cluster 2026-03-10T07:42:18.772610+0000 mgr.y (mgr.24407) 651 : cluster [DBG] pgmap v1160: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:42:20.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:20 vm03 bash[23382]: audit 2026-03-10T07:42:19.175762+0000 mon.b (mon.1) 680 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:20.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:20 vm03 bash[23382]: cluster 2026-03-10T07:42:19.177185+0000 mon.a (mon.0) 3532 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:42:20.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:20 vm03 bash[23382]: audit 2026-03-10T07:42:19.178494+0000 mon.a (mon.0) 3533 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:20.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:20 vm03 bash[23382]: audit 2026-03-10T07:42:20.175539+0000 mon.a (mon.0) 3534 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:42:20.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:20 vm03 bash[23382]: cluster 2026-03-10T07:42:20.209313+0000 mon.a (mon.0) 3535 : cluster [DBG] osdmap e742: 8 total, 8 up, 8 in
2026-03-10T07:42:20.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:20 vm00 bash[20701]: cluster 2026-03-10T07:42:18.772610+0000 mgr.y (mgr.24407) 651 : cluster [DBG] pgmap v1160: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:42:20.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:20 vm00 bash[20701]: audit 2026-03-10T07:42:19.175762+0000 mon.b (mon.1) 680 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:20.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:20 vm00 bash[20701]: cluster 2026-03-10T07:42:19.177185+0000 mon.a (mon.0) 3532 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:42:20.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:20 vm00 bash[20701]: audit 2026-03-10T07:42:19.178494+0000 mon.a (mon.0) 3533 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:20.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:20 vm00 bash[20701]: audit 2026-03-10T07:42:20.175539+0000 mon.a (mon.0) 3534 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:42:20.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:20 vm00 bash[20701]: cluster 2026-03-10T07:42:20.209313+0000 mon.a (mon.0) 3535 : cluster [DBG] osdmap e742: 8 total, 8 up, 8 in
2026-03-10T07:42:20.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:20 vm00 bash[28005]: cluster 2026-03-10T07:42:18.772610+0000 mgr.y (mgr.24407) 651 : cluster [DBG] pgmap v1160: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:42:20.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:20 vm00 bash[28005]: audit 2026-03-10T07:42:19.175762+0000 mon.b (mon.1) 680 : audit [INF] from='client.? 192.168.123.100:0/544910491' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:20.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:20 vm00 bash[28005]: cluster 2026-03-10T07:42:19.177185+0000 mon.a (mon.0) 3532 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:42:20.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:20 vm00 bash[28005]: audit 2026-03-10T07:42:19.178494+0000 mon.a (mon.0) 3533 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-111"}]: dispatch
2026-03-10T07:42:20.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:20 vm00 bash[28005]: audit 2026-03-10T07:42:20.175539+0000 mon.a (mon.0) 3534 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm00-59782-111"}]': finished
2026-03-10T07:42:20.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:20 vm00 bash[28005]: cluster 2026-03-10T07:42:20.209313+0000 mon.a (mon.0) 3535 : cluster [DBG] osdmap e742: 8 total, 8 up, 8 in
2026-03-10T07:42:21.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:42:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:42:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:42:22.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:22 vm03 bash[23382]: cluster 2026-03-10T07:42:20.772934+0000 mgr.y (mgr.24407) 652 : cluster [DBG] pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:42:22.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:22 vm00 bash[28005]: cluster 2026-03-10T07:42:20.772934+0000 mgr.y (mgr.24407) 652 : cluster [DBG] pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:42:22.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:22 vm00 bash[20701]: cluster 2026-03-10T07:42:20.772934+0000 mgr.y (mgr.24407) 652 : cluster [DBG] pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:42:24.012 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:42:23 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:42:24.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:24 vm03 bash[23382]: cluster 2026-03-10T07:42:22.773278+0000 mgr.y (mgr.24407) 653 : cluster [DBG] pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:42:24.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:24 vm00 bash[28005]: cluster 2026-03-10T07:42:22.773278+0000 mgr.y (mgr.24407) 653 : cluster [DBG] pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:42:24.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:24 vm00 bash[20701]: cluster 2026-03-10T07:42:22.773278+0000 mgr.y (mgr.24407) 653 : cluster [DBG] pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:42:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:25 vm03 bash[23382]: audit 2026-03-10T07:42:23.608846+0000 mgr.y (mgr.24407) 654 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:42:25.513 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:25 vm03 bash[23382]: audit 2026-03-10T07:42:24.859441+0000 mon.c (mon.2) 370 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:42:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:25 vm00 bash[28005]: audit 2026-03-10T07:42:23.608846+0000 mgr.y (mgr.24407) 654 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:42:25.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:25 vm00 bash[28005]: audit 2026-03-10T07:42:24.859441+0000 mon.c (mon.2) 370 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:42:25.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:25 vm00 bash[20701]: audit 2026-03-10T07:42:23.608846+0000 mgr.y (mgr.24407) 654 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:42:25.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:25 vm00 bash[20701]: audit 2026-03-10T07:42:24.859441+0000 mon.c (mon.2) 370 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:42:26.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:26 vm03 bash[23382]: cluster 2026-03-10T07:42:24.773900+0000 mgr.y (mgr.24407) 655 : cluster [DBG] pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s
2026-03-10T07:42:26.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:26 vm00 bash[28005]: cluster 2026-03-10T07:42:24.773900+0000 mgr.y (mgr.24407) 655 : cluster [DBG] pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s
2026-03-10T07:42:26.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:26 vm00 bash[20701]: cluster 2026-03-10T07:42:24.773900+0000 mgr.y (mgr.24407) 655 : cluster [DBG] pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s
2026-03-10T07:42:28.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:28 vm03 bash[23382]: cluster 2026-03-10T07:42:26.774191+0000 mgr.y (mgr.24407) 656 : cluster [DBG] pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:42:28.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:28 vm00 bash[28005]: cluster 2026-03-10T07:42:26.774191+0000 mgr.y (mgr.24407) 656 : cluster [DBG] pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:42:28.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:28 vm00 bash[20701]: cluster 2026-03-10T07:42:26.774191+0000 mgr.y (mgr.24407) 656 : cluster [DBG] pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:42:30.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:30 vm00 bash[28005]: cluster 2026-03-10T07:42:28.774510+0000 mgr.y (mgr.24407) 657 : cluster [DBG] pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T07:42:30.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:30 vm00 bash[20701]: cluster 2026-03-10T07:42:28.774510+0000 mgr.y (mgr.24407) 657 : cluster [DBG] pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T07:42:30.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:30 vm03 bash[23382]: cluster 2026-03-10T07:42:28.774510+0000 mgr.y (mgr.24407) 657 : cluster [DBG] pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T07:42:31.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:42:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:42:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:42:32.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:32 vm00 bash[28005]: cluster 2026-03-10T07:42:30.775288+0000 mgr.y (mgr.24407) 658 : cluster [DBG] pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 969 B/s rd, 0 op/s
2026-03-10T07:42:32.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:32 vm00 bash[20701]: cluster 2026-03-10T07:42:30.775288+0000 mgr.y (mgr.24407) 658 : cluster [DBG] pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 969 B/s rd, 0 op/s
2026-03-10T07:42:32.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:32 vm03 bash[23382]: cluster 2026-03-10T07:42:30.775288+0000 mgr.y (mgr.24407) 658 : cluster [DBG] pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 969 B/s rd, 0 op/s
2026-03-10T07:42:34.012 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:42:33 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:42:34.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:34 vm00 bash[28005]: cluster 2026-03-10T07:42:32.775645+0000 mgr.y (mgr.24407) 659 : cluster [DBG] pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:42:34.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:34 vm00 bash[20701]: cluster 2026-03-10T07:42:32.775645+0000 mgr.y (mgr.24407) 659 : cluster [DBG] pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:42:34.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:34 vm03 bash[23382]: cluster 2026-03-10T07:42:32.775645+0000 mgr.y (mgr.24407) 659 : cluster [DBG] pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:42:35.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:35 vm00 bash[28005]: audit 2026-03-10T07:42:33.611146+0000 mgr.y (mgr.24407) 660 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:42:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:35 vm00 bash[20701]: audit 2026-03-10T07:42:33.611146+0000 mgr.y (mgr.24407) 660 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:42:35.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:35 vm03 bash[23382]: audit 2026-03-10T07:42:33.611146+0000 mgr.y (mgr.24407) 660 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:42:36.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:36 vm00 bash[28005]: cluster 2026-03-10T07:42:34.776518+0000 mgr.y (mgr.24407) 661 : cluster [DBG] pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:42:36.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:36 vm00 bash[20701]: cluster 2026-03-10T07:42:34.776518+0000 mgr.y (mgr.24407) 661 : cluster [DBG] pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:42:36.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:36 vm03 bash[23382]: cluster 2026-03-10T07:42:34.776518+0000 mgr.y (mgr.24407) 661 : cluster [DBG] pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:42:38.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:38 vm00 bash[28005]: cluster 2026-03-10T07:42:36.776894+0000 mgr.y (mgr.24407) 662 : cluster [DBG] pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:42:38.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:38 vm00 bash[20701]: cluster 2026-03-10T07:42:36.776894+0000 mgr.y (mgr.24407) 662 : cluster [DBG] pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:42:38.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:38 vm03 bash[23382]: cluster 2026-03-10T07:42:36.776894+0000 mgr.y (mgr.24407) 662 : cluster [DBG] pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:42:40.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:40 vm03 bash[23382]: cluster 2026-03-10T07:42:38.777222+0000 mgr.y (mgr.24407) 663 : cluster [DBG] pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:42:40.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:40 vm03 bash[23382]: audit 2026-03-10T07:42:39.865732+0000 mon.c (mon.2) 371 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:42:40.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:40 vm00 bash[28005]: cluster 2026-03-10T07:42:38.777222+0000 mgr.y (mgr.24407) 663 : cluster [DBG] pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:42:40.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:40 vm00 bash[28005]: audit 2026-03-10T07:42:39.865732+0000 mon.c (mon.2) 371 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:42:40.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:40 vm00 bash[20701]: cluster 2026-03-10T07:42:38.777222+0000 mgr.y (mgr.24407) 663 : cluster [DBG] pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:42:40.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:40 vm00 bash[20701]: audit 2026-03-10T07:42:39.865732+0000 mon.c (mon.2) 371 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:42:41.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:42:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:42:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:42:42.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:42 vm03 bash[23382]: cluster 2026-03-10T07:42:40.777851+0000 mgr.y (mgr.24407) 664 : cluster [DBG] pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:42:42.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:42 vm00 bash[28005]: cluster 2026-03-10T07:42:40.777851+0000 mgr.y (mgr.24407) 664 : cluster [DBG] pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:42:42.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:42 vm00 bash[20701]: cluster 2026-03-10T07:42:40.777851+0000 mgr.y (mgr.24407) 664 : cluster [DBG] pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:42:43.762 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:42:43 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:42:43.763 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:43 vm03 bash[23382]: cluster 2026-03-10T07:42:42.778164+0000 mgr.y (mgr.24407) 665 : cluster [DBG] pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:42:43.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:43 vm00 bash[28005]: cluster 2026-03-10T07:42:42.778164+0000 mgr.y (mgr.24407) 665 : cluster [DBG] pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:42:43.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:43 vm00 bash[20701]: cluster 2026-03-10T07:42:42.778164+0000 mgr.y (mgr.24407) 665 : cluster [DBG] pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:42:44.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:44 vm03 bash[23382]: audit 2026-03-10T07:42:43.616279+0000 mgr.y (mgr.24407) 666 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:42:44.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:44 vm00 bash[28005]: audit 2026-03-10T07:42:43.616279+0000 mgr.y (mgr.24407) 666 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:42:44.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:44 vm00 bash[20701]: audit 2026-03-10T07:42:43.616279+0000 mgr.y (mgr.24407) 666 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:42:45.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:45 vm03 bash[23382]: cluster 2026-03-10T07:42:44.778661+0000 mgr.y (mgr.24407) 667 : cluster [DBG] pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:42:45.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:45 vm00 bash[28005]: cluster 2026-03-10T07:42:44.778661+0000 mgr.y (mgr.24407) 667 : cluster [DBG] pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:42:45.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:45 vm00 bash[20701]: cluster 2026-03-10T07:42:44.778661+0000 mgr.y (mgr.24407) 667 : cluster [DBG] pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:42:48.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:47 vm00 bash[28005]: cluster 2026-03-10T07:42:46.778963+0000 mgr.y (mgr.24407) 668 : cluster [DBG] pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:42:48.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:47 vm00 bash[20701]: cluster 2026-03-10T07:42:46.778963+0000 mgr.y (mgr.24407) 668 : cluster [DBG] pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:42:48.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:47 vm03 bash[23382]: cluster 2026-03-10T07:42:46.778963+0000 mgr.y (mgr.24407) 668 : cluster [DBG] pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:42:50.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:49 vm00 bash[28005]: cluster 2026-03-10T07:42:48.779287+0000 mgr.y (mgr.24407) 669 : cluster [DBG] pgmap v1177: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:42:50.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:49 vm00 bash[20701]: cluster 2026-03-10T07:42:48.779287+0000 mgr.y (mgr.24407) 669 : cluster [DBG] pgmap v1177: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:42:50.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:49 vm03 bash[23382]: cluster 2026-03-10T07:42:48.779287+0000 mgr.y (mgr.24407) 669 : cluster [DBG] pgmap v1177: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:42:51.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:42:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:42:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:42:52.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:51 vm00 bash[28005]: cluster 2026-03-10T07:42:50.780066+0000 mgr.y (mgr.24407) 670 : cluster [DBG] pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:42:52.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:51 vm00 bash[20701]: cluster 2026-03-10T07:42:50.780066+0000 mgr.y (mgr.24407) 670 : cluster [DBG] pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:42:52.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:51 vm03 bash[23382]: cluster 2026-03-10T07:42:50.780066+0000 mgr.y (mgr.24407) 670 : cluster [DBG] pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:42:53.884 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:42:53 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:42:54.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:53 vm03 bash[23382]: cluster 2026-03-10T07:42:52.780436+0000 mgr.y (mgr.24407) 671 : cluster [DBG] pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:42:54.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:53 vm00 bash[28005]: cluster 2026-03-10T07:42:52.780436+0000 mgr.y (mgr.24407) 671 : cluster [DBG] pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:42:54.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:53 vm00 bash[28005]: cluster 2026-03-10T07:42:52.780436+0000 mgr.y (mgr.24407) 671 : cluster [DBG] pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 1.1
GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:42:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:53 vm00 bash[20701]: cluster 2026-03-10T07:42:52.780436+0000 mgr.y (mgr.24407) 671 : cluster [DBG] pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:42:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:53 vm00 bash[20701]: cluster 2026-03-10T07:42:52.780436+0000 mgr.y (mgr.24407) 671 : cluster [DBG] pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:42:55.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:54 vm03 bash[23382]: audit 2026-03-10T07:42:53.626222+0000 mgr.y (mgr.24407) 672 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:42:55.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:54 vm03 bash[23382]: audit 2026-03-10T07:42:53.626222+0000 mgr.y (mgr.24407) 672 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:42:55.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:54 vm03 bash[23382]: audit 2026-03-10T07:42:54.871948+0000 mon.c (mon.2) 372 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:42:55.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:54 vm03 bash[23382]: audit 2026-03-10T07:42:54.871948+0000 mon.c (mon.2) 372 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:42:55.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:54 vm00 bash[28005]: audit 2026-03-10T07:42:53.626222+0000 mgr.y (mgr.24407) 672 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:42:55.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:54 vm00 bash[28005]: audit 2026-03-10T07:42:53.626222+0000 mgr.y (mgr.24407) 672 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:42:55.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:54 vm00 bash[28005]: audit 2026-03-10T07:42:54.871948+0000 mon.c (mon.2) 372 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:42:55.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:54 vm00 bash[28005]: audit 2026-03-10T07:42:54.871948+0000 mon.c (mon.2) 372 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:42:55.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:54 vm00 bash[20701]: audit 2026-03-10T07:42:53.626222+0000 mgr.y (mgr.24407) 672 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:42:55.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:54 vm00 bash[20701]: audit 2026-03-10T07:42:53.626222+0000 mgr.y (mgr.24407) 672 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:42:55.382 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:54 vm00 bash[20701]: audit 2026-03-10T07:42:54.871948+0000 mon.c (mon.2) 372 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:42:55.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:54 vm00 bash[20701]: audit 2026-03-10T07:42:54.871948+0000 mon.c (mon.2) 372 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:42:56.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:55 vm03 bash[23382]: cluster 2026-03-10T07:42:54.781232+0000 mgr.y (mgr.24407) 673 : cluster [DBG] pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:42:56.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:55 vm03 bash[23382]: cluster 2026-03-10T07:42:54.781232+0000 mgr.y (mgr.24407) 673 : cluster [DBG] pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:42:56.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:55 vm00 bash[28005]: cluster 2026-03-10T07:42:54.781232+0000 mgr.y (mgr.24407) 673 : cluster [DBG] pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:42:56.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:55 vm00 bash[28005]: cluster 2026-03-10T07:42:54.781232+0000 mgr.y (mgr.24407) 673 : cluster [DBG] pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:42:56.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:55 vm00 bash[20701]: cluster 2026-03-10T07:42:54.781232+0000 mgr.y (mgr.24407) 673 : cluster [DBG] pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:42:56.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:55 vm00 bash[20701]: cluster 2026-03-10T07:42:54.781232+0000 mgr.y (mgr.24407) 673 : cluster [DBG] pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:42:58.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:57 vm03 bash[23382]: cluster 2026-03-10T07:42:56.781560+0000 mgr.y (mgr.24407) 674 : cluster [DBG] pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:42:58.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:57 vm03 bash[23382]: cluster 2026-03-10T07:42:56.781560+0000 mgr.y (mgr.24407) 674 : cluster [DBG] pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:42:58.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:57 vm00 bash[28005]: cluster 2026-03-10T07:42:56.781560+0000 mgr.y (mgr.24407) 674 : cluster [DBG] pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:42:58.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:57 vm00 bash[28005]: cluster 2026-03-10T07:42:56.781560+0000 mgr.y (mgr.24407) 674 : cluster [DBG] pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:42:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:57 vm00 bash[20701]: cluster 
2026-03-10T07:42:56.781560+0000 mgr.y (mgr.24407) 674 : cluster [DBG] pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:42:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:57 vm00 bash[20701]: cluster 2026-03-10T07:42:56.781560+0000 mgr.y (mgr.24407) 674 : cluster [DBG] pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:00.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:59 vm03 bash[23382]: cluster 2026-03-10T07:42:58.781976+0000 mgr.y (mgr.24407) 675 : cluster [DBG] pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:00.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:42:59 vm03 bash[23382]: cluster 2026-03-10T07:42:58.781976+0000 mgr.y (mgr.24407) 675 : cluster [DBG] pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:00.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:59 vm00 bash[28005]: cluster 2026-03-10T07:42:58.781976+0000 mgr.y (mgr.24407) 675 : cluster [DBG] pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:00.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:42:59 vm00 bash[28005]: cluster 2026-03-10T07:42:58.781976+0000 mgr.y (mgr.24407) 675 : cluster [DBG] pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:00.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:59 vm00 bash[20701]: cluster 2026-03-10T07:42:58.781976+0000 mgr.y (mgr.24407) 675 : cluster [DBG] pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:00.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:42:59 vm00 bash[20701]: cluster 2026-03-10T07:42:58.781976+0000 mgr.y (mgr.24407) 675 : cluster [DBG] pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:01.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:43:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:43:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:43:02.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:01 vm03 bash[23382]: cluster 2026-03-10T07:43:00.782713+0000 mgr.y (mgr.24407) 676 : cluster [DBG] pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:43:02.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:01 vm03 bash[23382]: cluster 2026-03-10T07:43:00.782713+0000 mgr.y (mgr.24407) 676 : cluster [DBG] pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:43:02.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:01 vm00 bash[28005]: cluster 2026-03-10T07:43:00.782713+0000 mgr.y (mgr.24407) 676 : cluster [DBG] pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:43:02.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:01 vm00 bash[28005]: cluster 2026-03-10T07:43:00.782713+0000 mgr.y (mgr.24407) 676 : cluster [DBG] pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T07:43:03.918 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:43:03 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:43:04.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:03 vm03 bash[23382]: cluster 2026-03-10T07:43:02.783079+0000 mgr.y (mgr.24407) 677 : cluster [DBG] pgmap v1184: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:43:05.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:04 vm03 bash[23382]: audit 2026-03-10T07:43:03.633562+0000 mgr.y (mgr.24407) 678 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:43:06.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:05 vm03 bash[23382]: cluster 2026-03-10T07:43:04.783771+0000 mgr.y (mgr.24407) 679 : cluster [DBG] pgmap v1185: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:43:08.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:07 vm03 bash[23382]: cluster 2026-03-10T07:43:06.784129+0000 mgr.y (mgr.24407) 680 : cluster [DBG] pgmap v1186: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:43:10.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:09 vm03 bash[23382]: cluster 2026-03-10T07:43:08.784421+0000 mgr.y (mgr.24407) 681 : cluster [DBG] pgmap v1187: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:43:10.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:09 vm03 bash[23382]: audit 2026-03-10T07:43:09.878298+0000 mon.c (mon.2) 373 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:43:11.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:43:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:43:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:43:12.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:11 vm03 bash[23382]: cluster 2026-03-10T07:43:10.785072+0000 mgr.y (mgr.24407) 682 : cluster [DBG] pgmap v1188: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:43:13.971 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:43:13 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:43:14.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:13 vm03 bash[23382]: cluster 2026-03-10T07:43:12.785444+0000 mgr.y (mgr.24407) 683 : cluster [DBG] pgmap v1189: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:43:15.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:14 vm03 bash[23382]: audit 2026-03-10T07:43:13.644298+0000 mgr.y (mgr.24407) 684 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:43:16.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:15 vm03 bash[23382]: cluster 2026-03-10T07:43:14.786006+0000 mgr.y (mgr.24407) 685 : cluster [DBG] pgmap v1190: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:43:16.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:15 vm03 bash[23382]: audit 2026-03-10T07:43:15.604461+0000 mon.c (mon.2) 374 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:43:16.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:15 vm03 bash[23382]: audit 2026-03-10T07:43:15.933447+0000 mon.c (mon.2) 375 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:43:16.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:15 vm03 bash[23382]: audit 2026-03-10T07:43:15.934535+0000 mon.c (mon.2) 376 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:43:16.263 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:15 vm03 bash[23382]: audit 2026-03-10T07:43:15.941141+0000 mon.a (mon.0) 3536 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:43:18.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:18 vm03 bash[23382]: cluster 2026-03-10T07:43:16.786344+0000 mgr.y (mgr.24407) 686 : cluster [DBG] pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:18 vm03 bash[23382]: cluster 2026-03-10T07:43:16.786344+0000 mgr.y (mgr.24407) 686 : cluster [DBG] pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:18.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:18 vm03 bash[23382]: cluster 2026-03-10T07:43:16.786344+0000 mgr.y (mgr.24407) 686 : cluster [DBG] pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:18.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:18 vm00 bash[28005]: cluster 2026-03-10T07:43:16.786344+0000 mgr.y (mgr.24407) 686 : cluster [DBG] pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:18.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:18 vm00 bash[28005]: cluster 2026-03-10T07:43:16.786344+0000 mgr.y (mgr.24407) 686 : cluster [DBG] pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:18.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:43:18 vm00 bash[20701]: cluster 2026-03-10T07:43:16.786344+0000 mgr.y (mgr.24407) 686 : cluster [DBG] pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:18.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:43:18 vm00 bash[20701]: cluster 2026-03-10T07:43:16.786344+0000 mgr.y (mgr.24407) 686 : cluster [DBG] pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:20.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:20 vm03 bash[23382]: cluster 2026-03-10T07:43:18.786690+0000 mgr.y (mgr.24407) 687 : cluster [DBG] pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:20.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:20 vm03 bash[23382]: cluster 2026-03-10T07:43:18.786690+0000 mgr.y (mgr.24407) 687 : cluster [DBG] pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:20.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:20 vm00 bash[28005]: cluster 2026-03-10T07:43:18.786690+0000 mgr.y (mgr.24407) 687 : cluster [DBG] pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:20.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:20 vm00 bash[28005]: cluster 2026-03-10T07:43:18.786690+0000 mgr.y (mgr.24407) 687 : cluster [DBG] pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:20.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:43:20 vm00 bash[20701]: cluster 2026-03-10T07:43:18.786690+0000 mgr.y (mgr.24407) 687 : cluster [DBG] pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:20.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:43:20 vm00 bash[20701]: cluster 2026-03-10T07:43:18.786690+0000 mgr.y (mgr.24407) 687 : cluster [DBG] pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:21.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:43:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - 
[10/Mar/2026:07:43:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:43:22.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:22 vm00 bash[28005]: cluster 2026-03-10T07:43:20.787344+0000 mgr.y (mgr.24407) 688 : cluster [DBG] pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:43:22.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:22 vm00 bash[28005]: cluster 2026-03-10T07:43:20.787344+0000 mgr.y (mgr.24407) 688 : cluster [DBG] pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:43:22.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:43:22 vm00 bash[20701]: cluster 2026-03-10T07:43:20.787344+0000 mgr.y (mgr.24407) 688 : cluster [DBG] pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:43:22.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:43:22 vm00 bash[20701]: cluster 2026-03-10T07:43:20.787344+0000 mgr.y (mgr.24407) 688 : cluster [DBG] pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:43:22.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:22 vm03 bash[23382]: cluster 2026-03-10T07:43:20.787344+0000 mgr.y (mgr.24407) 688 : cluster [DBG] pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:43:22.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:22 vm03 bash[23382]: cluster 2026-03-10T07:43:20.787344+0000 mgr.y (mgr.24407) 688 : cluster [DBG] pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:43:24.012 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:43:23 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:43:24.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:24 vm00 bash[28005]: cluster 2026-03-10T07:43:22.787692+0000 mgr.y (mgr.24407) 689 : cluster [DBG] pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:24.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:24 vm00 bash[28005]: cluster 2026-03-10T07:43:22.787692+0000 mgr.y (mgr.24407) 689 : cluster [DBG] pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:24.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:43:24 vm00 bash[20701]: cluster 2026-03-10T07:43:22.787692+0000 mgr.y (mgr.24407) 689 : cluster [DBG] pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:24.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:43:24 vm00 bash[20701]: cluster 2026-03-10T07:43:22.787692+0000 mgr.y (mgr.24407) 689 : cluster [DBG] pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:24.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:24 vm03 bash[23382]: cluster 2026-03-10T07:43:22.787692+0000 mgr.y (mgr.24407) 689 : cluster [DBG] pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:24.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:24 vm03 bash[23382]: cluster 2026-03-10T07:43:22.787692+0000 mgr.y (mgr.24407) 
689 : cluster [DBG] pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:25.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:25 vm00 bash[28005]: audit 2026-03-10T07:43:23.645990+0000 mgr.y (mgr.24407) 690 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:43:25.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:25 vm00 bash[28005]: audit 2026-03-10T07:43:23.645990+0000 mgr.y (mgr.24407) 690 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:43:25.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:25 vm00 bash[28005]: audit 2026-03-10T07:43:24.884637+0000 mon.c (mon.2) 377 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:43:25.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:25 vm00 bash[28005]: audit 2026-03-10T07:43:24.884637+0000 mon.c (mon.2) 377 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:43:25.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:43:25 vm00 bash[20701]: audit 2026-03-10T07:43:23.645990+0000 mgr.y (mgr.24407) 690 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:43:25.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:43:25 vm00 bash[20701]: audit 2026-03-10T07:43:23.645990+0000 mgr.y (mgr.24407) 690 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:43:25.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:43:25 vm00 bash[20701]: audit 2026-03-10T07:43:24.884637+0000 mon.c (mon.2) 377 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:43:25.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:43:25 vm00 bash[20701]: audit 2026-03-10T07:43:24.884637+0000 mon.c (mon.2) 377 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:43:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:25 vm03 bash[23382]: audit 2026-03-10T07:43:23.645990+0000 mgr.y (mgr.24407) 690 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:43:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:25 vm03 bash[23382]: audit 2026-03-10T07:43:23.645990+0000 mgr.y (mgr.24407) 690 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:43:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:25 vm03 bash[23382]: audit 2026-03-10T07:43:24.884637+0000 mon.c (mon.2) 377 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:43:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:25 vm03 bash[23382]: audit 2026-03-10T07:43:24.884637+0000 mon.c (mon.2) 377 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": 
"osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:43:26.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:26 vm00 bash[28005]: cluster 2026-03-10T07:43:24.788324+0000 mgr.y (mgr.24407) 691 : cluster [DBG] pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:43:26.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:26 vm00 bash[28005]: cluster 2026-03-10T07:43:24.788324+0000 mgr.y (mgr.24407) 691 : cluster [DBG] pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:43:26.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:43:26 vm00 bash[20701]: cluster 2026-03-10T07:43:24.788324+0000 mgr.y (mgr.24407) 691 : cluster [DBG] pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:43:26.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:43:26 vm00 bash[20701]: cluster 2026-03-10T07:43:24.788324+0000 mgr.y (mgr.24407) 691 : cluster [DBG] pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:43:26.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:26 vm03 bash[23382]: cluster 2026-03-10T07:43:24.788324+0000 mgr.y (mgr.24407) 691 : cluster [DBG] pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:43:26.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:26 vm03 bash[23382]: cluster 2026-03-10T07:43:24.788324+0000 mgr.y (mgr.24407) 691 : cluster [DBG] pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:43:28.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:28 vm00 bash[28005]: cluster 2026-03-10T07:43:26.788687+0000 mgr.y (mgr.24407) 692 : cluster [DBG] pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:28.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:28 vm00 bash[28005]: cluster 2026-03-10T07:43:26.788687+0000 mgr.y (mgr.24407) 692 : cluster [DBG] pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:28.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:43:28 vm00 bash[20701]: cluster 2026-03-10T07:43:26.788687+0000 mgr.y (mgr.24407) 692 : cluster [DBG] pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:28.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:43:28 vm00 bash[20701]: cluster 2026-03-10T07:43:26.788687+0000 mgr.y (mgr.24407) 692 : cluster [DBG] pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:28.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:28 vm03 bash[23382]: cluster 2026-03-10T07:43:26.788687+0000 mgr.y (mgr.24407) 692 : cluster [DBG] pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:28.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:28 vm03 bash[23382]: cluster 2026-03-10T07:43:26.788687+0000 mgr.y (mgr.24407) 692 : cluster [DBG] pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:30.381 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:30 vm00 bash[28005]: cluster 2026-03-10T07:43:28.788991+0000 mgr.y (mgr.24407) 693 : cluster [DBG] pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:30 vm00 bash[28005]: cluster 2026-03-10T07:43:28.788991+0000 mgr.y (mgr.24407) 693 : cluster [DBG] pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:43:30 vm00 bash[20701]: cluster 2026-03-10T07:43:28.788991+0000 mgr.y (mgr.24407) 693 : cluster [DBG] pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:30.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:43:30 vm00 bash[20701]: cluster 2026-03-10T07:43:28.788991+0000 mgr.y (mgr.24407) 693 : cluster [DBG] pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:30.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:30 vm03 bash[23382]: cluster 2026-03-10T07:43:28.788991+0000 mgr.y (mgr.24407) 693 : cluster [DBG] pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:30.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:30 vm03 bash[23382]: cluster 2026-03-10T07:43:28.788991+0000 mgr.y (mgr.24407) 693 : cluster [DBG] pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:43:31.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:43:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:43:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:43:32.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:32 vm00 bash[28005]: cluster 2026-03-10T07:43:30.789678+0000 mgr.y (mgr.24407) 694 : cluster [DBG] pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:43:32.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:32 vm00 bash[28005]: cluster 2026-03-10T07:43:30.789678+0000 mgr.y (mgr.24407) 694 : cluster [DBG] pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:43:32.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:43:32 vm00 bash[20701]: cluster 2026-03-10T07:43:30.789678+0000 mgr.y (mgr.24407) 694 : cluster [DBG] pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:43:32.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:43:32 vm00 bash[20701]: cluster 2026-03-10T07:43:30.789678+0000 mgr.y (mgr.24407) 694 : cluster [DBG] pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:43:32.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:32 vm03 bash[23382]: cluster 2026-03-10T07:43:30.789678+0000 mgr.y (mgr.24407) 694 : cluster [DBG] pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:43:32.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:32 vm03 bash[23382]: cluster 2026-03-10T07:43:30.789678+0000 mgr.y (mgr.24407) 694 : cluster [DBG] pgmap v1198: 228 pgs: 228 
2026-03-10T07:43:34.012 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:43:33 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:43:34.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:34 vm00 bash[28005]: cluster 2026-03-10T07:43:32.790015+0000 mgr.y (mgr.24407) 695 : cluster [DBG] pgmap v1199: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:43:35.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:35 vm00 bash[28005]: audit 2026-03-10T07:43:33.656730+0000 mgr.y (mgr.24407) 696 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:43:36.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:36 vm00 bash[28005]: cluster 2026-03-10T07:43:34.790844+0000 mgr.y (mgr.24407) 697 : cluster [DBG] pgmap v1200: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:43:38.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:38 vm00 bash[28005]: cluster 2026-03-10T07:43:36.791231+0000 mgr.y (mgr.24407) 698 : cluster [DBG] pgmap v1201: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
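
Note: the audit entries from entity='client.iscsi.iscsi.a' appear to be the ceph-iscsi gateway (client.24373) issuing the mgr's "service status" query roughly every ten seconds against the cluster service map. A sketch of the same query through the ceph CLI, assuming a usable client.admin keyring:

    # Minimal sketch: run the same "service status" query via the ceph CLI
    # and parse its JSON output. Assumes a usable client.admin keyring.
    import json
    import subprocess

    def service_status() -> dict:
        out = subprocess.check_output(
            ["ceph", "service", "status", "--format", "json"])
        return json.loads(out)

    if __name__ == "__main__":
        # Lists non-daemon services registered in the service map (the
        # iscsi gateway should appear here) with their last-reported state.
        print(json.dumps(service_status(), indent=2))
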
2026-03-10T07:43:40.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:40 vm00 bash[28005]: cluster 2026-03-10T07:43:38.791676+0000 mgr.y (mgr.24407) 699 : cluster [DBG] pgmap v1202: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:43:40.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:40 vm00 bash[28005]: audit 2026-03-10T07:43:39.891295+0000 mon.c (mon.2) 378 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:43:41.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:43:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:43:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:43:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:43:42 vm00 bash[28005]: cluster 2026-03-10T07:43:40.792349+0000 mgr.y (mgr.24407) 700 : cluster [DBG] pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:43:44.012 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:43:43 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:43:44.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:44 vm03 bash[23382]: cluster 2026-03-10T07:43:42.792659+0000 mgr.y (mgr.24407) 701 : cluster [DBG] pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
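
Note: mgr.y dispatches "osd blocklist ls" to the mons on a roughly fifteen-second cadence (audit seq 378, 379, 380 in this window); this looks like routine polling by one of the mgr modules reconciling client blocklist entries, though the log does not say which module. The equivalent query from Python, assuming client.admin credentials; the exact JSON shape may vary by release:

    # Minimal sketch: the same blocklist query the mgr dispatches above.
    # Assumes client.admin credentials; JSON shape may vary by release.
    import json
    import subprocess

    def blocklist_entries():
        out = subprocess.check_output(
            ["ceph", "osd", "blocklist", "ls", "--format", "json"])
        return json.loads(out)

    if __name__ == "__main__":
        for entry in blocklist_entries():   # one entry per blocklisted addr
            print(entry)
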
2026-03-10T07:43:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:45 vm03 bash[23382]: audit 2026-03-10T07:43:43.667534+0000 mgr.y (mgr.24407) 702 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:43:46.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:46 vm03 bash[23382]: cluster 2026-03-10T07:43:44.793596+0000 mgr.y (mgr.24407) 703 : cluster [DBG] pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:43:48.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:48 vm03 bash[23382]: cluster 2026-03-10T07:43:46.793975+0000 mgr.y (mgr.24407) 704 : cluster [DBG] pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:43:50.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:50 vm03 bash[23382]: cluster 2026-03-10T07:43:48.794326+0000 mgr.y (mgr.24407) 705 : cluster [DBG] pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
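
Note: each pgmap line is the mgr's two-second digest of placement-group state: all 228 PGs active+clean, 455 KiB of data, 1.1 GiB of 160 GiB raw used, plus instantaneous client I/O rates. The same digest is available programmatically; a sketch via the CLI's JSON status output, with key names that reflect recent releases and should be treated as assumptions:

    # Minimal sketch: pull the pgmap digest from "ceph status" JSON output.
    # Key names reflect recent releases; treat them as assumptions.
    import json
    import subprocess

    def pg_summary() -> dict:
        out = subprocess.check_output(["ceph", "status", "--format", "json"])
        return json.loads(out)["pgmap"]

    if __name__ == "__main__":
        pgmap = pg_summary()
        print(pgmap.get("num_pgs"))        # 228 in the run above
        print(pgmap.get("pgs_by_state"))   # e.g. [{'state_name': 'active+clean', ...}]
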
2026-03-10T07:43:51.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:43:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:43:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:43:52.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:52 vm03 bash[23382]: cluster 2026-03-10T07:43:50.795114+0000 mgr.y (mgr.24407) 706 : cluster [DBG] pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:43:54.012 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:43:53 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:43:54.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:54 vm03 bash[23382]: cluster 2026-03-10T07:43:52.795438+0000 mgr.y (mgr.24407) 707 : cluster [DBG] pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:43:55.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:55 vm03 bash[23382]: audit 2026-03-10T07:43:53.671946+0000 mgr.y (mgr.24407) 708 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:43:55.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:55 vm03 bash[23382]: audit 2026-03-10T07:43:54.897494+0000 mon.c (mon.2) 379 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
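
Note: the iscsi daemon's "there is no tcmu-runner data available" line also repeats on a ten-second cadence; it comes from the gateway's stats export finding no tcmu-runner data, which is plausible while no LIO-backed disks have been configured on this gateway. A sketch that extracts the message's timestamps from a log like this one to confirm the interval (the path is a placeholder):

    # Minimal sketch: pull the timestamps of the tcmu-runner message out of
    # a teuthology log to confirm the cadence. The path is a placeholder.
    import re

    PATTERN = re.compile(r"^(\S+) .*there is no tcmu-runner data available")

    def message_times(path: str):
        with open(path) as log:
            return [m.group(1) for line in log if (m := PATTERN.match(line))]

    if __name__ == "__main__":
        for stamp in message_times("teuthology.log"):
            print(stamp)   # expect roughly 10 s spacing, as above
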
2026-03-10T07:43:56.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:56 vm03 bash[23382]: cluster 2026-03-10T07:43:54.796159+0000 mgr.y (mgr.24407) 709 : cluster [DBG] pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:43:58.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:43:58 vm03 bash[23382]: cluster 2026-03-10T07:43:56.796587+0000 mgr.y (mgr.24407) 710 : cluster [DBG] pgmap v1211: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:44:00.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:00 vm03 bash[23382]: cluster 2026-03-10T07:43:58.796968+0000 mgr.y (mgr.24407) 711 : cluster [DBG] pgmap v1212: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:44:01.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:44:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:44:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:44:02.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:02 vm03 bash[23382]: cluster 2026-03-10T07:44:00.797646+0000 mgr.y (mgr.24407) 712 : cluster [DBG] pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:44:04.012 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:44:03 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:44:04.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:04 vm03 bash[23382]: cluster 2026-03-10T07:44:02.798054+0000 mgr.y (mgr.24407) 713 : cluster [DBG] pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:44:05.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:05 vm03 bash[23382]: audit 2026-03-10T07:44:03.672815+0000 mgr.y (mgr.24407) 714 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:44:06.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:06 vm03 bash[23382]: cluster 2026-03-10T07:44:04.798748+0000 mgr.y (mgr.24407) 715 : cluster [DBG] pgmap v1215: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:44:08.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:08 vm00 bash[28005]: cluster 2026-03-10T07:44:06.799128+0000 mgr.y (mgr.24407) 716 : cluster [DBG] pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:44:10.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:10 vm00 bash[20701]: cluster 2026-03-10T07:44:08.799534+0000 mgr.y (mgr.24407) 717 : cluster [DBG] pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:44:10.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:10 vm00 bash[20701]: audit 2026-03-10T07:44:09.913490+0000 mon.c (mon.2) 380 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:44:11.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:44:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:44:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:44:12.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:12 vm00 bash[28005]: cluster 2026-03-10T07:44:10.800247+0000 mgr.y (mgr.24407) 718 : cluster [DBG] pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:44:14.012 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:44:13 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:44:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:14 vm00 bash[28005]: cluster 2026-03-10T07:44:12.800597+0000 mgr.y (mgr.24407) 719 : cluster [DBG] pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:44:15.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:15 vm00 bash[28005]: audit 2026-03-10T07:44:13.673828+0000 mgr.y (mgr.24407) 720 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:44:16.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:16 vm00 bash[28005]: cluster 2026-03-10T07:44:14.801261+0000 mgr.y (mgr.24407) 721 : cluster [DBG] pgmap v1220: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:44:16.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:16 vm00 bash[28005]: audit 2026-03-10T07:44:15.985275+0000 mon.c (mon.2) 381 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:44:17.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:17 vm00 bash[28005]: audit 2026-03-10T07:44:16.348702+0000 mon.c (mon.2) 382 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:44:17.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:17 vm00 bash[28005]: audit 2026-03-10T07:44:16.349959+0000 mon.c (mon.2) 383 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:44:17.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:17 vm00 bash[28005]: audit 2026-03-10T07:44:16.355459+0000 mon.a (mon.0) 3537 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:44:18.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:18 vm00 bash[28005]: cluster 2026-03-10T07:44:16.801743+0000 mgr.y (mgr.24407) 722 : cluster [DBG] pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
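
Note: at 07:44:16 mgr.y runs "config dump", "config generate-minimal-conf", and "auth get client.admin" back to back; this sequence is consistent with the cephadm mgr module refreshing the minimal ceph.conf and admin keyring it distributes to managed hosts, though the log does not name the module. The same three commands from Python, assuming a reachable cluster and client.admin keyring:

    # Minimal sketch: the three-step refresh seen above, via the ceph CLI.
    # Assumes a reachable cluster and client.admin keyring.
    import subprocess

    def ceph(*args: str) -> str:
        return subprocess.check_output(["ceph", *args], text=True)

    if __name__ == "__main__":
        ceph("config", "dump", "--format", "json")         # full config db
        minimal = ceph("config", "generate-minimal-conf")  # fsid + mon hosts
        keyring = ceph("auth", "get", "client.admin")      # admin keyring
        print(minimal)
        print(keyring)
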
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:18 vm00 bash[28005]: cluster 2026-03-10T07:44:16.801743+0000 mgr.y (mgr.24407) 722 : cluster [DBG] pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:18.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:18 vm00 bash[20701]: cluster 2026-03-10T07:44:16.801743+0000 mgr.y (mgr.24407) 722 : cluster [DBG] pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:18.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:18 vm00 bash[20701]: cluster 2026-03-10T07:44:16.801743+0000 mgr.y (mgr.24407) 722 : cluster [DBG] pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:18.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:18 vm03 bash[23382]: cluster 2026-03-10T07:44:16.801743+0000 mgr.y (mgr.24407) 722 : cluster [DBG] pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:18.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:18 vm03 bash[23382]: cluster 2026-03-10T07:44:16.801743+0000 mgr.y (mgr.24407) 722 : cluster [DBG] pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:20.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:20 vm00 bash[28005]: cluster 2026-03-10T07:44:18.802096+0000 mgr.y (mgr.24407) 723 : cluster [DBG] pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:20.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:20 vm00 bash[28005]: cluster 2026-03-10T07:44:18.802096+0000 mgr.y (mgr.24407) 723 : cluster [DBG] pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:20.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:20 vm00 bash[20701]: cluster 2026-03-10T07:44:18.802096+0000 mgr.y (mgr.24407) 723 : cluster [DBG] pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:20.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:20 vm00 bash[20701]: cluster 2026-03-10T07:44:18.802096+0000 mgr.y (mgr.24407) 723 : cluster [DBG] pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:20.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:20 vm03 bash[23382]: cluster 2026-03-10T07:44:18.802096+0000 mgr.y (mgr.24407) 723 : cluster [DBG] pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:20.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:20 vm03 bash[23382]: cluster 2026-03-10T07:44:18.802096+0000 mgr.y (mgr.24407) 723 : cluster [DBG] pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:21.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:44:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:44:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:44:22.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:22 vm00 bash[28005]: cluster 2026-03-10T07:44:20.802805+0000 mgr.y (mgr.24407) 724 : cluster [DBG] pgmap v1223: 228 pgs: 228 active+clean; 455 
KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:22.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:22 vm00 bash[28005]: cluster 2026-03-10T07:44:20.802805+0000 mgr.y (mgr.24407) 724 : cluster [DBG] pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:22.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:22 vm00 bash[20701]: cluster 2026-03-10T07:44:20.802805+0000 mgr.y (mgr.24407) 724 : cluster [DBG] pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:22.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:22 vm00 bash[20701]: cluster 2026-03-10T07:44:20.802805+0000 mgr.y (mgr.24407) 724 : cluster [DBG] pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:22.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:22 vm03 bash[23382]: cluster 2026-03-10T07:44:20.802805+0000 mgr.y (mgr.24407) 724 : cluster [DBG] pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:22.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:22 vm03 bash[23382]: cluster 2026-03-10T07:44:20.802805+0000 mgr.y (mgr.24407) 724 : cluster [DBG] pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:24.012 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:44:23 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:44:24.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:24 vm00 bash[28005]: cluster 2026-03-10T07:44:22.803180+0000 mgr.y (mgr.24407) 725 : cluster [DBG] pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:24.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:24 vm00 bash[28005]: cluster 2026-03-10T07:44:22.803180+0000 mgr.y (mgr.24407) 725 : cluster [DBG] pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:24.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:24 vm00 bash[20701]: cluster 2026-03-10T07:44:22.803180+0000 mgr.y (mgr.24407) 725 : cluster [DBG] pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:24.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:24 vm00 bash[20701]: cluster 2026-03-10T07:44:22.803180+0000 mgr.y (mgr.24407) 725 : cluster [DBG] pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:24.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:24 vm03 bash[23382]: cluster 2026-03-10T07:44:22.803180+0000 mgr.y (mgr.24407) 725 : cluster [DBG] pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:24.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:24 vm03 bash[23382]: cluster 2026-03-10T07:44:22.803180+0000 mgr.y (mgr.24407) 725 : cluster [DBG] pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:25 vm00 bash[28005]: audit 2026-03-10T07:44:23.684444+0000 mgr.y (mgr.24407) 726 : audit 
[DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:25 vm00 bash[28005]: audit 2026-03-10T07:44:23.684444+0000 mgr.y (mgr.24407) 726 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:25 vm00 bash[28005]: audit 2026-03-10T07:44:24.921531+0000 mon.c (mon.2) 384 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:44:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:25 vm00 bash[28005]: audit 2026-03-10T07:44:24.921531+0000 mon.c (mon.2) 384 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:44:25.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:25 vm00 bash[20701]: audit 2026-03-10T07:44:23.684444+0000 mgr.y (mgr.24407) 726 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:25.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:25 vm00 bash[20701]: audit 2026-03-10T07:44:23.684444+0000 mgr.y (mgr.24407) 726 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:25.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:25 vm00 bash[20701]: audit 2026-03-10T07:44:24.921531+0000 mon.c (mon.2) 384 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:44:25.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:25 vm00 bash[20701]: audit 2026-03-10T07:44:24.921531+0000 mon.c (mon.2) 384 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:44:25.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:25 vm03 bash[23382]: audit 2026-03-10T07:44:23.684444+0000 mgr.y (mgr.24407) 726 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:25.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:25 vm03 bash[23382]: audit 2026-03-10T07:44:23.684444+0000 mgr.y (mgr.24407) 726 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:25.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:25 vm03 bash[23382]: audit 2026-03-10T07:44:24.921531+0000 mon.c (mon.2) 384 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:44:25.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:25 vm03 bash[23382]: audit 2026-03-10T07:44:24.921531+0000 mon.c (mon.2) 384 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:44:26.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:26 vm03 bash[23382]: cluster 2026-03-10T07:44:24.803993+0000 mgr.y (mgr.24407) 727 : cluster [DBG] pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB 
/ 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:26.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:26 vm03 bash[23382]: cluster 2026-03-10T07:44:24.803993+0000 mgr.y (mgr.24407) 727 : cluster [DBG] pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:26.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:26 vm00 bash[28005]: cluster 2026-03-10T07:44:24.803993+0000 mgr.y (mgr.24407) 727 : cluster [DBG] pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:26.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:26 vm00 bash[28005]: cluster 2026-03-10T07:44:24.803993+0000 mgr.y (mgr.24407) 727 : cluster [DBG] pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:26.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:26 vm00 bash[20701]: cluster 2026-03-10T07:44:24.803993+0000 mgr.y (mgr.24407) 727 : cluster [DBG] pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:26.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:26 vm00 bash[20701]: cluster 2026-03-10T07:44:24.803993+0000 mgr.y (mgr.24407) 727 : cluster [DBG] pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:28.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:28 vm03 bash[23382]: cluster 2026-03-10T07:44:26.804438+0000 mgr.y (mgr.24407) 728 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:28.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:28 vm03 bash[23382]: cluster 2026-03-10T07:44:26.804438+0000 mgr.y (mgr.24407) 728 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:28.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:28 vm00 bash[28005]: cluster 2026-03-10T07:44:26.804438+0000 mgr.y (mgr.24407) 728 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:28.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:28 vm00 bash[28005]: cluster 2026-03-10T07:44:26.804438+0000 mgr.y (mgr.24407) 728 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:28.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:28 vm00 bash[20701]: cluster 2026-03-10T07:44:26.804438+0000 mgr.y (mgr.24407) 728 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:28.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:28 vm00 bash[20701]: cluster 2026-03-10T07:44:26.804438+0000 mgr.y (mgr.24407) 728 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:30.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:30 vm03 bash[23382]: cluster 2026-03-10T07:44:28.804740+0000 mgr.y (mgr.24407) 729 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:30.762 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:30 vm03 bash[23382]: cluster 2026-03-10T07:44:28.804740+0000 mgr.y (mgr.24407) 729 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:30.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:30 vm00 bash[28005]: cluster 2026-03-10T07:44:28.804740+0000 mgr.y (mgr.24407) 729 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:30.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:30 vm00 bash[28005]: cluster 2026-03-10T07:44:28.804740+0000 mgr.y (mgr.24407) 729 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:30.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:30 vm00 bash[20701]: cluster 2026-03-10T07:44:28.804740+0000 mgr.y (mgr.24407) 729 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:30.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:30 vm00 bash[20701]: cluster 2026-03-10T07:44:28.804740+0000 mgr.y (mgr.24407) 729 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:31.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:44:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:44:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:44:32.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:32 vm03 bash[23382]: cluster 2026-03-10T07:44:30.805353+0000 mgr.y (mgr.24407) 730 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:32.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:32 vm03 bash[23382]: cluster 2026-03-10T07:44:30.805353+0000 mgr.y (mgr.24407) 730 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:32.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:32 vm00 bash[28005]: cluster 2026-03-10T07:44:30.805353+0000 mgr.y (mgr.24407) 730 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:32.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:32 vm00 bash[28005]: cluster 2026-03-10T07:44:30.805353+0000 mgr.y (mgr.24407) 730 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:32.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:32 vm00 bash[20701]: cluster 2026-03-10T07:44:30.805353+0000 mgr.y (mgr.24407) 730 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:32.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:32 vm00 bash[20701]: cluster 2026-03-10T07:44:30.805353+0000 mgr.y (mgr.24407) 730 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:34.012 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:44:33 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:44:34.762 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:34 vm03 bash[23382]: cluster 2026-03-10T07:44:32.805733+0000 mgr.y (mgr.24407) 731 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:34.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:34 vm03 bash[23382]: cluster 2026-03-10T07:44:32.805733+0000 mgr.y (mgr.24407) 731 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:34.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:34 vm00 bash[28005]: cluster 2026-03-10T07:44:32.805733+0000 mgr.y (mgr.24407) 731 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:34.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:34 vm00 bash[28005]: cluster 2026-03-10T07:44:32.805733+0000 mgr.y (mgr.24407) 731 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:34.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:34 vm00 bash[20701]: cluster 2026-03-10T07:44:32.805733+0000 mgr.y (mgr.24407) 731 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:34.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:34 vm00 bash[20701]: cluster 2026-03-10T07:44:32.805733+0000 mgr.y (mgr.24407) 731 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:35.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:35 vm03 bash[23382]: audit 2026-03-10T07:44:33.690641+0000 mgr.y (mgr.24407) 732 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:35.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:35 vm03 bash[23382]: audit 2026-03-10T07:44:33.690641+0000 mgr.y (mgr.24407) 732 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:35.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:35 vm00 bash[28005]: audit 2026-03-10T07:44:33.690641+0000 mgr.y (mgr.24407) 732 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:35.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:35 vm00 bash[28005]: audit 2026-03-10T07:44:33.690641+0000 mgr.y (mgr.24407) 732 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:35 vm00 bash[20701]: audit 2026-03-10T07:44:33.690641+0000 mgr.y (mgr.24407) 732 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:35 vm00 bash[20701]: audit 2026-03-10T07:44:33.690641+0000 mgr.y (mgr.24407) 732 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:36.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:36 vm03 bash[23382]: cluster 2026-03-10T07:44:34.806432+0000 
mgr.y (mgr.24407) 733 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:36.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:36 vm03 bash[23382]: cluster 2026-03-10T07:44:34.806432+0000 mgr.y (mgr.24407) 733 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:36.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:36 vm00 bash[28005]: cluster 2026-03-10T07:44:34.806432+0000 mgr.y (mgr.24407) 733 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:36.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:36 vm00 bash[28005]: cluster 2026-03-10T07:44:34.806432+0000 mgr.y (mgr.24407) 733 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:36.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:36 vm00 bash[20701]: cluster 2026-03-10T07:44:34.806432+0000 mgr.y (mgr.24407) 733 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:36.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:36 vm00 bash[20701]: cluster 2026-03-10T07:44:34.806432+0000 mgr.y (mgr.24407) 733 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:37.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:37 vm03 bash[23382]: cluster 2026-03-10T07:44:36.806779+0000 mgr.y (mgr.24407) 734 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:37.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:37 vm03 bash[23382]: cluster 2026-03-10T07:44:36.806779+0000 mgr.y (mgr.24407) 734 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:37.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:37 vm00 bash[28005]: cluster 2026-03-10T07:44:36.806779+0000 mgr.y (mgr.24407) 734 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:37.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:37 vm00 bash[28005]: cluster 2026-03-10T07:44:36.806779+0000 mgr.y (mgr.24407) 734 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:37.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:37 vm00 bash[20701]: cluster 2026-03-10T07:44:36.806779+0000 mgr.y (mgr.24407) 734 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:37.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:37 vm00 bash[20701]: cluster 2026-03-10T07:44:36.806779+0000 mgr.y (mgr.24407) 734 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:40.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:39 vm03 bash[23382]: cluster 2026-03-10T07:44:38.807081+0000 mgr.y (mgr.24407) 735 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.1 
GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:40.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:39 vm03 bash[23382]: cluster 2026-03-10T07:44:38.807081+0000 mgr.y (mgr.24407) 735 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:40.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:39 vm00 bash[28005]: cluster 2026-03-10T07:44:38.807081+0000 mgr.y (mgr.24407) 735 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:40.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:39 vm00 bash[28005]: cluster 2026-03-10T07:44:38.807081+0000 mgr.y (mgr.24407) 735 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:40.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:39 vm00 bash[20701]: cluster 2026-03-10T07:44:38.807081+0000 mgr.y (mgr.24407) 735 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:40.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:39 vm00 bash[20701]: cluster 2026-03-10T07:44:38.807081+0000 mgr.y (mgr.24407) 735 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:41.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:40 vm03 bash[23382]: audit 2026-03-10T07:44:39.928243+0000 mon.c (mon.2) 385 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:44:41.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:40 vm03 bash[23382]: audit 2026-03-10T07:44:39.928243+0000 mon.c (mon.2) 385 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:44:41.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:40 vm00 bash[28005]: audit 2026-03-10T07:44:39.928243+0000 mon.c (mon.2) 385 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:44:41.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:40 vm00 bash[28005]: audit 2026-03-10T07:44:39.928243+0000 mon.c (mon.2) 385 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:44:41.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:44:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:44:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:44:41.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:40 vm00 bash[20701]: audit 2026-03-10T07:44:39.928243+0000 mon.c (mon.2) 385 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:44:41.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:40 vm00 bash[20701]: audit 2026-03-10T07:44:39.928243+0000 mon.c (mon.2) 385 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:44:42.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:41 vm03 bash[23382]: cluster 
2026-03-10T07:44:40.807772+0000 mgr.y (mgr.24407) 736 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:42.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:41 vm03 bash[23382]: cluster 2026-03-10T07:44:40.807772+0000 mgr.y (mgr.24407) 736 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:41 vm00 bash[28005]: cluster 2026-03-10T07:44:40.807772+0000 mgr.y (mgr.24407) 736 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:41 vm00 bash[28005]: cluster 2026-03-10T07:44:40.807772+0000 mgr.y (mgr.24407) 736 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:42.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:41 vm00 bash[20701]: cluster 2026-03-10T07:44:40.807772+0000 mgr.y (mgr.24407) 736 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:42.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:41 vm00 bash[20701]: cluster 2026-03-10T07:44:40.807772+0000 mgr.y (mgr.24407) 736 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:44.011 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:44:43 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:44:44.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:43 vm03 bash[23382]: cluster 2026-03-10T07:44:42.808099+0000 mgr.y (mgr.24407) 737 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:44.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:43 vm03 bash[23382]: cluster 2026-03-10T07:44:42.808099+0000 mgr.y (mgr.24407) 737 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:44.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:43 vm00 bash[28005]: cluster 2026-03-10T07:44:42.808099+0000 mgr.y (mgr.24407) 737 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:44.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:43 vm00 bash[28005]: cluster 2026-03-10T07:44:42.808099+0000 mgr.y (mgr.24407) 737 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:44.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:43 vm00 bash[20701]: cluster 2026-03-10T07:44:42.808099+0000 mgr.y (mgr.24407) 737 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:44.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:43 vm00 bash[20701]: cluster 2026-03-10T07:44:42.808099+0000 mgr.y (mgr.24407) 737 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:45.262 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:44 vm03 bash[23382]: audit 2026-03-10T07:44:43.697753+0000 mgr.y (mgr.24407) 738 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:45.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:44 vm03 bash[23382]: audit 2026-03-10T07:44:43.697753+0000 mgr.y (mgr.24407) 738 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:45.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:44 vm00 bash[28005]: audit 2026-03-10T07:44:43.697753+0000 mgr.y (mgr.24407) 738 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:45.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:44 vm00 bash[28005]: audit 2026-03-10T07:44:43.697753+0000 mgr.y (mgr.24407) 738 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:45.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:44 vm00 bash[20701]: audit 2026-03-10T07:44:43.697753+0000 mgr.y (mgr.24407) 738 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:45.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:44 vm00 bash[20701]: audit 2026-03-10T07:44:43.697753+0000 mgr.y (mgr.24407) 738 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:46.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:45 vm03 bash[23382]: cluster 2026-03-10T07:44:44.808807+0000 mgr.y (mgr.24407) 739 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:46.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:45 vm03 bash[23382]: cluster 2026-03-10T07:44:44.808807+0000 mgr.y (mgr.24407) 739 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:46.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:45 vm00 bash[28005]: cluster 2026-03-10T07:44:44.808807+0000 mgr.y (mgr.24407) 739 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:46.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:45 vm00 bash[28005]: cluster 2026-03-10T07:44:44.808807+0000 mgr.y (mgr.24407) 739 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:46.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:45 vm00 bash[20701]: cluster 2026-03-10T07:44:44.808807+0000 mgr.y (mgr.24407) 739 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:46.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:45 vm00 bash[20701]: cluster 2026-03-10T07:44:44.808807+0000 mgr.y (mgr.24407) 739 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:48.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:47 vm03 bash[23382]: cluster 
2026-03-10T07:44:46.809118+0000 mgr.y (mgr.24407) 740 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:48.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:47 vm03 bash[23382]: cluster 2026-03-10T07:44:46.809118+0000 mgr.y (mgr.24407) 740 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:48.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:47 vm00 bash[28005]: cluster 2026-03-10T07:44:46.809118+0000 mgr.y (mgr.24407) 740 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:48.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:47 vm00 bash[28005]: cluster 2026-03-10T07:44:46.809118+0000 mgr.y (mgr.24407) 740 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:48.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:47 vm00 bash[20701]: cluster 2026-03-10T07:44:46.809118+0000 mgr.y (mgr.24407) 740 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:48.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:47 vm00 bash[20701]: cluster 2026-03-10T07:44:46.809118+0000 mgr.y (mgr.24407) 740 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:50.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:49 vm03 bash[23382]: cluster 2026-03-10T07:44:48.809451+0000 mgr.y (mgr.24407) 741 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:50.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:49 vm03 bash[23382]: cluster 2026-03-10T07:44:48.809451+0000 mgr.y (mgr.24407) 741 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:50.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:49 vm00 bash[28005]: cluster 2026-03-10T07:44:48.809451+0000 mgr.y (mgr.24407) 741 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:50.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:49 vm00 bash[28005]: cluster 2026-03-10T07:44:48.809451+0000 mgr.y (mgr.24407) 741 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:50.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:49 vm00 bash[20701]: cluster 2026-03-10T07:44:48.809451+0000 mgr.y (mgr.24407) 741 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:50.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:49 vm00 bash[20701]: cluster 2026-03-10T07:44:48.809451+0000 mgr.y (mgr.24407) 741 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:51.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:44:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:44:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 
2026-03-10T07:44:52.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:51 vm03 bash[23382]: cluster 2026-03-10T07:44:50.810107+0000 mgr.y (mgr.24407) 742 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:52.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:51 vm03 bash[23382]: cluster 2026-03-10T07:44:50.810107+0000 mgr.y (mgr.24407) 742 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:52.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:51 vm00 bash[20701]: cluster 2026-03-10T07:44:50.810107+0000 mgr.y (mgr.24407) 742 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:52.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:51 vm00 bash[20701]: cluster 2026-03-10T07:44:50.810107+0000 mgr.y (mgr.24407) 742 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:52.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:51 vm00 bash[28005]: cluster 2026-03-10T07:44:50.810107+0000 mgr.y (mgr.24407) 742 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:52.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:51 vm00 bash[28005]: cluster 2026-03-10T07:44:50.810107+0000 mgr.y (mgr.24407) 742 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:54.011 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:44:53 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:44:54.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:53 vm03 bash[23382]: cluster 2026-03-10T07:44:52.810533+0000 mgr.y (mgr.24407) 743 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:54.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:53 vm03 bash[23382]: cluster 2026-03-10T07:44:52.810533+0000 mgr.y (mgr.24407) 743 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:53 vm00 bash[20701]: cluster 2026-03-10T07:44:52.810533+0000 mgr.y (mgr.24407) 743 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:53 vm00 bash[20701]: cluster 2026-03-10T07:44:52.810533+0000 mgr.y (mgr.24407) 743 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:54.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:53 vm00 bash[28005]: cluster 2026-03-10T07:44:52.810533+0000 mgr.y (mgr.24407) 743 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:54.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:53 vm00 bash[28005]: cluster 2026-03-10T07:44:52.810533+0000 mgr.y (mgr.24407) 743 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.1 
GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:55.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:54 vm03 bash[23382]: audit 2026-03-10T07:44:53.705370+0000 mgr.y (mgr.24407) 744 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:55.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:54 vm03 bash[23382]: audit 2026-03-10T07:44:53.705370+0000 mgr.y (mgr.24407) 744 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:55.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:54 vm03 bash[23382]: audit 2026-03-10T07:44:54.935436+0000 mon.c (mon.2) 386 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:44:55.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:54 vm03 bash[23382]: audit 2026-03-10T07:44:54.935436+0000 mon.c (mon.2) 386 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:44:55.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:54 vm00 bash[28005]: audit 2026-03-10T07:44:53.705370+0000 mgr.y (mgr.24407) 744 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:55.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:54 vm00 bash[28005]: audit 2026-03-10T07:44:53.705370+0000 mgr.y (mgr.24407) 744 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:55.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:54 vm00 bash[28005]: audit 2026-03-10T07:44:54.935436+0000 mon.c (mon.2) 386 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:44:55.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:54 vm00 bash[28005]: audit 2026-03-10T07:44:54.935436+0000 mon.c (mon.2) 386 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:44:55.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:54 vm00 bash[20701]: audit 2026-03-10T07:44:53.705370+0000 mgr.y (mgr.24407) 744 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:55.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:54 vm00 bash[20701]: audit 2026-03-10T07:44:53.705370+0000 mgr.y (mgr.24407) 744 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:44:55.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:54 vm00 bash[20701]: audit 2026-03-10T07:44:54.935436+0000 mon.c (mon.2) 386 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:44:55.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:54 vm00 bash[20701]: audit 2026-03-10T07:44:54.935436+0000 mon.c (mon.2) 386 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:44:56.262 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:55 vm03 bash[23382]: cluster 2026-03-10T07:44:54.811378+0000 mgr.y (mgr.24407) 745 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:56.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:55 vm03 bash[23382]: cluster 2026-03-10T07:44:54.811378+0000 mgr.y (mgr.24407) 745 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:56.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:55 vm00 bash[28005]: cluster 2026-03-10T07:44:54.811378+0000 mgr.y (mgr.24407) 745 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:56.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:55 vm00 bash[28005]: cluster 2026-03-10T07:44:54.811378+0000 mgr.y (mgr.24407) 745 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:56.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:55 vm00 bash[20701]: cluster 2026-03-10T07:44:54.811378+0000 mgr.y (mgr.24407) 745 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:56.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:55 vm00 bash[20701]: cluster 2026-03-10T07:44:54.811378+0000 mgr.y (mgr.24407) 745 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:44:58.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:57 vm03 bash[23382]: cluster 2026-03-10T07:44:56.811743+0000 mgr.y (mgr.24407) 746 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:58.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:57 vm03 bash[23382]: cluster 2026-03-10T07:44:56.811743+0000 mgr.y (mgr.24407) 746 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:58.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:57 vm00 bash[28005]: cluster 2026-03-10T07:44:56.811743+0000 mgr.y (mgr.24407) 746 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:58.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:57 vm00 bash[28005]: cluster 2026-03-10T07:44:56.811743+0000 mgr.y (mgr.24407) 746 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:57 vm00 bash[20701]: cluster 2026-03-10T07:44:56.811743+0000 mgr.y (mgr.24407) 746 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:44:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:57 vm00 bash[20701]: cluster 2026-03-10T07:44:56.811743+0000 mgr.y (mgr.24407) 746 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:45:00.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:59 vm03 bash[23382]: cluster 
2026-03-10T07:44:58.812087+0000 mgr.y (mgr.24407) 747 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:45:00.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:44:59 vm03 bash[23382]: cluster 2026-03-10T07:44:58.812087+0000 mgr.y (mgr.24407) 747 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:45:00.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:59 vm00 bash[28005]: cluster 2026-03-10T07:44:58.812087+0000 mgr.y (mgr.24407) 747 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:45:00.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:44:59 vm00 bash[28005]: cluster 2026-03-10T07:44:58.812087+0000 mgr.y (mgr.24407) 747 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:45:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:59 vm00 bash[20701]: cluster 2026-03-10T07:44:58.812087+0000 mgr.y (mgr.24407) 747 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:45:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:44:59 vm00 bash[20701]: cluster 2026-03-10T07:44:58.812087+0000 mgr.y (mgr.24407) 747 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:45:01.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:45:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:45:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:45:02.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:02 vm00 bash[28005]: cluster 2026-03-10T07:45:00.812854+0000 mgr.y (mgr.24407) 748 : cluster [DBG] pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:45:02.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:02 vm00 bash[28005]: cluster 2026-03-10T07:45:00.812854+0000 mgr.y (mgr.24407) 748 : cluster [DBG] pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:45:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:02 vm00 bash[20701]: cluster 2026-03-10T07:45:00.812854+0000 mgr.y (mgr.24407) 748 : cluster [DBG] pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:45:02.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:02 vm00 bash[20701]: cluster 2026-03-10T07:45:00.812854+0000 mgr.y (mgr.24407) 748 : cluster [DBG] pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:45:02.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:02 vm03 bash[23382]: cluster 2026-03-10T07:45:00.812854+0000 mgr.y (mgr.24407) 748 : cluster [DBG] pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:45:02.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:02 vm03 bash[23382]: cluster 2026-03-10T07:45:00.812854+0000 mgr.y (mgr.24407) 748 : cluster [DBG] pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T07:45:04.011 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:45:03 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:45:04.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:04 vm00 bash[28005]: cluster 2026-03-10T07:45:02.813194+0000 mgr.y (mgr.24407) 749 : cluster [DBG] pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:04.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:04 vm00 bash[20701]: cluster 2026-03-10T07:45:02.813194+0000 mgr.y (mgr.24407) 749 : cluster [DBG] pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:04.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:04 vm03 bash[23382]: cluster 2026-03-10T07:45:02.813194+0000 mgr.y (mgr.24407) 749 : cluster [DBG] pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:05.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:05 vm03 bash[23382]: audit 2026-03-10T07:45:03.716198+0000 mgr.y (mgr.24407) 750 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:45:05.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:05 vm00 bash[28005]: audit 2026-03-10T07:45:03.716198+0000 mgr.y (mgr.24407) 750 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:45:05.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:05 vm00 bash[20701]: audit 2026-03-10T07:45:03.716198+0000 mgr.y (mgr.24407) 750 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:45:06.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:06 vm03 bash[23382]: cluster 2026-03-10T07:45:04.813962+0000 mgr.y (mgr.24407) 751 : cluster [DBG] pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:06.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:06 vm00 bash[28005]: cluster 2026-03-10T07:45:04.813962+0000 mgr.y (mgr.24407) 751 : cluster [DBG] pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:06.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:06 vm00 bash[20701]: cluster 2026-03-10T07:45:04.813962+0000 mgr.y (mgr.24407) 751 : cluster [DBG] pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:08.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:08 vm03 bash[23382]: cluster 2026-03-10T07:45:06.814425+0000 mgr.y (mgr.24407) 752 : cluster [DBG] pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:08.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:08 vm00 bash[28005]: cluster 2026-03-10T07:45:06.814425+0000 mgr.y (mgr.24407) 752 : cluster [DBG] pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:08.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:08 vm00 bash[20701]: cluster 2026-03-10T07:45:06.814425+0000 mgr.y (mgr.24407) 752 : cluster [DBG] pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:10.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:10 vm03 bash[23382]: cluster 2026-03-10T07:45:08.814836+0000 mgr.y (mgr.24407) 753 : cluster [DBG] pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:10.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:10 vm03 bash[23382]: audit 2026-03-10T07:45:09.942129+0000 mon.c (mon.2) 387 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:45:10.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:10 vm00 bash[28005]: cluster 2026-03-10T07:45:08.814836+0000 mgr.y (mgr.24407) 753 : cluster [DBG] pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:10.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:10 vm00 bash[28005]: audit 2026-03-10T07:45:09.942129+0000 mon.c (mon.2) 387 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:45:10.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:10 vm00 bash[20701]: cluster 2026-03-10T07:45:08.814836+0000 mgr.y (mgr.24407) 753 : cluster [DBG] pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:10.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:10 vm00 bash[20701]: audit 2026-03-10T07:45:09.942129+0000 mon.c (mon.2) 387 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:45:11.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:45:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:45:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:45:11.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:11 vm00 bash[28005]: cluster 2026-03-10T07:45:10.815613+0000 mgr.y (mgr.24407) 754 : cluster [DBG] pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:11.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:11 vm00 bash[20701]: cluster 2026-03-10T07:45:10.815613+0000 mgr.y (mgr.24407) 754 : cluster [DBG] pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:12.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:11 vm03 bash[23382]: cluster 2026-03-10T07:45:10.815613+0000 mgr.y (mgr.24407) 754 : cluster [DBG] pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:14.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:13 vm03 bash[23382]: cluster 2026-03-10T07:45:12.816019+0000 mgr.y (mgr.24407) 755 : cluster [DBG] pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:14.012 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:45:13 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:45:14.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:13 vm00 bash[28005]: cluster 2026-03-10T07:45:12.816019+0000 mgr.y (mgr.24407) 755 : cluster [DBG] pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:14.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:13 vm00 bash[20701]: cluster 2026-03-10T07:45:12.816019+0000 mgr.y (mgr.24407) 755 : cluster [DBG] pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:15.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:14 vm00 bash[28005]: audit 2026-03-10T07:45:13.727022+0000 mgr.y (mgr.24407) 756 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:45:15.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:14 vm00 bash[20701]: audit 2026-03-10T07:45:13.727022+0000 mgr.y (mgr.24407) 756 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:45:15.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:14 vm03 bash[23382]: audit 2026-03-10T07:45:13.727022+0000 mgr.y (mgr.24407) 756 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:45:16.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:15 vm00 bash[28005]: cluster 2026-03-10T07:45:14.816697+0000 mgr.y (mgr.24407) 757 : cluster [DBG] pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:16.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:15 vm00 bash[20701]: cluster 2026-03-10T07:45:14.816697+0000 mgr.y (mgr.24407) 757 : cluster [DBG] pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:16.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:15 vm03 bash[23382]: cluster 2026-03-10T07:45:14.816697+0000 mgr.y (mgr.24407) 757 : cluster [DBG] pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:17.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:16 vm03 bash[23382]: audit 2026-03-10T07:45:16.396199+0000 mon.c (mon.2) 388 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:45:17.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:16 vm03 bash[23382]: audit 2026-03-10T07:45:16.733697+0000 mon.c (mon.2) 389 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:45:17.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:16 vm03 bash[23382]: audit 2026-03-10T07:45:16.734736+0000 mon.c (mon.2) 390 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:45:17.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:16 vm03 bash[23382]: audit 2026-03-10T07:45:16.740683+0000 mon.a (mon.0) 3538 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:45:17.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:16 vm00 bash[28005]: audit 2026-03-10T07:45:16.396199+0000 mon.c (mon.2) 388 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:45:17.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:16 vm00 bash[28005]: audit 2026-03-10T07:45:16.733697+0000 mon.c (mon.2) 389 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:45:17.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:16 vm00 bash[28005]: audit 2026-03-10T07:45:16.734736+0000 mon.c (mon.2) 390 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:45:17.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:16 vm00 bash[28005]: audit 2026-03-10T07:45:16.740683+0000 mon.a (mon.0) 3538 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:45:17.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:16 vm00 bash[20701]: audit 2026-03-10T07:45:16.396199+0000 mon.c (mon.2) 388 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:45:17.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:16 vm00 bash[20701]: audit 2026-03-10T07:45:16.733697+0000 mon.c (mon.2) 389 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:45:17.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:16 vm00 bash[20701]: audit 2026-03-10T07:45:16.734736+0000 mon.c (mon.2) 390 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:45:17.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:16 vm00 bash[20701]: audit 2026-03-10T07:45:16.740683+0000 mon.a (mon.0) 3538 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:45:18.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:17 vm03 bash[23382]: cluster 2026-03-10T07:45:16.817113+0000 mgr.y (mgr.24407) 758 : cluster [DBG] pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:18.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:17 vm00 bash[28005]: cluster 2026-03-10T07:45:16.817113+0000 mgr.y (mgr.24407) 758 : cluster [DBG] pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:18.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:17 vm00 bash[20701]: cluster 2026-03-10T07:45:16.817113+0000 mgr.y (mgr.24407) 758 : cluster [DBG] pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:20.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:19 vm03 bash[23382]: cluster 2026-03-10T07:45:18.817621+0000 mgr.y (mgr.24407) 759 : cluster [DBG] pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:20.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:19 vm00 bash[28005]: cluster 2026-03-10T07:45:18.817621+0000 mgr.y (mgr.24407) 759 : cluster [DBG] pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:20.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:19 vm00 bash[20701]: cluster 2026-03-10T07:45:18.817621+0000 mgr.y (mgr.24407) 759 : cluster [DBG] pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:21.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:45:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:45:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:45:22.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:21 vm03 bash[23382]: cluster 2026-03-10T07:45:20.818338+0000 mgr.y (mgr.24407) 760 : cluster [DBG] pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:22.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:21 vm00 bash[28005]: cluster 2026-03-10T07:45:20.818338+0000 mgr.y (mgr.24407) 760 : cluster [DBG] pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:22.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:21 vm00 bash[20701]: cluster 2026-03-10T07:45:20.818338+0000 mgr.y (mgr.24407) 760 : cluster [DBG] pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:24.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:23 vm03 bash[23382]: cluster 2026-03-10T07:45:22.818735+0000 mgr.y (mgr.24407) 761 : cluster [DBG] pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:24.011 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:45:23 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:45:24.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:23 vm00 bash[28005]: cluster 2026-03-10T07:45:22.818735+0000 mgr.y (mgr.24407) 761 : cluster [DBG] pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:24.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:23 vm00 bash[20701]: cluster 2026-03-10T07:45:22.818735+0000 mgr.y (mgr.24407) 761 : cluster [DBG] pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:25.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:25 vm00 bash[28005]: audit 2026-03-10T07:45:23.736925+0000 mgr.y (mgr.24407) 762 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:45:25.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:25 vm00 bash[20701]: audit 2026-03-10T07:45:23.736925+0000 mgr.y (mgr.24407) 762 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:45:25.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:25 vm03 bash[23382]: audit 2026-03-10T07:45:23.736925+0000 mgr.y (mgr.24407) 762 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:45:26.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:26 vm00 bash[28005]: cluster 2026-03-10T07:45:24.819434+0000 mgr.y (mgr.24407) 763 : cluster [DBG] pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:26.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:26 vm00 bash[28005]: audit 2026-03-10T07:45:24.994901+0000 mon.c (mon.2) 391 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:45:26.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:26 vm00 bash[20701]: cluster 2026-03-10T07:45:24.819434+0000 mgr.y (mgr.24407) 763 : cluster [DBG] pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:26.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:26 vm00 bash[20701]: audit 2026-03-10T07:45:24.994901+0000 mon.c (mon.2) 391 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:45:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:26 vm03 bash[23382]: cluster 2026-03-10T07:45:24.819434+0000 mgr.y (mgr.24407) 763 : cluster [DBG] pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:26 vm03 bash[23382]: audit 2026-03-10T07:45:24.994901+0000 mon.c (mon.2) 391 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:45:28.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:28 vm00 bash[28005]: cluster 2026-03-10T07:45:26.819786+0000 mgr.y (mgr.24407) 764 : cluster [DBG] pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:28.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:28 vm00 bash[20701]: cluster 2026-03-10T07:45:26.819786+0000 mgr.y (mgr.24407) 764 : cluster [DBG] pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:28.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:28 vm03 bash[23382]: cluster 2026-03-10T07:45:26.819786+0000 mgr.y (mgr.24407) 764 : cluster [DBG] pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:30.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:30 vm03 bash[23382]: cluster 2026-03-10T07:45:28.820140+0000 mgr.y (mgr.24407) 765 : cluster [DBG] pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:30.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:30 vm00 bash[28005]: cluster 2026-03-10T07:45:28.820140+0000 mgr.y (mgr.24407) 765 : cluster [DBG] pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:30.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:30 vm00 bash[20701]: cluster 2026-03-10T07:45:28.820140+0000 mgr.y (mgr.24407) 765 : cluster [DBG] pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:31.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:45:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:45:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:45:32.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:32 vm03 bash[23382]: cluster 2026-03-10T07:45:30.820916+0000 mgr.y (mgr.24407) 766 : cluster [DBG] pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:32.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:32 vm00 bash[28005]: cluster 2026-03-10T07:45:30.820916+0000 mgr.y (mgr.24407) 766 : cluster [DBG] pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:32.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:32 vm00 bash[20701]: cluster 2026-03-10T07:45:30.820916+0000 mgr.y (mgr.24407) 766 : cluster [DBG] pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:34.011 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:45:33 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:45:34.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:34 vm00 bash[20701]: cluster 2026-03-10T07:45:32.821251+0000 mgr.y (mgr.24407) 767 : cluster [DBG] pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:34.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:34 vm00 bash[28005]: cluster 2026-03-10T07:45:32.821251+0000 mgr.y (mgr.24407) 767 : cluster [DBG] pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:35.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:34 vm03 bash[23382]: cluster 2026-03-10T07:45:32.821251+0000 mgr.y (mgr.24407) 767 : cluster [DBG] pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:35 vm00 bash[20701]: audit 2026-03-10T07:45:33.740218+0000 mgr.y (mgr.24407) 768 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:45:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:35 vm00 bash[20701]: cluster 2026-03-10T07:45:34.822215+0000 mgr.y (mgr.24407) 769 : cluster [DBG] pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:35.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:35 vm00 bash[28005]: audit 2026-03-10T07:45:33.740218+0000 mgr.y (mgr.24407) 768 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:45:35.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:35 vm00 bash[28005]: cluster 2026-03-10T07:45:34.822215+0000 mgr.y (mgr.24407) 769 : cluster [DBG] pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:36.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:35 vm03 bash[23382]: audit 2026-03-10T07:45:33.740218+0000 mgr.y (mgr.24407) 768 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:45:36.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:35 vm03 bash[23382]: cluster 2026-03-10T07:45:34.822215+0000 mgr.y (mgr.24407) 769 : cluster [DBG] pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:38.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:37 vm03 bash[23382]: cluster 2026-03-10T07:45:36.822589+0000 mgr.y (mgr.24407) 770 : cluster [DBG] pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:38.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:37 vm00 bash[28005]: cluster 2026-03-10T07:45:36.822589+0000 mgr.y (mgr.24407) 770 : cluster [DBG] pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:38.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:37 vm00 bash[20701]: cluster 2026-03-10T07:45:36.822589+0000 mgr.y (mgr.24407) 770 : cluster [DBG] pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:40.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:39 vm03 bash[23382]: cluster 2026-03-10T07:45:38.822924+0000 mgr.y (mgr.24407) 771 : cluster [DBG] pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:40.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:39 vm00 bash[28005]: cluster 2026-03-10T07:45:38.822924+0000 mgr.y (mgr.24407) 771 : cluster [DBG] pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:40.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:39 vm00 bash[20701]: cluster 2026-03-10T07:45:38.822924+0000 mgr.y (mgr.24407) 771 : cluster [DBG] pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:41.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:40 vm03 bash[23382]: audit 2026-03-10T07:45:40.109157+0000 mon.c (mon.2) 392 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:45:41.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:40 vm00 bash[28005]: audit 2026-03-10T07:45:40.109157+0000 mon.c (mon.2) 392 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:45:41.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:45:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:45:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:45:41.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:40 vm00 bash[20701]: audit 2026-03-10T07:45:40.109157+0000 mon.c (mon.2) 392 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:45:42.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:41 vm03 bash[23382]: cluster 2026-03-10T07:45:40.823633+0000 mgr.y (mgr.24407) 772 : cluster [DBG] pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:42.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:41 vm00 bash[28005]: cluster 2026-03-10T07:45:40.823633+0000 mgr.y (mgr.24407) 772 : cluster [DBG] pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:42.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:41 vm00 bash[20701]: cluster 2026-03-10T07:45:40.823633+0000 mgr.y (mgr.24407) 772 : cluster [DBG] pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:45:44.011 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:45:43 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:45:44.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:43 vm03 bash[23382]: cluster 2026-03-10T07:45:42.823943+0000 mgr.y (mgr.24407) 773 : cluster [DBG] pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:44.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:43 vm00 bash[28005]: cluster 2026-03-10T07:45:42.823943+0000 mgr.y (mgr.24407) 773 : cluster [DBG] pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:44.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:43 vm00 bash[20701]: cluster 2026-03-10T07:45:42.823943+0000 mgr.y (mgr.24407) 773 : cluster [DBG] pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:45:45.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:44 vm03 bash[23382]: audit 2026-03-10T07:45:43.749989+0000 mgr.y (mgr.24407) 774 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:45:45.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:44 vm00 bash[28005]: audit 2026-03-10T07:45:43.749989+0000 mgr.y (mgr.24407) 774 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:45:45.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:44 vm00 bash[20701]: audit 2026-03-10T07:45:43.749989+0000 mgr.y (mgr.24407) 774 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:45:46.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:45 vm03 bash[23382]: cluster 2026-03-10T07:45:44.824574+0000 mgr.y (mgr.24407) 775 : cluster [DBG] pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T07:45:46.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:45 vm00 bash[28005]: cluster 2026-03-10T07:45:44.824574+0000 mgr.y (mgr.24407) 775 : cluster [DBG] pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T07:45:46.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:45 vm00 bash[20701]: cluster 2026-03-10T07:45:44.824574+0000 mgr.y (mgr.24407) 775 : cluster [DBG] pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T07:45:48.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:47 vm03 bash[23382]: cluster 2026-03-10T07:45:46.824905+0000 mgr.y (mgr.24407) 776 : cluster [DBG] pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T07:45:48.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:47 vm00 bash[28005]: cluster 2026-03-10T07:45:46.824905+0000 mgr.y (mgr.24407) 776 : cluster [DBG] pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T07:45:48.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:47 vm00 bash[20701]: cluster 2026-03-10T07:45:46.824905+0000 mgr.y (mgr.24407) 776 : cluster [DBG] pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T07:45:50.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:49 vm03 bash[23382]: cluster 2026-03-10T07:45:48.825192+0000 mgr.y (mgr.24407) 777 : cluster [DBG] pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T07:45:50.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:49 vm00 bash[28005]: cluster 2026-03-10T07:45:48.825192+0000 mgr.y (mgr.24407) 777 : cluster [DBG] pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T07:45:50.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:49 vm00 bash[20701]: cluster 2026-03-10T07:45:48.825192+0000 mgr.y (mgr.24407) 777 : cluster [DBG] pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T07:45:51.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:45:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:45:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:45:52.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:52 vm03 bash[23382]: cluster 2026-03-10T07:45:50.825857+0000 mgr.y (mgr.24407) 778 : cluster [DBG] pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T07:45:52.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:52 vm00 bash[28005]: cluster 2026-03-10T07:45:50.825857+0000 mgr.y (mgr.24407) 778 : cluster [DBG] pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T07:45:52.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:52 vm00 bash[20701]: cluster 2026-03-10T07:45:50.825857+0000 mgr.y (mgr.24407) 778 : cluster [DBG] pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T07:45:54.011 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:45:53 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:45:54.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:54 vm03 bash[23382]: cluster 2026-03-10T07:45:52.826205+0000 mgr.y (mgr.24407) 779 : cluster [DBG] pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T07:45:54.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:54 vm00 bash[28005]: cluster 2026-03-10T07:45:52.826205+0000 mgr.y (mgr.24407) 779 : cluster [DBG] pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T07:45:54.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:54 vm00 bash[20701]: cluster 2026-03-10T07:45:52.826205+0000 mgr.y (mgr.24407) 779 : cluster [DBG] pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T07:45:55.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:55 vm00 bash[28005]: audit 2026-03-10T07:45:53.759857+0000 mgr.y (mgr.24407) 780 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:45:55.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:55 vm00 bash[20701]: audit 2026-03-10T07:45:53.759857+0000 mgr.y (mgr.24407) 780 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:45:55.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:55 vm03 bash[23382]: audit 2026-03-10T07:45:53.759857+0000 mgr.y (mgr.24407) 780 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:45:56.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:56 vm00 bash[28005]: cluster 2026-03-10T07:45:54.826712+0000 mgr.y (mgr.24407) 781 : cluster [DBG] pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T07:45:56.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:56 vm00 bash[28005]: audit 2026-03-10T07:45:55.116540+0000 mon.c (mon.2) 393 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:45:56.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:56 vm00 bash[20701]: cluster 2026-03-10T07:45:54.826712+0000 mgr.y (mgr.24407) 781 : cluster [DBG] pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T07:45:56.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:56 vm00 bash[20701]: audit 2026-03-10T07:45:55.116540+0000 mon.c (mon.2) 393 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:45:56.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:56 vm03 bash[23382]: cluster 2026-03-10T07:45:54.826712+0000 mgr.y (mgr.24407) 781 : cluster [DBG] pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T07:45:56.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:56 vm03 bash[23382]: audit 2026-03-10T07:45:55.116540+0000 mon.c (mon.2) 393 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:45:58.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:45:58 vm00 bash[28005]: cluster 2026-03-10T07:45:56.827025+0000 mgr.y (mgr.24407) 782 : cluster [DBG] pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s
2026-03-10T07:45:58.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:45:58 vm00 bash[20701]: cluster 2026-03-10T07:45:56.827025+0000 mgr.y (mgr.24407) 782 : cluster [DBG] pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s
2026-03-10T07:45:58.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:58 vm03 bash[23382]: cluster 2026-03-10T07:45:56.827025+0000 mgr.y (mgr.24407) 782 : cluster [DBG] pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s
2026-03-10T07:45:58.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:45:58 vm03 bash[23382]: cluster 2026-03-10T07:45:56.827025+0000 mgr.y (mgr.24407) 782 : cluster [DBG] pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 
GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s 2026-03-10T07:46:00.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:00 vm00 bash[28005]: cluster 2026-03-10T07:45:58.827370+0000 mgr.y (mgr.24407) 783 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s 2026-03-10T07:46:00.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:00 vm00 bash[28005]: cluster 2026-03-10T07:45:58.827370+0000 mgr.y (mgr.24407) 783 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s 2026-03-10T07:46:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:00 vm00 bash[20701]: cluster 2026-03-10T07:45:58.827370+0000 mgr.y (mgr.24407) 783 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s 2026-03-10T07:46:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:00 vm00 bash[20701]: cluster 2026-03-10T07:45:58.827370+0000 mgr.y (mgr.24407) 783 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s 2026-03-10T07:46:00.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:00 vm03 bash[23382]: cluster 2026-03-10T07:45:58.827370+0000 mgr.y (mgr.24407) 783 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s 2026-03-10T07:46:00.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:00 vm03 bash[23382]: cluster 2026-03-10T07:45:58.827370+0000 mgr.y (mgr.24407) 783 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s 2026-03-10T07:46:01.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:46:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:46:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:46:02.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:02 vm00 bash[28005]: cluster 2026-03-10T07:46:00.828086+0000 mgr.y (mgr.24407) 784 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s 2026-03-10T07:46:02.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:02 vm00 bash[28005]: cluster 2026-03-10T07:46:00.828086+0000 mgr.y (mgr.24407) 784 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s 2026-03-10T07:46:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:02 vm00 bash[20701]: cluster 2026-03-10T07:46:00.828086+0000 mgr.y (mgr.24407) 784 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s 2026-03-10T07:46:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:02 vm00 bash[20701]: cluster 2026-03-10T07:46:00.828086+0000 mgr.y (mgr.24407) 784 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s 2026-03-10T07:46:02.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:02 vm03 bash[23382]: cluster 2026-03-10T07:46:00.828086+0000 mgr.y (mgr.24407) 784 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s 
2026-03-10T07:46:02.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:02 vm03 bash[23382]: cluster 2026-03-10T07:46:00.828086+0000 mgr.y (mgr.24407) 784 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s 2026-03-10T07:46:04.055 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:46:03 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:46:04.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:04 vm00 bash[28005]: cluster 2026-03-10T07:46:02.828506+0000 mgr.y (mgr.24407) 785 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:04.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:04 vm00 bash[28005]: cluster 2026-03-10T07:46:02.828506+0000 mgr.y (mgr.24407) 785 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:04.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:04 vm00 bash[20701]: cluster 2026-03-10T07:46:02.828506+0000 mgr.y (mgr.24407) 785 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:04.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:04 vm00 bash[20701]: cluster 2026-03-10T07:46:02.828506+0000 mgr.y (mgr.24407) 785 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:04.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:04 vm03 bash[23382]: cluster 2026-03-10T07:46:02.828506+0000 mgr.y (mgr.24407) 785 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:04.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:04 vm03 bash[23382]: cluster 2026-03-10T07:46:02.828506+0000 mgr.y (mgr.24407) 785 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:05.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:05 vm00 bash[28005]: audit 2026-03-10T07:46:03.768474+0000 mgr.y (mgr.24407) 786 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:46:05.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:05 vm00 bash[28005]: audit 2026-03-10T07:46:03.768474+0000 mgr.y (mgr.24407) 786 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:46:05.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:05 vm00 bash[20701]: audit 2026-03-10T07:46:03.768474+0000 mgr.y (mgr.24407) 786 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:46:05.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:05 vm00 bash[20701]: audit 2026-03-10T07:46:03.768474+0000 mgr.y (mgr.24407) 786 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:46:05.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:05 vm03 bash[23382]: audit 2026-03-10T07:46:03.768474+0000 mgr.y (mgr.24407) 786 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": 
"service status", "format": "json"}]: dispatch 2026-03-10T07:46:05.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:05 vm03 bash[23382]: audit 2026-03-10T07:46:03.768474+0000 mgr.y (mgr.24407) 786 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:46:06.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:06 vm00 bash[28005]: cluster 2026-03-10T07:46:04.829071+0000 mgr.y (mgr.24407) 787 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:06.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:06 vm00 bash[28005]: cluster 2026-03-10T07:46:04.829071+0000 mgr.y (mgr.24407) 787 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:06.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:06 vm00 bash[20701]: cluster 2026-03-10T07:46:04.829071+0000 mgr.y (mgr.24407) 787 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:06.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:06 vm00 bash[20701]: cluster 2026-03-10T07:46:04.829071+0000 mgr.y (mgr.24407) 787 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:06.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:06 vm03 bash[23382]: cluster 2026-03-10T07:46:04.829071+0000 mgr.y (mgr.24407) 787 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:06.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:06 vm03 bash[23382]: cluster 2026-03-10T07:46:04.829071+0000 mgr.y (mgr.24407) 787 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:08.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:08 vm00 bash[28005]: cluster 2026-03-10T07:46:06.829449+0000 mgr.y (mgr.24407) 788 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:08.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:08 vm00 bash[28005]: cluster 2026-03-10T07:46:06.829449+0000 mgr.y (mgr.24407) 788 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:08.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:08 vm00 bash[20701]: cluster 2026-03-10T07:46:06.829449+0000 mgr.y (mgr.24407) 788 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:08.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:08 vm00 bash[20701]: cluster 2026-03-10T07:46:06.829449+0000 mgr.y (mgr.24407) 788 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:08.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:08 vm03 bash[23382]: cluster 2026-03-10T07:46:06.829449+0000 mgr.y (mgr.24407) 788 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:08.511 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:08 vm03 bash[23382]: cluster 2026-03-10T07:46:06.829449+0000 mgr.y (mgr.24407) 788 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:10.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:10 vm00 bash[28005]: cluster 2026-03-10T07:46:08.829751+0000 mgr.y (mgr.24407) 789 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:10.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:10 vm00 bash[28005]: cluster 2026-03-10T07:46:08.829751+0000 mgr.y (mgr.24407) 789 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:10.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:10 vm00 bash[20701]: cluster 2026-03-10T07:46:08.829751+0000 mgr.y (mgr.24407) 789 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:10.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:10 vm00 bash[20701]: cluster 2026-03-10T07:46:08.829751+0000 mgr.y (mgr.24407) 789 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:10.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:10 vm03 bash[23382]: cluster 2026-03-10T07:46:08.829751+0000 mgr.y (mgr.24407) 789 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:10.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:10 vm03 bash[23382]: cluster 2026-03-10T07:46:08.829751+0000 mgr.y (mgr.24407) 789 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:11.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:11 vm00 bash[28005]: audit 2026-03-10T07:46:10.123813+0000 mon.c (mon.2) 394 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:46:11.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:11 vm00 bash[28005]: audit 2026-03-10T07:46:10.123813+0000 mon.c (mon.2) 394 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:46:11.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:46:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:46:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:46:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:11 vm00 bash[20701]: audit 2026-03-10T07:46:10.123813+0000 mon.c (mon.2) 394 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:46:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:11 vm00 bash[20701]: audit 2026-03-10T07:46:10.123813+0000 mon.c (mon.2) 394 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:46:11.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:11 vm03 bash[23382]: audit 2026-03-10T07:46:10.123813+0000 mon.c (mon.2) 394 : audit [DBG] from='mgr.24407 
192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:46:11.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:11 vm03 bash[23382]: audit 2026-03-10T07:46:10.123813+0000 mon.c (mon.2) 394 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:46:12.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:12 vm00 bash[28005]: cluster 2026-03-10T07:46:10.830554+0000 mgr.y (mgr.24407) 790 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:12.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:12 vm00 bash[28005]: cluster 2026-03-10T07:46:10.830554+0000 mgr.y (mgr.24407) 790 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:12.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:12 vm00 bash[20701]: cluster 2026-03-10T07:46:10.830554+0000 mgr.y (mgr.24407) 790 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:12.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:12 vm00 bash[20701]: cluster 2026-03-10T07:46:10.830554+0000 mgr.y (mgr.24407) 790 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:12.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:12 vm03 bash[23382]: cluster 2026-03-10T07:46:10.830554+0000 mgr.y (mgr.24407) 790 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:12.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:12 vm03 bash[23382]: cluster 2026-03-10T07:46:10.830554+0000 mgr.y (mgr.24407) 790 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:14.145 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:46:13 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:46:14.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:14 vm03 bash[23382]: cluster 2026-03-10T07:46:12.831003+0000 mgr.y (mgr.24407) 791 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:14.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:14 vm03 bash[23382]: cluster 2026-03-10T07:46:12.831003+0000 mgr.y (mgr.24407) 791 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:14 vm00 bash[28005]: cluster 2026-03-10T07:46:12.831003+0000 mgr.y (mgr.24407) 791 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:14.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:14 vm00 bash[28005]: cluster 2026-03-10T07:46:12.831003+0000 mgr.y (mgr.24407) 791 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:14.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:14 vm00 bash[20701]: cluster 
2026-03-10T07:46:12.831003+0000 mgr.y (mgr.24407) 791 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:14.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:14 vm00 bash[20701]: cluster 2026-03-10T07:46:12.831003+0000 mgr.y (mgr.24407) 791 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:15.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:15 vm03 bash[23382]: audit 2026-03-10T07:46:13.776288+0000 mgr.y (mgr.24407) 792 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:46:15.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:15 vm03 bash[23382]: audit 2026-03-10T07:46:13.776288+0000 mgr.y (mgr.24407) 792 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:46:15.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:15 vm00 bash[28005]: audit 2026-03-10T07:46:13.776288+0000 mgr.y (mgr.24407) 792 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:46:15.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:15 vm00 bash[28005]: audit 2026-03-10T07:46:13.776288+0000 mgr.y (mgr.24407) 792 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:46:15.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:15 vm00 bash[20701]: audit 2026-03-10T07:46:13.776288+0000 mgr.y (mgr.24407) 792 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:46:15.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:15 vm00 bash[20701]: audit 2026-03-10T07:46:13.776288+0000 mgr.y (mgr.24407) 792 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:46:16.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:16 vm03 bash[23382]: cluster 2026-03-10T07:46:14.831780+0000 mgr.y (mgr.24407) 793 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:16.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:16 vm03 bash[23382]: cluster 2026-03-10T07:46:14.831780+0000 mgr.y (mgr.24407) 793 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:16.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:16 vm00 bash[28005]: cluster 2026-03-10T07:46:14.831780+0000 mgr.y (mgr.24407) 793 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:16.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:16 vm00 bash[28005]: cluster 2026-03-10T07:46:14.831780+0000 mgr.y (mgr.24407) 793 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:16.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:16 vm00 bash[20701]: cluster 2026-03-10T07:46:14.831780+0000 mgr.y (mgr.24407) 793 : cluster [DBG] pgmap v1280: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:16.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:16 vm00 bash[20701]: cluster 2026-03-10T07:46:14.831780+0000 mgr.y (mgr.24407) 793 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:17.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:17 vm03 bash[23382]: audit 2026-03-10T07:46:16.783839+0000 mon.c (mon.2) 395 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:46:17.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:17 vm03 bash[23382]: audit 2026-03-10T07:46:16.783839+0000 mon.c (mon.2) 395 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:46:17.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:17 vm03 bash[23382]: audit 2026-03-10T07:46:17.110936+0000 mon.c (mon.2) 396 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:46:17.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:17 vm03 bash[23382]: audit 2026-03-10T07:46:17.110936+0000 mon.c (mon.2) 396 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:46:17.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:17 vm03 bash[23382]: audit 2026-03-10T07:46:17.112016+0000 mon.c (mon.2) 397 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:46:17.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:17 vm03 bash[23382]: audit 2026-03-10T07:46:17.112016+0000 mon.c (mon.2) 397 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:46:17.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:17 vm03 bash[23382]: audit 2026-03-10T07:46:17.117606+0000 mon.a (mon.0) 3539 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:46:17.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:17 vm03 bash[23382]: audit 2026-03-10T07:46:17.117606+0000 mon.a (mon.0) 3539 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:46:17.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:17 vm00 bash[28005]: audit 2026-03-10T07:46:16.783839+0000 mon.c (mon.2) 395 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:46:17.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:17 vm00 bash[28005]: audit 2026-03-10T07:46:16.783839+0000 mon.c (mon.2) 395 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:46:17.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:17 vm00 bash[28005]: audit 2026-03-10T07:46:17.110936+0000 mon.c (mon.2) 396 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:46:17.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:17 vm00 bash[28005]: audit 2026-03-10T07:46:17.110936+0000 mon.c (mon.2) 396 : audit [DBG] from='mgr.24407 
192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:46:17.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:17 vm00 bash[28005]: audit 2026-03-10T07:46:17.112016+0000 mon.c (mon.2) 397 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:46:17.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:17 vm00 bash[28005]: audit 2026-03-10T07:46:17.112016+0000 mon.c (mon.2) 397 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:46:17.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:17 vm00 bash[28005]: audit 2026-03-10T07:46:17.117606+0000 mon.a (mon.0) 3539 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:46:17.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:17 vm00 bash[28005]: audit 2026-03-10T07:46:17.117606+0000 mon.a (mon.0) 3539 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:46:17.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:17 vm00 bash[20701]: audit 2026-03-10T07:46:16.783839+0000 mon.c (mon.2) 395 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:46:17.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:17 vm00 bash[20701]: audit 2026-03-10T07:46:16.783839+0000 mon.c (mon.2) 395 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:46:17.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:17 vm00 bash[20701]: audit 2026-03-10T07:46:17.110936+0000 mon.c (mon.2) 396 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:46:17.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:17 vm00 bash[20701]: audit 2026-03-10T07:46:17.110936+0000 mon.c (mon.2) 396 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:46:17.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:17 vm00 bash[20701]: audit 2026-03-10T07:46:17.112016+0000 mon.c (mon.2) 397 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:46:17.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:17 vm00 bash[20701]: audit 2026-03-10T07:46:17.112016+0000 mon.c (mon.2) 397 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:46:17.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:17 vm00 bash[20701]: audit 2026-03-10T07:46:17.117606+0000 mon.a (mon.0) 3539 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:46:17.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:17 vm00 bash[20701]: audit 2026-03-10T07:46:17.117606+0000 mon.a (mon.0) 3539 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:46:18.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:18 vm03 bash[23382]: cluster 2026-03-10T07:46:16.832125+0000 mgr.y (mgr.24407) 794 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:18.512 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:18 vm03 bash[23382]: cluster 2026-03-10T07:46:16.832125+0000 mgr.y (mgr.24407) 794 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:18.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:18 vm00 bash[28005]: cluster 2026-03-10T07:46:16.832125+0000 mgr.y (mgr.24407) 794 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:18.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:18 vm00 bash[28005]: cluster 2026-03-10T07:46:16.832125+0000 mgr.y (mgr.24407) 794 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:18.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:18 vm00 bash[20701]: cluster 2026-03-10T07:46:16.832125+0000 mgr.y (mgr.24407) 794 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:18.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:18 vm00 bash[20701]: cluster 2026-03-10T07:46:16.832125+0000 mgr.y (mgr.24407) 794 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:20.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:20 vm03 bash[23382]: cluster 2026-03-10T07:46:18.832481+0000 mgr.y (mgr.24407) 795 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:20.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:20 vm03 bash[23382]: cluster 2026-03-10T07:46:18.832481+0000 mgr.y (mgr.24407) 795 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:20.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:20 vm00 bash[28005]: cluster 2026-03-10T07:46:18.832481+0000 mgr.y (mgr.24407) 795 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:20.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:20 vm00 bash[28005]: cluster 2026-03-10T07:46:18.832481+0000 mgr.y (mgr.24407) 795 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:20.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:20 vm00 bash[20701]: cluster 2026-03-10T07:46:18.832481+0000 mgr.y (mgr.24407) 795 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:20.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:20 vm00 bash[20701]: cluster 2026-03-10T07:46:18.832481+0000 mgr.y (mgr.24407) 795 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:21.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:46:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:46:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:46:22.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:22 vm03 bash[23382]: cluster 2026-03-10T07:46:20.833206+0000 mgr.y (mgr.24407) 796 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 
KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:22.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:22 vm03 bash[23382]: cluster 2026-03-10T07:46:20.833206+0000 mgr.y (mgr.24407) 796 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:22.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:22 vm00 bash[28005]: cluster 2026-03-10T07:46:20.833206+0000 mgr.y (mgr.24407) 796 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:22.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:22 vm00 bash[28005]: cluster 2026-03-10T07:46:20.833206+0000 mgr.y (mgr.24407) 796 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:22.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:22 vm00 bash[20701]: cluster 2026-03-10T07:46:20.833206+0000 mgr.y (mgr.24407) 796 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:22.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:22 vm00 bash[20701]: cluster 2026-03-10T07:46:20.833206+0000 mgr.y (mgr.24407) 796 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:24.186 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:46:23 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:46:24.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:24 vm03 bash[23382]: cluster 2026-03-10T07:46:22.833538+0000 mgr.y (mgr.24407) 797 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:24.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:24 vm03 bash[23382]: cluster 2026-03-10T07:46:22.833538+0000 mgr.y (mgr.24407) 797 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:24.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:24 vm00 bash[28005]: cluster 2026-03-10T07:46:22.833538+0000 mgr.y (mgr.24407) 797 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:24.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:24 vm00 bash[28005]: cluster 2026-03-10T07:46:22.833538+0000 mgr.y (mgr.24407) 797 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:24.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:24 vm00 bash[20701]: cluster 2026-03-10T07:46:22.833538+0000 mgr.y (mgr.24407) 797 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:24.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:24 vm00 bash[20701]: cluster 2026-03-10T07:46:22.833538+0000 mgr.y (mgr.24407) 797 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:25.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:25 vm03 bash[23382]: audit 2026-03-10T07:46:23.786900+0000 mgr.y (mgr.24407) 798 : audit 
[DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:46:25.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:25 vm03 bash[23382]: audit 2026-03-10T07:46:23.786900+0000 mgr.y (mgr.24407) 798 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:46:25.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:25 vm03 bash[23382]: audit 2026-03-10T07:46:25.129967+0000 mon.c (mon.2) 398 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:46:25.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:25 vm03 bash[23382]: audit 2026-03-10T07:46:25.129967+0000 mon.c (mon.2) 398 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:46:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:25 vm00 bash[28005]: audit 2026-03-10T07:46:23.786900+0000 mgr.y (mgr.24407) 798 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:46:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:25 vm00 bash[28005]: audit 2026-03-10T07:46:23.786900+0000 mgr.y (mgr.24407) 798 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:46:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:25 vm00 bash[28005]: audit 2026-03-10T07:46:25.129967+0000 mon.c (mon.2) 398 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:46:25.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:25 vm00 bash[28005]: audit 2026-03-10T07:46:25.129967+0000 mon.c (mon.2) 398 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:46:25.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:25 vm00 bash[20701]: audit 2026-03-10T07:46:23.786900+0000 mgr.y (mgr.24407) 798 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:46:25.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:25 vm00 bash[20701]: audit 2026-03-10T07:46:23.786900+0000 mgr.y (mgr.24407) 798 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:46:25.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:25 vm00 bash[20701]: audit 2026-03-10T07:46:25.129967+0000 mon.c (mon.2) 398 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:46:25.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:25 vm00 bash[20701]: audit 2026-03-10T07:46:25.129967+0000 mon.c (mon.2) 398 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:46:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:26 vm03 bash[23382]: cluster 2026-03-10T07:46:24.834169+0000 mgr.y (mgr.24407) 799 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB 
/ 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:26 vm03 bash[23382]: cluster 2026-03-10T07:46:24.834169+0000 mgr.y (mgr.24407) 799 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:26.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:26 vm00 bash[28005]: cluster 2026-03-10T07:46:24.834169+0000 mgr.y (mgr.24407) 799 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:26.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:26 vm00 bash[28005]: cluster 2026-03-10T07:46:24.834169+0000 mgr.y (mgr.24407) 799 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:26.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:26 vm00 bash[20701]: cluster 2026-03-10T07:46:24.834169+0000 mgr.y (mgr.24407) 799 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:26.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:26 vm00 bash[20701]: cluster 2026-03-10T07:46:24.834169+0000 mgr.y (mgr.24407) 799 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:28.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:28 vm03 bash[23382]: cluster 2026-03-10T07:46:26.834521+0000 mgr.y (mgr.24407) 800 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:28.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:28 vm03 bash[23382]: cluster 2026-03-10T07:46:26.834521+0000 mgr.y (mgr.24407) 800 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:28.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:28 vm00 bash[28005]: cluster 2026-03-10T07:46:26.834521+0000 mgr.y (mgr.24407) 800 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:28.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:28 vm00 bash[28005]: cluster 2026-03-10T07:46:26.834521+0000 mgr.y (mgr.24407) 800 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:28.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:28 vm00 bash[20701]: cluster 2026-03-10T07:46:26.834521+0000 mgr.y (mgr.24407) 800 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:28.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:28 vm00 bash[20701]: cluster 2026-03-10T07:46:26.834521+0000 mgr.y (mgr.24407) 800 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:30.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:30 vm03 bash[23382]: cluster 2026-03-10T07:46:28.834877+0000 mgr.y (mgr.24407) 801 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:30.511 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:30 vm03 bash[23382]: cluster 2026-03-10T07:46:28.834877+0000 mgr.y (mgr.24407) 801 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:30.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:30 vm00 bash[28005]: cluster 2026-03-10T07:46:28.834877+0000 mgr.y (mgr.24407) 801 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:30.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:30 vm00 bash[28005]: cluster 2026-03-10T07:46:28.834877+0000 mgr.y (mgr.24407) 801 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:30.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:30 vm00 bash[20701]: cluster 2026-03-10T07:46:28.834877+0000 mgr.y (mgr.24407) 801 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:30.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:30 vm00 bash[20701]: cluster 2026-03-10T07:46:28.834877+0000 mgr.y (mgr.24407) 801 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:31.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:46:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:46:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:46:32.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:32 vm00 bash[28005]: cluster 2026-03-10T07:46:30.835636+0000 mgr.y (mgr.24407) 802 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:32.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:32 vm00 bash[28005]: cluster 2026-03-10T07:46:30.835636+0000 mgr.y (mgr.24407) 802 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:32.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:32 vm00 bash[20701]: cluster 2026-03-10T07:46:30.835636+0000 mgr.y (mgr.24407) 802 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:32.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:32 vm00 bash[20701]: cluster 2026-03-10T07:46:30.835636+0000 mgr.y (mgr.24407) 802 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:32.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:32 vm03 bash[23382]: cluster 2026-03-10T07:46:30.835636+0000 mgr.y (mgr.24407) 802 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:32.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:32 vm03 bash[23382]: cluster 2026-03-10T07:46:30.835636+0000 mgr.y (mgr.24407) 802 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:46:33.511 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:46:33 vm03 bash[51371]: logger=cleanup t=2026-03-10T07:46:33.108642414Z level=info msg="Completed cleanup jobs" 
duration=1.978085ms 2026-03-10T07:46:33.511 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:46:33 vm03 bash[51371]: logger=plugins.update.checker t=2026-03-10T07:46:33.249333677Z level=info msg="Update check succeeded" duration=47.404485ms 2026-03-10T07:46:34.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:46:33 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:46:34.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:34 vm00 bash[28005]: cluster 2026-03-10T07:46:32.836004+0000 mgr.y (mgr.24407) 803 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:34.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:34 vm00 bash[28005]: cluster 2026-03-10T07:46:32.836004+0000 mgr.y (mgr.24407) 803 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:34.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:34 vm00 bash[20701]: cluster 2026-03-10T07:46:32.836004+0000 mgr.y (mgr.24407) 803 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:34.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:34 vm00 bash[20701]: cluster 2026-03-10T07:46:32.836004+0000 mgr.y (mgr.24407) 803 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:34.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:34 vm03 bash[23382]: cluster 2026-03-10T07:46:32.836004+0000 mgr.y (mgr.24407) 803 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:34.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:34 vm03 bash[23382]: cluster 2026-03-10T07:46:32.836004+0000 mgr.y (mgr.24407) 803 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:46:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:35 vm00 bash[28005]: audit 2026-03-10T07:46:33.794185+0000 mgr.y (mgr.24407) 804 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:46:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:35 vm00 bash[28005]: audit 2026-03-10T07:46:33.794185+0000 mgr.y (mgr.24407) 804 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:46:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:35 vm00 bash[20701]: audit 2026-03-10T07:46:33.794185+0000 mgr.y (mgr.24407) 804 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:46:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:35 vm00 bash[20701]: audit 2026-03-10T07:46:33.794185+0000 mgr.y (mgr.24407) 804 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:46:35.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:35 vm03 bash[23382]: audit 2026-03-10T07:46:33.794185+0000 mgr.y (mgr.24407) 804 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-10T07:46:36.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:36 vm00 bash[28005]: cluster 2026-03-10T07:46:34.836640+0000 mgr.y (mgr.24407) 805 : cluster [DBG] pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:46:36.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:36 vm00 bash[20701]: cluster 2026-03-10T07:46:34.836640+0000 mgr.y (mgr.24407) 805 : cluster [DBG] pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:46:36.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:36 vm03 bash[23382]: cluster 2026-03-10T07:46:34.836640+0000 mgr.y (mgr.24407) 805 : cluster [DBG] pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:46:38.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:38 vm00 bash[28005]: cluster 2026-03-10T07:46:36.836988+0000 mgr.y (mgr.24407) 806 : cluster [DBG] pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:46:38.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:38 vm00 bash[20701]: cluster 2026-03-10T07:46:36.836988+0000 mgr.y (mgr.24407) 806 : cluster [DBG] pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:46:38.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:38 vm03 bash[23382]: cluster 2026-03-10T07:46:36.836988+0000 mgr.y (mgr.24407) 806 : cluster [DBG] pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:46:40.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:40 vm00 bash[28005]: cluster 2026-03-10T07:46:38.837308+0000 mgr.y (mgr.24407) 807 : cluster [DBG] pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:46:40.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:40 vm00 bash[28005]: audit 2026-03-10T07:46:40.136969+0000 mon.c (mon.2) 399 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:46:40.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:40 vm00 bash[20701]: cluster 2026-03-10T07:46:38.837308+0000 mgr.y (mgr.24407) 807 : cluster [DBG] pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:46:40.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:40 vm00 bash[20701]: audit 2026-03-10T07:46:40.136969+0000 mon.c (mon.2) 399 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:46:40.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:40 vm03 bash[23382]: cluster 2026-03-10T07:46:38.837308+0000 mgr.y (mgr.24407) 807 : cluster [DBG] pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:46:40.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:40 vm03 bash[23382]: audit 2026-03-10T07:46:40.136969+0000 mon.c (mon.2) 399 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:46:41.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:46:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:46:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:46:42.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:42 vm00 bash[28005]: cluster 2026-03-10T07:46:40.837910+0000 mgr.y (mgr.24407) 808 : cluster [DBG] pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:46:42.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:42 vm00 bash[20701]: cluster 2026-03-10T07:46:40.837910+0000 mgr.y (mgr.24407) 808 : cluster [DBG] pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:46:42.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:42 vm03 bash[23382]: cluster 2026-03-10T07:46:40.837910+0000 mgr.y (mgr.24407) 808 : cluster [DBG] pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:46:44.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:46:43 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:46:44.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:44 vm03 bash[23382]: cluster 2026-03-10T07:46:42.838260+0000 mgr.y (mgr.24407) 809 : cluster [DBG] pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:46:44.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:44 vm00 bash[28005]: cluster 2026-03-10T07:46:42.838260+0000 mgr.y (mgr.24407) 809 : cluster [DBG] pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:46:44.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:44 vm00 bash[20701]: cluster 2026-03-10T07:46:42.838260+0000 mgr.y (mgr.24407) 809 : cluster [DBG] pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:46:45.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:45 vm03 bash[23382]: audit 2026-03-10T07:46:43.804813+0000 mgr.y (mgr.24407) 810 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:46:45.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:45 vm00 bash[28005]: audit 2026-03-10T07:46:43.804813+0000 mgr.y (mgr.24407) 810 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:46:45.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:45 vm00 bash[20701]: audit 2026-03-10T07:46:43.804813+0000 mgr.y (mgr.24407) 810 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:46:46.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:46 vm03 bash[23382]: cluster 2026-03-10T07:46:44.838884+0000 mgr.y (mgr.24407) 811 : cluster [DBG] pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:46:46.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:46 vm00 bash[28005]: cluster 2026-03-10T07:46:44.838884+0000 mgr.y (mgr.24407) 811 : cluster [DBG] pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:46:46.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:46 vm00 bash[20701]: cluster 2026-03-10T07:46:44.838884+0000 mgr.y (mgr.24407) 811 : cluster [DBG] pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:46:48.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:48 vm03 bash[23382]: cluster 2026-03-10T07:46:46.839222+0000 mgr.y (mgr.24407) 812 : cluster [DBG] pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:46:48.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:48 vm00 bash[28005]: cluster 2026-03-10T07:46:46.839222+0000 mgr.y (mgr.24407) 812 : cluster [DBG] pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:46:48.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:48 vm00 bash[20701]: cluster 2026-03-10T07:46:46.839222+0000 mgr.y (mgr.24407) 812 : cluster [DBG] pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:46:50.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:50 vm03 bash[23382]: cluster 2026-03-10T07:46:48.839537+0000 mgr.y (mgr.24407) 813 : cluster [DBG] pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:46:50.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:50 vm00 bash[28005]: cluster 2026-03-10T07:46:48.839537+0000 mgr.y (mgr.24407) 813 : cluster [DBG] pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:46:50.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:50 vm00 bash[20701]: cluster 2026-03-10T07:46:48.839537+0000 mgr.y (mgr.24407) 813 : cluster [DBG] pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:46:51.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:46:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:46:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:46:52.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:52 vm03 bash[23382]: cluster 2026-03-10T07:46:50.840195+0000 mgr.y (mgr.24407) 814 : cluster [DBG] pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:46:52.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:52 vm00 bash[28005]: cluster 2026-03-10T07:46:50.840195+0000 mgr.y (mgr.24407) 814 : cluster [DBG] pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:46:52.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:52 vm00 bash[20701]: cluster 2026-03-10T07:46:50.840195+0000 mgr.y (mgr.24407) 814 : cluster [DBG] pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:46:54.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:46:53 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:46:54.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:54 vm03 bash[23382]: cluster 2026-03-10T07:46:52.840553+0000 mgr.y (mgr.24407) 815 : cluster [DBG] pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:46:54.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:54 vm00 bash[28005]: cluster 2026-03-10T07:46:52.840553+0000 mgr.y (mgr.24407) 815 : cluster [DBG] pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:46:54.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:54 vm00 bash[20701]: cluster 2026-03-10T07:46:52.840553+0000 mgr.y (mgr.24407) 815 : cluster [DBG] pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:46:55.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:55 vm03 bash[23382]: audit 2026-03-10T07:46:53.809757+0000 mgr.y (mgr.24407) 816 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:46:55.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:55 vm03 bash[23382]: audit 2026-03-10T07:46:55.143414+0000 mon.c (mon.2) 400 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:46:55.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:55 vm00 bash[28005]: audit 2026-03-10T07:46:53.809757+0000 mgr.y (mgr.24407) 816 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:46:55.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:55 vm00 bash[28005]: audit 2026-03-10T07:46:55.143414+0000 mon.c (mon.2) 400 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:46:55.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:55 vm00 bash[20701]: audit 2026-03-10T07:46:53.809757+0000 mgr.y (mgr.24407) 816 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:46:55.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:55 vm00 bash[20701]: audit 2026-03-10T07:46:55.143414+0000 mon.c (mon.2) 400 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:46:56.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:56 vm03 bash[23382]: cluster 2026-03-10T07:46:54.841126+0000 mgr.y (mgr.24407) 817 : cluster [DBG] pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:46:56.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:56 vm00 bash[28005]: cluster 2026-03-10T07:46:54.841126+0000 mgr.y (mgr.24407) 817 : cluster [DBG] pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:46:56.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:56 vm00 bash[20701]: cluster 2026-03-10T07:46:54.841126+0000 mgr.y (mgr.24407) 817 : cluster [DBG] pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:46:57.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:57 vm03 bash[23382]: cluster 2026-03-10T07:46:56.841498+0000 mgr.y (mgr.24407) 818 : cluster [DBG] pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:46:57.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:57 vm00 bash[28005]: cluster 2026-03-10T07:46:56.841498+0000 mgr.y (mgr.24407) 818 : cluster [DBG] pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:46:57.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:57 vm00 bash[20701]: cluster 2026-03-10T07:46:56.841498+0000 mgr.y (mgr.24407) 818 : cluster [DBG] pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:00.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:46:59 vm03 bash[23382]: cluster 2026-03-10T07:46:58.841859+0000 mgr.y (mgr.24407) 819 : cluster [DBG] pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:00.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:46:59 vm00 bash[28005]: cluster 2026-03-10T07:46:58.841859+0000 mgr.y (mgr.24407) 819 : cluster [DBG] pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:00.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:46:59 vm00 bash[20701]: cluster 2026-03-10T07:46:58.841859+0000 mgr.y (mgr.24407) 819 : cluster [DBG] pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:01.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:47:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:47:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:47:02.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:01 vm03 bash[23382]: cluster 2026-03-10T07:47:00.842519+0000 mgr.y (mgr.24407) 820 : cluster [DBG] pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:02.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:01 vm00 bash[28005]: cluster 2026-03-10T07:47:00.842519+0000 mgr.y (mgr.24407) 820 : cluster [DBG] pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:02.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:01 vm00 bash[20701]: cluster 2026-03-10T07:47:00.842519+0000 mgr.y (mgr.24407) 820 : cluster [DBG] pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:04.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:47:03 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:47:04.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:03 vm03 bash[23382]: cluster 2026-03-10T07:47:02.842829+0000 mgr.y (mgr.24407) 821 : cluster [DBG] pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:04.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:03 vm00 bash[28005]: cluster 2026-03-10T07:47:02.842829+0000 mgr.y (mgr.24407) 821 : cluster [DBG] pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:04.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:03 vm00 bash[20701]: cluster 2026-03-10T07:47:02.842829+0000 mgr.y (mgr.24407) 821 : cluster [DBG] pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:05.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:04 vm03 bash[23382]: audit 2026-03-10T07:47:03.812571+0000 mgr.y (mgr.24407) 822 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:47:05.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:04 vm00 bash[28005]: audit 2026-03-10T07:47:03.812571+0000 mgr.y (mgr.24407) 822 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:47:05.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:04 vm00 bash[20701]: audit 2026-03-10T07:47:03.812571+0000 mgr.y (mgr.24407) 822 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:47:06.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:05 vm03 bash[23382]: cluster 2026-03-10T07:47:04.843421+0000 mgr.y (mgr.24407) 823 : cluster [DBG] pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:06.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:05 vm00 bash[28005]: cluster 2026-03-10T07:47:04.843421+0000 mgr.y (mgr.24407) 823 : cluster [DBG] pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:06.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:05 vm00 bash[20701]: cluster 2026-03-10T07:47:04.843421+0000 mgr.y (mgr.24407) 823 : cluster [DBG] pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:08.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:07 vm03 bash[23382]: cluster 2026-03-10T07:47:06.843801+0000 mgr.y (mgr.24407) 824 : cluster [DBG] pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:08.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:07 vm00 bash[28005]: cluster 2026-03-10T07:47:06.843801+0000 mgr.y (mgr.24407) 824 : cluster [DBG] pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:08.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:07 vm00 bash[20701]: cluster 2026-03-10T07:47:06.843801+0000 mgr.y (mgr.24407) 824 : cluster [DBG] pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:10.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:09 vm03 bash[23382]: cluster 2026-03-10T07:47:08.844119+0000 mgr.y (mgr.24407) 825 : cluster [DBG] pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:10.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:09 vm00 bash[28005]: cluster 2026-03-10T07:47:08.844119+0000 mgr.y (mgr.24407) 825 : cluster [DBG] pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:10.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:09 vm00 bash[20701]: cluster 2026-03-10T07:47:08.844119+0000 mgr.y (mgr.24407) 825 : cluster [DBG] pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:11.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:10 vm03 bash[23382]: audit 2026-03-10T07:47:10.149172+0000 mon.c (mon.2) 401 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:47:11.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:10 vm00 bash[20701]: audit 2026-03-10T07:47:10.149172+0000 mon.c (mon.2) 401 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:47:11.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:47:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:47:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:47:11.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:10 vm00 bash[28005]: audit 2026-03-10T07:47:10.149172+0000 mon.c (mon.2) 401 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:47:12.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:11 vm03 bash[23382]: cluster 2026-03-10T07:47:10.844936+0000 mgr.y (mgr.24407) 826 : cluster [DBG] pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:12.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:11 vm00 bash[28005]: cluster 2026-03-10T07:47:10.844936+0000 mgr.y (mgr.24407) 826 : cluster [DBG] pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:12.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:11 vm00 bash[20701]: cluster 2026-03-10T07:47:10.844936+0000 mgr.y (mgr.24407) 826 : cluster [DBG] pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:14.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:47:13 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:47:14.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:14 vm03 bash[23382]: cluster 2026-03-10T07:47:12.845318+0000 mgr.y (mgr.24407) 827 : cluster [DBG] pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:14.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:14 vm00 bash[28005]: cluster 2026-03-10T07:47:12.845318+0000 mgr.y (mgr.24407) 827 : cluster [DBG] pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:14.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:14 vm00 bash[20701]: cluster 2026-03-10T07:47:12.845318+0000 mgr.y (mgr.24407) 827 : cluster [DBG] pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:15.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:15 vm00 bash[28005]: audit 2026-03-10T07:47:13.823134+0000 mgr.y (mgr.24407) 828 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:47:15.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:15 vm00 bash[20701]: audit 2026-03-10T07:47:13.823134+0000 mgr.y (mgr.24407) 828 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:47:15.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:15 vm03 bash[23382]: audit 2026-03-10T07:47:13.823134+0000 mgr.y (mgr.24407) 828 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:47:16.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:16 vm00 bash[28005]: cluster 2026-03-10T07:47:14.845874+0000 mgr.y (mgr.24407) 829 : cluster [DBG] pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:16.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:16 vm00 bash[20701]: cluster 2026-03-10T07:47:14.845874+0000 mgr.y (mgr.24407) 829 : cluster [DBG] pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:16.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:16 vm03 bash[23382]: cluster 2026-03-10T07:47:14.845874+0000 mgr.y (mgr.24407) 829 : cluster [DBG] pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:18.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:18 vm00 bash[28005]: cluster 2026-03-10T07:47:16.846240+0000 mgr.y (mgr.24407) 830 : cluster [DBG] pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:18.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:18 vm00 bash[28005]: audit 2026-03-10T07:47:17.165980+0000 mon.c (mon.2) 402 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:47:18.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:18 vm00 bash[28005]: audit 2026-03-10T07:47:17.513032+0000 mon.c (mon.2) 403 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:47:18.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:18 vm00 bash[28005]: audit 2026-03-10T07:47:17.513373+0000 mon.a (mon.0) 3540 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:47:18.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:18 vm00 bash[28005]: audit 2026-03-10T07:47:17.515242+0000 mon.c (mon.2) 404 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:47:18.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:18 vm00 bash[28005]: audit 2026-03-10T07:47:17.515491+0000 mon.a (mon.0) 3541 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:47:18.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:18 vm00 bash[28005]: audit 2026-03-10T07:47:17.516185+0000 mon.c (mon.2) 405 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:47:18.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:18 vm00 bash[28005]: audit 2026-03-10T07:47:17.516773+0000 mon.c (mon.2) 406 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:47:18.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:18 vm00 bash[28005]: audit 2026-03-10T07:47:17.523393+0000 mon.a (mon.0) 3542 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:47:18.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:18 vm00 bash[20701]: cluster 2026-03-10T07:47:16.846240+0000 mgr.y (mgr.24407) 830 : cluster [DBG] pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:18.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:18 vm00 bash[20701]: audit 2026-03-10T07:47:17.165980+0000 mon.c (mon.2) 402 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:47:18.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:18 vm00 bash[20701]: audit 2026-03-10T07:47:17.513032+0000 mon.c (mon.2) 403 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:47:18.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:18 vm00 bash[20701]: audit 2026-03-10T07:47:17.513373+0000 mon.a (mon.0) 3540 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:47:18.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:18 vm00 bash[20701]: audit 2026-03-10T07:47:17.515242+0000 mon.c (mon.2) 404 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:47:18.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:18 vm00 bash[20701]: audit 2026-03-10T07:47:17.515491+0000 mon.a (mon.0) 3541 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:47:18.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:18 vm00 bash[20701]: audit 2026-03-10T07:47:17.516185+0000 mon.c (mon.2) 405 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:47:18.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:18 vm00 bash[20701]: audit 2026-03-10T07:47:17.516773+0000 mon.c (mon.2) 406 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:47:18.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:18 vm00 bash[20701]: audit 2026-03-10T07:47:17.523393+0000 mon.a (mon.0) 3542 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:47:18.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:18 vm03 bash[23382]: cluster 2026-03-10T07:47:16.846240+0000 mgr.y (mgr.24407) 830 : cluster [DBG] pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:18.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:18 vm03 bash[23382]: audit 2026-03-10T07:47:17.165980+0000 mon.c (mon.2) 402 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:47:18.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:18 vm03 bash[23382]: audit 2026-03-10T07:47:17.513032+0000 mon.c (mon.2) 403 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:47:18.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:18 vm03 bash[23382]: audit 2026-03-10T07:47:17.513373+0000 mon.a (mon.0) 3540 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:47:18.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:18 vm03 bash[23382]: audit 2026-03-10T07:47:17.515242+0000 mon.c (mon.2) 404 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:47:18.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:18 vm03 bash[23382]: audit 2026-03-10T07:47:17.515491+0000 mon.a (mon.0) 3541 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:47:18.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:18 vm03 bash[23382]: audit 2026-03-10T07:47:17.516185+0000 mon.c (mon.2) 405 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:47:18.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:18 vm03 bash[23382]: audit 2026-03-10T07:47:17.516773+0000 mon.c (mon.2) 406 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:47:18.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:18 vm03 bash[23382]: audit 2026-03-10T07:47:17.523393+0000 mon.a (mon.0) 3542 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:47:20.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:20 vm00 bash[28005]: cluster 2026-03-10T07:47:18.846616+0000 mgr.y (mgr.24407) 831 : cluster [DBG] pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:20.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:20 vm00 bash[20701]: cluster 2026-03-10T07:47:18.846616+0000 mgr.y (mgr.24407) 831 : cluster [DBG] pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:20.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:20 vm03 bash[23382]: cluster 2026-03-10T07:47:18.846616+0000 mgr.y (mgr.24407) 831 : cluster [DBG] pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:21.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:47:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:47:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:47:22.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:22 vm00 bash[28005]: cluster 2026-03-10T07:47:20.847451+0000 mgr.y (mgr.24407) 832 : cluster [DBG] pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:22.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:22 vm00 bash[20701]: cluster 2026-03-10T07:47:20.847451+0000 mgr.y (mgr.24407) 832 : cluster [DBG] pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:22.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:22 vm03 bash[23382]: cluster 2026-03-10T07:47:20.847451+0000 mgr.y (mgr.24407) 832 : cluster [DBG] pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:24.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:47:23 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:47:24.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:24 vm03 bash[23382]: cluster 2026-03-10T07:47:22.847884+0000 mgr.y (mgr.24407) 833 : cluster [DBG] pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:24 vm00 bash[28005]: cluster 2026-03-10T07:47:22.847884+0000 mgr.y (mgr.24407) 833 : cluster [DBG] pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:24.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:24 vm00 bash[20701]: cluster 2026-03-10T07:47:22.847884+0000 mgr.y (mgr.24407) 833 : cluster [DBG] pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:24.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:24 vm00 bash[20701]: cluster 2026-03-10T07:47:22.847884+0000 mgr.y (mgr.24407) 833 : cluster [DBG] pgmap v1314: 228
pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:47:25.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:25 vm00 bash[28005]: audit 2026-03-10T07:47:23.824568+0000 mgr.y (mgr.24407) 834 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:47:25.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:25 vm00 bash[28005]: audit 2026-03-10T07:47:23.824568+0000 mgr.y (mgr.24407) 834 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:47:25.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:25 vm00 bash[20701]: audit 2026-03-10T07:47:23.824568+0000 mgr.y (mgr.24407) 834 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:47:25.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:25 vm00 bash[20701]: audit 2026-03-10T07:47:23.824568+0000 mgr.y (mgr.24407) 834 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:47:25.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:25 vm03 bash[23382]: audit 2026-03-10T07:47:23.824568+0000 mgr.y (mgr.24407) 834 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:47:25.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:25 vm03 bash[23382]: audit 2026-03-10T07:47:23.824568+0000 mgr.y (mgr.24407) 834 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:47:26.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:26 vm00 bash[28005]: cluster 2026-03-10T07:47:24.848530+0000 mgr.y (mgr.24407) 835 : cluster [DBG] pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:47:26.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:26 vm00 bash[28005]: cluster 2026-03-10T07:47:24.848530+0000 mgr.y (mgr.24407) 835 : cluster [DBG] pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:47:26.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:26 vm00 bash[28005]: audit 2026-03-10T07:47:25.155416+0000 mon.c (mon.2) 407 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:47:26.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:26 vm00 bash[28005]: audit 2026-03-10T07:47:25.155416+0000 mon.c (mon.2) 407 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:47:26.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:26 vm00 bash[20701]: cluster 2026-03-10T07:47:24.848530+0000 mgr.y (mgr.24407) 835 : cluster [DBG] pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:47:26.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:26 vm00 bash[20701]: cluster 2026-03-10T07:47:24.848530+0000 mgr.y (mgr.24407) 835 : cluster [DBG] pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T07:47:26.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:26 vm00 bash[20701]: audit 2026-03-10T07:47:25.155416+0000 mon.c (mon.2) 407 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:47:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:26 vm03 bash[23382]: cluster 2026-03-10T07:47:24.848530+0000 mgr.y (mgr.24407) 835 : cluster [DBG] pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:26 vm03 bash[23382]: audit 2026-03-10T07:47:25.155416+0000 mon.c (mon.2) 407 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:47:28.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:28 vm00 bash[28005]: cluster 2026-03-10T07:47:26.848880+0000 mgr.y (mgr.24407) 836 : cluster [DBG] pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:28.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:28 vm00 bash[20701]: cluster 2026-03-10T07:47:26.848880+0000 mgr.y (mgr.24407) 836 : cluster [DBG] pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:28.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:28 vm03 bash[23382]: cluster 2026-03-10T07:47:26.848880+0000 mgr.y (mgr.24407) 836 : cluster [DBG] pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:30 vm00 bash[28005]: cluster 2026-03-10T07:47:28.849250+0000 mgr.y (mgr.24407) 837 : cluster [DBG] pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:30 vm00 bash[20701]: cluster 2026-03-10T07:47:28.849250+0000 mgr.y (mgr.24407) 837 : cluster [DBG] pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:30.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:30 vm03 bash[23382]: cluster 2026-03-10T07:47:28.849250+0000 mgr.y (mgr.24407) 837 : cluster [DBG] pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:31.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:47:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:47:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:47:32.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:32 vm00 bash[28005]: cluster 2026-03-10T07:47:30.849998+0000 mgr.y (mgr.24407) 838 : cluster [DBG] pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:32.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:32 vm00 bash[20701]: cluster 2026-03-10T07:47:30.849998+0000 mgr.y (mgr.24407) 838 : cluster [DBG] pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:32.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:32 vm03 bash[23382]: cluster 2026-03-10T07:47:30.849998+0000 mgr.y (mgr.24407) 838 : cluster [DBG] pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:34.103 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:47:33 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:47:34.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:34 vm00 bash[28005]: cluster 2026-03-10T07:47:32.850427+0000 mgr.y (mgr.24407) 839 : cluster [DBG] pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:34.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:34 vm00 bash[20701]: cluster 2026-03-10T07:47:32.850427+0000 mgr.y (mgr.24407) 839 : cluster [DBG] pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:34.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:34 vm03 bash[23382]: cluster 2026-03-10T07:47:32.850427+0000 mgr.y (mgr.24407) 839 : cluster [DBG] pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:35 vm00 bash[28005]: audit 2026-03-10T07:47:33.835306+0000 mgr.y (mgr.24407) 840 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:47:35.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:35 vm00 bash[20701]: audit 2026-03-10T07:47:33.835306+0000 mgr.y (mgr.24407) 840 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:47:35.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:35 vm03 bash[23382]: audit 2026-03-10T07:47:33.835306+0000 mgr.y (mgr.24407) 840 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:47:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:36 vm00 bash[28005]: cluster 2026-03-10T07:47:34.851290+0000 mgr.y (mgr.24407) 841 : cluster [DBG] pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:36.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:36 vm00 bash[20701]: cluster 2026-03-10T07:47:34.851290+0000 mgr.y (mgr.24407) 841 : cluster [DBG] pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:36.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:36 vm03 bash[23382]: cluster 2026-03-10T07:47:34.851290+0000 mgr.y (mgr.24407) 841 : cluster [DBG] pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:38.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:38 vm00 bash[28005]: cluster 2026-03-10T07:47:36.851610+0000 mgr.y (mgr.24407) 842 : cluster [DBG] pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:38.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:38 vm00 bash[20701]: cluster 2026-03-10T07:47:36.851610+0000 mgr.y (mgr.24407) 842 : cluster [DBG] pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:38.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:38 vm03 bash[23382]: cluster 2026-03-10T07:47:36.851610+0000 mgr.y (mgr.24407) 842 : cluster [DBG] pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:40.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:40 vm00 bash[28005]: cluster 2026-03-10T07:47:38.851935+0000 mgr.y (mgr.24407) 843 : cluster [DBG] pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:40.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:40 vm00 bash[20701]: cluster 2026-03-10T07:47:38.851935+0000 mgr.y (mgr.24407) 843 : cluster [DBG] pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:40.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:40 vm03 bash[23382]: cluster 2026-03-10T07:47:38.851935+0000 mgr.y (mgr.24407) 843 : cluster [DBG] pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:41.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:41 vm00 bash[28005]: audit 2026-03-10T07:47:40.162051+0000 mon.c (mon.2) 408 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:47:41.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:47:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:47:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:47:41.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:41 vm00 bash[20701]: audit 2026-03-10T07:47:40.162051+0000 mon.c (mon.2) 408 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:47:41.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:41 vm03 bash[23382]: audit 2026-03-10T07:47:40.162051+0000 mon.c (mon.2) 408 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:47:42.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:42 vm03 bash[23382]: cluster 2026-03-10T07:47:40.852720+0000 mgr.y (mgr.24407) 844 : cluster [DBG] pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:42.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:42 vm00 bash[28005]: cluster 2026-03-10T07:47:40.852720+0000 mgr.y (mgr.24407) 844 : cluster [DBG] pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:42.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:42 vm00 bash[20701]: cluster 2026-03-10T07:47:40.852720+0000 mgr.y (mgr.24407) 844 : cluster [DBG] pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:44.153 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:47:43 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:47:44.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:44 vm03 bash[23382]: cluster 2026-03-10T07:47:42.853089+0000 mgr.y (mgr.24407) 845 : cluster [DBG] pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:44 vm00 bash[28005]: cluster 2026-03-10T07:47:42.853089+0000 mgr.y (mgr.24407) 845 : cluster [DBG] pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:44.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:44 vm00 bash[20701]: cluster 2026-03-10T07:47:42.853089+0000 mgr.y (mgr.24407) 845 : cluster [DBG] pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:45.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:45 vm03 bash[23382]: audit 2026-03-10T07:47:43.838550+0000 mgr.y (mgr.24407) 846 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:47:45.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:45 vm00 bash[28005]: audit 2026-03-10T07:47:43.838550+0000 mgr.y (mgr.24407) 846 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:47:45.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:45 vm00 bash[20701]: audit 2026-03-10T07:47:43.838550+0000 mgr.y (mgr.24407) 846 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:47:46.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:46 vm03 bash[23382]: cluster 2026-03-10T07:47:44.853921+0000 mgr.y (mgr.24407) 847 : cluster [DBG] pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:46.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:46 vm00 bash[28005]: cluster 2026-03-10T07:47:44.853921+0000 mgr.y (mgr.24407) 847 : cluster [DBG] pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:46.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:46 vm00 bash[20701]: cluster 2026-03-10T07:47:44.853921+0000 mgr.y (mgr.24407) 847 : cluster [DBG] pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:48.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:48 vm03 bash[23382]: cluster 2026-03-10T07:47:46.854252+0000 mgr.y (mgr.24407) 848 : cluster [DBG] pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:48.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:48 vm00 bash[28005]: cluster 2026-03-10T07:47:46.854252+0000 mgr.y (mgr.24407) 848 : cluster [DBG] pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:48.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:48 vm00 bash[20701]: cluster 2026-03-10T07:47:46.854252+0000 mgr.y (mgr.24407) 848 : cluster [DBG] pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:50.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:50 vm03 bash[23382]: cluster 2026-03-10T07:47:48.854601+0000 mgr.y (mgr.24407) 849 : cluster [DBG] pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:50.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:50 vm00 bash[28005]: cluster 2026-03-10T07:47:48.854601+0000 mgr.y (mgr.24407) 849 : cluster [DBG] pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:50.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:50 vm00 bash[20701]: cluster 2026-03-10T07:47:48.854601+0000 mgr.y (mgr.24407) 849 : cluster [DBG] pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:51.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:47:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:47:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:47:52.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:52 vm03 bash[23382]: cluster 2026-03-10T07:47:50.855314+0000 mgr.y (mgr.24407) 850 : cluster [DBG] pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:52 vm00 bash[28005]: cluster 2026-03-10T07:47:50.855314+0000 mgr.y (mgr.24407) 850 : cluster [DBG] pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:52 vm00 bash[20701]: cluster 2026-03-10T07:47:50.855314+0000 mgr.y (mgr.24407) 850 : cluster [DBG] pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:54.209 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:47:53 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:47:54.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:54 vm03 bash[23382]: cluster 2026-03-10T07:47:52.855719+0000 mgr.y (mgr.24407) 851 : cluster [DBG] pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:54.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:54 vm00 bash[28005]: cluster 2026-03-10T07:47:52.855719+0000 mgr.y (mgr.24407) 851 : cluster [DBG] pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:54.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:54 vm00 bash[20701]: cluster 2026-03-10T07:47:52.855719+0000 mgr.y (mgr.24407) 851 : cluster [DBG] pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:55.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:55 vm03 bash[23382]: audit 2026-03-10T07:47:53.849443+0000 mgr.y (mgr.24407) 852 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:47:55.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:55 vm03 bash[23382]: audit 2026-03-10T07:47:55.168875+0000 mon.c (mon.2) 409 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:47:55.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:55 vm00 bash[28005]: audit 2026-03-10T07:47:53.849443+0000 mgr.y (mgr.24407) 852 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:47:55.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:55 vm00 bash[28005]: audit 2026-03-10T07:47:55.168875+0000 mon.c (mon.2) 409 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:47:55.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:55 vm00 bash[20701]: audit 2026-03-10T07:47:53.849443+0000 mgr.y (mgr.24407) 852 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:47:55.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:55 vm00 bash[20701]: audit 2026-03-10T07:47:55.168875+0000 mon.c (mon.2) 409 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:47:56.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:56 vm03 bash[23382]: cluster 2026-03-10T07:47:54.856575+0000 mgr.y (mgr.24407) 853 : cluster [DBG] pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:56.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:56 vm00 bash[28005]: cluster 2026-03-10T07:47:54.856575+0000 mgr.y (mgr.24407) 853 : cluster [DBG] pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:56.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:56 vm00 bash[20701]: cluster 2026-03-10T07:47:54.856575+0000 mgr.y (mgr.24407) 853 : cluster [DBG] pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:47:58.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:47:58 vm03 bash[23382]: cluster 2026-03-10T07:47:56.856977+0000 mgr.y (mgr.24407) 854 : cluster [DBG] pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:58.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:47:58 vm00 bash[28005]: cluster 2026-03-10T07:47:56.856977+0000 mgr.y (mgr.24407) 854 : cluster [DBG] pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:47:58.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:47:58 vm00 bash[20701]: cluster 2026-03-10T07:47:56.856977+0000 mgr.y (mgr.24407) 854 : cluster [DBG] pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:00.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:00 vm03 bash[23382]: cluster 2026-03-10T07:47:58.857366+0000 mgr.y (mgr.24407) 855 : cluster [DBG] pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:00.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:00 vm00 bash[28005]: cluster 2026-03-10T07:47:58.857366+0000 mgr.y (mgr.24407) 855 : cluster [DBG] pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:00.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:00 vm00 bash[20701]: cluster 2026-03-10T07:47:58.857366+0000 mgr.y (mgr.24407) 855 : cluster [DBG] pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:01.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:48:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:48:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:48:02.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:02 vm03 bash[23382]: cluster 2026-03-10T07:48:00.858276+0000 mgr.y (mgr.24407) 856 : cluster [DBG] pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:02 vm00 bash[28005]: cluster 2026-03-10T07:48:00.858276+0000 mgr.y (mgr.24407) 856 : cluster [DBG] pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:02.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:02 vm00 bash[20701]: cluster 2026-03-10T07:48:00.858276+0000 mgr.y (mgr.24407) 856 : cluster [DBG] pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:04.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:48:03 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:48:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:04 vm00 bash[28005]: cluster 2026-03-10T07:48:02.858700+0000 mgr.y (mgr.24407) 857 : cluster [DBG] pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:04 vm00 bash[20701]: cluster 2026-03-10T07:48:02.858700+0000 mgr.y (mgr.24407) 857 : cluster [DBG] pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:04.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:04 vm03 bash[23382]: cluster 2026-03-10T07:48:02.858700+0000 mgr.y (mgr.24407) 857 : cluster [DBG] pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:05 vm00 bash[28005]: audit 2026-03-10T07:48:03.860362+0000 mgr.y (mgr.24407) 858 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:48:05.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:05 vm00 bash[20701]: audit 2026-03-10T07:48:03.860362+0000 mgr.y (mgr.24407) 858 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:48:05.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:05 vm03 bash[23382]: audit 2026-03-10T07:48:03.860362+0000 mgr.y (mgr.24407) 858 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:48:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:06 vm00 bash[28005]: cluster 2026-03-10T07:48:04.859390+0000 mgr.y (mgr.24407) 859 : cluster [DBG] pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:06.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:06 vm00 bash[20701]: cluster 2026-03-10T07:48:04.859390+0000 mgr.y (mgr.24407) 859 : cluster [DBG] pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:06.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:06 vm03 bash[23382]: cluster 2026-03-10T07:48:04.859390+0000 mgr.y (mgr.24407) 859 : cluster [DBG] pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:08.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:08 vm00 bash[28005]: cluster 2026-03-10T07:48:06.859764+0000 mgr.y (mgr.24407) 860 : cluster [DBG] pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:08 vm00 bash[20701]: cluster 2026-03-10T07:48:06.859764+0000 mgr.y (mgr.24407) 860 : cluster [DBG] pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:08.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:08 vm03 bash[23382]: cluster 2026-03-10T07:48:06.859764+0000 mgr.y (mgr.24407) 860 : cluster [DBG] pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:10 vm00 bash[28005]: cluster 2026-03-10T07:48:08.860076+0000 mgr.y (mgr.24407) 861 : cluster [DBG] pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:10 vm00 bash[28005]: audit 2026-03-10T07:48:10.174509+0000 mon.c (mon.2) 410 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:48:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:10 vm00 bash[20701]: cluster 2026-03-10T07:48:08.860076+0000 mgr.y (mgr.24407) 861 : cluster [DBG] pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:10.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:10 vm00 bash[20701]: audit 2026-03-10T07:48:10.174509+0000 mon.c (mon.2) 410 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:48:10.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:10 vm03 bash[23382]: cluster 2026-03-10T07:48:08.860076+0000 mgr.y (mgr.24407) 861 : cluster [DBG] pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:10.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:10 vm03 bash[23382]: audit 2026-03-10T07:48:10.174509+0000 mon.c (mon.2) 410 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:48:11.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:48:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:48:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:48:12.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:12 vm00 bash[28005]: cluster 2026-03-10T07:48:10.860958+0000 mgr.y (mgr.24407) 862 : cluster [DBG] pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:12 vm00 bash[20701]: cluster 2026-03-10T07:48:10.860958+0000 mgr.y (mgr.24407) 862 : cluster [DBG] pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:12.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:12 vm03 bash[23382]: cluster 2026-03-10T07:48:10.860958+0000 mgr.y (mgr.24407) 862 : cluster [DBG] pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:14.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:48:13 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:48:14.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:14 vm00 bash[28005]: cluster 2026-03-10T07:48:12.861335+0000 mgr.y (mgr.24407) 863 : cluster [DBG] pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:14 vm00 bash[20701]: cluster 2026-03-10T07:48:12.861335+0000 mgr.y (mgr.24407) 863 : cluster [DBG] pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:14.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:14 vm03 bash[23382]: cluster 2026-03-10T07:48:12.861335+0000 mgr.y (mgr.24407) 863 : cluster [DBG] pgmap v1339: 228 pgs: 228
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:48:14.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:14 vm03 bash[23382]: cluster 2026-03-10T07:48:12.861335+0000 mgr.y (mgr.24407) 863 : cluster [DBG] pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:48:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:15 vm00 bash[28005]: audit 2026-03-10T07:48:13.864793+0000 mgr.y (mgr.24407) 864 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:48:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:15 vm00 bash[28005]: audit 2026-03-10T07:48:13.864793+0000 mgr.y (mgr.24407) 864 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:48:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:15 vm00 bash[20701]: audit 2026-03-10T07:48:13.864793+0000 mgr.y (mgr.24407) 864 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:48:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:15 vm00 bash[20701]: audit 2026-03-10T07:48:13.864793+0000 mgr.y (mgr.24407) 864 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:48:15.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:15 vm03 bash[23382]: audit 2026-03-10T07:48:13.864793+0000 mgr.y (mgr.24407) 864 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:48:15.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:15 vm03 bash[23382]: audit 2026-03-10T07:48:13.864793+0000 mgr.y (mgr.24407) 864 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:48:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:16 vm00 bash[28005]: cluster 2026-03-10T07:48:14.862166+0000 mgr.y (mgr.24407) 865 : cluster [DBG] pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:48:16.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:16 vm00 bash[28005]: cluster 2026-03-10T07:48:14.862166+0000 mgr.y (mgr.24407) 865 : cluster [DBG] pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:48:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:16 vm00 bash[20701]: cluster 2026-03-10T07:48:14.862166+0000 mgr.y (mgr.24407) 865 : cluster [DBG] pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:48:16.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:16 vm00 bash[20701]: cluster 2026-03-10T07:48:14.862166+0000 mgr.y (mgr.24407) 865 : cluster [DBG] pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:48:16.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:16 vm03 bash[23382]: cluster 2026-03-10T07:48:14.862166+0000 mgr.y (mgr.24407) 865 : cluster [DBG] pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T07:48:18.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:18 vm00 bash[28005]: cluster 2026-03-10T07:48:16.862498+0000 mgr.y (mgr.24407) 866 : cluster [DBG] pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:18.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:18 vm00 bash[28005]: audit 2026-03-10T07:48:17.566774+0000 mon.c (mon.2) 411 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:48:18.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:18 vm00 bash[20701]: cluster 2026-03-10T07:48:16.862498+0000 mgr.y (mgr.24407) 866 : cluster [DBG] pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:18.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:18 vm00 bash[20701]: audit 2026-03-10T07:48:17.566774+0000 mon.c (mon.2) 411 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:48:18.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:18 vm03 bash[23382]: cluster 2026-03-10T07:48:16.862498+0000 mgr.y (mgr.24407) 866 : cluster [DBG] pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:18.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:18 vm03 bash[23382]: audit 2026-03-10T07:48:17.566774+0000 mon.c (mon.2) 411 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:48:20.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:20 vm00 bash[28005]: cluster 2026-03-10T07:48:18.862956+0000 mgr.y (mgr.24407) 867 : cluster [DBG] pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:20.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:20 vm00 bash[20701]: cluster 2026-03-10T07:48:18.862956+0000 mgr.y (mgr.24407) 867 : cluster [DBG] pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:20.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:20 vm03 bash[23382]: cluster 2026-03-10T07:48:18.862956+0000 mgr.y (mgr.24407) 867 : cluster [DBG] pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:21.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:48:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:48:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:48:22.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:22 vm00 bash[20701]: cluster 2026-03-10T07:48:20.863659+0000 mgr.y (mgr.24407) 868 : cluster [DBG] pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:22.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:22 vm00 bash[28005]: cluster 2026-03-10T07:48:20.863659+0000 mgr.y (mgr.24407) 868 : cluster [DBG] pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:22.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:22 vm03 bash[23382]: cluster 2026-03-10T07:48:20.863659+0000 mgr.y (mgr.24407) 868 : cluster [DBG] pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:24.011 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:48:23 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:48:24.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:23 vm03 bash[23382]: audit 2026-03-10T07:48:22.713281+0000 mon.a (mon.0) 3543 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:48:24.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:23 vm03 bash[23382]: audit 2026-03-10T07:48:22.722592+0000 mon.a (mon.0) 3544 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:48:24.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:23 vm03 bash[23382]: cluster 2026-03-10T07:48:22.864054+0000 mgr.y (mgr.24407) 869 : cluster [DBG] pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:24.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:23 vm03 bash[23382]: audit 2026-03-10T07:48:22.917513+0000 mon.a (mon.0) 3545 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:48:24.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:23 vm03 bash[23382]: audit 2026-03-10T07:48:22.925770+0000 mon.a (mon.0) 3546 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:48:24.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:23 vm03 bash[23382]: audit 2026-03-10T07:48:23.227081+0000 mon.c (mon.2) 412 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:48:24.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:23 vm03 bash[23382]: audit 2026-03-10T07:48:23.228337+0000 mon.c (mon.2) 413 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:48:24.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:23 vm03 bash[23382]: audit 2026-03-10T07:48:23.234746+0000 mon.a (mon.0) 3547 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:48:24.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:23 vm00 bash[28005]: audit 2026-03-10T07:48:22.713281+0000 mon.a (mon.0) 3543 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:48:24.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:23 vm00 bash[28005]: audit 2026-03-10T07:48:22.722592+0000 mon.a (mon.0) 3544 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:48:24.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:23 vm00 bash[28005]: cluster 2026-03-10T07:48:22.864054+0000 mgr.y (mgr.24407) 869 : cluster [DBG] pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:24.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:23 vm00 bash[28005]: audit 2026-03-10T07:48:22.917513+0000 mon.a (mon.0) 3545 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:48:24.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:23 vm00 bash[28005]: audit 2026-03-10T07:48:22.925770+0000 mon.a (mon.0) 3546 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:48:24.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:23 vm00 bash[28005]: audit 2026-03-10T07:48:23.227081+0000 mon.c (mon.2) 412 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:48:24.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:23 vm00 bash[28005]: audit 2026-03-10T07:48:23.228337+0000 mon.c (mon.2) 413 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:48:24.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:23 vm00 bash[28005]: audit 2026-03-10T07:48:23.234746+0000 mon.a (mon.0) 3547 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:48:24.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:23 vm00 bash[20701]: audit 2026-03-10T07:48:22.713281+0000 mon.a (mon.0) 3543 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:48:24.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:23 vm00 bash[20701]: audit 2026-03-10T07:48:22.722592+0000 mon.a (mon.0) 3544 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:48:24.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:23 vm00 bash[20701]: cluster 2026-03-10T07:48:22.864054+0000 mgr.y (mgr.24407) 869 : cluster [DBG] pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:24.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:23 vm00 bash[20701]: audit 2026-03-10T07:48:22.917513+0000 mon.a (mon.0) 3545 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:48:24.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:23 vm00 bash[20701]: audit 2026-03-10T07:48:22.925770+0000 mon.a (mon.0) 3546 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:48:24.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:23 vm00 bash[20701]: audit 2026-03-10T07:48:23.227081+0000 mon.c (mon.2) 412 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:48:24.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:23 vm00 bash[20701]: audit 2026-03-10T07:48:23.228337+0000 mon.c (mon.2) 413 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:48:24.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:23 vm00 bash[20701]: audit 2026-03-10T07:48:23.234746+0000 mon.a (mon.0) 3547 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:48:25.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:24 vm03 bash[23382]: audit 2026-03-10T07:48:23.873077+0000 mgr.y (mgr.24407) 870 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:48:25.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:24 vm00 bash[28005]: audit 2026-03-10T07:48:23.873077+0000 mgr.y (mgr.24407) 870 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:48:25.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:24 vm00 bash[20701]: audit 2026-03-10T07:48:23.873077+0000 mgr.y (mgr.24407) 870 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:48:26.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:25 vm00 bash[28005]: cluster 2026-03-10T07:48:24.864783+0000 mgr.y (mgr.24407) 871 : cluster [DBG] pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:26.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:25 vm00 bash[28005]: audit 2026-03-10T07:48:25.186153+0000 mon.c (mon.2) 414 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:48:26.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:25 vm00 bash[20701]: cluster 2026-03-10T07:48:24.864783+0000 mgr.y (mgr.24407) 871 : cluster [DBG] pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:26.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:25 vm00 bash[20701]: audit 2026-03-10T07:48:25.186153+0000 mon.c (mon.2) 414 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:48:26.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:25 vm03 bash[23382]: cluster 2026-03-10T07:48:24.864783+0000 mgr.y (mgr.24407) 871 : cluster [DBG] pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:26.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:25 vm03 bash[23382]: audit 2026-03-10T07:48:25.186153+0000 mon.c (mon.2) 414 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:48:28.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:27 vm03 bash[23382]: cluster 2026-03-10T07:48:26.865151+0000 mgr.y (mgr.24407) 872 : cluster [DBG] pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:28.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:27 vm00 bash[28005]: cluster 2026-03-10T07:48:26.865151+0000 mgr.y (mgr.24407) 872 : cluster [DBG] pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:28.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:27 vm00 bash[20701]: cluster 2026-03-10T07:48:26.865151+0000 mgr.y (mgr.24407) 872 : cluster [DBG] pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:30.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:29 vm03 bash[23382]: cluster 2026-03-10T07:48:28.865592+0000 mgr.y (mgr.24407) 873 : cluster [DBG] pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:29 vm00 bash[28005]: cluster 2026-03-10T07:48:28.865592+0000 mgr.y (mgr.24407) 873 : cluster [DBG] pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:29 vm00 bash[20701]: cluster 2026-03-10T07:48:28.865592+0000 mgr.y (mgr.24407) 873 : cluster [DBG] pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:31.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:48:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:48:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:48:32.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:31 vm03 bash[23382]: cluster 2026-03-10T07:48:30.866305+0000 mgr.y (mgr.24407) 874 : cluster [DBG] pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:32.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:31 vm00 bash[28005]: cluster 2026-03-10T07:48:30.866305+0000 mgr.y (mgr.24407) 874 : cluster [DBG] pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:32.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:31 vm00 bash[20701]: cluster 2026-03-10T07:48:30.866305+0000 mgr.y (mgr.24407) 874 : cluster [DBG] pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:34.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:48:33 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:48:34.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:33 vm03 bash[23382]: cluster 2026-03-10T07:48:32.866695+0000 mgr.y (mgr.24407) 875 : cluster [DBG] pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:34.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:33 vm00 bash[28005]: cluster 2026-03-10T07:48:32.866695+0000 mgr.y (mgr.24407) 875 : cluster [DBG] pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:34.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:33 vm00 bash[20701]: cluster 2026-03-10T07:48:32.866695+0000 mgr.y (mgr.24407) 875 : cluster [DBG] pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:35.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:34 vm03 bash[23382]: audit 2026-03-10T07:48:33.875100+0000 mgr.y (mgr.24407) 876 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:48:35.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:34 vm00 bash[28005]: audit 2026-03-10T07:48:33.875100+0000 mgr.y (mgr.24407) 876 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:48:35.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:34 vm00 bash[20701]: audit 2026-03-10T07:48:33.875100+0000 mgr.y (mgr.24407) 876 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:48:36.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:35 vm03 bash[23382]: cluster 2026-03-10T07:48:34.867204+0000 mgr.y (mgr.24407) 877 : cluster [DBG] pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:36.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:35 vm00 bash[28005]: cluster 2026-03-10T07:48:34.867204+0000 mgr.y (mgr.24407) 877 : cluster [DBG] pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:36.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:35 vm00 bash[20701]: cluster 2026-03-10T07:48:34.867204+0000 mgr.y (mgr.24407) 877 : cluster [DBG] pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:38.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:37 vm03 bash[23382]: cluster 2026-03-10T07:48:36.867508+0000 mgr.y (mgr.24407) 878 : cluster [DBG] pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:38.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:37 vm00 bash[28005]: cluster 2026-03-10T07:48:36.867508+0000 mgr.y (mgr.24407) 878 : cluster [DBG] pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:38.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:37 vm00 bash[20701]: cluster 2026-03-10T07:48:36.867508+0000 mgr.y (mgr.24407) 878 : cluster [DBG] pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:40.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:39 vm03 bash[23382]: cluster 2026-03-10T07:48:38.867800+0000 mgr.y (mgr.24407) 879 : cluster [DBG] pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:40.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:39 vm00 bash[28005]: cluster 2026-03-10T07:48:38.867800+0000 mgr.y (mgr.24407) 879 : cluster [DBG] pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:40.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:39 vm00 bash[20701]: cluster 2026-03-10T07:48:38.867800+0000 mgr.y (mgr.24407) 879 : cluster [DBG] pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:41.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:40 vm03 bash[23382]: audit 2026-03-10T07:48:40.192229+0000 mon.c (mon.2) 415 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:48:41.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:40 vm00 bash[28005]: audit 2026-03-10T07:48:40.192229+0000 mon.c (mon.2) 415 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:48:41.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:48:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:48:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:48:41.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:40 vm00 bash[20701]: audit 2026-03-10T07:48:40.192229+0000 mon.c (mon.2) 415 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:48:42.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:41 vm03 bash[23382]: cluster 2026-03-10T07:48:40.868544+0000 mgr.y (mgr.24407) 880 : cluster [DBG] pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:42.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:41 vm00 bash[28005]: cluster 2026-03-10T07:48:40.868544+0000 mgr.y (mgr.24407) 880 : cluster [DBG] pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:42.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:41 vm00 bash[20701]: cluster 2026-03-10T07:48:40.868544+0000 mgr.y (mgr.24407) 880 : cluster [DBG] pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:44.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:48:43 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:48:44.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:44 vm03 bash[23382]: cluster 2026-03-10T07:48:42.868898+0000 mgr.y (mgr.24407) 881 : cluster [DBG] pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:44.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:43 vm00 bash[28005]: cluster 2026-03-10T07:48:42.868898+0000 mgr.y (mgr.24407) 881 : cluster [DBG] pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:44.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:44 vm00 bash[20701]: cluster 2026-03-10T07:48:42.868898+0000 mgr.y (mgr.24407) 881 : cluster [DBG] pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:45.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:45 vm03 bash[23382]: audit 2026-03-10T07:48:43.885152+0000 mgr.y (mgr.24407) 882 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:48:45.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:45 vm00 bash[28005]: audit 2026-03-10T07:48:43.885152+0000 mgr.y (mgr.24407) 882 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:48:45.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:45 vm00 bash[20701]: audit 2026-03-10T07:48:43.885152+0000 mgr.y (mgr.24407) 882 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:48:46.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:46 vm03 bash[23382]: cluster 2026-03-10T07:48:44.869564+0000 mgr.y (mgr.24407) 883 : cluster [DBG] pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:46.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:46 vm00 bash[28005]: cluster 2026-03-10T07:48:44.869564+0000 mgr.y (mgr.24407) 883 : cluster [DBG] pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:46.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:46 vm00 bash[20701]: cluster 2026-03-10T07:48:44.869564+0000 mgr.y (mgr.24407) 883 : cluster [DBG] pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:48.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:48 vm00 bash[28005]: cluster 2026-03-10T07:48:46.869888+0000 mgr.y (mgr.24407) 884 : cluster [DBG] pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:48.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:48 vm00 bash[20701]: cluster 2026-03-10T07:48:46.869888+0000 mgr.y (mgr.24407) 884 : cluster [DBG] pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:48.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:48 vm03 bash[23382]: cluster 2026-03-10T07:48:46.869888+0000 mgr.y (mgr.24407) 884 : cluster [DBG] pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:50.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:50 vm00 bash[28005]: cluster 2026-03-10T07:48:48.870241+0000 mgr.y (mgr.24407) 885 : cluster [DBG] pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:50.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:50 vm00 bash[20701]: cluster 2026-03-10T07:48:48.870241+0000 mgr.y (mgr.24407) 885 : cluster [DBG] pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:50.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:50 vm03 bash[23382]: cluster 2026-03-10T07:48:48.870241+0000 mgr.y (mgr.24407) 885 : cluster [DBG] pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:51.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:48:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:48:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:48:52.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:52 vm00 bash[28005]: cluster 2026-03-10T07:48:50.870891+0000 mgr.y (mgr.24407) 886 : cluster [DBG] pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:52.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:52 vm00 bash[20701]: cluster 2026-03-10T07:48:50.870891+0000 mgr.y (mgr.24407) 886 : cluster [DBG] pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:52.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:52 vm03 bash[23382]: cluster 2026-03-10T07:48:50.870891+0000 mgr.y (mgr.24407) 886 : cluster [DBG] pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:54.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:54 vm03 bash[23382]: cluster 2026-03-10T07:48:52.871261+0000 mgr.y (mgr.24407) 887 : cluster [DBG] pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:54.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:48:53 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:48:54.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:54 vm00 bash[28005]: cluster 2026-03-10T07:48:52.871261+0000 mgr.y (mgr.24407) 887 : cluster [DBG] pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:54.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:54 vm00 bash[20701]: cluster 2026-03-10T07:48:52.871261+0000 mgr.y (mgr.24407) 887 : cluster [DBG] pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:48:55.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:55 vm00 bash[28005]: audit 2026-03-10T07:48:53.895942+0000 mgr.y (mgr.24407) 888 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:48:55.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:55 vm00 bash[20701]: audit 2026-03-10T07:48:53.895942+0000 mgr.y (mgr.24407) 888 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:48:55.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:55 vm03 bash[23382]: audit 2026-03-10T07:48:53.895942+0000 mgr.y (mgr.24407) 888 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:48:56.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:56 vm00 bash[28005]: cluster 2026-03-10T07:48:54.871786+0000 mgr.y (mgr.24407) 889 : cluster [DBG] pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:48:56.380 INFO:journalctl@ceph.mon.c.vm00
bash[28005]: audit 2026-03-10T07:48:55.198023+0000 mon.c (mon.2) 416 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:48:56.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:56 vm00 bash[28005]: audit 2026-03-10T07:48:55.198023+0000 mon.c (mon.2) 416 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:48:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:56 vm00 bash[20701]: cluster 2026-03-10T07:48:54.871786+0000 mgr.y (mgr.24407) 889 : cluster [DBG] pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:48:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:56 vm00 bash[20701]: cluster 2026-03-10T07:48:54.871786+0000 mgr.y (mgr.24407) 889 : cluster [DBG] pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:48:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:56 vm00 bash[20701]: audit 2026-03-10T07:48:55.198023+0000 mon.c (mon.2) 416 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:48:56.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:56 vm00 bash[20701]: audit 2026-03-10T07:48:55.198023+0000 mon.c (mon.2) 416 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:48:56.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:56 vm03 bash[23382]: cluster 2026-03-10T07:48:54.871786+0000 mgr.y (mgr.24407) 889 : cluster [DBG] pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:48:56.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:56 vm03 bash[23382]: cluster 2026-03-10T07:48:54.871786+0000 mgr.y (mgr.24407) 889 : cluster [DBG] pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:48:56.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:56 vm03 bash[23382]: audit 2026-03-10T07:48:55.198023+0000 mon.c (mon.2) 416 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:48:56.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:56 vm03 bash[23382]: audit 2026-03-10T07:48:55.198023+0000 mon.c (mon.2) 416 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:48:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:58 vm00 bash[28005]: cluster 2026-03-10T07:48:56.872052+0000 mgr.y (mgr.24407) 890 : cluster [DBG] pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:48:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:48:58 vm00 bash[28005]: cluster 2026-03-10T07:48:56.872052+0000 mgr.y (mgr.24407) 890 : cluster [DBG] pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:48:58.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:58 vm00 bash[20701]: cluster 2026-03-10T07:48:56.872052+0000 mgr.y (mgr.24407) 
890 : cluster [DBG] pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:48:58.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:48:58 vm00 bash[20701]: cluster 2026-03-10T07:48:56.872052+0000 mgr.y (mgr.24407) 890 : cluster [DBG] pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:48:58.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:58 vm03 bash[23382]: cluster 2026-03-10T07:48:56.872052+0000 mgr.y (mgr.24407) 890 : cluster [DBG] pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:48:58.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:48:58 vm03 bash[23382]: cluster 2026-03-10T07:48:56.872052+0000 mgr.y (mgr.24407) 890 : cluster [DBG] pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:00.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:00 vm00 bash[28005]: cluster 2026-03-10T07:48:58.872455+0000 mgr.y (mgr.24407) 891 : cluster [DBG] pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:00.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:00 vm00 bash[28005]: cluster 2026-03-10T07:48:58.872455+0000 mgr.y (mgr.24407) 891 : cluster [DBG] pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:00 vm00 bash[20701]: cluster 2026-03-10T07:48:58.872455+0000 mgr.y (mgr.24407) 891 : cluster [DBG] pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:00 vm00 bash[20701]: cluster 2026-03-10T07:48:58.872455+0000 mgr.y (mgr.24407) 891 : cluster [DBG] pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:00.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:00 vm03 bash[23382]: cluster 2026-03-10T07:48:58.872455+0000 mgr.y (mgr.24407) 891 : cluster [DBG] pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:00.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:00 vm03 bash[23382]: cluster 2026-03-10T07:48:58.872455+0000 mgr.y (mgr.24407) 891 : cluster [DBG] pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:01.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:49:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:49:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:49:02.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:02 vm00 bash[28005]: cluster 2026-03-10T07:49:00.873354+0000 mgr.y (mgr.24407) 892 : cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:02.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:02 vm00 bash[28005]: cluster 2026-03-10T07:49:00.873354+0000 mgr.y (mgr.24407) 892 : cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:02.380 
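The pgmap heartbeats above repeat an essentially identical summary every two seconds while the cluster sits in steady state (228 PGs active+clean, throughput near zero). When post-processing a capture like this, those summaries can be reduced to a small time series. A minimal sketch in Python, assuming only the line shape visible above; the regex, function name, and demo string are illustrative, not part of teuthology:

import re

# Shape of the mgr pgmap summaries seen in this capture, e.g.
# "pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used,
#  159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s"
PGMAP = re.compile(
    r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
)

def pgmap_samples(lines):
    """Yield one dict per pgmap summary line, skipping everything else."""
    for line in lines:
        m = PGMAP.search(line)
        if m:
            yield m.groupdict()

if __name__ == "__main__":
    demo = ("cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; "
            "455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; "
            "1.2 KiB/s rd, 1 op/s")
    print(next(pgmap_samples([demo])))

Grouping the samples by version number would also show how many monitor journals echo each pgmap epoch.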
2026-03-10T07:49:02.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:02 vm03 bash[23382]: cluster 2026-03-10T07:49:00.873354+0000 mgr.y (mgr.24407) 892 : cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:49:04.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:04 vm03 bash[23382]: cluster 2026-03-10T07:49:02.873726+0000 mgr.y (mgr.24407) 893 : cluster [DBG] pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:04.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:49:03 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:49:04.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:04 vm00 bash[28005]: cluster 2026-03-10T07:49:02.873726+0000 mgr.y (mgr.24407) 893 : cluster [DBG] pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:04.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:04 vm00 bash[20701]: cluster 2026-03-10T07:49:02.873726+0000 mgr.y (mgr.24407) 893 : cluster [DBG] pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:05.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:05 vm00 bash[28005]: audit 2026-03-10T07:49:03.899177+0000 mgr.y (mgr.24407) 894 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:49:05.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:05 vm00 bash[20701]: audit 2026-03-10T07:49:03.899177+0000 mgr.y (mgr.24407) 894 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:49:05.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:05 vm03 bash[23382]: audit 2026-03-10T07:49:03.899177+0000 mgr.y (mgr.24407) 894 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:49:06.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:06 vm00 bash[28005]: cluster 2026-03-10T07:49:04.874261+0000 mgr.y (mgr.24407) 895 : cluster [DBG] pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:49:06.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:06 vm00 bash[20701]: cluster 2026-03-10T07:49:04.874261+0000 mgr.y (mgr.24407) 895 : cluster [DBG] pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:49:06.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:06 vm03 bash[23382]: cluster 2026-03-10T07:49:04.874261+0000 mgr.y (mgr.24407) 895 : cluster [DBG] pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:49:08.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:08 vm03 bash[23382]: cluster 2026-03-10T07:49:06.874502+0000 mgr.y (mgr.24407) 896 : cluster [DBG] pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:08.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:08 vm00 bash[28005]: cluster 2026-03-10T07:49:06.874502+0000 mgr.y (mgr.24407) 896 : cluster [DBG] pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:08 vm00 bash[20701]: cluster 2026-03-10T07:49:06.874502+0000 mgr.y (mgr.24407) 896 : cluster [DBG] pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:10.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:10 vm03 bash[23382]: cluster 2026-03-10T07:49:08.874790+0000 mgr.y (mgr.24407) 897 : cluster [DBG] pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:10.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:10 vm03 bash[23382]: audit 2026-03-10T07:49:10.204315+0000 mon.c (mon.2) 417 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:49:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:10 vm00 bash[28005]: cluster 2026-03-10T07:49:08.874790+0000 mgr.y (mgr.24407) 897 : cluster [DBG] pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:10.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:10 vm00 bash[28005]: audit 2026-03-10T07:49:10.204315+0000 mon.c (mon.2) 417 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:49:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:10 vm00 bash[20701]: cluster 2026-03-10T07:49:08.874790+0000 mgr.y (mgr.24407) 897 : cluster [DBG] pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:10.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:10 vm00 bash[20701]: audit 2026-03-10T07:49:10.204315+0000 mon.c (mon.2) 417 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:49:11.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:49:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:49:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:49:12.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:12 vm00 bash[28005]: cluster 2026-03-10T07:49:10.875469+0000 mgr.y (mgr.24407) 898 : cluster [DBG] pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:49:12.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:12 vm00 bash[20701]: cluster 2026-03-10T07:49:10.875469+0000 mgr.y (mgr.24407) 898 : cluster [DBG] pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:49:12.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:12 vm03 bash[23382]: cluster 2026-03-10T07:49:10.875469+0000 mgr.y (mgr.24407) 898 : cluster [DBG] pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:49:14.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:49:13 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:49:14.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:14 vm00 bash[28005]: cluster 2026-03-10T07:49:12.875763+0000 mgr.y (mgr.24407) 899 : cluster [DBG] pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:14.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:14 vm00 bash[20701]: cluster 2026-03-10T07:49:12.875763+0000 mgr.y (mgr.24407) 899 : cluster [DBG] pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:14.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:14 vm03 bash[23382]: cluster 2026-03-10T07:49:12.875763+0000 mgr.y (mgr.24407) 899 : cluster [DBG] pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:15.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:15 vm00 bash[28005]: audit 2026-03-10T07:49:13.909151+0000 mgr.y (mgr.24407) 900 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:49:15.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:15 vm00 bash[20701]: audit 2026-03-10T07:49:13.909151+0000 mgr.y (mgr.24407) 900 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:49:15.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:15 vm03 bash[23382]: audit 2026-03-10T07:49:13.909151+0000 mgr.y (mgr.24407) 900 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:49:16.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:16 vm03 bash[23382]: cluster 2026-03-10T07:49:14.876409+0000 mgr.y (mgr.24407) 901 : cluster [DBG] pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
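Alongside the heartbeats, mgr.y answers every Prometheus scrape with HTTP 503 and a 1621-byte body, at the scraper's ten-second interval: the exporter endpoint is up but reporting itself unavailable. A minimal sketch for probing such an endpoint by hand, assuming the mgr prometheus module's default port 9283; the hostname is a placeholder, and on a live cluster the real address would come from "ceph mgr services":

from urllib.request import urlopen
from urllib.error import HTTPError

# Placeholder endpoint; substitute the address the cluster reports.
URL = "http://vm00.local:9283/metrics"

def probe(url: str) -> int:
    """Return the HTTP status of one scrape attempt."""
    try:
        with urlopen(url, timeout=5) as resp:
            return resp.status
    except HTTPError as err:
        # urlopen raises on 4xx/5xx; a 503 here matches the journal above.
        return err.code

if __name__ == "__main__":
    print(probe(URL))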
2026-03-10T07:49:16.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:16 vm00 bash[28005]: cluster 2026-03-10T07:49:14.876409+0000 mgr.y (mgr.24407) 901 : cluster [DBG] pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:49:16.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:16 vm00 bash[20701]: cluster 2026-03-10T07:49:14.876409+0000 mgr.y (mgr.24407) 901 : cluster [DBG] pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:49:17.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:17 vm00 bash[28005]: cluster 2026-03-10T07:49:16.876824+0000 mgr.y (mgr.24407) 902 : cluster [DBG] pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:17.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:17 vm00 bash[20701]: cluster 2026-03-10T07:49:16.876824+0000 mgr.y (mgr.24407) 902 : cluster [DBG] pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:18.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:17 vm03 bash[23382]: cluster 2026-03-10T07:49:16.876824+0000 mgr.y (mgr.24407) 902 : cluster [DBG] pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:20.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:19 vm03 bash[23382]: cluster 2026-03-10T07:49:18.877292+0000 mgr.y (mgr.24407) 903 : cluster [DBG] pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:20.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:19 vm00 bash[28005]: cluster 2026-03-10T07:49:18.877292+0000 mgr.y (mgr.24407) 903 : cluster [DBG] pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:20.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:19 vm00 bash[20701]: cluster 2026-03-10T07:49:18.877292+0000 mgr.y (mgr.24407) 903 : cluster [DBG] pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:21.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:49:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:49:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:49:22.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:21 vm03 bash[23382]: cluster 2026-03-10T07:49:20.878062+0000 mgr.y (mgr.24407) 904 : cluster [DBG] pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:49:22.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:21 vm00 bash[28005]: cluster 2026-03-10T07:49:20.878062+0000 mgr.y (mgr.24407) 904 : cluster [DBG] pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:49:22.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:21 vm00 bash[20701]: cluster 2026-03-10T07:49:20.878062+0000 mgr.y (mgr.24407) 904 : cluster [DBG] pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:49:24.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:49:23 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:49:24.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:24 vm03 bash[23382]: cluster 2026-03-10T07:49:22.878443+0000 mgr.y (mgr.24407) 905 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:24.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:24 vm03 bash[23382]: audit 2026-03-10T07:49:23.279769+0000 mon.c (mon.2) 418 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:49:24.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:24 vm03 bash[23382]: audit 2026-03-10T07:49:23.614203+0000 mon.c (mon.2) 419 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:49:24.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:24 vm03 bash[23382]: audit 2026-03-10T07:49:23.615359+0000 mon.c (mon.2) 420 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:49:24.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:24 vm03 bash[23382]: audit 2026-03-10T07:49:23.637908+0000 mon.a (mon.0) 3548 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:49:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:24 vm00 bash[28005]: cluster 2026-03-10T07:49:22.878443+0000 mgr.y (mgr.24407) 905 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:24 vm00 bash[28005]: audit 2026-03-10T07:49:23.279769+0000 mon.c (mon.2) 418 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:49:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:24 vm00 bash[28005]: audit 2026-03-10T07:49:23.614203+0000 mon.c (mon.2) 419 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:49:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:24 vm00 bash[28005]: audit 2026-03-10T07:49:23.615359+0000 mon.c (mon.2) 420 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:49:24.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:24 vm00 bash[28005]: audit 2026-03-10T07:49:23.637908+0000 mon.a (mon.0) 3548 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:49:24.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:23 vm00 bash[20701]: cluster 2026-03-10T07:49:22.878443+0000 mgr.y (mgr.24407) 905 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:24.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:24 vm00 bash[20701]: cluster 2026-03-10T07:49:22.878443+0000 mgr.y (mgr.24407) 905 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:24.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:24 vm00 bash[20701]: audit 2026-03-10T07:49:23.279769+0000 mon.c (mon.2) 418 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:49:24.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:24 vm00 bash[20701]: audit 2026-03-10T07:49:23.614203+0000 mon.c (mon.2) 419 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:49:24.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:24 vm00 bash[20701]: audit 2026-03-10T07:49:23.615359+0000 mon.c (mon.2) 420 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:49:24.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:24 vm00 bash[20701]: audit 2026-03-10T07:49:23.637908+0000 mon.a (mon.0) 3548 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:49:25.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:25 vm03 bash[23382]: audit 2026-03-10T07:49:23.916498+0000 mgr.y (mgr.24407) 906 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:49:25.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:25 vm00 bash[28005]: audit 2026-03-10T07:49:23.916498+0000 mgr.y (mgr.24407) 906 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:49:25.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:25 vm00 bash[20701]: audit 2026-03-10T07:49:23.916498+0000 mgr.y (mgr.24407) 906 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:49:26.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:26 vm03 bash[23382]: cluster 2026-03-10T07:49:24.879214+0000 mgr.y (mgr.24407) 907 : cluster [DBG] pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:49:26.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:26 vm03 bash[23382]: audit 2026-03-10T07:49:25.210393+0000 mon.c (mon.2) 421 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:49:26.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:26 vm00 bash[28005]: cluster 2026-03-10T07:49:24.879214+0000 mgr.y (mgr.24407) 907 : cluster [DBG] pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:49:26.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:26 vm00 bash[28005]: audit 2026-03-10T07:49:25.210393+0000 mon.c (mon.2) 421 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:49:26.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:26 vm00 bash[20701]: cluster 2026-03-10T07:49:24.879214+0000 mgr.y (mgr.24407) 907 : cluster [DBG] pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:49:26.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:26 vm00 bash[20701]: audit 2026-03-10T07:49:25.210393+0000 mon.c (mon.2) 421 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:49:28.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:28 vm00 bash[28005]: cluster 2026-03-10T07:49:26.879554+0000 mgr.y (mgr.24407) 908 : cluster [DBG] pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:28.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:28 vm00 bash[20701]: cluster 2026-03-10T07:49:26.879554+0000 mgr.y (mgr.24407) 908 : cluster [DBG] pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:28.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:28 vm03 bash[23382]: cluster 2026-03-10T07:49:26.879554+0000 mgr.y (mgr.24407) 908 : cluster [DBG] pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:30.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:30 vm03 bash[23382]: cluster 2026-03-10T07:49:28.879891+0000 mgr.y (mgr.24407) 909 : cluster [DBG] pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:30.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:30 vm00 bash[28005]: cluster 2026-03-10T07:49:28.879891+0000 mgr.y (mgr.24407) 909 : cluster [DBG] pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:30 vm00 bash[20701]: cluster 2026-03-10T07:49:28.879891+0000 mgr.y (mgr.24407) 909 : cluster [DBG] pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:31.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:49:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:49:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:49:32.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:32 vm03 bash[23382]: cluster 2026-03-10T07:49:30.880648+0000 mgr.y (mgr.24407) 910 : cluster [DBG] pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:49:32.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:32 vm00 bash[28005]: cluster 2026-03-10T07:49:30.880648+0000 mgr.y (mgr.24407) 910 : cluster [DBG] pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
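A few doubled journal entries remain visible with slightly different capture or syslog timestamps (the mon.a pgmap v1374 pair above, for instance), a side effect of how the journalctl output is collected here. A minimal sketch for collapsing exact adjacent repeats when post-processing a capture like this; the function name is illustrative:

def drop_consecutive_repeats(lines):
    """Yield lines, skipping any line identical to the one before it."""
    prev = object()  # sentinel that never compares equal to a string
    for line in lines:
        if line != prev:
            yield line
        prev = line

if __name__ == "__main__":
    sample = ["a", "a", "b", "b", "a"]
    print(list(drop_consecutive_repeats(sample)))  # ['a', 'b', 'a']

Only adjacent verbatim repeats would be dropped; the same cluster-log message relayed by mon.a, mon.b, and mon.c stays, since those come from distinct journals.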
2026-03-10T07:49:32.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:32 vm00 bash[20701]: cluster 2026-03-10T07:49:30.880648+0000 mgr.y (mgr.24407) 910 : cluster [DBG] pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:49:34.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:49:33 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:49:34.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:34 vm03 bash[23382]: cluster 2026-03-10T07:49:32.880967+0000 mgr.y (mgr.24407) 911 : cluster [DBG] pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:34.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:34 vm00 bash[28005]: cluster 2026-03-10T07:49:32.880967+0000 mgr.y (mgr.24407) 911 : cluster [DBG] pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:34.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:34 vm00 bash[20701]: cluster 2026-03-10T07:49:32.880967+0000 mgr.y (mgr.24407) 911 : cluster [DBG] pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:35.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:35 vm03 bash[23382]: audit 2026-03-10T07:49:33.925228+0000 mgr.y (mgr.24407) 912 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:35 vm00 bash[28005]: audit 2026-03-10T07:49:33.925228+0000 mgr.y (mgr.24407) 912 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:49:35.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:35 vm00 bash[20701]: audit 2026-03-10T07:49:33.925228+0000 mgr.y (mgr.24407) 912 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:49:36.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:36 vm03 bash[23382]: cluster 2026-03-10T07:49:34.881838+0000 mgr.y (mgr.24407) 913 : cluster [DBG] pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:49:36.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:36 vm00 bash[28005]: cluster 2026-03-10T07:49:34.881838+0000 mgr.y (mgr.24407) 913 : cluster [DBG] pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:49:36.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:36 vm00 bash[28005]: cluster 2026-03-10T07:49:34.881838+0000 mgr.y (mgr.24407) 913 : cluster [DBG] pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:49:36.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:36 vm00 bash[20701]: cluster 2026-03-10T07:49:34.881838+0000 mgr.y (mgr.24407) 913 : cluster [DBG] pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:49:38.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:38 vm03 bash[23382]: cluster 2026-03-10T07:49:36.882223+0000 mgr.y (mgr.24407) 914 : cluster [DBG] pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:38.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:38 vm00 bash[28005]: cluster 2026-03-10T07:49:36.882223+0000 mgr.y (mgr.24407) 914 : cluster [DBG] pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:38.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:38 vm00 bash[20701]: cluster 2026-03-10T07:49:36.882223+0000 mgr.y (mgr.24407) 914 : cluster [DBG] pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:40.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:40 vm03 bash[23382]: cluster 2026-03-10T07:49:38.882512+0000 mgr.y (mgr.24407) 915 : cluster [DBG] pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:40.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:40 vm00 bash[28005]: cluster 2026-03-10T07:49:38.882512+0000 mgr.y (mgr.24407) 915 : cluster [DBG] pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:40.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:40 vm00 bash[20701]: cluster 2026-03-10T07:49:38.882512+0000 mgr.y (mgr.24407) 915 : cluster [DBG] pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:49:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:41 vm00 bash[28005]: audit 2026-03-10T07:49:40.216482+0000 mon.c (mon.2) 422 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:49:41.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:41 vm00 bash[28005]: audit 2026-03-10T07:49:40.216482+0000 mon.c (mon.2) 422 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:49:41.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:49:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:49:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:49:41.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:41 vm00 bash[20701]: audit 2026-03-10T07:49:40.216482+0000 mon.c (mon.2) 422 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:49:41.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:41 vm00 bash[20701]: audit 2026-03-10T07:49:40.216482+0000 mon.c (mon.2) 422 : audit [DBG] from='mgr.24407
192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:49:41.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:41 vm03 bash[23382]: audit 2026-03-10T07:49:40.216482+0000 mon.c (mon.2) 422 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:49:41.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:41 vm03 bash[23382]: audit 2026-03-10T07:49:40.216482+0000 mon.c (mon.2) 422 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:49:42.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:42 vm03 bash[23382]: cluster 2026-03-10T07:49:40.883100+0000 mgr.y (mgr.24407) 916 : cluster [DBG] pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:42.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:42 vm03 bash[23382]: cluster 2026-03-10T07:49:40.883100+0000 mgr.y (mgr.24407) 916 : cluster [DBG] pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:42.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:42 vm00 bash[28005]: cluster 2026-03-10T07:49:40.883100+0000 mgr.y (mgr.24407) 916 : cluster [DBG] pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:42.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:42 vm00 bash[28005]: cluster 2026-03-10T07:49:40.883100+0000 mgr.y (mgr.24407) 916 : cluster [DBG] pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:42.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:42 vm00 bash[20701]: cluster 2026-03-10T07:49:40.883100+0000 mgr.y (mgr.24407) 916 : cluster [DBG] pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:42.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:42 vm00 bash[20701]: cluster 2026-03-10T07:49:40.883100+0000 mgr.y (mgr.24407) 916 : cluster [DBG] pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:44.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:49:43 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:49:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:44 vm00 bash[28005]: cluster 2026-03-10T07:49:42.883430+0000 mgr.y (mgr.24407) 917 : cluster [DBG] pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:44.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:44 vm00 bash[28005]: cluster 2026-03-10T07:49:42.883430+0000 mgr.y (mgr.24407) 917 : cluster [DBG] pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:44 vm00 bash[20701]: cluster 2026-03-10T07:49:42.883430+0000 mgr.y (mgr.24407) 917 : cluster [DBG] pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:44.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:44 vm00 bash[20701]: cluster 
2026-03-10T07:49:42.883430+0000 mgr.y (mgr.24407) 917 : cluster [DBG] pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:44.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:44 vm03 bash[23382]: cluster 2026-03-10T07:49:42.883430+0000 mgr.y (mgr.24407) 917 : cluster [DBG] pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:44.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:44 vm03 bash[23382]: cluster 2026-03-10T07:49:42.883430+0000 mgr.y (mgr.24407) 917 : cluster [DBG] pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:45.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:45 vm00 bash[28005]: audit 2026-03-10T07:49:43.934047+0000 mgr.y (mgr.24407) 918 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:49:45.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:45 vm00 bash[28005]: audit 2026-03-10T07:49:43.934047+0000 mgr.y (mgr.24407) 918 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:49:45.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:45 vm00 bash[20701]: audit 2026-03-10T07:49:43.934047+0000 mgr.y (mgr.24407) 918 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:49:45.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:45 vm00 bash[20701]: audit 2026-03-10T07:49:43.934047+0000 mgr.y (mgr.24407) 918 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:49:45.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:45 vm03 bash[23382]: audit 2026-03-10T07:49:43.934047+0000 mgr.y (mgr.24407) 918 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:49:45.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:45 vm03 bash[23382]: audit 2026-03-10T07:49:43.934047+0000 mgr.y (mgr.24407) 918 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:49:46.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:46 vm00 bash[28005]: cluster 2026-03-10T07:49:44.884118+0000 mgr.y (mgr.24407) 919 : cluster [DBG] pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:46.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:46 vm00 bash[28005]: cluster 2026-03-10T07:49:44.884118+0000 mgr.y (mgr.24407) 919 : cluster [DBG] pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:46.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:46 vm00 bash[20701]: cluster 2026-03-10T07:49:44.884118+0000 mgr.y (mgr.24407) 919 : cluster [DBG] pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:46.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:46 vm00 bash[20701]: cluster 2026-03-10T07:49:44.884118+0000 mgr.y (mgr.24407) 919 : cluster [DBG] pgmap v1385: 228 pgs: 228 active+clean; 
455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:46.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:46 vm03 bash[23382]: cluster 2026-03-10T07:49:44.884118+0000 mgr.y (mgr.24407) 919 : cluster [DBG] pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:46.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:46 vm03 bash[23382]: cluster 2026-03-10T07:49:44.884118+0000 mgr.y (mgr.24407) 919 : cluster [DBG] pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:48.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:48 vm00 bash[28005]: cluster 2026-03-10T07:49:46.884458+0000 mgr.y (mgr.24407) 920 : cluster [DBG] pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:48.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:48 vm00 bash[28005]: cluster 2026-03-10T07:49:46.884458+0000 mgr.y (mgr.24407) 920 : cluster [DBG] pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:48 vm00 bash[20701]: cluster 2026-03-10T07:49:46.884458+0000 mgr.y (mgr.24407) 920 : cluster [DBG] pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:48 vm00 bash[20701]: cluster 2026-03-10T07:49:46.884458+0000 mgr.y (mgr.24407) 920 : cluster [DBG] pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:48.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:48 vm03 bash[23382]: cluster 2026-03-10T07:49:46.884458+0000 mgr.y (mgr.24407) 920 : cluster [DBG] pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:48.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:48 vm03 bash[23382]: cluster 2026-03-10T07:49:46.884458+0000 mgr.y (mgr.24407) 920 : cluster [DBG] pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:50.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:50 vm03 bash[23382]: cluster 2026-03-10T07:49:48.884766+0000 mgr.y (mgr.24407) 921 : cluster [DBG] pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:50.765 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:50 vm03 bash[23382]: cluster 2026-03-10T07:49:48.884766+0000 mgr.y (mgr.24407) 921 : cluster [DBG] pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:50.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:50 vm00 bash[28005]: cluster 2026-03-10T07:49:48.884766+0000 mgr.y (mgr.24407) 921 : cluster [DBG] pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:50.894 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:50 vm00 bash[28005]: cluster 2026-03-10T07:49:48.884766+0000 mgr.y (mgr.24407) 921 : cluster [DBG] pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:50.894 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:50 vm00 bash[20701]: cluster 2026-03-10T07:49:48.884766+0000 mgr.y (mgr.24407) 921 : cluster [DBG] pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:50.894 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:50 vm00 bash[20701]: cluster 2026-03-10T07:49:48.884766+0000 mgr.y (mgr.24407) 921 : cluster [DBG] pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:51.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:49:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:49:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:49:52.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:52 vm00 bash[28005]: cluster 2026-03-10T07:49:50.885572+0000 mgr.y (mgr.24407) 922 : cluster [DBG] pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:52.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:52 vm00 bash[28005]: cluster 2026-03-10T07:49:50.885572+0000 mgr.y (mgr.24407) 922 : cluster [DBG] pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:52.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:52 vm00 bash[20701]: cluster 2026-03-10T07:49:50.885572+0000 mgr.y (mgr.24407) 922 : cluster [DBG] pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:52.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:52 vm00 bash[20701]: cluster 2026-03-10T07:49:50.885572+0000 mgr.y (mgr.24407) 922 : cluster [DBG] pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:53.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:52 vm03 bash[23382]: cluster 2026-03-10T07:49:50.885572+0000 mgr.y (mgr.24407) 922 : cluster [DBG] pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:53.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:52 vm03 bash[23382]: cluster 2026-03-10T07:49:50.885572+0000 mgr.y (mgr.24407) 922 : cluster [DBG] pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:53.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:53 vm03 bash[23382]: cluster 2026-03-10T07:49:52.885874+0000 mgr.y (mgr.24407) 923 : cluster [DBG] pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:53.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:53 vm03 bash[23382]: cluster 2026-03-10T07:49:52.885874+0000 mgr.y (mgr.24407) 923 : cluster [DBG] pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:53 vm00 bash[28005]: cluster 2026-03-10T07:49:52.885874+0000 mgr.y (mgr.24407) 923 : cluster [DBG] pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:53 vm00 bash[28005]: cluster 2026-03-10T07:49:52.885874+0000 mgr.y (mgr.24407) 923 : cluster [DBG] pgmap v1389: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:53.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:53 vm00 bash[20701]: cluster 2026-03-10T07:49:52.885874+0000 mgr.y (mgr.24407) 923 : cluster [DBG] pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:53.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:53 vm00 bash[20701]: cluster 2026-03-10T07:49:52.885874+0000 mgr.y (mgr.24407) 923 : cluster [DBG] pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:54.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:49:53 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:49:54.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:54 vm00 bash[28005]: audit 2026-03-10T07:49:53.936949+0000 mgr.y (mgr.24407) 924 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:49:54.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:54 vm00 bash[28005]: audit 2026-03-10T07:49:53.936949+0000 mgr.y (mgr.24407) 924 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:49:54.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:54 vm00 bash[20701]: audit 2026-03-10T07:49:53.936949+0000 mgr.y (mgr.24407) 924 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:49:54.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:54 vm00 bash[20701]: audit 2026-03-10T07:49:53.936949+0000 mgr.y (mgr.24407) 924 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:49:55.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:54 vm03 bash[23382]: audit 2026-03-10T07:49:53.936949+0000 mgr.y (mgr.24407) 924 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:49:55.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:54 vm03 bash[23382]: audit 2026-03-10T07:49:53.936949+0000 mgr.y (mgr.24407) 924 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:49:56.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:55 vm03 bash[23382]: cluster 2026-03-10T07:49:54.887004+0000 mgr.y (mgr.24407) 925 : cluster [DBG] pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:56.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:55 vm03 bash[23382]: cluster 2026-03-10T07:49:54.887004+0000 mgr.y (mgr.24407) 925 : cluster [DBG] pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:56.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:55 vm03 bash[23382]: audit 2026-03-10T07:49:55.223149+0000 mon.c (mon.2) 423 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:49:56.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:55 vm03 bash[23382]: audit 2026-03-10T07:49:55.223149+0000 mon.c (mon.2) 423 : 
audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:49:56.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:55 vm00 bash[28005]: cluster 2026-03-10T07:49:54.887004+0000 mgr.y (mgr.24407) 925 : cluster [DBG] pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:56.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:55 vm00 bash[28005]: cluster 2026-03-10T07:49:54.887004+0000 mgr.y (mgr.24407) 925 : cluster [DBG] pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:56.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:55 vm00 bash[28005]: audit 2026-03-10T07:49:55.223149+0000 mon.c (mon.2) 423 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:49:56.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:55 vm00 bash[28005]: audit 2026-03-10T07:49:55.223149+0000 mon.c (mon.2) 423 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:49:56.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:55 vm00 bash[20701]: cluster 2026-03-10T07:49:54.887004+0000 mgr.y (mgr.24407) 925 : cluster [DBG] pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:56.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:55 vm00 bash[20701]: cluster 2026-03-10T07:49:54.887004+0000 mgr.y (mgr.24407) 925 : cluster [DBG] pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:49:56.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:55 vm00 bash[20701]: audit 2026-03-10T07:49:55.223149+0000 mon.c (mon.2) 423 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:49:56.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:55 vm00 bash[20701]: audit 2026-03-10T07:49:55.223149+0000 mon.c (mon.2) 423 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:49:58.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:57 vm03 bash[23382]: cluster 2026-03-10T07:49:56.887387+0000 mgr.y (mgr.24407) 926 : cluster [DBG] pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:58.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:49:57 vm03 bash[23382]: cluster 2026-03-10T07:49:56.887387+0000 mgr.y (mgr.24407) 926 : cluster [DBG] pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:58.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:57 vm00 bash[28005]: cluster 2026-03-10T07:49:56.887387+0000 mgr.y (mgr.24407) 926 : cluster [DBG] pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:49:57 vm00 bash[28005]: cluster 2026-03-10T07:49:56.887387+0000 mgr.y (mgr.24407) 926 : cluster [DBG] pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1.1 
GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:58.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:57 vm00 bash[20701]: cluster 2026-03-10T07:49:56.887387+0000 mgr.y (mgr.24407) 926 : cluster [DBG] pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:49:58.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:49:57 vm00 bash[20701]: cluster 2026-03-10T07:49:56.887387+0000 mgr.y (mgr.24407) 926 : cluster [DBG] pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:00.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:00 vm03 bash[23382]: cluster 2026-03-10T07:49:58.887695+0000 mgr.y (mgr.24407) 927 : cluster [DBG] pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:00.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:00 vm03 bash[23382]: cluster 2026-03-10T07:49:58.887695+0000 mgr.y (mgr.24407) 927 : cluster [DBG] pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:00.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:00 vm03 bash[23382]: cluster 2026-03-10T07:50:00.000138+0000 mon.a (mon.0) 3549 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-10T07:50:00.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:00 vm03 bash[23382]: cluster 2026-03-10T07:50:00.000138+0000 mon.a (mon.0) 3549 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-10T07:50:00.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:00 vm03 bash[23382]: cluster 2026-03-10T07:50:00.000168+0000 mon.a (mon.0) 3550 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-10T07:50:00.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:00 vm03 bash[23382]: cluster 2026-03-10T07:50:00.000168+0000 mon.a (mon.0) 3550 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-10T07:50:00.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:00 vm03 bash[23382]: cluster 2026-03-10T07:50:00.000175+0000 mon.a (mon.0) 3551 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T07:50:00.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:00 vm03 bash[23382]: cluster 2026-03-10T07:50:00.000175+0000 mon.a (mon.0) 3551 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T07:50:00.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:00 vm03 bash[23382]: cluster 2026-03-10T07:50:00.000181+0000 mon.a (mon.0) 3552 : cluster [WRN] application not enabled on pool 'WatchNotifyvm00-60651-1' 2026-03-10T07:50:00.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:00 vm03 bash[23382]: cluster 2026-03-10T07:50:00.000181+0000 mon.a (mon.0) 3552 : cluster [WRN] application not enabled on pool 'WatchNotifyvm00-60651-1' 2026-03-10T07:50:00.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:00 vm03 bash[23382]: cluster 2026-03-10T07:50:00.000188+0000 mon.a (mon.0) 3553 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60682-1' 2026-03-10T07:50:00.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:00 vm03 bash[23382]: cluster 2026-03-10T07:50:00.000188+0000 mon.a (mon.0) 3553 : cluster [WRN] application not enabled on pool 
'AssertExistsvm00-60682-1' 2026-03-10T07:50:00.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:00 vm03 bash[23382]: cluster 2026-03-10T07:50:00.000194+0000 mon.a (mon.0) 3554 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-10T07:50:00.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:00 vm03 bash[23382]: cluster 2026-03-10T07:50:00.000194+0000 mon.a (mon.0) 3554 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:00 vm00 bash[28005]: cluster 2026-03-10T07:49:58.887695+0000 mgr.y (mgr.24407) 927 : cluster [DBG] pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:00 vm00 bash[28005]: cluster 2026-03-10T07:49:58.887695+0000 mgr.y (mgr.24407) 927 : cluster [DBG] pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:00 vm00 bash[28005]: cluster 2026-03-10T07:50:00.000138+0000 mon.a (mon.0) 3549 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:00 vm00 bash[28005]: cluster 2026-03-10T07:50:00.000138+0000 mon.a (mon.0) 3549 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:00 vm00 bash[28005]: cluster 2026-03-10T07:50:00.000168+0000 mon.a (mon.0) 3550 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:00 vm00 bash[28005]: cluster 2026-03-10T07:50:00.000168+0000 mon.a (mon.0) 3550 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:00 vm00 bash[28005]: cluster 2026-03-10T07:50:00.000175+0000 mon.a (mon.0) 3551 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:00 vm00 bash[28005]: cluster 2026-03-10T07:50:00.000175+0000 mon.a (mon.0) 3551 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:00 vm00 bash[28005]: cluster 2026-03-10T07:50:00.000181+0000 mon.a (mon.0) 3552 : cluster [WRN] application not enabled on pool 'WatchNotifyvm00-60651-1' 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:00 vm00 bash[28005]: cluster 2026-03-10T07:50:00.000181+0000 mon.a (mon.0) 3552 : cluster [WRN] application not enabled on pool 'WatchNotifyvm00-60651-1' 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:00 vm00 bash[28005]: cluster 2026-03-10T07:50:00.000188+0000 mon.a (mon.0) 3553 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60682-1' 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:00 vm00 bash[28005]: cluster 2026-03-10T07:50:00.000188+0000 mon.a (mon.0) 3553 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60682-1' 2026-03-10T07:50:00.630
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:00 vm00 bash[28005]: cluster 2026-03-10T07:50:00.000194+0000 mon.a (mon.0) 3554 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:00 vm00 bash[28005]: cluster 2026-03-10T07:50:00.000194+0000 mon.a (mon.0) 3554 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:00 vm00 bash[20701]: cluster 2026-03-10T07:49:58.887695+0000 mgr.y (mgr.24407) 927 : cluster [DBG] pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:00 vm00 bash[20701]: cluster 2026-03-10T07:49:58.887695+0000 mgr.y (mgr.24407) 927 : cluster [DBG] pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:00 vm00 bash[20701]: cluster 2026-03-10T07:50:00.000138+0000 mon.a (mon.0) 3549 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:00 vm00 bash[20701]: cluster 2026-03-10T07:50:00.000138+0000 mon.a (mon.0) 3549 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:00 vm00 bash[20701]: cluster 2026-03-10T07:50:00.000168+0000 mon.a (mon.0) 3550 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:00 vm00 bash[20701]: cluster 2026-03-10T07:50:00.000168+0000 mon.a (mon.0) 3550 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:00 vm00 bash[20701]: cluster 2026-03-10T07:50:00.000175+0000 mon.a (mon.0) 3551 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:00 vm00 bash[20701]: cluster 2026-03-10T07:50:00.000175+0000 mon.a (mon.0) 3551 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:00 vm00 bash[20701]: cluster 2026-03-10T07:50:00.000181+0000 mon.a (mon.0) 3552 : cluster [WRN] application not enabled on pool 'WatchNotifyvm00-60651-1' 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:00 vm00 bash[20701]: cluster 2026-03-10T07:50:00.000181+0000 mon.a (mon.0) 3552 : cluster [WRN] application not enabled on pool 'WatchNotifyvm00-60651-1' 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:00 vm00 bash[20701]: cluster 2026-03-10T07:50:00.000188+0000 mon.a (mon.0) 3553 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60682-1' 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:00 vm00 bash[20701]: cluster 2026-03-10T07:50:00.000188+0000 mon.a (mon.0) 3553 : cluster [WRN] application not enabled on pool 'AssertExistsvm00-60682-1' 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10
07:50:00 vm00 bash[20701]: cluster 2026-03-10T07:50:00.000194+0000 mon.a (mon.0) 3554 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-10T07:50:00.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:00 vm00 bash[20701]: cluster 2026-03-10T07:50:00.000194+0000 mon.a (mon.0) 3554 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-10T07:50:01.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:50:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:50:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:50:02.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:02 vm03 bash[23382]: cluster 2026-03-10T07:50:00.888350+0000 mgr.y (mgr.24407) 928 : cluster [DBG] pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:50:02.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:02 vm03 bash[23382]: cluster 2026-03-10T07:50:00.888350+0000 mgr.y (mgr.24407) 928 : cluster [DBG] pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:50:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:02 vm00 bash[28005]: cluster 2026-03-10T07:50:00.888350+0000 mgr.y (mgr.24407) 928 : cluster [DBG] pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:50:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:02 vm00 bash[28005]: cluster 2026-03-10T07:50:00.888350+0000 mgr.y (mgr.24407) 928 : cluster [DBG] pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:50:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:02 vm00 bash[20701]: cluster 2026-03-10T07:50:00.888350+0000 mgr.y (mgr.24407) 928 : cluster [DBG] pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:50:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:02 vm00 bash[20701]: cluster 2026-03-10T07:50:00.888350+0000 mgr.y (mgr.24407) 928 : cluster [DBG] pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:50:04.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:50:03 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:50:04.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:04 vm03 bash[23382]: cluster 2026-03-10T07:50:02.888698+0000 mgr.y (mgr.24407) 929 : cluster [DBG] pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:04.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:04 vm03 bash[23382]: cluster 2026-03-10T07:50:02.888698+0000 mgr.y (mgr.24407) 929 : cluster [DBG] pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:04 vm00 bash[28005]: cluster 2026-03-10T07:50:02.888698+0000 mgr.y (mgr.24407) 929 : cluster [DBG] pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:04.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:04 vm00 bash[28005]: cluster
2026-03-10T07:50:02.888698+0000 mgr.y (mgr.24407) 929 : cluster [DBG] pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:04 vm00 bash[20701]: cluster 2026-03-10T07:50:02.888698+0000 mgr.y (mgr.24407) 929 : cluster [DBG] pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:04 vm00 bash[20701]: cluster 2026-03-10T07:50:02.888698+0000 mgr.y (mgr.24407) 929 : cluster [DBG] pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:05.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:05 vm03 bash[23382]: audit 2026-03-10T07:50:03.947794+0000 mgr.y (mgr.24407) 930 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:50:05.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:05 vm03 bash[23382]: audit 2026-03-10T07:50:03.947794+0000 mgr.y (mgr.24407) 930 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:50:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:05 vm00 bash[28005]: audit 2026-03-10T07:50:03.947794+0000 mgr.y (mgr.24407) 930 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:50:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:05 vm00 bash[28005]: audit 2026-03-10T07:50:03.947794+0000 mgr.y (mgr.24407) 930 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:50:05.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:05 vm00 bash[20701]: audit 2026-03-10T07:50:03.947794+0000 mgr.y (mgr.24407) 930 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:50:05.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:05 vm00 bash[20701]: audit 2026-03-10T07:50:03.947794+0000 mgr.y (mgr.24407) 930 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:50:06.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:06 vm03 bash[23382]: cluster 2026-03-10T07:50:04.889270+0000 mgr.y (mgr.24407) 931 : cluster [DBG] pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:50:06.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:06 vm03 bash[23382]: cluster 2026-03-10T07:50:04.889270+0000 mgr.y (mgr.24407) 931 : cluster [DBG] pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:50:06.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:06 vm00 bash[28005]: cluster 2026-03-10T07:50:04.889270+0000 mgr.y (mgr.24407) 931 : cluster [DBG] pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:50:06.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:06 vm00 bash[28005]: cluster 2026-03-10T07:50:04.889270+0000 mgr.y (mgr.24407) 931 : cluster [DBG] pgmap v1395: 228 pgs: 228 active+clean; 
455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:50:06.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:06 vm00 bash[20701]: cluster 2026-03-10T07:50:04.889270+0000 mgr.y (mgr.24407) 931 : cluster [DBG] pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:50:06.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:06 vm00 bash[20701]: cluster 2026-03-10T07:50:04.889270+0000 mgr.y (mgr.24407) 931 : cluster [DBG] pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:50:08.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:08 vm03 bash[23382]: cluster 2026-03-10T07:50:06.889613+0000 mgr.y (mgr.24407) 932 : cluster [DBG] pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:08.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:08 vm03 bash[23382]: cluster 2026-03-10T07:50:06.889613+0000 mgr.y (mgr.24407) 932 : cluster [DBG] pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:08.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:08 vm00 bash[28005]: cluster 2026-03-10T07:50:06.889613+0000 mgr.y (mgr.24407) 932 : cluster [DBG] pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:08.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:08 vm00 bash[28005]: cluster 2026-03-10T07:50:06.889613+0000 mgr.y (mgr.24407) 932 : cluster [DBG] pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:08.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:08 vm00 bash[20701]: cluster 2026-03-10T07:50:06.889613+0000 mgr.y (mgr.24407) 932 : cluster [DBG] pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:08.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:08 vm00 bash[20701]: cluster 2026-03-10T07:50:06.889613+0000 mgr.y (mgr.24407) 932 : cluster [DBG] pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:09.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:09 vm00 bash[28005]: cluster 2026-03-10T07:50:08.890023+0000 mgr.y (mgr.24407) 933 : cluster [DBG] pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:09.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:09 vm00 bash[28005]: cluster 2026-03-10T07:50:08.890023+0000 mgr.y (mgr.24407) 933 : cluster [DBG] pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:09.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:09 vm00 bash[20701]: cluster 2026-03-10T07:50:08.890023+0000 mgr.y (mgr.24407) 933 : cluster [DBG] pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:09.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:09 vm00 bash[20701]: cluster 2026-03-10T07:50:08.890023+0000 mgr.y (mgr.24407) 933 : cluster [DBG] pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:10.011 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:09 vm03 bash[23382]: cluster 2026-03-10T07:50:08.890023+0000 mgr.y (mgr.24407) 933 : cluster [DBG] pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:10.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:09 vm03 bash[23382]: cluster 2026-03-10T07:50:08.890023+0000 mgr.y (mgr.24407) 933 : cluster [DBG] pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:10.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:10 vm00 bash[28005]: audit 2026-03-10T07:50:10.229986+0000 mon.c (mon.2) 424 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:50:10.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:10 vm00 bash[28005]: audit 2026-03-10T07:50:10.229986+0000 mon.c (mon.2) 424 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:50:10.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:10 vm00 bash[20701]: audit 2026-03-10T07:50:10.229986+0000 mon.c (mon.2) 424 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:50:10.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:10 vm00 bash[20701]: audit 2026-03-10T07:50:10.229986+0000 mon.c (mon.2) 424 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:50:11.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:10 vm03 bash[23382]: audit 2026-03-10T07:50:10.229986+0000 mon.c (mon.2) 424 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:50:11.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:10 vm03 bash[23382]: audit 2026-03-10T07:50:10.229986+0000 mon.c (mon.2) 424 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:50:11.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:50:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:50:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:50:11.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:11 vm00 bash[28005]: cluster 2026-03-10T07:50:10.891055+0000 mgr.y (mgr.24407) 934 : cluster [DBG] pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:50:11.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:11 vm00 bash[28005]: cluster 2026-03-10T07:50:10.891055+0000 mgr.y (mgr.24407) 934 : cluster [DBG] pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:50:11.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:11 vm00 bash[20701]: cluster 2026-03-10T07:50:10.891055+0000 mgr.y (mgr.24407) 934 : cluster [DBG] pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:50:11.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:11 vm00 bash[20701]: cluster 2026-03-10T07:50:10.891055+0000 mgr.y (mgr.24407) 934 : cluster [DBG] pgmap 
v1398: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:50:12.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:11 vm03 bash[23382]: cluster 2026-03-10T07:50:10.891055+0000 mgr.y (mgr.24407) 934 : cluster [DBG] pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:50:12.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:11 vm03 bash[23382]: cluster 2026-03-10T07:50:10.891055+0000 mgr.y (mgr.24407) 934 : cluster [DBG] pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:50:14.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:50:13 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:50:14.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:13 vm03 bash[23382]: cluster 2026-03-10T07:50:12.891462+0000 mgr.y (mgr.24407) 935 : cluster [DBG] pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:14.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:13 vm03 bash[23382]: cluster 2026-03-10T07:50:12.891462+0000 mgr.y (mgr.24407) 935 : cluster [DBG] pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:13 vm00 bash[28005]: cluster 2026-03-10T07:50:12.891462+0000 mgr.y (mgr.24407) 935 : cluster [DBG] pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:13 vm00 bash[28005]: cluster 2026-03-10T07:50:12.891462+0000 mgr.y (mgr.24407) 935 : cluster [DBG] pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:14.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:13 vm00 bash[20701]: cluster 2026-03-10T07:50:12.891462+0000 mgr.y (mgr.24407) 935 : cluster [DBG] pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:14.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:13 vm00 bash[20701]: cluster 2026-03-10T07:50:12.891462+0000 mgr.y (mgr.24407) 935 : cluster [DBG] pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:50:15.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:14 vm03 bash[23382]: audit 2026-03-10T07:50:13.950625+0000 mgr.y (mgr.24407) 936 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:50:15.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:14 vm03 bash[23382]: audit 2026-03-10T07:50:13.950625+0000 mgr.y (mgr.24407) 936 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:50:15.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:14 vm00 bash[28005]: audit 2026-03-10T07:50:13.950625+0000 mgr.y (mgr.24407) 936 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:50:15.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:14 vm00 bash[28005]: audit 2026-03-10T07:50:13.950625+0000 
mgr.y (mgr.24407) 936 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:50:15.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:14 vm00 bash[20701]: audit 2026-03-10T07:50:13.950625+0000 mgr.y (mgr.24407) 936 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:50:16.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:15 vm03 bash[23382]: cluster 2026-03-10T07:50:14.892316+0000 mgr.y (mgr.24407) 937 : cluster [DBG] pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:16.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:15 vm00 bash[28005]: cluster 2026-03-10T07:50:14.892316+0000 mgr.y (mgr.24407) 937 : cluster [DBG] pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:16.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:15 vm00 bash[20701]: cluster 2026-03-10T07:50:14.892316+0000 mgr.y (mgr.24407) 937 : cluster [DBG] pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:18.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:17 vm03 bash[23382]: cluster 2026-03-10T07:50:16.892643+0000 mgr.y (mgr.24407) 938 : cluster [DBG] pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:18.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:17 vm00 bash[28005]: cluster 2026-03-10T07:50:16.892643+0000 mgr.y (mgr.24407) 938 : cluster [DBG] pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:18.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:17 vm00 bash[20701]: cluster 2026-03-10T07:50:16.892643+0000 mgr.y (mgr.24407) 938 : cluster [DBG] pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:20.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:19 vm03 bash[23382]: cluster 2026-03-10T07:50:18.892978+0000 mgr.y (mgr.24407) 939 : cluster [DBG] pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:20.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:19 vm00 bash[28005]: cluster 2026-03-10T07:50:18.892978+0000 mgr.y (mgr.24407) 939 : cluster [DBG] pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:20.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:19 vm00 bash[20701]: cluster 2026-03-10T07:50:18.892978+0000 mgr.y (mgr.24407) 939 : cluster [DBG] pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:21.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:50:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:50:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:50:22.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:22 vm03 bash[23382]: cluster 2026-03-10T07:50:20.893729+0000 mgr.y (mgr.24407) 940 : cluster [DBG] pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:22.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:22 vm00 bash[20701]: cluster 2026-03-10T07:50:20.893729+0000 mgr.y (mgr.24407) 940 : cluster [DBG] pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:22.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:22 vm00 bash[28005]: cluster 2026-03-10T07:50:20.893729+0000 mgr.y (mgr.24407) 940 : cluster [DBG] pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:24.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:50:23 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:50:24.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:24 vm03 bash[23382]: cluster 2026-03-10T07:50:22.894146+0000 mgr.y (mgr.24407) 941 : cluster [DBG] pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T07:50:24.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:24 vm03 bash[23382]: audit 2026-03-10T07:50:23.682824+0000 mon.c (mon.2) 425 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:50:24.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:24 vm03 bash[23382]: audit 2026-03-10T07:50:23.984794+0000 mon.a (mon.0) 3555 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:50:24.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:24 vm03 bash[23382]: audit 2026-03-10T07:50:23.998283+0000 mon.a (mon.0) 3556 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:50:24.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:24 vm00 bash[28005]: cluster 2026-03-10T07:50:22.894146+0000 mgr.y (mgr.24407) 941 : cluster [DBG] pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T07:50:24.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:24 vm00 bash[28005]: audit 2026-03-10T07:50:23.682824+0000 mon.c (mon.2) 425 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:50:24.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:24 vm00 bash[28005]: audit 2026-03-10T07:50:23.984794+0000 mon.a (mon.0) 3555 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:50:24.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:24 vm00 bash[28005]: audit 2026-03-10T07:50:23.998283+0000 mon.a (mon.0) 3556 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:50:24.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:24 vm00 bash[28005]: audit 2026-03-10T07:50:23.998283+0000 mon.a (mon.0) 3556 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:50:24.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:24 vm00 bash[20701]: cluster 2026-03-10T07:50:22.894146+0000 mgr.y (mgr.24407) 941 : cluster [DBG] pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T07:50:24.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:24 vm00 bash[20701]: audit 2026-03-10T07:50:23.682824+0000 mon.c (mon.2) 425 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:50:24.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:24 vm00 bash[20701]: audit 2026-03-10T07:50:23.984794+0000 mon.a (mon.0) 3555 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:50:24.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:24 vm00 bash[20701]: audit 2026-03-10T07:50:23.998283+0000 mon.a (mon.0) 3556 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:50:25.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:25 vm03 bash[23382]: audit 2026-03-10T07:50:23.961735+0000 mgr.y (mgr.24407) 942 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:50:25.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:25 vm03 bash[23382]: audit 2026-03-10T07:50:24.340951+0000 mon.c (mon.2) 426 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:50:25.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:25 vm03 bash[23382]: audit 2026-03-10T07:50:24.342280+0000 mon.c (mon.2) 427 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:50:25.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:25 vm03 bash[23382]: audit 2026-03-10T07:50:24.347597+0000 mon.a (mon.0) 3557 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:50:25.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:25 vm00 bash[20701]: audit 2026-03-10T07:50:23.961735+0000 mgr.y (mgr.24407) 942 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:50:25.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:25 vm00 bash[20701]: audit 2026-03-10T07:50:24.340951+0000 mon.c (mon.2) 426 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:50:25.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:25 vm00 bash[20701]: audit 2026-03-10T07:50:24.342280+0000 mon.c (mon.2) 427 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:50:25.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:25 vm00 bash[20701]: audit 2026-03-10T07:50:24.347597+0000 mon.a (mon.0) 3557 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:50:25.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:25 vm00 bash[28005]: audit 2026-03-10T07:50:23.961735+0000 mgr.y (mgr.24407) 942 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:50:25.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:25 vm00 bash[28005]: audit 2026-03-10T07:50:24.340951+0000 mon.c (mon.2) 426 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:50:25.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:25 vm00 bash[28005]: audit 2026-03-10T07:50:24.342280+0000 mon.c (mon.2) 427 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:50:25.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:25 vm00 bash[28005]: audit 2026-03-10T07:50:24.347597+0000 mon.a (mon.0) 3557 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:50:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:26 vm03 bash[23382]: cluster 2026-03-10T07:50:24.894918+0000 mgr.y (mgr.24407) 943 : cluster [DBG] pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:26 vm03 bash[23382]: audit 2026-03-10T07:50:25.236463+0000 mon.c (mon.2) 428 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:50:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:26 vm00 bash[20701]: cluster 2026-03-10T07:50:24.894918+0000 mgr.y (mgr.24407) 943 : cluster [DBG] pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:26 vm00 bash[20701]: audit 2026-03-10T07:50:25.236463+0000 mon.c (mon.2) 428 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:50:26.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:26 vm00 bash[28005]: cluster 2026-03-10T07:50:24.894918+0000 mgr.y (mgr.24407) 943 : cluster [DBG] pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:26.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:26 vm00 bash[28005]: audit 2026-03-10T07:50:25.236463+0000 mon.c (mon.2) 428 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:50:28.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:28 vm03 bash[23382]: cluster 2026-03-10T07:50:26.895255+0000 mgr.y (mgr.24407) 944 : cluster [DBG] pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T07:50:28.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:28 vm00 bash[20701]: cluster 2026-03-10T07:50:26.895255+0000 mgr.y (mgr.24407) 944 : cluster [DBG] pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T07:50:28.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:28 vm00 bash[28005]: cluster 2026-03-10T07:50:26.895255+0000 mgr.y (mgr.24407) 944 : cluster [DBG] pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T07:50:30.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:30 vm03 bash[23382]: cluster 2026-03-10T07:50:28.895615+0000 mgr.y (mgr.24407) 945 : cluster [DBG] pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T07:50:30.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:30 vm00 bash[20701]: cluster 2026-03-10T07:50:28.895615+0000 mgr.y (mgr.24407) 945 : cluster [DBG] pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T07:50:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:30 vm00 bash[28005]: cluster 2026-03-10T07:50:28.895615+0000 mgr.y (mgr.24407) 945 : cluster [DBG] pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T07:50:31.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:50:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:50:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:50:32.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:32 vm03 bash[23382]: cluster 2026-03-10T07:50:30.896399+0000 mgr.y (mgr.24407) 946 : cluster [DBG] pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:32.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:32 vm00 bash[28005]: cluster 2026-03-10T07:50:30.896399+0000 mgr.y (mgr.24407) 946 : cluster [DBG] pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:32.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:32 vm00 bash[20701]: cluster 2026-03-10T07:50:30.896399+0000 mgr.y (mgr.24407) 946 : cluster [DBG] pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:34.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:50:33 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:50:34.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:34 vm03 bash[23382]: cluster 2026-03-10T07:50:32.896784+0000 mgr.y (mgr.24407) 947 : cluster [DBG] pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:34.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:34 vm00 bash[28005]: cluster 2026-03-10T07:50:32.896784+0000 mgr.y (mgr.24407) 947 : cluster [DBG] pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:34.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:34 vm00 bash[20701]: cluster 2026-03-10T07:50:32.896784+0000 mgr.y (mgr.24407) 947 : cluster [DBG] pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:35.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:35 vm03 bash[23382]: audit 2026-03-10T07:50:33.964017+0000 mgr.y (mgr.24407) 948 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:50:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:35 vm00 bash[28005]: audit 2026-03-10T07:50:33.964017+0000 mgr.y (mgr.24407) 948 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:50:35.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:35 vm00 bash[20701]: audit 2026-03-10T07:50:33.964017+0000 mgr.y (mgr.24407) 948 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:50:36.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:36 vm03 bash[23382]: cluster 2026-03-10T07:50:34.897587+0000 mgr.y (mgr.24407) 949 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:36.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:36 vm00 bash[28005]: cluster 2026-03-10T07:50:34.897587+0000 mgr.y (mgr.24407) 949 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:36 vm00 bash[20701]: cluster 2026-03-10T07:50:34.897587+0000 mgr.y (mgr.24407) 949 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:36.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:36 vm00 bash[20701]: cluster 2026-03-10T07:50:34.897587+0000 mgr.y (mgr.24407) 949 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:38 vm00 bash[28005]: cluster 2026-03-10T07:50:36.897917+0000 mgr.y (mgr.24407) 950 : cluster [DBG] pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:38 vm00 bash[20701]: cluster 2026-03-10T07:50:36.897917+0000 mgr.y (mgr.24407) 950 : cluster [DBG] pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:38.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:38 vm03 bash[23382]: cluster 2026-03-10T07:50:36.897917+0000 mgr.y (mgr.24407) 950 : cluster [DBG] pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:40.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:40 vm00 bash[28005]: cluster 2026-03-10T07:50:38.898366+0000 mgr.y (mgr.24407) 951 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:40.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:40 vm00 bash[28005]: cluster 2026-03-10T07:50:38.898366+0000 mgr.y (mgr.24407) 951 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:40.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:40 vm00 bash[28005]: audit 2026-03-10T07:50:40.242806+0000 mon.c (mon.2) 429 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:50:40.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:40 vm00 bash[20701]: cluster 2026-03-10T07:50:38.898366+0000 mgr.y (mgr.24407) 951 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:40.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:40 vm00 bash[20701]: audit 2026-03-10T07:50:40.242806+0000 mon.c (mon.2) 429 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:50:40.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:40 vm03 bash[23382]: cluster 2026-03-10T07:50:38.898366+0000 mgr.y (mgr.24407) 951 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:40.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:40 vm03 bash[23382]: audit 2026-03-10T07:50:40.242806+0000 mon.c (mon.2) 429 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:50:41.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:50:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:50:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:50:42.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:42 vm03 bash[23382]: cluster 2026-03-10T07:50:40.899118+0000 mgr.y (mgr.24407) 952 : cluster [DBG] pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:42.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:42 vm00 bash[28005]: cluster 2026-03-10T07:50:40.899118+0000 mgr.y (mgr.24407) 952 : cluster [DBG] pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:42.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:42 vm00 bash[20701]: cluster 2026-03-10T07:50:40.899118+0000 mgr.y (mgr.24407) 952 : cluster [DBG] pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:44.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:50:43 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:50:44.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:44 vm03 bash[23382]: cluster 2026-03-10T07:50:42.899487+0000 mgr.y (mgr.24407) 953 : cluster [DBG] pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:44.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:44 vm00 bash[28005]: cluster 2026-03-10T07:50:42.899487+0000 mgr.y (mgr.24407) 953 : cluster [DBG] pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:44.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:44 vm00 bash[20701]: cluster 2026-03-10T07:50:42.899487+0000 mgr.y (mgr.24407) 953 : cluster [DBG] pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:45.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:45 vm03 bash[23382]: audit 2026-03-10T07:50:43.965282+0000 mgr.y (mgr.24407) 954 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:50:45.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:45 vm00 bash[28005]: audit 2026-03-10T07:50:43.965282+0000 mgr.y (mgr.24407) 954 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:50:45.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:45 vm00 bash[20701]: audit 2026-03-10T07:50:43.965282+0000 mgr.y (mgr.24407) 954 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:50:46.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:46 vm03 bash[23382]: cluster 2026-03-10T07:50:44.900160+0000 mgr.y (mgr.24407) 955 : cluster [DBG] pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:46.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:46 vm00 bash[28005]: cluster 2026-03-10T07:50:44.900160+0000 mgr.y (mgr.24407) 955 : cluster [DBG] pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:46.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:46 vm00 bash[20701]: cluster 2026-03-10T07:50:44.900160+0000 mgr.y (mgr.24407) 955 : cluster [DBG] pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:47.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:47 vm00 bash[28005]: cluster 2026-03-10T07:50:46.900472+0000 mgr.y (mgr.24407) 956 : cluster [DBG] pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:47.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:47 vm00 bash[20701]: cluster 2026-03-10T07:50:46.900472+0000 mgr.y (mgr.24407) 956 : cluster [DBG] pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:48.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:47 vm03 bash[23382]: cluster 2026-03-10T07:50:46.900472+0000 mgr.y (mgr.24407) 956 : cluster [DBG] pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:50.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:49 vm03 bash[23382]: cluster 2026-03-10T07:50:48.900972+0000 mgr.y (mgr.24407) 957 : cluster [DBG] pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:50.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:49 vm00 bash[28005]: cluster 2026-03-10T07:50:48.900972+0000 mgr.y (mgr.24407) 957 : cluster [DBG] pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:50.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:49 vm00 bash[20701]: cluster 2026-03-10T07:50:48.900972+0000 mgr.y (mgr.24407) 957 : cluster [DBG] pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:51.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:50:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:50:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:50:52.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:52 vm00 bash[28005]: cluster 2026-03-10T07:50:50.901941+0000 mgr.y (mgr.24407) 958 : cluster [DBG] pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:52.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:52 vm00 bash[20701]: cluster 2026-03-10T07:50:50.901941+0000 mgr.y (mgr.24407) 958 : cluster [DBG] pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:52.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:52 vm03 bash[23382]: cluster 2026-03-10T07:50:50.901941+0000 mgr.y (mgr.24407) 958 : cluster [DBG] pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:54.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:50:53 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:50:54.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:54 vm03 bash[23382]: cluster 2026-03-10T07:50:52.902313+0000 mgr.y (mgr.24407) 959 : cluster [DBG] pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:54.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:54 vm00 bash[28005]: cluster 2026-03-10T07:50:52.902313+0000 mgr.y (mgr.24407) 959 : cluster [DBG] pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:54.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:54 vm00 bash[20701]: cluster 2026-03-10T07:50:52.902313+0000 mgr.y (mgr.24407) 959 : cluster [DBG] pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:55.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:55 vm03 bash[23382]: audit 2026-03-10T07:50:53.972638+0000 mgr.y (mgr.24407) 960 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:50:55.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:55 vm00 bash[28005]: audit 2026-03-10T07:50:53.972638+0000 mgr.y (mgr.24407) 960 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:50:55.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:55 vm00 bash[20701]: audit 2026-03-10T07:50:53.972638+0000 mgr.y (mgr.24407) 960 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:50:56.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:56 vm03 bash[23382]: cluster 2026-03-10T07:50:54.903186+0000 mgr.y (mgr.24407) 961 : cluster [DBG] pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:56.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:56 vm03 bash[23382]: audit 2026-03-10T07:50:55.249643+0000 mon.c (mon.2) 430 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:50:56.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:56 vm00 bash[28005]: cluster 2026-03-10T07:50:54.903186+0000 mgr.y (mgr.24407) 961 : cluster [DBG] pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:56.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:56 vm00 bash[28005]: audit 2026-03-10T07:50:55.249643+0000 mon.c (mon.2) 430 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:50:56.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:56 vm00 bash[28005]: audit 2026-03-10T07:50:55.249643+0000 mon.c (mon.2) 430 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:50:56.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:56 vm00 bash[20701]: cluster 2026-03-10T07:50:54.903186+0000 mgr.y (mgr.24407) 961 : cluster [DBG] pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:50:56.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:56 vm00 bash[20701]: audit 2026-03-10T07:50:55.249643+0000 mon.c (mon.2) 430 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:50:58.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:50:58 vm03 bash[23382]: cluster 2026-03-10T07:50:56.903574+0000 mgr.y (mgr.24407) 962 : cluster [DBG] pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:58.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:50:58 vm00 bash[28005]: cluster 2026-03-10T07:50:56.903574+0000 mgr.y (mgr.24407) 962 : cluster [DBG] pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:50:58.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:50:58 vm00 bash[20701]: cluster 2026-03-10T07:50:56.903574+0000 mgr.y (mgr.24407) 962 : cluster [DBG] pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:00.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:00 vm03 bash[23382]: cluster 2026-03-10T07:50:58.903908+0000 mgr.y (mgr.24407) 963 : cluster [DBG] pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:00.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:00 vm00 bash[28005]: cluster 2026-03-10T07:50:58.903908+0000 mgr.y (mgr.24407) 963 : cluster [DBG] pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:00.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:00 vm00 bash[20701]: cluster 2026-03-10T07:50:58.903908+0000 mgr.y (mgr.24407) 963 : cluster [DBG] pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:01.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:51:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:51:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:51:02.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:02 vm03 bash[23382]: cluster 2026-03-10T07:51:00.904561+0000 mgr.y (mgr.24407) 964 : cluster [DBG] pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:51:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:02 vm00 bash[28005]: cluster 2026-03-10T07:51:00.904561+0000 mgr.y (mgr.24407) 964 : cluster [DBG] pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:51:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:02 vm00 bash[20701]: cluster 2026-03-10T07:51:00.904561+0000 mgr.y (mgr.24407) 964 : cluster [DBG] pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:51:04.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:51:03 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:51:04.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:04 vm00 bash[28005]: cluster 2026-03-10T07:51:02.905021+0000 mgr.y (mgr.24407) 965 : cluster [DBG] pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:04.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:04 vm00 bash[20701]: cluster 2026-03-10T07:51:02.905021+0000 mgr.y (mgr.24407) 965 : cluster [DBG] pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:04.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:04 vm03 bash[23382]: cluster 2026-03-10T07:51:02.905021+0000 mgr.y (mgr.24407) 965 : cluster [DBG] pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:05 vm00 bash[20701]: audit 2026-03-10T07:51:03.982017+0000 mgr.y (mgr.24407) 966 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:51:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:05 vm00 bash[28005]: audit 2026-03-10T07:51:03.982017+0000 mgr.y (mgr.24407) 966 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix":
"service status", "format": "json"}]: dispatch 2026-03-10T07:51:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:05 vm00 bash[28005]: audit 2026-03-10T07:51:03.982017+0000 mgr.y (mgr.24407) 966 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:51:05.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:05 vm03 bash[23382]: audit 2026-03-10T07:51:03.982017+0000 mgr.y (mgr.24407) 966 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:51:05.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:05 vm03 bash[23382]: audit 2026-03-10T07:51:03.982017+0000 mgr.y (mgr.24407) 966 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:51:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:06 vm00 bash[20701]: cluster 2026-03-10T07:51:04.905812+0000 mgr.y (mgr.24407) 967 : cluster [DBG] pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:06 vm00 bash[20701]: cluster 2026-03-10T07:51:04.905812+0000 mgr.y (mgr.24407) 967 : cluster [DBG] pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:06 vm00 bash[28005]: cluster 2026-03-10T07:51:04.905812+0000 mgr.y (mgr.24407) 967 : cluster [DBG] pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:06 vm00 bash[28005]: cluster 2026-03-10T07:51:04.905812+0000 mgr.y (mgr.24407) 967 : cluster [DBG] pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:06.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:06 vm03 bash[23382]: cluster 2026-03-10T07:51:04.905812+0000 mgr.y (mgr.24407) 967 : cluster [DBG] pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:06.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:06 vm03 bash[23382]: cluster 2026-03-10T07:51:04.905812+0000 mgr.y (mgr.24407) 967 : cluster [DBG] pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:08.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:08 vm00 bash[20701]: cluster 2026-03-10T07:51:06.906124+0000 mgr.y (mgr.24407) 968 : cluster [DBG] pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:08.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:08 vm00 bash[20701]: cluster 2026-03-10T07:51:06.906124+0000 mgr.y (mgr.24407) 968 : cluster [DBG] pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:08.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:08 vm00 bash[28005]: cluster 2026-03-10T07:51:06.906124+0000 mgr.y (mgr.24407) 968 : cluster [DBG] pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:08.629 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:08 vm00 bash[28005]: cluster 2026-03-10T07:51:06.906124+0000 mgr.y (mgr.24407) 968 : cluster [DBG] pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:08.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:08 vm03 bash[23382]: cluster 2026-03-10T07:51:06.906124+0000 mgr.y (mgr.24407) 968 : cluster [DBG] pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:08.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:08 vm03 bash[23382]: cluster 2026-03-10T07:51:06.906124+0000 mgr.y (mgr.24407) 968 : cluster [DBG] pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:10.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:10 vm03 bash[23382]: cluster 2026-03-10T07:51:08.906410+0000 mgr.y (mgr.24407) 969 : cluster [DBG] pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:10.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:10 vm03 bash[23382]: cluster 2026-03-10T07:51:08.906410+0000 mgr.y (mgr.24407) 969 : cluster [DBG] pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:10.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:10 vm03 bash[23382]: audit 2026-03-10T07:51:10.257186+0000 mon.c (mon.2) 431 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:51:10.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:10 vm03 bash[23382]: audit 2026-03-10T07:51:10.257186+0000 mon.c (mon.2) 431 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:51:10.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:10 vm00 bash[20701]: cluster 2026-03-10T07:51:08.906410+0000 mgr.y (mgr.24407) 969 : cluster [DBG] pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:10.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:10 vm00 bash[20701]: cluster 2026-03-10T07:51:08.906410+0000 mgr.y (mgr.24407) 969 : cluster [DBG] pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:10.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:10 vm00 bash[20701]: audit 2026-03-10T07:51:10.257186+0000 mon.c (mon.2) 431 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:51:10.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:10 vm00 bash[20701]: audit 2026-03-10T07:51:10.257186+0000 mon.c (mon.2) 431 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:51:10.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:10 vm00 bash[28005]: cluster 2026-03-10T07:51:08.906410+0000 mgr.y (mgr.24407) 969 : cluster [DBG] pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:10.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:10 vm00 bash[28005]: cluster 
2026-03-10T07:51:08.906410+0000 mgr.y (mgr.24407) 969 : cluster [DBG] pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:10.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:10 vm00 bash[28005]: audit 2026-03-10T07:51:10.257186+0000 mon.c (mon.2) 431 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:51:10.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:10 vm00 bash[28005]: audit 2026-03-10T07:51:10.257186+0000 mon.c (mon.2) 431 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:51:11.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:51:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:51:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:51:12.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:12 vm03 bash[23382]: cluster 2026-03-10T07:51:10.907060+0000 mgr.y (mgr.24407) 970 : cluster [DBG] pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:12.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:12 vm03 bash[23382]: cluster 2026-03-10T07:51:10.907060+0000 mgr.y (mgr.24407) 970 : cluster [DBG] pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:12.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:12 vm00 bash[28005]: cluster 2026-03-10T07:51:10.907060+0000 mgr.y (mgr.24407) 970 : cluster [DBG] pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:12.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:12 vm00 bash[28005]: cluster 2026-03-10T07:51:10.907060+0000 mgr.y (mgr.24407) 970 : cluster [DBG] pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:12.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:12 vm00 bash[20701]: cluster 2026-03-10T07:51:10.907060+0000 mgr.y (mgr.24407) 970 : cluster [DBG] pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:12.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:12 vm00 bash[20701]: cluster 2026-03-10T07:51:10.907060+0000 mgr.y (mgr.24407) 970 : cluster [DBG] pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:14.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:51:13 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:51:14.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:14 vm03 bash[23382]: cluster 2026-03-10T07:51:12.907402+0000 mgr.y (mgr.24407) 971 : cluster [DBG] pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:14.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:14 vm03 bash[23382]: cluster 2026-03-10T07:51:12.907402+0000 mgr.y (mgr.24407) 971 : cluster [DBG] pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:14.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:14 vm00 bash[28005]: cluster 
2026-03-10T07:51:12.907402+0000 mgr.y (mgr.24407) 971 : cluster [DBG] pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:14.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:14 vm00 bash[28005]: cluster 2026-03-10T07:51:12.907402+0000 mgr.y (mgr.24407) 971 : cluster [DBG] pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:14.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:14 vm00 bash[20701]: cluster 2026-03-10T07:51:12.907402+0000 mgr.y (mgr.24407) 971 : cluster [DBG] pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:14.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:14 vm00 bash[20701]: cluster 2026-03-10T07:51:12.907402+0000 mgr.y (mgr.24407) 971 : cluster [DBG] pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:15.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:15 vm03 bash[23382]: audit 2026-03-10T07:51:13.985292+0000 mgr.y (mgr.24407) 972 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:51:15.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:15 vm03 bash[23382]: audit 2026-03-10T07:51:13.985292+0000 mgr.y (mgr.24407) 972 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:51:15.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:15 vm00 bash[28005]: audit 2026-03-10T07:51:13.985292+0000 mgr.y (mgr.24407) 972 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:51:15.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:15 vm00 bash[28005]: audit 2026-03-10T07:51:13.985292+0000 mgr.y (mgr.24407) 972 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:51:15.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:15 vm00 bash[20701]: audit 2026-03-10T07:51:13.985292+0000 mgr.y (mgr.24407) 972 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:51:15.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:15 vm00 bash[20701]: audit 2026-03-10T07:51:13.985292+0000 mgr.y (mgr.24407) 972 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:51:16.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:16 vm00 bash[28005]: cluster 2026-03-10T07:51:14.908076+0000 mgr.y (mgr.24407) 973 : cluster [DBG] pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:16.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:16 vm00 bash[28005]: cluster 2026-03-10T07:51:14.908076+0000 mgr.y (mgr.24407) 973 : cluster [DBG] pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:16.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:16 vm00 bash[20701]: cluster 2026-03-10T07:51:14.908076+0000 mgr.y (mgr.24407) 973 : cluster [DBG] pgmap v1430: 228 pgs: 228 active+clean; 
455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:16.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:16 vm00 bash[20701]: cluster 2026-03-10T07:51:14.908076+0000 mgr.y (mgr.24407) 973 : cluster [DBG] pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:17.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:16 vm03 bash[23382]: cluster 2026-03-10T07:51:14.908076+0000 mgr.y (mgr.24407) 973 : cluster [DBG] pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:17.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:16 vm03 bash[23382]: cluster 2026-03-10T07:51:14.908076+0000 mgr.y (mgr.24407) 973 : cluster [DBG] pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:18.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:17 vm03 bash[23382]: cluster 2026-03-10T07:51:16.908399+0000 mgr.y (mgr.24407) 974 : cluster [DBG] pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:18.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:17 vm03 bash[23382]: cluster 2026-03-10T07:51:16.908399+0000 mgr.y (mgr.24407) 974 : cluster [DBG] pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:18.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:17 vm00 bash[28005]: cluster 2026-03-10T07:51:16.908399+0000 mgr.y (mgr.24407) 974 : cluster [DBG] pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:18.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:17 vm00 bash[28005]: cluster 2026-03-10T07:51:16.908399+0000 mgr.y (mgr.24407) 974 : cluster [DBG] pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:18.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:17 vm00 bash[20701]: cluster 2026-03-10T07:51:16.908399+0000 mgr.y (mgr.24407) 974 : cluster [DBG] pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:18.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:17 vm00 bash[20701]: cluster 2026-03-10T07:51:16.908399+0000 mgr.y (mgr.24407) 974 : cluster [DBG] pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:20.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:20 vm00 bash[20701]: cluster 2026-03-10T07:51:18.908770+0000 mgr.y (mgr.24407) 975 : cluster [DBG] pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:20.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:20 vm00 bash[20701]: cluster 2026-03-10T07:51:18.908770+0000 mgr.y (mgr.24407) 975 : cluster [DBG] pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:20.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:20 vm00 bash[28005]: cluster 2026-03-10T07:51:18.908770+0000 mgr.y (mgr.24407) 975 : cluster [DBG] pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:20.379 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:20 vm00 bash[28005]: cluster 2026-03-10T07:51:18.908770+0000 mgr.y (mgr.24407) 975 : cluster [DBG] pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:20.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:20 vm03 bash[23382]: cluster 2026-03-10T07:51:18.908770+0000 mgr.y (mgr.24407) 975 : cluster [DBG] pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:20.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:20 vm03 bash[23382]: cluster 2026-03-10T07:51:18.908770+0000 mgr.y (mgr.24407) 975 : cluster [DBG] pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:21.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:51:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:51:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:51:22.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:22 vm00 bash[28005]: cluster 2026-03-10T07:51:20.909514+0000 mgr.y (mgr.24407) 976 : cluster [DBG] pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:22.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:22 vm00 bash[28005]: cluster 2026-03-10T07:51:20.909514+0000 mgr.y (mgr.24407) 976 : cluster [DBG] pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:22.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:22 vm00 bash[20701]: cluster 2026-03-10T07:51:20.909514+0000 mgr.y (mgr.24407) 976 : cluster [DBG] pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:22.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:22 vm00 bash[20701]: cluster 2026-03-10T07:51:20.909514+0000 mgr.y (mgr.24407) 976 : cluster [DBG] pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:22.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:22 vm03 bash[23382]: cluster 2026-03-10T07:51:20.909514+0000 mgr.y (mgr.24407) 976 : cluster [DBG] pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:22.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:22 vm03 bash[23382]: cluster 2026-03-10T07:51:20.909514+0000 mgr.y (mgr.24407) 976 : cluster [DBG] pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:24.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:51:23 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:51:24.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:24 vm03 bash[23382]: cluster 2026-03-10T07:51:22.909914+0000 mgr.y (mgr.24407) 977 : cluster [DBG] pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:24.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:24 vm03 bash[23382]: cluster 2026-03-10T07:51:22.909914+0000 mgr.y (mgr.24407) 977 : cluster [DBG] pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:24.629 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:24 vm00 bash[28005]: cluster 2026-03-10T07:51:22.909914+0000 mgr.y (mgr.24407) 977 : cluster [DBG] pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:24.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:24 vm00 bash[28005]: cluster 2026-03-10T07:51:22.909914+0000 mgr.y (mgr.24407) 977 : cluster [DBG] pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:24.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:24 vm00 bash[20701]: cluster 2026-03-10T07:51:22.909914+0000 mgr.y (mgr.24407) 977 : cluster [DBG] pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:24.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:24 vm00 bash[20701]: cluster 2026-03-10T07:51:22.909914+0000 mgr.y (mgr.24407) 977 : cluster [DBG] pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:25 vm03 bash[23382]: audit 2026-03-10T07:51:23.995408+0000 mgr.y (mgr.24407) 978 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:51:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:25 vm03 bash[23382]: audit 2026-03-10T07:51:23.995408+0000 mgr.y (mgr.24407) 978 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:51:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:25 vm03 bash[23382]: audit 2026-03-10T07:51:24.390513+0000 mon.c (mon.2) 432 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:51:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:25 vm03 bash[23382]: audit 2026-03-10T07:51:24.390513+0000 mon.c (mon.2) 432 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:51:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:25 vm03 bash[23382]: audit 2026-03-10T07:51:24.718437+0000 mon.a (mon.0) 3558 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:51:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:25 vm03 bash[23382]: audit 2026-03-10T07:51:24.718437+0000 mon.a (mon.0) 3558 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:51:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:25 vm03 bash[23382]: audit 2026-03-10T07:51:24.772668+0000 mon.a (mon.0) 3559 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:51:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:25 vm03 bash[23382]: audit 2026-03-10T07:51:24.772668+0000 mon.a (mon.0) 3559 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:51:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:25 vm03 bash[23382]: audit 2026-03-10T07:51:25.138684+0000 mon.c (mon.2) 433 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:51:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:25 vm03 bash[23382]: audit 2026-03-10T07:51:25.138684+0000 mon.c (mon.2) 433 : audit [DBG] from='mgr.24407 
192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:51:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:25 vm03 bash[23382]: audit 2026-03-10T07:51:25.140011+0000 mon.c (mon.2) 434 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:51:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:25 vm03 bash[23382]: audit 2026-03-10T07:51:25.140011+0000 mon.c (mon.2) 434 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:51:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:25 vm03 bash[23382]: audit 2026-03-10T07:51:25.164325+0000 mon.a (mon.0) 3560 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:51:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:25 vm03 bash[23382]: audit 2026-03-10T07:51:25.164325+0000 mon.a (mon.0) 3560 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:51:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:25 vm00 bash[28005]: audit 2026-03-10T07:51:23.995408+0000 mgr.y (mgr.24407) 978 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:51:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:25 vm00 bash[28005]: audit 2026-03-10T07:51:23.995408+0000 mgr.y (mgr.24407) 978 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:51:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:25 vm00 bash[28005]: audit 2026-03-10T07:51:24.390513+0000 mon.c (mon.2) 432 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:51:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:25 vm00 bash[28005]: audit 2026-03-10T07:51:24.390513+0000 mon.c (mon.2) 432 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:51:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:25 vm00 bash[28005]: audit 2026-03-10T07:51:24.718437+0000 mon.a (mon.0) 3558 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:51:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:25 vm00 bash[28005]: audit 2026-03-10T07:51:24.718437+0000 mon.a (mon.0) 3558 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:51:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:25 vm00 bash[28005]: audit 2026-03-10T07:51:24.772668+0000 mon.a (mon.0) 3559 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:51:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:25 vm00 bash[28005]: audit 2026-03-10T07:51:24.772668+0000 mon.a (mon.0) 3559 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:51:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:25 vm00 bash[28005]: audit 2026-03-10T07:51:25.138684+0000 mon.c (mon.2) 433 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:51:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:25 vm00 bash[28005]: audit 2026-03-10T07:51:25.138684+0000 mon.c (mon.2) 433 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:51:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:25 vm00 bash[28005]: audit 2026-03-10T07:51:25.140011+0000 mon.c (mon.2) 434 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:51:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:25 vm00 bash[28005]: audit 2026-03-10T07:51:25.140011+0000 mon.c (mon.2) 434 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:51:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:25 vm00 bash[28005]: audit 2026-03-10T07:51:25.164325+0000 mon.a (mon.0) 3560 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:51:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:25 vm00 bash[28005]: audit 2026-03-10T07:51:25.164325+0000 mon.a (mon.0) 3560 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:51:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:25 vm00 bash[20701]: audit 2026-03-10T07:51:23.995408+0000 mgr.y (mgr.24407) 978 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:51:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:25 vm00 bash[20701]: audit 2026-03-10T07:51:23.995408+0000 mgr.y (mgr.24407) 978 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:51:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:25 vm00 bash[20701]: audit 2026-03-10T07:51:24.390513+0000 mon.c (mon.2) 432 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:51:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:25 vm00 bash[20701]: audit 2026-03-10T07:51:24.390513+0000 mon.c (mon.2) 432 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:51:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:25 vm00 bash[20701]: audit 2026-03-10T07:51:24.718437+0000 mon.a (mon.0) 3558 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:51:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:25 vm00 bash[20701]: audit 2026-03-10T07:51:24.718437+0000 mon.a (mon.0) 3558 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:51:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:25 vm00 bash[20701]: audit 2026-03-10T07:51:24.772668+0000 mon.a (mon.0) 3559 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:51:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:25 vm00 bash[20701]: audit 2026-03-10T07:51:24.772668+0000 mon.a (mon.0) 3559 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:51:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:25 vm00 bash[20701]: audit 2026-03-10T07:51:25.138684+0000 mon.c (mon.2) 433 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:51:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:25 vm00 bash[20701]: audit 2026-03-10T07:51:25.138684+0000 mon.c (mon.2) 433 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-10T07:51:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:25 vm00 bash[20701]: audit 2026-03-10T07:51:25.140011+0000 mon.c (mon.2) 434 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:51:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:25 vm00 bash[20701]: audit 2026-03-10T07:51:25.140011+0000 mon.c (mon.2) 434 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:51:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:25 vm00 bash[20701]: audit 2026-03-10T07:51:25.164325+0000 mon.a (mon.0) 3560 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:51:25.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:25 vm00 bash[20701]: audit 2026-03-10T07:51:25.164325+0000 mon.a (mon.0) 3560 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:51:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:26 vm03 bash[23382]: cluster 2026-03-10T07:51:24.911284+0000 mgr.y (mgr.24407) 979 : cluster [DBG] pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:26 vm03 bash[23382]: cluster 2026-03-10T07:51:24.911284+0000 mgr.y (mgr.24407) 979 : cluster [DBG] pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:26 vm03 bash[23382]: audit 2026-03-10T07:51:25.263960+0000 mon.c (mon.2) 435 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:51:26.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:26 vm03 bash[23382]: audit 2026-03-10T07:51:25.263960+0000 mon.c (mon.2) 435 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:51:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:26 vm00 bash[28005]: cluster 2026-03-10T07:51:24.911284+0000 mgr.y (mgr.24407) 979 : cluster [DBG] pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:26 vm00 bash[28005]: cluster 2026-03-10T07:51:24.911284+0000 mgr.y (mgr.24407) 979 : cluster [DBG] pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:26 vm00 bash[28005]: audit 2026-03-10T07:51:25.263960+0000 mon.c (mon.2) 435 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:51:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:26 vm00 bash[28005]: audit 2026-03-10T07:51:25.263960+0000 mon.c (mon.2) 435 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:51:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:26 vm00 bash[20701]: cluster 2026-03-10T07:51:24.911284+0000 mgr.y (mgr.24407) 979 : cluster [DBG] pgmap v1435: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:26 vm00 bash[20701]: cluster 2026-03-10T07:51:24.911284+0000 mgr.y (mgr.24407) 979 : cluster [DBG] pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:26 vm00 bash[20701]: audit 2026-03-10T07:51:25.263960+0000 mon.c (mon.2) 435 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:51:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:26 vm00 bash[20701]: audit 2026-03-10T07:51:25.263960+0000 mon.c (mon.2) 435 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:51:28.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:28 vm00 bash[28005]: cluster 2026-03-10T07:51:26.911646+0000 mgr.y (mgr.24407) 980 : cluster [DBG] pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:28.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:28 vm00 bash[28005]: cluster 2026-03-10T07:51:26.911646+0000 mgr.y (mgr.24407) 980 : cluster [DBG] pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:28.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:28 vm00 bash[20701]: cluster 2026-03-10T07:51:26.911646+0000 mgr.y (mgr.24407) 980 : cluster [DBG] pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:28.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:28 vm00 bash[20701]: cluster 2026-03-10T07:51:26.911646+0000 mgr.y (mgr.24407) 980 : cluster [DBG] pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:28.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:28 vm03 bash[23382]: cluster 2026-03-10T07:51:26.911646+0000 mgr.y (mgr.24407) 980 : cluster [DBG] pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:28.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:28 vm03 bash[23382]: cluster 2026-03-10T07:51:26.911646+0000 mgr.y (mgr.24407) 980 : cluster [DBG] pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:30 vm00 bash[28005]: cluster 2026-03-10T07:51:28.912053+0000 mgr.y (mgr.24407) 981 : cluster [DBG] pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:30 vm00 bash[28005]: cluster 2026-03-10T07:51:28.912053+0000 mgr.y (mgr.24407) 981 : cluster [DBG] pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:30.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:30 vm00 bash[20701]: cluster 2026-03-10T07:51:28.912053+0000 mgr.y (mgr.24407) 981 : cluster [DBG] pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T07:51:30.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:30 vm00 bash[20701]: cluster 2026-03-10T07:51:28.912053+0000 mgr.y (mgr.24407) 981 : cluster [DBG] pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:30.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:30 vm03 bash[23382]: cluster 2026-03-10T07:51:28.912053+0000 mgr.y (mgr.24407) 981 : cluster [DBG] pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:30.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:30 vm03 bash[23382]: cluster 2026-03-10T07:51:28.912053+0000 mgr.y (mgr.24407) 981 : cluster [DBG] pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:31.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:51:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:51:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:51:32.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:32 vm00 bash[28005]: cluster 2026-03-10T07:51:30.912744+0000 mgr.y (mgr.24407) 982 : cluster [DBG] pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:32.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:32 vm00 bash[28005]: cluster 2026-03-10T07:51:30.912744+0000 mgr.y (mgr.24407) 982 : cluster [DBG] pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:32.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:32 vm00 bash[20701]: cluster 2026-03-10T07:51:30.912744+0000 mgr.y (mgr.24407) 982 : cluster [DBG] pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:32.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:32 vm00 bash[20701]: cluster 2026-03-10T07:51:30.912744+0000 mgr.y (mgr.24407) 982 : cluster [DBG] pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:32.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:32 vm03 bash[23382]: cluster 2026-03-10T07:51:30.912744+0000 mgr.y (mgr.24407) 982 : cluster [DBG] pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:32.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:32 vm03 bash[23382]: cluster 2026-03-10T07:51:30.912744+0000 mgr.y (mgr.24407) 982 : cluster [DBG] pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:34.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:51:34 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:51:34.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:34 vm00 bash[28005]: cluster 2026-03-10T07:51:32.913208+0000 mgr.y (mgr.24407) 983 : cluster [DBG] pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:34.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:34 vm00 bash[28005]: cluster 2026-03-10T07:51:32.913208+0000 mgr.y (mgr.24407) 983 : cluster [DBG] pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:34.629 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:34 vm00 bash[20701]: cluster 2026-03-10T07:51:32.913208+0000 mgr.y (mgr.24407) 983 : cluster [DBG] pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:34.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:34 vm00 bash[20701]: cluster 2026-03-10T07:51:32.913208+0000 mgr.y (mgr.24407) 983 : cluster [DBG] pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:34.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:34 vm03 bash[23382]: cluster 2026-03-10T07:51:32.913208+0000 mgr.y (mgr.24407) 983 : cluster [DBG] pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:34.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:34 vm03 bash[23382]: cluster 2026-03-10T07:51:32.913208+0000 mgr.y (mgr.24407) 983 : cluster [DBG] pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:35.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:35 vm00 bash[28005]: audit 2026-03-10T07:51:34.006381+0000 mgr.y (mgr.24407) 984 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:51:35.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:35 vm00 bash[28005]: audit 2026-03-10T07:51:34.006381+0000 mgr.y (mgr.24407) 984 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:51:35.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:35 vm00 bash[20701]: audit 2026-03-10T07:51:34.006381+0000 mgr.y (mgr.24407) 984 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:51:35.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:35 vm00 bash[20701]: audit 2026-03-10T07:51:34.006381+0000 mgr.y (mgr.24407) 984 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:51:36.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:35 vm03 bash[23382]: audit 2026-03-10T07:51:34.006381+0000 mgr.y (mgr.24407) 984 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:51:36.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:35 vm03 bash[23382]: audit 2026-03-10T07:51:34.006381+0000 mgr.y (mgr.24407) 984 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:51:36.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:36 vm00 bash[28005]: cluster 2026-03-10T07:51:34.913893+0000 mgr.y (mgr.24407) 985 : cluster [DBG] pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:36.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:36 vm00 bash[28005]: cluster 2026-03-10T07:51:34.913893+0000 mgr.y (mgr.24407) 985 : cluster [DBG] pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:36.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:36 vm00 bash[20701]: cluster 2026-03-10T07:51:34.913893+0000 
mgr.y (mgr.24407) 985 : cluster [DBG] pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:36.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:36 vm00 bash[20701]: cluster 2026-03-10T07:51:34.913893+0000 mgr.y (mgr.24407) 985 : cluster [DBG] pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:37.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:36 vm03 bash[23382]: cluster 2026-03-10T07:51:34.913893+0000 mgr.y (mgr.24407) 985 : cluster [DBG] pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:37.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:36 vm03 bash[23382]: cluster 2026-03-10T07:51:34.913893+0000 mgr.y (mgr.24407) 985 : cluster [DBG] pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:51:37.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:37 vm00 bash[28005]: cluster 2026-03-10T07:51:36.914231+0000 mgr.y (mgr.24407) 986 : cluster [DBG] pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:37.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:37 vm00 bash[28005]: cluster 2026-03-10T07:51:36.914231+0000 mgr.y (mgr.24407) 986 : cluster [DBG] pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:37.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:37 vm00 bash[20701]: cluster 2026-03-10T07:51:36.914231+0000 mgr.y (mgr.24407) 986 : cluster [DBG] pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:37.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:37 vm00 bash[20701]: cluster 2026-03-10T07:51:36.914231+0000 mgr.y (mgr.24407) 986 : cluster [DBG] pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:38.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:37 vm03 bash[23382]: cluster 2026-03-10T07:51:36.914231+0000 mgr.y (mgr.24407) 986 : cluster [DBG] pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:38.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:37 vm03 bash[23382]: cluster 2026-03-10T07:51:36.914231+0000 mgr.y (mgr.24407) 986 : cluster [DBG] pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:40.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:40 vm00 bash[28005]: cluster 2026-03-10T07:51:38.914753+0000 mgr.y (mgr.24407) 987 : cluster [DBG] pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:40.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:40 vm00 bash[28005]: cluster 2026-03-10T07:51:38.914753+0000 mgr.y (mgr.24407) 987 : cluster [DBG] pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:51:40.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:40 vm00 bash[20701]: cluster 2026-03-10T07:51:38.914753+0000 mgr.y (mgr.24407) 987 : cluster [DBG] pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB 
used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:40.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:40 vm00 bash[20701]: cluster 2026-03-10T07:51:38.914753+0000 mgr.y (mgr.24407) 987 : cluster [DBG] pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:40.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:40 vm03 bash[23382]: cluster 2026-03-10T07:51:38.914753+0000 mgr.y (mgr.24407) 987 : cluster [DBG] pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:41.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:41 vm00 bash[20701]: audit 2026-03-10T07:51:40.273625+0000 mon.c (mon.2) 436 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:51:41.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:51:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:51:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:51:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:41 vm00 bash[28005]: audit 2026-03-10T07:51:40.273625+0000 mon.c (mon.2) 436 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:51:41.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:41 vm03 bash[23382]: audit 2026-03-10T07:51:40.273625+0000 mon.c (mon.2) 436 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:51:42.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:42 vm00 bash[20701]: cluster 2026-03-10T07:51:40.915432+0000 mgr.y (mgr.24407) 988 : cluster [DBG] pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:51:42.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:42 vm00 bash[28005]: cluster 2026-03-10T07:51:40.915432+0000 mgr.y (mgr.24407) 988 : cluster [DBG] pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:51:42.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:42 vm03 bash[23382]: cluster 2026-03-10T07:51:40.915432+0000 mgr.y (mgr.24407) 988 : cluster [DBG] pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:51:44.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:51:44 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:51:44.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:44 vm03 bash[23382]: cluster 2026-03-10T07:51:42.915871+0000 mgr.y (mgr.24407) 989 : cluster [DBG] pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:44.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:44 vm00 bash[20701]: cluster 2026-03-10T07:51:42.915871+0000 mgr.y (mgr.24407) 989 : cluster [DBG] pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:44 vm00 bash[28005]: cluster 2026-03-10T07:51:42.915871+0000 mgr.y (mgr.24407) 989 : cluster [DBG] pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:45.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:45 vm03 bash[23382]: audit 2026-03-10T07:51:44.017356+0000 mgr.y (mgr.24407) 990 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:51:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:45 vm00 bash[20701]: audit 2026-03-10T07:51:44.017356+0000 mgr.y (mgr.24407) 990 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:51:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:45 vm00 bash[28005]: audit 2026-03-10T07:51:44.017356+0000 mgr.y (mgr.24407) 990 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:51:46.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:46 vm03 bash[23382]: cluster 2026-03-10T07:51:44.916523+0000 mgr.y (mgr.24407) 991 : cluster [DBG] pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:51:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:46 vm00 bash[28005]: cluster 2026-03-10T07:51:44.916523+0000 mgr.y (mgr.24407) 991 : cluster [DBG] pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:51:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:46 vm00 bash[20701]: cluster 2026-03-10T07:51:44.916523+0000 mgr.y (mgr.24407) 991 : cluster [DBG] pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:51:48.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:48 vm03 bash[23382]: cluster 2026-03-10T07:51:46.916945+0000 mgr.y (mgr.24407) 992 : cluster [DBG] pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:48 vm00 bash[28005]: cluster 2026-03-10T07:51:46.916945+0000 mgr.y (mgr.24407) 992 : cluster [DBG] pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:48 vm00 bash[20701]: cluster 2026-03-10T07:51:46.916945+0000 mgr.y (mgr.24407) 992 : cluster [DBG] pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:50.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:50 vm03 bash[23382]: cluster 2026-03-10T07:51:48.917314+0000 mgr.y (mgr.24407) 993 : cluster [DBG] pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:50.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:50 vm00 bash[28005]: cluster 2026-03-10T07:51:48.917314+0000 mgr.y (mgr.24407) 993 : cluster [DBG] pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:50.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:50 vm00 bash[20701]: cluster 2026-03-10T07:51:48.917314+0000 mgr.y (mgr.24407) 993 : cluster [DBG] pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:51.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:51:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:51:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:51:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:52 vm00 bash[28005]: cluster 2026-03-10T07:51:50.918048+0000 mgr.y (mgr.24407) 994 : cluster [DBG] pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:51:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:52 vm00 bash[20701]: cluster 2026-03-10T07:51:50.918048+0000 mgr.y (mgr.24407) 994 : cluster [DBG] pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:51:52.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:52 vm03 bash[23382]: cluster 2026-03-10T07:51:50.918048+0000 mgr.y (mgr.24407) 994 : cluster [DBG] pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:51:54.328 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:51:54 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:51:54.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:54 vm00 bash[28005]: cluster 2026-03-10T07:51:52.918419+0000 mgr.y (mgr.24407) 995 : cluster [DBG] pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:54 vm00 bash[20701]: cluster 2026-03-10T07:51:52.918419+0000 mgr.y (mgr.24407) 995 : cluster [DBG] pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:54.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:54 vm03 bash[23382]: cluster 2026-03-10T07:51:52.918419+0000 mgr.y (mgr.24407) 995 : cluster [DBG] pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:55.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:55 vm00 bash[28005]: audit 2026-03-10T07:51:54.026897+0000 mgr.y (mgr.24407) 996 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:51:55.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:55 vm00 bash[28005]: audit 2026-03-10T07:51:55.280354+0000 mon.c (mon.2) 437 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:51:55.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:55 vm00 bash[20701]: audit 2026-03-10T07:51:54.026897+0000 mgr.y (mgr.24407) 996 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:51:55.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:55 vm00 bash[20701]: audit 2026-03-10T07:51:55.280354+0000 mon.c (mon.2) 437 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:51:55.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:55 vm03 bash[23382]: audit 2026-03-10T07:51:54.026897+0000 mgr.y (mgr.24407) 996 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:51:55.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:55 vm03 bash[23382]: audit 2026-03-10T07:51:55.280354+0000 mon.c (mon.2) 437 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:51:56.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:56 vm00 bash[28005]: cluster 2026-03-10T07:51:54.919142+0000 mgr.y (mgr.24407) 997 : cluster [DBG] pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:51:56.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:56 vm00 bash[20701]: cluster 2026-03-10T07:51:54.919142+0000 mgr.y (mgr.24407) 997 : cluster [DBG] pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:51:56.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:56 vm03 bash[23382]: cluster 2026-03-10T07:51:54.919142+0000 mgr.y (mgr.24407) 997 : cluster [DBG] pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:51:58.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:51:58 vm03 bash[23382]: cluster 2026-03-10T07:51:56.919519+0000 mgr.y (mgr.24407) 998 : cluster [DBG] pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:58.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:51:58 vm00 bash[28005]: cluster 2026-03-10T07:51:56.919519+0000 mgr.y (mgr.24407) 998 : cluster [DBG] pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:51:58.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:51:58 vm00 bash[20701]: cluster 2026-03-10T07:51:56.919519+0000 mgr.y (mgr.24407) 998 : cluster [DBG] pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:00.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:00 vm03 bash[23382]: cluster 2026-03-10T07:51:58.919869+0000 mgr.y (mgr.24407) 999 : cluster [DBG] pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:00.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:00 vm00 bash[28005]: cluster 2026-03-10T07:51:58.919869+0000 mgr.y (mgr.24407) 999 : cluster [DBG] pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:00.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:00 vm00 bash[20701]: cluster 2026-03-10T07:51:58.919869+0000 mgr.y (mgr.24407) 999 : cluster [DBG] pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:01.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:52:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:52:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:52:02.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:02 vm03 bash[23382]: cluster 2026-03-10T07:52:00.920574+0000 mgr.y (mgr.24407) 1000 : cluster [DBG] pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:02.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:02 vm00 bash[28005]: cluster 2026-03-10T07:52:00.920574+0000 mgr.y (mgr.24407) 1000 : cluster [DBG] pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:02.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:02 vm00 bash[20701]: cluster 2026-03-10T07:52:00.920574+0000 mgr.y (mgr.24407) 1000 : cluster [DBG] pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:04.430 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:52:04 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:52:04.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:04 vm03 bash[23382]: cluster 2026-03-10T07:52:02.920880+0000 mgr.y (mgr.24407) 1001 : cluster [DBG] pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:04.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:04 vm00 bash[28005]: cluster 2026-03-10T07:52:02.920880+0000 mgr.y (mgr.24407) 1001 : cluster [DBG] pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:04.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:04 vm00 bash[20701]: cluster 2026-03-10T07:52:02.920880+0000 mgr.y (mgr.24407) 1001 : cluster [DBG] pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:05.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:05 vm03 bash[23382]: audit 2026-03-10T07:52:04.037711+0000 mgr.y (mgr.24407) 1002 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:52:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:05 vm00 bash[28005]: audit 2026-03-10T07:52:04.037711+0000 mgr.y (mgr.24407) 1002 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:52:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:05 vm00 bash[20701]: audit 2026-03-10T07:52:04.037711+0000 mgr.y (mgr.24407) 1002 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:52:06.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:06 vm03 bash[23382]: cluster 2026-03-10T07:52:04.921545+0000 mgr.y (mgr.24407) 1003 : cluster [DBG] pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:06.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:06 vm00 bash[28005]: cluster 2026-03-10T07:52:04.921545+0000 mgr.y (mgr.24407) 1003 : cluster [DBG] pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:06.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:06 vm00 bash[20701]: cluster 2026-03-10T07:52:04.921545+0000 mgr.y (mgr.24407) 1003 : cluster [DBG] pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:08.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:08 vm03 bash[23382]: cluster 2026-03-10T07:52:06.921947+0000 mgr.y (mgr.24407) 1004 : cluster [DBG] pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:08.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:08 vm00 bash[28005]: cluster 2026-03-10T07:52:06.921947+0000 mgr.y (mgr.24407) 1004 : cluster [DBG] pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:08.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:08 vm00 bash[20701]: cluster 2026-03-10T07:52:06.921947+0000 mgr.y (mgr.24407) 1004 : cluster [DBG] pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:10.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:10 vm03 bash[23382]: cluster 2026-03-10T07:52:08.922228+0000 mgr.y (mgr.24407) 1005 : cluster [DBG] pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:10.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:10 vm03 bash[23382]: audit 2026-03-10T07:52:10.288914+0000 mon.c (mon.2) 438 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:52:10.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:10 vm00 bash[28005]: cluster 2026-03-10T07:52:08.922228+0000 mgr.y (mgr.24407) 1005 : cluster [DBG] pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:10.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:10 vm00 bash[28005]: audit 2026-03-10T07:52:10.288914+0000 mon.c (mon.2) 438 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:52:10.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:10 vm00 bash[20701]: cluster 2026-03-10T07:52:08.922228+0000 mgr.y (mgr.24407) 1005 : cluster [DBG] pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:10.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:10 vm00 bash[20701]: audit 2026-03-10T07:52:10.288914+0000 mon.c (mon.2) 438 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:52:11.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:52:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:52:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:52:12.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:12 vm03 bash[23382]: cluster 2026-03-10T07:52:10.922908+0000 mgr.y (mgr.24407) 1006 : cluster [DBG] pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:12.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:12 vm00 bash[28005]: cluster 2026-03-10T07:52:10.922908+0000 mgr.y (mgr.24407) 1006 : cluster [DBG] pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:12.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:12 vm00 bash[20701]: cluster 2026-03-10T07:52:10.922908+0000 mgr.y (mgr.24407) 1006 : cluster [DBG] pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:14.495 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:52:14 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:52:14.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:14 vm03 bash[23382]: cluster 2026-03-10T07:52:12.923227+0000 mgr.y (mgr.24407) 1007 : cluster [DBG] pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:14.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:14 vm00 bash[28005]: cluster 2026-03-10T07:52:12.923227+0000 mgr.y (mgr.24407) 1007 : cluster [DBG] pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:14.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:14 vm00 bash[20701]: cluster 2026-03-10T07:52:12.923227+0000 mgr.y (mgr.24407) 1007 : cluster [DBG] pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:15.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:15 vm03 bash[23382]: audit 2026-03-10T07:52:14.048009+0000 mgr.y (mgr.24407) 1008 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:52:15.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:15 vm03 bash[23382]: cluster 2026-03-10T07:52:14.923800+0000 mgr.y (mgr.24407) 1009 : cluster [DBG] pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:15.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:15 vm00 bash[28005]: audit 2026-03-10T07:52:14.048009+0000 mgr.y (mgr.24407) 1008 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:52:15.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:15 vm00 bash[28005]: cluster 2026-03-10T07:52:14.923800+0000 mgr.y (mgr.24407) 1009 : cluster [DBG] pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:15.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:15 vm00 bash[20701]: audit 2026-03-10T07:52:14.048009+0000 mgr.y (mgr.24407) 1008 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:52:15.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:15 vm00 bash[20701]: cluster 2026-03-10T07:52:14.923800+0000 mgr.y (mgr.24407) 1009 : cluster [DBG] pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:18.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:17 vm03 bash[23382]: cluster 2026-03-10T07:52:16.924176+0000 mgr.y (mgr.24407) 1010 : cluster [DBG] pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:18.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:17 vm00 bash[28005]: cluster 2026-03-10T07:52:16.924176+0000 mgr.y (mgr.24407) 1010 : cluster [DBG] pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:18.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:17 vm00 bash[20701]: cluster 2026-03-10T07:52:16.924176+0000 mgr.y (mgr.24407) 1010 : cluster [DBG] pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:20.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:19 vm03 bash[23382]: cluster 2026-03-10T07:52:18.924501+0000 mgr.y (mgr.24407) 1011 : cluster [DBG] pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:20.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:19 vm00 bash[28005]: cluster 2026-03-10T07:52:18.924501+0000 mgr.y (mgr.24407) 1011 : cluster [DBG] pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:20.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:19 vm00 bash[20701]: cluster 2026-03-10T07:52:18.924501+0000 mgr.y (mgr.24407) 1011 : cluster [DBG] pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:21.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:52:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:52:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:52:22.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:22 vm03 bash[23382]: cluster 2026-03-10T07:52:20.925190+0000 mgr.y (mgr.24407) 1012 : cluster [DBG] pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:22.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:22 vm00 bash[20701]: cluster 2026-03-10T07:52:20.925190+0000 mgr.y (mgr.24407) 1012 : cluster [DBG] pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:22.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:22 vm00 bash[28005]: cluster 2026-03-10T07:52:20.925190+0000 mgr.y (mgr.24407) 1012 : cluster [DBG] pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:24.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:24 vm00 bash[28005]: cluster 2026-03-10T07:52:22.925534+0000 mgr.y (mgr.24407) 1013 : cluster [DBG] pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:24.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:24 vm00 bash[20701]: cluster 2026-03-10T07:52:22.925534+0000 mgr.y (mgr.24407) 1013 : cluster [DBG] pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:24.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:52:24 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:52:24.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:24 vm03 bash[23382]: cluster 2026-03-10T07:52:22.925534+0000 mgr.y (mgr.24407) 1013 : cluster [DBG] pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:25.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:25 vm00 bash[20701]: audit 2026-03-10T07:52:24.050209+0000 mgr.y (mgr.24407) 1014 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:52:25.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:25 vm00 bash[28005]: audit 2026-03-10T07:52:24.050209+0000 mgr.y (mgr.24407) 1014 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:52:25.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:25 vm03 bash[23382]: audit 2026-03-10T07:52:24.050209+0000 mgr.y (mgr.24407) 1014 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:52:26.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:26 vm00 bash[28005]: cluster 2026-03-10T07:52:24.926297+0000 mgr.y (mgr.24407) 1015 : cluster [DBG] pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:26.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:26 vm00 bash[28005]: audit 2026-03-10T07:52:25.231662+0000 mon.c (mon.2) 439 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:52:26.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:26 vm00 bash[28005]: audit 2026-03-10T07:52:25.302191+0000 mon.c (mon.2) 440 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:52:26.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:26 vm00 bash[28005]: audit 2026-03-10T07:52:25.559979+0000 mon.c (mon.2) 441 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:52:26.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:26 vm00 bash[28005]: audit 2026-03-10T07:52:25.561220+0000 mon.c (mon.2) 442 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:52:26.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:26 vm00 bash[28005]: audit 2026-03-10T07:52:25.647380+0000 mon.a (mon.0) 3561 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:52:26.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:26 vm00 bash[20701]: cluster 2026-03-10T07:52:24.926297+0000 mgr.y (mgr.24407) 1015 : cluster [DBG] pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:26.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:26 vm00 bash[20701]: audit 2026-03-10T07:52:25.231662+0000 mon.c (mon.2) 439 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:52:26.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:26 vm00 bash[20701]: audit 2026-03-10T07:52:25.302191+0000 mon.c (mon.2) 440 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:52:26.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:26 vm00 bash[20701]: audit 2026-03-10T07:52:25.559979+0000 mon.c (mon.2) 441 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:52:26.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:26 vm00 bash[20701]: audit 2026-03-10T07:52:25.561220+0000 mon.c (mon.2) 442 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:52:26.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:26 vm00 bash[20701]: audit 2026-03-10T07:52:25.647380+0000 mon.a (mon.0) 3561 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:52:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:26 vm03 bash[23382]: cluster 2026-03-10T07:52:24.926297+0000 mgr.y (mgr.24407) 1015 : cluster [DBG] pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:26 vm03 bash[23382]: audit 2026-03-10T07:52:25.231662+0000 mon.c (mon.2) 439 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:52:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:26 vm03 bash[23382]: audit 2026-03-10T07:52:25.302191+0000 mon.c (mon.2) 440 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:52:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:26 vm03 bash[23382]: audit 2026-03-10T07:52:25.559979+0000 mon.c (mon.2) 441 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:52:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:26 vm03 bash[23382]: audit 2026-03-10T07:52:25.561220+0000 mon.c (mon.2) 442 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:52:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:26 vm03 bash[23382]: audit 2026-03-10T07:52:25.647380+0000 mon.a (mon.0) 3561 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:52:28.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:28 vm00 bash[28005]: cluster 2026-03-10T07:52:26.926627+0000 mgr.y (mgr.24407) 1016 : cluster [DBG] pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:28.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:28 vm00 bash[20701]: cluster 2026-03-10T07:52:26.926627+0000 mgr.y (mgr.24407) 1016 : cluster [DBG] pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:28.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:28 vm03 bash[23382]: cluster 2026-03-10T07:52:26.926627+0000 mgr.y (mgr.24407) 1016 : cluster [DBG] pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:30.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:30 vm00 bash[28005]: cluster 2026-03-10T07:52:28.926981+0000 mgr.y (mgr.24407) 1017 : cluster [DBG] pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:30.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:30 vm00 bash[20701]: cluster 2026-03-10T07:52:28.926981+0000 mgr.y (mgr.24407) 1017 : cluster [DBG] pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:30.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:30 vm03 bash[23382]: cluster 2026-03-10T07:52:28.926981+0000 mgr.y (mgr.24407) 1017 : cluster [DBG] pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:31.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:52:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:52:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:52:32.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:32 vm00 bash[28005]: cluster 2026-03-10T07:52:30.927653+0000 mgr.y (mgr.24407) 1018 : cluster [DBG] pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:32.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:32 vm00 bash[20701]: cluster 2026-03-10T07:52:30.927653+0000 mgr.y (mgr.24407) 1018 : cluster [DBG] pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:32.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:32 vm03 bash[23382]: cluster 2026-03-10T07:52:30.927653+0000 mgr.y (mgr.24407) 1018 : cluster [DBG] pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:34.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:34 vm00 bash[28005]: cluster 2026-03-10T07:52:32.928048+0000 mgr.y (mgr.24407) 1019 : cluster [DBG] pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:34.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:34 vm00 bash[20701]: cluster 2026-03-10T07:52:32.928048+0000 mgr.y (mgr.24407) 1019 : cluster [DBG] pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:34.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:52:34 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:52:34.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:34 vm03 bash[23382]: cluster 2026-03-10T07:52:32.928048+0000 mgr.y (mgr.24407) 1019 : cluster [DBG] pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:52:35.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:35 vm00 bash[28005]: audit 2026-03-10T07:52:34.060931+0000 mgr.y (mgr.24407) 1020 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:52:35.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:35 vm00 bash[20701]: audit 2026-03-10T07:52:34.060931+0000 mgr.y (mgr.24407) 1020 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:52:35.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:35 vm03 bash[23382]: audit 2026-03-10T07:52:34.060931+0000 mgr.y (mgr.24407) 1020 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:52:36.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:36 vm00 bash[28005]: cluster 2026-03-10T07:52:34.928600+0000 mgr.y (mgr.24407) 1021 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:52:36.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:36 vm00
bash[20701]: cluster 2026-03-10T07:52:34.928600+0000 mgr.y (mgr.24407) 1021 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:36.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:36 vm03 bash[23382]: cluster 2026-03-10T07:52:34.928600+0000 mgr.y (mgr.24407) 1021 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:36.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:36 vm03 bash[23382]: cluster 2026-03-10T07:52:34.928600+0000 mgr.y (mgr.24407) 1021 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:38.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:38 vm03 bash[23382]: cluster 2026-03-10T07:52:36.928955+0000 mgr.y (mgr.24407) 1022 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:38.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:38 vm03 bash[23382]: cluster 2026-03-10T07:52:36.928955+0000 mgr.y (mgr.24407) 1022 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:38.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:38 vm00 bash[28005]: cluster 2026-03-10T07:52:36.928955+0000 mgr.y (mgr.24407) 1022 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:38.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:38 vm00 bash[28005]: cluster 2026-03-10T07:52:36.928955+0000 mgr.y (mgr.24407) 1022 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:38.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:38 vm00 bash[20701]: cluster 2026-03-10T07:52:36.928955+0000 mgr.y (mgr.24407) 1022 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:38.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:38 vm00 bash[20701]: cluster 2026-03-10T07:52:36.928955+0000 mgr.y (mgr.24407) 1022 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:40.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:40 vm03 bash[23382]: cluster 2026-03-10T07:52:38.929295+0000 mgr.y (mgr.24407) 1023 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:40.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:40 vm03 bash[23382]: cluster 2026-03-10T07:52:38.929295+0000 mgr.y (mgr.24407) 1023 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:40.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:40 vm00 bash[28005]: cluster 2026-03-10T07:52:38.929295+0000 mgr.y (mgr.24407) 1023 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:40.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:40 vm00 bash[28005]: cluster 2026-03-10T07:52:38.929295+0000 mgr.y (mgr.24407) 1023 : cluster [DBG] 
pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:40.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:40 vm00 bash[20701]: cluster 2026-03-10T07:52:38.929295+0000 mgr.y (mgr.24407) 1023 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:40.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:40 vm00 bash[20701]: cluster 2026-03-10T07:52:38.929295+0000 mgr.y (mgr.24407) 1023 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:41.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:41 vm00 bash[28005]: audit 2026-03-10T07:52:40.309028+0000 mon.c (mon.2) 443 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:52:41.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:41 vm00 bash[28005]: audit 2026-03-10T07:52:40.309028+0000 mon.c (mon.2) 443 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:52:41.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:52:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:52:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:52:41.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:41 vm00 bash[20701]: audit 2026-03-10T07:52:40.309028+0000 mon.c (mon.2) 443 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:52:41.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:41 vm00 bash[20701]: audit 2026-03-10T07:52:40.309028+0000 mon.c (mon.2) 443 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:52:41.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:41 vm03 bash[23382]: audit 2026-03-10T07:52:40.309028+0000 mon.c (mon.2) 443 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:52:41.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:41 vm03 bash[23382]: audit 2026-03-10T07:52:40.309028+0000 mon.c (mon.2) 443 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:52:42.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:42 vm03 bash[23382]: cluster 2026-03-10T07:52:40.929940+0000 mgr.y (mgr.24407) 1024 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:42.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:42 vm03 bash[23382]: cluster 2026-03-10T07:52:40.929940+0000 mgr.y (mgr.24407) 1024 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:42.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:42 vm00 bash[28005]: cluster 2026-03-10T07:52:40.929940+0000 mgr.y (mgr.24407) 1024 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:42.628 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:42 vm00 bash[28005]: cluster 2026-03-10T07:52:40.929940+0000 mgr.y (mgr.24407) 1024 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:42.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:42 vm00 bash[20701]: cluster 2026-03-10T07:52:40.929940+0000 mgr.y (mgr.24407) 1024 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:42.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:42 vm00 bash[20701]: cluster 2026-03-10T07:52:40.929940+0000 mgr.y (mgr.24407) 1024 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:44.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:52:44 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:52:44.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:44 vm03 bash[23382]: cluster 2026-03-10T07:52:42.930326+0000 mgr.y (mgr.24407) 1025 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:44.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:44 vm03 bash[23382]: cluster 2026-03-10T07:52:42.930326+0000 mgr.y (mgr.24407) 1025 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:44.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:44 vm00 bash[28005]: cluster 2026-03-10T07:52:42.930326+0000 mgr.y (mgr.24407) 1025 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:44.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:44 vm00 bash[28005]: cluster 2026-03-10T07:52:42.930326+0000 mgr.y (mgr.24407) 1025 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:44.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:44 vm00 bash[20701]: cluster 2026-03-10T07:52:42.930326+0000 mgr.y (mgr.24407) 1025 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:44.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:44 vm00 bash[20701]: cluster 2026-03-10T07:52:42.930326+0000 mgr.y (mgr.24407) 1025 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:45.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:45 vm03 bash[23382]: audit 2026-03-10T07:52:44.066822+0000 mgr.y (mgr.24407) 1026 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:52:45.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:45 vm03 bash[23382]: audit 2026-03-10T07:52:44.066822+0000 mgr.y (mgr.24407) 1026 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:52:45.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:45 vm00 bash[28005]: audit 2026-03-10T07:52:44.066822+0000 mgr.y (mgr.24407) 1026 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service 
status", "format": "json"}]: dispatch 2026-03-10T07:52:45.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:45 vm00 bash[28005]: audit 2026-03-10T07:52:44.066822+0000 mgr.y (mgr.24407) 1026 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:52:45.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:45 vm00 bash[20701]: audit 2026-03-10T07:52:44.066822+0000 mgr.y (mgr.24407) 1026 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:52:45.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:45 vm00 bash[20701]: audit 2026-03-10T07:52:44.066822+0000 mgr.y (mgr.24407) 1026 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:52:46.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:46 vm03 bash[23382]: cluster 2026-03-10T07:52:44.930980+0000 mgr.y (mgr.24407) 1027 : cluster [DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:46.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:46 vm03 bash[23382]: cluster 2026-03-10T07:52:44.930980+0000 mgr.y (mgr.24407) 1027 : cluster [DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:46.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:46 vm00 bash[28005]: cluster 2026-03-10T07:52:44.930980+0000 mgr.y (mgr.24407) 1027 : cluster [DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:46.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:46 vm00 bash[28005]: cluster 2026-03-10T07:52:44.930980+0000 mgr.y (mgr.24407) 1027 : cluster [DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:46.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:46 vm00 bash[20701]: cluster 2026-03-10T07:52:44.930980+0000 mgr.y (mgr.24407) 1027 : cluster [DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:46.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:46 vm00 bash[20701]: cluster 2026-03-10T07:52:44.930980+0000 mgr.y (mgr.24407) 1027 : cluster [DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:48.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:48 vm03 bash[23382]: cluster 2026-03-10T07:52:46.931281+0000 mgr.y (mgr.24407) 1028 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:48.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:48 vm03 bash[23382]: cluster 2026-03-10T07:52:46.931281+0000 mgr.y (mgr.24407) 1028 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:48.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:48 vm00 bash[28005]: cluster 2026-03-10T07:52:46.931281+0000 mgr.y (mgr.24407) 1028 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:48.628 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:48 vm00 bash[28005]: cluster 2026-03-10T07:52:46.931281+0000 mgr.y (mgr.24407) 1028 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:48.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:48 vm00 bash[20701]: cluster 2026-03-10T07:52:46.931281+0000 mgr.y (mgr.24407) 1028 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:48.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:48 vm00 bash[20701]: cluster 2026-03-10T07:52:46.931281+0000 mgr.y (mgr.24407) 1028 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:50.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:50 vm03 bash[23382]: cluster 2026-03-10T07:52:48.931621+0000 mgr.y (mgr.24407) 1029 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:50.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:50 vm03 bash[23382]: cluster 2026-03-10T07:52:48.931621+0000 mgr.y (mgr.24407) 1029 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:50.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:50 vm00 bash[28005]: cluster 2026-03-10T07:52:48.931621+0000 mgr.y (mgr.24407) 1029 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:50.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:50 vm00 bash[28005]: cluster 2026-03-10T07:52:48.931621+0000 mgr.y (mgr.24407) 1029 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:50.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:50 vm00 bash[20701]: cluster 2026-03-10T07:52:48.931621+0000 mgr.y (mgr.24407) 1029 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:50.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:50 vm00 bash[20701]: cluster 2026-03-10T07:52:48.931621+0000 mgr.y (mgr.24407) 1029 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:51.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:52:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:52:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:52:52.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:52 vm03 bash[23382]: cluster 2026-03-10T07:52:50.932281+0000 mgr.y (mgr.24407) 1030 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:52.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:52 vm03 bash[23382]: cluster 2026-03-10T07:52:50.932281+0000 mgr.y (mgr.24407) 1030 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:52.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:52 vm00 bash[28005]: cluster 2026-03-10T07:52:50.932281+0000 mgr.y (mgr.24407) 1030 : cluster [DBG] pgmap v1478: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:52.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:52 vm00 bash[28005]: cluster 2026-03-10T07:52:50.932281+0000 mgr.y (mgr.24407) 1030 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:52.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:52 vm00 bash[20701]: cluster 2026-03-10T07:52:50.932281+0000 mgr.y (mgr.24407) 1030 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:52.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:52 vm00 bash[20701]: cluster 2026-03-10T07:52:50.932281+0000 mgr.y (mgr.24407) 1030 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:54.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:54 vm03 bash[23382]: cluster 2026-03-10T07:52:52.932575+0000 mgr.y (mgr.24407) 1031 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:54.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:54 vm03 bash[23382]: cluster 2026-03-10T07:52:52.932575+0000 mgr.y (mgr.24407) 1031 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:54.512 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:52:54 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:52:54.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:54 vm00 bash[20701]: cluster 2026-03-10T07:52:52.932575+0000 mgr.y (mgr.24407) 1031 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:54.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:54 vm00 bash[20701]: cluster 2026-03-10T07:52:52.932575+0000 mgr.y (mgr.24407) 1031 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:54.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:54 vm00 bash[28005]: cluster 2026-03-10T07:52:52.932575+0000 mgr.y (mgr.24407) 1031 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:54.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:54 vm00 bash[28005]: cluster 2026-03-10T07:52:52.932575+0000 mgr.y (mgr.24407) 1031 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:55.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:55 vm03 bash[23382]: audit 2026-03-10T07:52:54.075065+0000 mgr.y (mgr.24407) 1032 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:52:55.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:55 vm03 bash[23382]: audit 2026-03-10T07:52:54.075065+0000 mgr.y (mgr.24407) 1032 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:52:55.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:55 vm00 bash[20701]: audit 2026-03-10T07:52:54.075065+0000 mgr.y 
(mgr.24407) 1032 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:52:55.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:55 vm00 bash[20701]: audit 2026-03-10T07:52:54.075065+0000 mgr.y (mgr.24407) 1032 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:52:55.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:55 vm00 bash[28005]: audit 2026-03-10T07:52:54.075065+0000 mgr.y (mgr.24407) 1032 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:52:55.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:55 vm00 bash[28005]: audit 2026-03-10T07:52:54.075065+0000 mgr.y (mgr.24407) 1032 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:52:56.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:56 vm00 bash[28005]: cluster 2026-03-10T07:52:54.933185+0000 mgr.y (mgr.24407) 1033 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:56.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:56 vm00 bash[28005]: cluster 2026-03-10T07:52:54.933185+0000 mgr.y (mgr.24407) 1033 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:56.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:56 vm00 bash[28005]: audit 2026-03-10T07:52:55.316325+0000 mon.c (mon.2) 444 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:52:56.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:56 vm00 bash[28005]: audit 2026-03-10T07:52:55.316325+0000 mon.c (mon.2) 444 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:52:56.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:56 vm00 bash[20701]: cluster 2026-03-10T07:52:54.933185+0000 mgr.y (mgr.24407) 1033 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:56.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:56 vm00 bash[20701]: cluster 2026-03-10T07:52:54.933185+0000 mgr.y (mgr.24407) 1033 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:56.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:56 vm00 bash[20701]: audit 2026-03-10T07:52:55.316325+0000 mon.c (mon.2) 444 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:52:56.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:56 vm00 bash[20701]: audit 2026-03-10T07:52:55.316325+0000 mon.c (mon.2) 444 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:52:56.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:56 vm03 bash[23382]: cluster 2026-03-10T07:52:54.933185+0000 mgr.y (mgr.24407) 1033 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 
KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:56.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:56 vm03 bash[23382]: cluster 2026-03-10T07:52:54.933185+0000 mgr.y (mgr.24407) 1033 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:52:56.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:56 vm03 bash[23382]: audit 2026-03-10T07:52:55.316325+0000 mon.c (mon.2) 444 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:52:56.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:56 vm03 bash[23382]: audit 2026-03-10T07:52:55.316325+0000 mon.c (mon.2) 444 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:52:58.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:58 vm00 bash[20701]: cluster 2026-03-10T07:52:56.933577+0000 mgr.y (mgr.24407) 1034 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:58.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:52:58 vm00 bash[20701]: cluster 2026-03-10T07:52:56.933577+0000 mgr.y (mgr.24407) 1034 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:58.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:58 vm00 bash[28005]: cluster 2026-03-10T07:52:56.933577+0000 mgr.y (mgr.24407) 1034 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:58.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:52:58 vm00 bash[28005]: cluster 2026-03-10T07:52:56.933577+0000 mgr.y (mgr.24407) 1034 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:58.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:58 vm03 bash[23382]: cluster 2026-03-10T07:52:56.933577+0000 mgr.y (mgr.24407) 1034 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:52:58.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:52:58 vm03 bash[23382]: cluster 2026-03-10T07:52:56.933577+0000 mgr.y (mgr.24407) 1034 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:00.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:00 vm00 bash[20701]: cluster 2026-03-10T07:52:58.933915+0000 mgr.y (mgr.24407) 1035 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:00.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:00 vm00 bash[20701]: cluster 2026-03-10T07:52:58.933915+0000 mgr.y (mgr.24407) 1035 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:00.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:00 vm00 bash[28005]: cluster 2026-03-10T07:52:58.933915+0000 mgr.y (mgr.24407) 1035 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T07:53:00.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:00 vm00 bash[28005]: cluster 2026-03-10T07:52:58.933915+0000 mgr.y (mgr.24407) 1035 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:00.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:00 vm03 bash[23382]: cluster 2026-03-10T07:52:58.933915+0000 mgr.y (mgr.24407) 1035 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:00.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:00 vm03 bash[23382]: cluster 2026-03-10T07:52:58.933915+0000 mgr.y (mgr.24407) 1035 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:01.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:53:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:53:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:53:02.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:02 vm00 bash[28005]: cluster 2026-03-10T07:53:00.934775+0000 mgr.y (mgr.24407) 1036 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:02.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:02 vm00 bash[28005]: cluster 2026-03-10T07:53:00.934775+0000 mgr.y (mgr.24407) 1036 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:02.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:02 vm00 bash[20701]: cluster 2026-03-10T07:53:00.934775+0000 mgr.y (mgr.24407) 1036 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:02.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:02 vm00 bash[20701]: cluster 2026-03-10T07:53:00.934775+0000 mgr.y (mgr.24407) 1036 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:02.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:02 vm03 bash[23382]: cluster 2026-03-10T07:53:00.934775+0000 mgr.y (mgr.24407) 1036 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:02.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:02 vm03 bash[23382]: cluster 2026-03-10T07:53:00.934775+0000 mgr.y (mgr.24407) 1036 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:04.331 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:53:04 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:53:04.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:04 vm00 bash[28005]: cluster 2026-03-10T07:53:02.935211+0000 mgr.y (mgr.24407) 1037 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:04.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:04 vm00 bash[28005]: cluster 2026-03-10T07:53:02.935211+0000 mgr.y (mgr.24407) 1037 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T07:53:04.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:04 vm00 bash[20701]: cluster 2026-03-10T07:53:02.935211+0000 mgr.y (mgr.24407) 1037 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:04.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:04 vm00 bash[20701]: cluster 2026-03-10T07:53:02.935211+0000 mgr.y (mgr.24407) 1037 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:04.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:04 vm03 bash[23382]: cluster 2026-03-10T07:53:02.935211+0000 mgr.y (mgr.24407) 1037 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:04.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:04 vm03 bash[23382]: cluster 2026-03-10T07:53:02.935211+0000 mgr.y (mgr.24407) 1037 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:05.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:05 vm00 bash[28005]: audit 2026-03-10T07:53:04.085861+0000 mgr.y (mgr.24407) 1038 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:05.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:05 vm00 bash[28005]: audit 2026-03-10T07:53:04.085861+0000 mgr.y (mgr.24407) 1038 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:05.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:05 vm00 bash[20701]: audit 2026-03-10T07:53:04.085861+0000 mgr.y (mgr.24407) 1038 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:05.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:05 vm00 bash[20701]: audit 2026-03-10T07:53:04.085861+0000 mgr.y (mgr.24407) 1038 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:05.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:05 vm03 bash[23382]: audit 2026-03-10T07:53:04.085861+0000 mgr.y (mgr.24407) 1038 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:05.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:05 vm03 bash[23382]: audit 2026-03-10T07:53:04.085861+0000 mgr.y (mgr.24407) 1038 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:06.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:06 vm00 bash[28005]: cluster 2026-03-10T07:53:04.936131+0000 mgr.y (mgr.24407) 1039 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:06.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:06 vm00 bash[28005]: cluster 2026-03-10T07:53:04.936131+0000 mgr.y (mgr.24407) 1039 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:06.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:06 vm00 bash[20701]: 
cluster 2026-03-10T07:53:04.936131+0000 mgr.y (mgr.24407) 1039 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:06.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:06 vm00 bash[20701]: cluster 2026-03-10T07:53:04.936131+0000 mgr.y (mgr.24407) 1039 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:06.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:06 vm03 bash[23382]: cluster 2026-03-10T07:53:04.936131+0000 mgr.y (mgr.24407) 1039 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:06.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:06 vm03 bash[23382]: cluster 2026-03-10T07:53:04.936131+0000 mgr.y (mgr.24407) 1039 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:08.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:08 vm00 bash[28005]: cluster 2026-03-10T07:53:06.936474+0000 mgr.y (mgr.24407) 1040 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:08.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:08 vm00 bash[28005]: cluster 2026-03-10T07:53:06.936474+0000 mgr.y (mgr.24407) 1040 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:08.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:08 vm00 bash[20701]: cluster 2026-03-10T07:53:06.936474+0000 mgr.y (mgr.24407) 1040 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:08.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:08 vm00 bash[20701]: cluster 2026-03-10T07:53:06.936474+0000 mgr.y (mgr.24407) 1040 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:08.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:08 vm03 bash[23382]: cluster 2026-03-10T07:53:06.936474+0000 mgr.y (mgr.24407) 1040 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:08.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:08 vm03 bash[23382]: cluster 2026-03-10T07:53:06.936474+0000 mgr.y (mgr.24407) 1040 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:10.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:10 vm00 bash[28005]: cluster 2026-03-10T07:53:08.936832+0000 mgr.y (mgr.24407) 1041 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:10.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:10 vm00 bash[28005]: cluster 2026-03-10T07:53:08.936832+0000 mgr.y (mgr.24407) 1041 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:10.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:10 vm00 bash[28005]: audit 2026-03-10T07:53:10.323983+0000 mon.c (mon.2) 445 : audit [DBG] from='mgr.24407 
192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:10.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:10 vm00 bash[28005]: audit 2026-03-10T07:53:10.323983+0000 mon.c (mon.2) 445 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:10.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:10 vm00 bash[20701]: cluster 2026-03-10T07:53:08.936832+0000 mgr.y (mgr.24407) 1041 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:10.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:10 vm00 bash[20701]: cluster 2026-03-10T07:53:08.936832+0000 mgr.y (mgr.24407) 1041 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:10.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:10 vm00 bash[20701]: audit 2026-03-10T07:53:10.323983+0000 mon.c (mon.2) 445 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:10.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:10 vm00 bash[20701]: audit 2026-03-10T07:53:10.323983+0000 mon.c (mon.2) 445 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:10.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:10 vm03 bash[23382]: cluster 2026-03-10T07:53:08.936832+0000 mgr.y (mgr.24407) 1041 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:10.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:10 vm03 bash[23382]: cluster 2026-03-10T07:53:08.936832+0000 mgr.y (mgr.24407) 1041 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:10.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:10 vm03 bash[23382]: audit 2026-03-10T07:53:10.323983+0000 mon.c (mon.2) 445 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:10.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:10 vm03 bash[23382]: audit 2026-03-10T07:53:10.323983+0000 mon.c (mon.2) 445 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:11.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:53:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:53:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:53:12.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:12 vm00 bash[28005]: cluster 2026-03-10T07:53:10.937513+0000 mgr.y (mgr.24407) 1042 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:12.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:12 vm00 bash[28005]: cluster 2026-03-10T07:53:10.937513+0000 mgr.y (mgr.24407) 1042 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:12.628 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:12 vm00 bash[20701]: cluster 2026-03-10T07:53:10.937513+0000 mgr.y (mgr.24407) 1042 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:12.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:12 vm00 bash[20701]: cluster 2026-03-10T07:53:10.937513+0000 mgr.y (mgr.24407) 1042 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:12.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:12 vm03 bash[23382]: cluster 2026-03-10T07:53:10.937513+0000 mgr.y (mgr.24407) 1042 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:12.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:12 vm03 bash[23382]: cluster 2026-03-10T07:53:10.937513+0000 mgr.y (mgr.24407) 1042 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:14.369 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:53:14 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:53:14.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:14 vm00 bash[28005]: cluster 2026-03-10T07:53:12.937825+0000 mgr.y (mgr.24407) 1043 : cluster [DBG] pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:14.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:14 vm00 bash[28005]: cluster 2026-03-10T07:53:12.937825+0000 mgr.y (mgr.24407) 1043 : cluster [DBG] pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:14.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:14 vm00 bash[20701]: cluster 2026-03-10T07:53:12.937825+0000 mgr.y (mgr.24407) 1043 : cluster [DBG] pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:14.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:14 vm00 bash[20701]: cluster 2026-03-10T07:53:12.937825+0000 mgr.y (mgr.24407) 1043 : cluster [DBG] pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:14.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:14 vm03 bash[23382]: cluster 2026-03-10T07:53:12.937825+0000 mgr.y (mgr.24407) 1043 : cluster [DBG] pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:14.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:14 vm03 bash[23382]: cluster 2026-03-10T07:53:12.937825+0000 mgr.y (mgr.24407) 1043 : cluster [DBG] pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:15.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:15 vm03 bash[23382]: audit 2026-03-10T07:53:14.096642+0000 mgr.y (mgr.24407) 1044 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:15.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:15 vm03 bash[23382]: audit 2026-03-10T07:53:14.096642+0000 mgr.y (mgr.24407) 1044 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service 
status", "format": "json"}]: dispatch 2026-03-10T07:53:15.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:15 vm00 bash[28005]: audit 2026-03-10T07:53:14.096642+0000 mgr.y (mgr.24407) 1044 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:15.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:15 vm00 bash[28005]: audit 2026-03-10T07:53:14.096642+0000 mgr.y (mgr.24407) 1044 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:15.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:15 vm00 bash[20701]: audit 2026-03-10T07:53:14.096642+0000 mgr.y (mgr.24407) 1044 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:15.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:15 vm00 bash[20701]: audit 2026-03-10T07:53:14.096642+0000 mgr.y (mgr.24407) 1044 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:16.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:16 vm03 bash[23382]: cluster 2026-03-10T07:53:14.938396+0000 mgr.y (mgr.24407) 1045 : cluster [DBG] pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:16.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:16 vm03 bash[23382]: cluster 2026-03-10T07:53:14.938396+0000 mgr.y (mgr.24407) 1045 : cluster [DBG] pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:16.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:16 vm00 bash[28005]: cluster 2026-03-10T07:53:14.938396+0000 mgr.y (mgr.24407) 1045 : cluster [DBG] pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:16.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:16 vm00 bash[28005]: cluster 2026-03-10T07:53:14.938396+0000 mgr.y (mgr.24407) 1045 : cluster [DBG] pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:16.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:16 vm00 bash[20701]: cluster 2026-03-10T07:53:14.938396+0000 mgr.y (mgr.24407) 1045 : cluster [DBG] pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:16.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:16 vm00 bash[20701]: cluster 2026-03-10T07:53:14.938396+0000 mgr.y (mgr.24407) 1045 : cluster [DBG] pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:18.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:18 vm03 bash[23382]: cluster 2026-03-10T07:53:16.938692+0000 mgr.y (mgr.24407) 1046 : cluster [DBG] pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:18.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:18 vm03 bash[23382]: cluster 2026-03-10T07:53:16.938692+0000 mgr.y (mgr.24407) 1046 : cluster [DBG] pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:18.878 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:18 vm00 bash[28005]: cluster 2026-03-10T07:53:16.938692+0000 mgr.y (mgr.24407) 1046 : cluster [DBG] pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:18.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:18 vm00 bash[28005]: cluster 2026-03-10T07:53:16.938692+0000 mgr.y (mgr.24407) 1046 : cluster [DBG] pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:18.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:18 vm00 bash[20701]: cluster 2026-03-10T07:53:16.938692+0000 mgr.y (mgr.24407) 1046 : cluster [DBG] pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:18.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:18 vm00 bash[20701]: cluster 2026-03-10T07:53:16.938692+0000 mgr.y (mgr.24407) 1046 : cluster [DBG] pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:20.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:20 vm03 bash[23382]: cluster 2026-03-10T07:53:18.939064+0000 mgr.y (mgr.24407) 1047 : cluster [DBG] pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:20.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:20 vm03 bash[23382]: cluster 2026-03-10T07:53:18.939064+0000 mgr.y (mgr.24407) 1047 : cluster [DBG] pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:20.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:20 vm00 bash[28005]: cluster 2026-03-10T07:53:18.939064+0000 mgr.y (mgr.24407) 1047 : cluster [DBG] pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:20.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:20 vm00 bash[28005]: cluster 2026-03-10T07:53:18.939064+0000 mgr.y (mgr.24407) 1047 : cluster [DBG] pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:20.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:20 vm00 bash[20701]: cluster 2026-03-10T07:53:18.939064+0000 mgr.y (mgr.24407) 1047 : cluster [DBG] pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:20.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:20 vm00 bash[20701]: cluster 2026-03-10T07:53:18.939064+0000 mgr.y (mgr.24407) 1047 : cluster [DBG] pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:21.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:53:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:53:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:53:22.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:22 vm03 bash[23382]: cluster 2026-03-10T07:53:20.940283+0000 mgr.y (mgr.24407) 1048 : cluster [DBG] pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:22.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:22 vm03 bash[23382]: cluster 2026-03-10T07:53:20.940283+0000 mgr.y (mgr.24407) 1048 : cluster [DBG] pgmap v1493: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:22.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:22 vm00 bash[28005]: cluster 2026-03-10T07:53:20.940283+0000 mgr.y (mgr.24407) 1048 : cluster [DBG] pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:22.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:22 vm00 bash[28005]: cluster 2026-03-10T07:53:20.940283+0000 mgr.y (mgr.24407) 1048 : cluster [DBG] pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:22.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:22 vm00 bash[20701]: cluster 2026-03-10T07:53:20.940283+0000 mgr.y (mgr.24407) 1048 : cluster [DBG] pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:22.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:22 vm00 bash[20701]: cluster 2026-03-10T07:53:20.940283+0000 mgr.y (mgr.24407) 1048 : cluster [DBG] pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:24.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:53:24 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:53:24.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:24 vm00 bash[28005]: cluster 2026-03-10T07:53:22.940682+0000 mgr.y (mgr.24407) 1049 : cluster [DBG] pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:24.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:24 vm00 bash[28005]: cluster 2026-03-10T07:53:22.940682+0000 mgr.y (mgr.24407) 1049 : cluster [DBG] pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:24.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:24 vm00 bash[28005]: audit 2026-03-10T07:53:24.102446+0000 mgr.y (mgr.24407) 1050 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:24.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:24 vm00 bash[28005]: audit 2026-03-10T07:53:24.102446+0000 mgr.y (mgr.24407) 1050 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:24.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:24 vm00 bash[20701]: cluster 2026-03-10T07:53:22.940682+0000 mgr.y (mgr.24407) 1049 : cluster [DBG] pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:24.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:24 vm00 bash[20701]: cluster 2026-03-10T07:53:22.940682+0000 mgr.y (mgr.24407) 1049 : cluster [DBG] pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:24.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:24 vm00 bash[20701]: audit 2026-03-10T07:53:24.102446+0000 mgr.y (mgr.24407) 1050 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:24.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:24 vm00 bash[20701]: audit 2026-03-10T07:53:24.102446+0000 mgr.y 
(mgr.24407) 1050 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:25.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:24 vm03 bash[23382]: cluster 2026-03-10T07:53:22.940682+0000 mgr.y (mgr.24407) 1049 : cluster [DBG] pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:25.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:24 vm03 bash[23382]: cluster 2026-03-10T07:53:22.940682+0000 mgr.y (mgr.24407) 1049 : cluster [DBG] pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:25.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:24 vm03 bash[23382]: audit 2026-03-10T07:53:24.102446+0000 mgr.y (mgr.24407) 1050 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:25.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:24 vm03 bash[23382]: audit 2026-03-10T07:53:24.102446+0000 mgr.y (mgr.24407) 1050 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:25.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:25 vm00 bash[28005]: cluster 2026-03-10T07:53:24.941287+0000 mgr.y (mgr.24407) 1051 : cluster [DBG] pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:25.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:25 vm00 bash[28005]: cluster 2026-03-10T07:53:24.941287+0000 mgr.y (mgr.24407) 1051 : cluster [DBG] pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:25.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:25 vm00 bash[28005]: audit 2026-03-10T07:53:25.330127+0000 mon.c (mon.2) 446 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:25.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:25 vm00 bash[28005]: audit 2026-03-10T07:53:25.330127+0000 mon.c (mon.2) 446 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:25.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:25 vm00 bash[20701]: cluster 2026-03-10T07:53:24.941287+0000 mgr.y (mgr.24407) 1051 : cluster [DBG] pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:25.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:25 vm00 bash[20701]: cluster 2026-03-10T07:53:24.941287+0000 mgr.y (mgr.24407) 1051 : cluster [DBG] pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:25.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:25 vm00 bash[20701]: audit 2026-03-10T07:53:25.330127+0000 mon.c (mon.2) 446 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:25.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:25 vm00 bash[20701]: audit 2026-03-10T07:53:25.330127+0000 mon.c (mon.2) 446 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' 
entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:26.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:25 vm03 bash[23382]: cluster 2026-03-10T07:53:24.941287+0000 mgr.y (mgr.24407) 1051 : cluster [DBG] pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:26.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:25 vm03 bash[23382]: cluster 2026-03-10T07:53:24.941287+0000 mgr.y (mgr.24407) 1051 : cluster [DBG] pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:26.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:25 vm03 bash[23382]: audit 2026-03-10T07:53:25.330127+0000 mon.c (mon.2) 446 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:26.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:25 vm03 bash[23382]: audit 2026-03-10T07:53:25.330127+0000 mon.c (mon.2) 446 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:26.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:26 vm00 bash[28005]: audit 2026-03-10T07:53:25.709638+0000 mon.c (mon.2) 447 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:53:26.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:26 vm00 bash[28005]: audit 2026-03-10T07:53:25.709638+0000 mon.c (mon.2) 447 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:53:26.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:26 vm00 bash[20701]: audit 2026-03-10T07:53:25.709638+0000 mon.c (mon.2) 447 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:53:26.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:26 vm00 bash[20701]: audit 2026-03-10T07:53:25.709638+0000 mon.c (mon.2) 447 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:53:27.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:26 vm03 bash[23382]: audit 2026-03-10T07:53:25.709638+0000 mon.c (mon.2) 447 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:53:27.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:26 vm03 bash[23382]: audit 2026-03-10T07:53:25.709638+0000 mon.c (mon.2) 447 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:53:27.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:27 vm00 bash[28005]: cluster 2026-03-10T07:53:26.941685+0000 mgr.y (mgr.24407) 1052 : cluster [DBG] pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:27.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:27 vm00 bash[28005]: cluster 2026-03-10T07:53:26.941685+0000 mgr.y (mgr.24407) 1052 : cluster [DBG] pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T07:53:27.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:27 vm00 bash[20701]: cluster 2026-03-10T07:53:26.941685+0000 mgr.y (mgr.24407) 1052 : cluster [DBG] pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:27.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:27 vm00 bash[20701]: cluster 2026-03-10T07:53:26.941685+0000 mgr.y (mgr.24407) 1052 : cluster [DBG] pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:28.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:27 vm03 bash[23382]: cluster 2026-03-10T07:53:26.941685+0000 mgr.y (mgr.24407) 1052 : cluster [DBG] pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:28.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:27 vm03 bash[23382]: cluster 2026-03-10T07:53:26.941685+0000 mgr.y (mgr.24407) 1052 : cluster [DBG] pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:30.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:30 vm03 bash[23382]: cluster 2026-03-10T07:53:28.942071+0000 mgr.y (mgr.24407) 1053 : cluster [DBG] pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:30.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:30 vm03 bash[23382]: cluster 2026-03-10T07:53:28.942071+0000 mgr.y (mgr.24407) 1053 : cluster [DBG] pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:30.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:30 vm00 bash[28005]: cluster 2026-03-10T07:53:28.942071+0000 mgr.y (mgr.24407) 1053 : cluster [DBG] pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:30.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:30 vm00 bash[28005]: cluster 2026-03-10T07:53:28.942071+0000 mgr.y (mgr.24407) 1053 : cluster [DBG] pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:30.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:30 vm00 bash[20701]: cluster 2026-03-10T07:53:28.942071+0000 mgr.y (mgr.24407) 1053 : cluster [DBG] pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:30.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:30 vm00 bash[20701]: cluster 2026-03-10T07:53:28.942071+0000 mgr.y (mgr.24407) 1053 : cluster [DBG] pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:31.128 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:53:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:53:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:53:32.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:32 vm03 bash[23382]: cluster 2026-03-10T07:53:30.942776+0000 mgr.y (mgr.24407) 1054 : cluster [DBG] pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:32.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:32 vm03 bash[23382]: cluster 2026-03-10T07:53:30.942776+0000 mgr.y (mgr.24407) 1054 : cluster [DBG] pgmap 
v1498: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:32.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:32 vm03 bash[23382]: audit 2026-03-10T07:53:31.710714+0000 mon.a (mon.0) 3562 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:53:32.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:32 vm03 bash[23382]: audit 2026-03-10T07:53:31.710714+0000 mon.a (mon.0) 3562 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:53:32.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:32 vm03 bash[23382]: audit 2026-03-10T07:53:31.718151+0000 mon.a (mon.0) 3563 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:53:32.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:32 vm03 bash[23382]: audit 2026-03-10T07:53:31.718151+0000 mon.a (mon.0) 3563 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:53:32.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:32 vm03 bash[23382]: audit 2026-03-10T07:53:31.720080+0000 mon.c (mon.2) 448 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:53:32.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:32 vm03 bash[23382]: audit 2026-03-10T07:53:31.720080+0000 mon.c (mon.2) 448 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:53:32.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:32 vm03 bash[23382]: audit 2026-03-10T07:53:31.720844+0000 mon.c (mon.2) 449 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:53:32.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:32 vm03 bash[23382]: audit 2026-03-10T07:53:31.720844+0000 mon.c (mon.2) 449 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:53:32.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:32 vm03 bash[23382]: audit 2026-03-10T07:53:31.725858+0000 mon.a (mon.0) 3564 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:53:32.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:32 vm03 bash[23382]: audit 2026-03-10T07:53:31.725858+0000 mon.a (mon.0) 3564 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:32 vm00 bash[28005]: cluster 2026-03-10T07:53:30.942776+0000 mgr.y (mgr.24407) 1054 : cluster [DBG] pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:32 vm00 bash[28005]: cluster 2026-03-10T07:53:30.942776+0000 mgr.y (mgr.24407) 1054 : cluster [DBG] pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:32 vm00 bash[28005]: audit 2026-03-10T07:53:31.710714+0000 mon.a (mon.0) 3562 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:32 vm00 bash[28005]: audit 2026-03-10T07:53:31.710714+0000 mon.a (mon.0) 3562 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:32 vm00 bash[28005]: audit 
2026-03-10T07:53:31.718151+0000 mon.a (mon.0) 3563 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:32 vm00 bash[28005]: audit 2026-03-10T07:53:31.718151+0000 mon.a (mon.0) 3563 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:32 vm00 bash[28005]: audit 2026-03-10T07:53:31.720080+0000 mon.c (mon.2) 448 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:32 vm00 bash[28005]: audit 2026-03-10T07:53:31.720080+0000 mon.c (mon.2) 448 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:32 vm00 bash[28005]: audit 2026-03-10T07:53:31.720844+0000 mon.c (mon.2) 449 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:32 vm00 bash[28005]: audit 2026-03-10T07:53:31.720844+0000 mon.c (mon.2) 449 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:32 vm00 bash[28005]: audit 2026-03-10T07:53:31.725858+0000 mon.a (mon.0) 3564 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:32 vm00 bash[28005]: audit 2026-03-10T07:53:31.725858+0000 mon.a (mon.0) 3564 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:32 vm00 bash[20701]: cluster 2026-03-10T07:53:30.942776+0000 mgr.y (mgr.24407) 1054 : cluster [DBG] pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:32 vm00 bash[20701]: cluster 2026-03-10T07:53:30.942776+0000 mgr.y (mgr.24407) 1054 : cluster [DBG] pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:32 vm00 bash[20701]: audit 2026-03-10T07:53:31.710714+0000 mon.a (mon.0) 3562 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:32 vm00 bash[20701]: audit 2026-03-10T07:53:31.710714+0000 mon.a (mon.0) 3562 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:32 vm00 bash[20701]: audit 2026-03-10T07:53:31.718151+0000 mon.a (mon.0) 3563 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:32 vm00 bash[20701]: audit 2026-03-10T07:53:31.718151+0000 mon.a (mon.0) 3563 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:32 vm00 bash[20701]: audit 2026-03-10T07:53:31.720080+0000 mon.c (mon.2) 448 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:53:32.378 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:32 vm00 bash[20701]: audit 2026-03-10T07:53:31.720080+0000 mon.c (mon.2) 448 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:32 vm00 bash[20701]: audit 2026-03-10T07:53:31.720844+0000 mon.c (mon.2) 449 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:32 vm00 bash[20701]: audit 2026-03-10T07:53:31.720844+0000 mon.c (mon.2) 449 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:32 vm00 bash[20701]: audit 2026-03-10T07:53:31.725858+0000 mon.a (mon.0) 3564 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:53:32.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:32 vm00 bash[20701]: audit 2026-03-10T07:53:31.725858+0000 mon.a (mon.0) 3564 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:53:34.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:53:34 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:53:34.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:34 vm03 bash[23382]: cluster 2026-03-10T07:53:32.943172+0000 mgr.y (mgr.24407) 1055 : cluster [DBG] pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:34.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:34 vm03 bash[23382]: cluster 2026-03-10T07:53:32.943172+0000 mgr.y (mgr.24407) 1055 : cluster [DBG] pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:34.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:34 vm00 bash[28005]: cluster 2026-03-10T07:53:32.943172+0000 mgr.y (mgr.24407) 1055 : cluster [DBG] pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:34.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:34 vm00 bash[28005]: cluster 2026-03-10T07:53:32.943172+0000 mgr.y (mgr.24407) 1055 : cluster [DBG] pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:34.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:34 vm00 bash[20701]: cluster 2026-03-10T07:53:32.943172+0000 mgr.y (mgr.24407) 1055 : cluster [DBG] pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:34.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:34 vm00 bash[20701]: cluster 2026-03-10T07:53:32.943172+0000 mgr.y (mgr.24407) 1055 : cluster [DBG] pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:35.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:35 vm00 bash[28005]: audit 2026-03-10T07:53:34.113111+0000 mgr.y (mgr.24407) 1056 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:35.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:35 vm00 bash[28005]: audit 2026-03-10T07:53:34.113111+0000 mgr.y 
(mgr.24407) 1056 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:35.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:35 vm00 bash[20701]: audit 2026-03-10T07:53:34.113111+0000 mgr.y (mgr.24407) 1056 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:35.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:35 vm00 bash[20701]: audit 2026-03-10T07:53:34.113111+0000 mgr.y (mgr.24407) 1056 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:35.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:35 vm03 bash[23382]: audit 2026-03-10T07:53:34.113111+0000 mgr.y (mgr.24407) 1056 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:35.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:35 vm03 bash[23382]: audit 2026-03-10T07:53:34.113111+0000 mgr.y (mgr.24407) 1056 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:36.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:36 vm00 bash[28005]: cluster 2026-03-10T07:53:34.943736+0000 mgr.y (mgr.24407) 1057 : cluster [DBG] pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:36.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:36 vm00 bash[28005]: cluster 2026-03-10T07:53:34.943736+0000 mgr.y (mgr.24407) 1057 : cluster [DBG] pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:36.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:36 vm00 bash[20701]: cluster 2026-03-10T07:53:34.943736+0000 mgr.y (mgr.24407) 1057 : cluster [DBG] pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:36.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:36 vm00 bash[20701]: cluster 2026-03-10T07:53:34.943736+0000 mgr.y (mgr.24407) 1057 : cluster [DBG] pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:36.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:36 vm03 bash[23382]: cluster 2026-03-10T07:53:34.943736+0000 mgr.y (mgr.24407) 1057 : cluster [DBG] pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:36.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:36 vm03 bash[23382]: cluster 2026-03-10T07:53:34.943736+0000 mgr.y (mgr.24407) 1057 : cluster [DBG] pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:38.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:38 vm00 bash[28005]: cluster 2026-03-10T07:53:36.944100+0000 mgr.y (mgr.24407) 1058 : cluster [DBG] pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:38.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:38 vm00 bash[28005]: cluster 2026-03-10T07:53:36.944100+0000 mgr.y (mgr.24407) 1058 : cluster [DBG] pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 
1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:38.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:38 vm00 bash[20701]: cluster 2026-03-10T07:53:36.944100+0000 mgr.y (mgr.24407) 1058 : cluster [DBG] pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:38.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:38 vm00 bash[20701]: cluster 2026-03-10T07:53:36.944100+0000 mgr.y (mgr.24407) 1058 : cluster [DBG] pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:38.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:38 vm03 bash[23382]: cluster 2026-03-10T07:53:36.944100+0000 mgr.y (mgr.24407) 1058 : cluster [DBG] pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:38.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:38 vm03 bash[23382]: cluster 2026-03-10T07:53:36.944100+0000 mgr.y (mgr.24407) 1058 : cluster [DBG] pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:40.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:40 vm00 bash[28005]: cluster 2026-03-10T07:53:38.944434+0000 mgr.y (mgr.24407) 1059 : cluster [DBG] pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:40.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:40 vm00 bash[28005]: cluster 2026-03-10T07:53:38.944434+0000 mgr.y (mgr.24407) 1059 : cluster [DBG] pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:40.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:40 vm00 bash[20701]: cluster 2026-03-10T07:53:38.944434+0000 mgr.y (mgr.24407) 1059 : cluster [DBG] pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:40.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:40 vm00 bash[20701]: cluster 2026-03-10T07:53:38.944434+0000 mgr.y (mgr.24407) 1059 : cluster [DBG] pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:40.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:40 vm03 bash[23382]: cluster 2026-03-10T07:53:38.944434+0000 mgr.y (mgr.24407) 1059 : cluster [DBG] pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:40.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:40 vm03 bash[23382]: cluster 2026-03-10T07:53:38.944434+0000 mgr.y (mgr.24407) 1059 : cluster [DBG] pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:41.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:41 vm00 bash[28005]: audit 2026-03-10T07:53:40.336360+0000 mon.c (mon.2) 450 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:41.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:41 vm00 bash[28005]: audit 2026-03-10T07:53:40.336360+0000 mon.c (mon.2) 450 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:41.378 
INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:53:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:53:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:53:41.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:41 vm00 bash[20701]: audit 2026-03-10T07:53:40.336360+0000 mon.c (mon.2) 450 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:41.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:41 vm00 bash[20701]: audit 2026-03-10T07:53:40.336360+0000 mon.c (mon.2) 450 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:41.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:41 vm03 bash[23382]: audit 2026-03-10T07:53:40.336360+0000 mon.c (mon.2) 450 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:41.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:41 vm03 bash[23382]: audit 2026-03-10T07:53:40.336360+0000 mon.c (mon.2) 450 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:42.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:42 vm00 bash[20701]: cluster 2026-03-10T07:53:40.945097+0000 mgr.y (mgr.24407) 1060 : cluster [DBG] pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:42.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:42 vm00 bash[20701]: cluster 2026-03-10T07:53:40.945097+0000 mgr.y (mgr.24407) 1060 : cluster [DBG] pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:42.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:42 vm00 bash[28005]: cluster 2026-03-10T07:53:40.945097+0000 mgr.y (mgr.24407) 1060 : cluster [DBG] pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:42.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:42 vm00 bash[28005]: cluster 2026-03-10T07:53:40.945097+0000 mgr.y (mgr.24407) 1060 : cluster [DBG] pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:42.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:42 vm03 bash[23382]: cluster 2026-03-10T07:53:40.945097+0000 mgr.y (mgr.24407) 1060 : cluster [DBG] pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:42.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:42 vm03 bash[23382]: cluster 2026-03-10T07:53:40.945097+0000 mgr.y (mgr.24407) 1060 : cluster [DBG] pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:44.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:53:44 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:53:44.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:44 vm03 bash[23382]: cluster 2026-03-10T07:53:42.945382+0000 mgr.y (mgr.24407) 1061 : cluster [DBG] pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:44.511 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:44 vm03 bash[23382]: cluster 2026-03-10T07:53:42.945382+0000 mgr.y (mgr.24407) 1061 : cluster [DBG] pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:44.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:44 vm00 bash[28005]: cluster 2026-03-10T07:53:42.945382+0000 mgr.y (mgr.24407) 1061 : cluster [DBG] pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:44.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:44 vm00 bash[28005]: cluster 2026-03-10T07:53:42.945382+0000 mgr.y (mgr.24407) 1061 : cluster [DBG] pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:44.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:44 vm00 bash[20701]: cluster 2026-03-10T07:53:42.945382+0000 mgr.y (mgr.24407) 1061 : cluster [DBG] pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:44.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:44 vm00 bash[20701]: cluster 2026-03-10T07:53:42.945382+0000 mgr.y (mgr.24407) 1061 : cluster [DBG] pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:45.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:45 vm00 bash[28005]: audit 2026-03-10T07:53:44.123956+0000 mgr.y (mgr.24407) 1062 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:45.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:45 vm00 bash[28005]: audit 2026-03-10T07:53:44.123956+0000 mgr.y (mgr.24407) 1062 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:45.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:45 vm00 bash[20701]: audit 2026-03-10T07:53:44.123956+0000 mgr.y (mgr.24407) 1062 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:45.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:45 vm00 bash[20701]: audit 2026-03-10T07:53:44.123956+0000 mgr.y (mgr.24407) 1062 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:45.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:45 vm03 bash[23382]: audit 2026-03-10T07:53:44.123956+0000 mgr.y (mgr.24407) 1062 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:45.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:45 vm03 bash[23382]: audit 2026-03-10T07:53:44.123956+0000 mgr.y (mgr.24407) 1062 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:46.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:46 vm00 bash[28005]: cluster 2026-03-10T07:53:44.946119+0000 mgr.y (mgr.24407) 1063 : cluster [DBG] pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:46.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:46 vm00 bash[28005]: cluster 
2026-03-10T07:53:44.946119+0000 mgr.y (mgr.24407) 1063 : cluster [DBG] pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:46.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:46 vm00 bash[20701]: cluster 2026-03-10T07:53:44.946119+0000 mgr.y (mgr.24407) 1063 : cluster [DBG] pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:46.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:46 vm00 bash[20701]: cluster 2026-03-10T07:53:44.946119+0000 mgr.y (mgr.24407) 1063 : cluster [DBG] pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:46.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:46 vm03 bash[23382]: cluster 2026-03-10T07:53:44.946119+0000 mgr.y (mgr.24407) 1063 : cluster [DBG] pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:46.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:46 vm03 bash[23382]: cluster 2026-03-10T07:53:44.946119+0000 mgr.y (mgr.24407) 1063 : cluster [DBG] pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:48.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:48 vm00 bash[28005]: cluster 2026-03-10T07:53:46.946510+0000 mgr.y (mgr.24407) 1064 : cluster [DBG] pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:48.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:48 vm00 bash[28005]: cluster 2026-03-10T07:53:46.946510+0000 mgr.y (mgr.24407) 1064 : cluster [DBG] pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:48.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:48 vm00 bash[20701]: cluster 2026-03-10T07:53:46.946510+0000 mgr.y (mgr.24407) 1064 : cluster [DBG] pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:48.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:48 vm00 bash[20701]: cluster 2026-03-10T07:53:46.946510+0000 mgr.y (mgr.24407) 1064 : cluster [DBG] pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:48.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:48 vm03 bash[23382]: cluster 2026-03-10T07:53:46.946510+0000 mgr.y (mgr.24407) 1064 : cluster [DBG] pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:48.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:48 vm03 bash[23382]: cluster 2026-03-10T07:53:46.946510+0000 mgr.y (mgr.24407) 1064 : cluster [DBG] pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:50.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:50 vm00 bash[28005]: cluster 2026-03-10T07:53:48.946874+0000 mgr.y (mgr.24407) 1065 : cluster [DBG] pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:50.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:50 vm00 bash[28005]: cluster 2026-03-10T07:53:48.946874+0000 mgr.y (mgr.24407) 1065 : cluster [DBG] pgmap v1507: 228 
pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:50.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:50 vm00 bash[20701]: cluster 2026-03-10T07:53:48.946874+0000 mgr.y (mgr.24407) 1065 : cluster [DBG] pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:50.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:50 vm00 bash[20701]: cluster 2026-03-10T07:53:48.946874+0000 mgr.y (mgr.24407) 1065 : cluster [DBG] pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:50.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:50 vm03 bash[23382]: cluster 2026-03-10T07:53:48.946874+0000 mgr.y (mgr.24407) 1065 : cluster [DBG] pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:50.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:50 vm03 bash[23382]: cluster 2026-03-10T07:53:48.946874+0000 mgr.y (mgr.24407) 1065 : cluster [DBG] pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:51.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:53:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:53:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:53:52.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:52 vm00 bash[20701]: cluster 2026-03-10T07:53:50.947674+0000 mgr.y (mgr.24407) 1066 : cluster [DBG] pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:52.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:52 vm00 bash[20701]: cluster 2026-03-10T07:53:50.947674+0000 mgr.y (mgr.24407) 1066 : cluster [DBG] pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:52.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:52 vm00 bash[28005]: cluster 2026-03-10T07:53:50.947674+0000 mgr.y (mgr.24407) 1066 : cluster [DBG] pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:52.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:52 vm00 bash[28005]: cluster 2026-03-10T07:53:50.947674+0000 mgr.y (mgr.24407) 1066 : cluster [DBG] pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:52.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:52 vm03 bash[23382]: cluster 2026-03-10T07:53:50.947674+0000 mgr.y (mgr.24407) 1066 : cluster [DBG] pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:52.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:52 vm03 bash[23382]: cluster 2026-03-10T07:53:50.947674+0000 mgr.y (mgr.24407) 1066 : cluster [DBG] pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:54.384 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:53:54 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:53:54.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:54 vm03 bash[23382]: cluster 2026-03-10T07:53:52.948029+0000 mgr.y (mgr.24407) 1067 : cluster [DBG] pgmap v1509: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:54.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:54 vm03 bash[23382]: cluster 2026-03-10T07:53:52.948029+0000 mgr.y (mgr.24407) 1067 : cluster [DBG] pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:54.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:54 vm00 bash[20701]: cluster 2026-03-10T07:53:52.948029+0000 mgr.y (mgr.24407) 1067 : cluster [DBG] pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:54.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:54 vm00 bash[20701]: cluster 2026-03-10T07:53:52.948029+0000 mgr.y (mgr.24407) 1067 : cluster [DBG] pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:54.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:54 vm00 bash[28005]: cluster 2026-03-10T07:53:52.948029+0000 mgr.y (mgr.24407) 1067 : cluster [DBG] pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:54.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:54 vm00 bash[28005]: cluster 2026-03-10T07:53:52.948029+0000 mgr.y (mgr.24407) 1067 : cluster [DBG] pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:55.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:55 vm03 bash[23382]: audit 2026-03-10T07:53:54.127444+0000 mgr.y (mgr.24407) 1068 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:55.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:55 vm03 bash[23382]: audit 2026-03-10T07:53:54.127444+0000 mgr.y (mgr.24407) 1068 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:55.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:55 vm03 bash[23382]: audit 2026-03-10T07:53:55.342378+0000 mon.c (mon.2) 451 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:55.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:55 vm03 bash[23382]: audit 2026-03-10T07:53:55.342378+0000 mon.c (mon.2) 451 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:55.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:55 vm00 bash[20701]: audit 2026-03-10T07:53:54.127444+0000 mgr.y (mgr.24407) 1068 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:55.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:55 vm00 bash[20701]: audit 2026-03-10T07:53:54.127444+0000 mgr.y (mgr.24407) 1068 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:55.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:55 vm00 bash[20701]: audit 2026-03-10T07:53:55.342378+0000 mon.c (mon.2) 451 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-10T07:53:55.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:55 vm00 bash[20701]: audit 2026-03-10T07:53:55.342378+0000 mon.c (mon.2) 451 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:55.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:55 vm00 bash[28005]: audit 2026-03-10T07:53:54.127444+0000 mgr.y (mgr.24407) 1068 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:55.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:55 vm00 bash[28005]: audit 2026-03-10T07:53:54.127444+0000 mgr.y (mgr.24407) 1068 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:53:55.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:55 vm00 bash[28005]: audit 2026-03-10T07:53:55.342378+0000 mon.c (mon.2) 451 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:55.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:55 vm00 bash[28005]: audit 2026-03-10T07:53:55.342378+0000 mon.c (mon.2) 451 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:53:56.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:56 vm03 bash[23382]: cluster 2026-03-10T07:53:54.948835+0000 mgr.y (mgr.24407) 1069 : cluster [DBG] pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:56.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:56 vm03 bash[23382]: cluster 2026-03-10T07:53:54.948835+0000 mgr.y (mgr.24407) 1069 : cluster [DBG] pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:56.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:56 vm00 bash[20701]: cluster 2026-03-10T07:53:54.948835+0000 mgr.y (mgr.24407) 1069 : cluster [DBG] pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:56.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:56 vm00 bash[20701]: cluster 2026-03-10T07:53:54.948835+0000 mgr.y (mgr.24407) 1069 : cluster [DBG] pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:56.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:56 vm00 bash[28005]: cluster 2026-03-10T07:53:54.948835+0000 mgr.y (mgr.24407) 1069 : cluster [DBG] pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:56.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:56 vm00 bash[28005]: cluster 2026-03-10T07:53:54.948835+0000 mgr.y (mgr.24407) 1069 : cluster [DBG] pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:53:58.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:53:58 vm03 bash[23382]: cluster 2026-03-10T07:53:56.949226+0000 mgr.y (mgr.24407) 1070 : cluster [DBG] pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:58.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 
07:53:58 vm03 bash[23382]: cluster 2026-03-10T07:53:56.949226+0000 mgr.y (mgr.24407) 1070 : cluster [DBG] pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:58.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:58 vm00 bash[20701]: cluster 2026-03-10T07:53:56.949226+0000 mgr.y (mgr.24407) 1070 : cluster [DBG] pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:58.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:53:58 vm00 bash[20701]: cluster 2026-03-10T07:53:56.949226+0000 mgr.y (mgr.24407) 1070 : cluster [DBG] pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:58.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:58 vm00 bash[28005]: cluster 2026-03-10T07:53:56.949226+0000 mgr.y (mgr.24407) 1070 : cluster [DBG] pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:53:58.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:53:58 vm00 bash[28005]: cluster 2026-03-10T07:53:56.949226+0000 mgr.y (mgr.24407) 1070 : cluster [DBG] pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:00.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:00 vm03 bash[23382]: cluster 2026-03-10T07:53:58.949636+0000 mgr.y (mgr.24407) 1071 : cluster [DBG] pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:00.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:00 vm03 bash[23382]: cluster 2026-03-10T07:53:58.949636+0000 mgr.y (mgr.24407) 1071 : cluster [DBG] pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:00.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:00 vm00 bash[20701]: cluster 2026-03-10T07:53:58.949636+0000 mgr.y (mgr.24407) 1071 : cluster [DBG] pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:00.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:00 vm00 bash[20701]: cluster 2026-03-10T07:53:58.949636+0000 mgr.y (mgr.24407) 1071 : cluster [DBG] pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:00.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:00 vm00 bash[28005]: cluster 2026-03-10T07:53:58.949636+0000 mgr.y (mgr.24407) 1071 : cluster [DBG] pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:00.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:00 vm00 bash[28005]: cluster 2026-03-10T07:53:58.949636+0000 mgr.y (mgr.24407) 1071 : cluster [DBG] pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:01.377 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:54:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:54:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:54:02.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:02 vm03 bash[23382]: cluster 2026-03-10T07:54:00.950282+0000 mgr.y (mgr.24407) 1072 : cluster [DBG] pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 
160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:02.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:02 vm03 bash[23382]: cluster 2026-03-10T07:54:00.950282+0000 mgr.y (mgr.24407) 1072 : cluster [DBG] pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:02.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:02 vm00 bash[28005]: cluster 2026-03-10T07:54:00.950282+0000 mgr.y (mgr.24407) 1072 : cluster [DBG] pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:02.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:02 vm00 bash[28005]: cluster 2026-03-10T07:54:00.950282+0000 mgr.y (mgr.24407) 1072 : cluster [DBG] pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:02.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:02 vm00 bash[20701]: cluster 2026-03-10T07:54:00.950282+0000 mgr.y (mgr.24407) 1072 : cluster [DBG] pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:02.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:02 vm00 bash[20701]: cluster 2026-03-10T07:54:00.950282+0000 mgr.y (mgr.24407) 1072 : cluster [DBG] pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:04.439 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:54:04 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:54:04.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:04 vm03 bash[23382]: cluster 2026-03-10T07:54:02.950652+0000 mgr.y (mgr.24407) 1073 : cluster [DBG] pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:04.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:04 vm03 bash[23382]: cluster 2026-03-10T07:54:02.950652+0000 mgr.y (mgr.24407) 1073 : cluster [DBG] pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:04.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:04 vm00 bash[28005]: cluster 2026-03-10T07:54:02.950652+0000 mgr.y (mgr.24407) 1073 : cluster [DBG] pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:04.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:04 vm00 bash[28005]: cluster 2026-03-10T07:54:02.950652+0000 mgr.y (mgr.24407) 1073 : cluster [DBG] pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:04.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:04 vm00 bash[20701]: cluster 2026-03-10T07:54:02.950652+0000 mgr.y (mgr.24407) 1073 : cluster [DBG] pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:04.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:04 vm00 bash[20701]: cluster 2026-03-10T07:54:02.950652+0000 mgr.y (mgr.24407) 1073 : cluster [DBG] pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:05.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:05 vm03 bash[23382]: audit 2026-03-10T07:54:04.135145+0000 mgr.y (mgr.24407) 1074 : audit [DBG] from='client.24373 
-' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:05.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:05 vm03 bash[23382]: audit 2026-03-10T07:54:04.135145+0000 mgr.y (mgr.24407) 1074 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:05.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:05 vm00 bash[28005]: audit 2026-03-10T07:54:04.135145+0000 mgr.y (mgr.24407) 1074 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:05.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:05 vm00 bash[28005]: audit 2026-03-10T07:54:04.135145+0000 mgr.y (mgr.24407) 1074 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:05.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:05 vm00 bash[20701]: audit 2026-03-10T07:54:04.135145+0000 mgr.y (mgr.24407) 1074 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:05.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:05 vm00 bash[20701]: audit 2026-03-10T07:54:04.135145+0000 mgr.y (mgr.24407) 1074 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:06.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:06 vm03 bash[23382]: cluster 2026-03-10T07:54:04.951290+0000 mgr.y (mgr.24407) 1075 : cluster [DBG] pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:06.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:06 vm03 bash[23382]: cluster 2026-03-10T07:54:04.951290+0000 mgr.y (mgr.24407) 1075 : cluster [DBG] pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:06.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:06 vm00 bash[28005]: cluster 2026-03-10T07:54:04.951290+0000 mgr.y (mgr.24407) 1075 : cluster [DBG] pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:06.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:06 vm00 bash[28005]: cluster 2026-03-10T07:54:04.951290+0000 mgr.y (mgr.24407) 1075 : cluster [DBG] pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:06.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:06 vm00 bash[20701]: cluster 2026-03-10T07:54:04.951290+0000 mgr.y (mgr.24407) 1075 : cluster [DBG] pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:06.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:06 vm00 bash[20701]: cluster 2026-03-10T07:54:04.951290+0000 mgr.y (mgr.24407) 1075 : cluster [DBG] pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:08.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:08 vm03 bash[23382]: cluster 2026-03-10T07:54:06.951646+0000 mgr.y (mgr.24407) 1076 : cluster [DBG] pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 
op/s 2026-03-10T07:54:08.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:08 vm03 bash[23382]: cluster 2026-03-10T07:54:06.951646+0000 mgr.y (mgr.24407) 1076 : cluster [DBG] pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:08.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:08 vm00 bash[28005]: cluster 2026-03-10T07:54:06.951646+0000 mgr.y (mgr.24407) 1076 : cluster [DBG] pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:08.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:08 vm00 bash[28005]: cluster 2026-03-10T07:54:06.951646+0000 mgr.y (mgr.24407) 1076 : cluster [DBG] pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:08.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:08 vm00 bash[20701]: cluster 2026-03-10T07:54:06.951646+0000 mgr.y (mgr.24407) 1076 : cluster [DBG] pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:08.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:08 vm00 bash[20701]: cluster 2026-03-10T07:54:06.951646+0000 mgr.y (mgr.24407) 1076 : cluster [DBG] pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:10.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:10 vm03 bash[23382]: cluster 2026-03-10T07:54:08.951945+0000 mgr.y (mgr.24407) 1077 : cluster [DBG] pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:10.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:10 vm03 bash[23382]: cluster 2026-03-10T07:54:08.951945+0000 mgr.y (mgr.24407) 1077 : cluster [DBG] pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:10.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:10 vm03 bash[23382]: audit 2026-03-10T07:54:10.348838+0000 mon.c (mon.2) 452 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:54:10.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:10 vm03 bash[23382]: audit 2026-03-10T07:54:10.348838+0000 mon.c (mon.2) 452 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:54:10.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:10 vm00 bash[28005]: cluster 2026-03-10T07:54:08.951945+0000 mgr.y (mgr.24407) 1077 : cluster [DBG] pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:10.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:10 vm00 bash[28005]: cluster 2026-03-10T07:54:08.951945+0000 mgr.y (mgr.24407) 1077 : cluster [DBG] pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:10.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:10 vm00 bash[28005]: audit 2026-03-10T07:54:10.348838+0000 mon.c (mon.2) 452 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:54:10.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:10 
vm00 bash[28005]: audit 2026-03-10T07:54:10.348838+0000 mon.c (mon.2) 452 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:54:10.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:10 vm00 bash[20701]: cluster 2026-03-10T07:54:08.951945+0000 mgr.y (mgr.24407) 1077 : cluster [DBG] pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:10.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:10 vm00 bash[20701]: cluster 2026-03-10T07:54:08.951945+0000 mgr.y (mgr.24407) 1077 : cluster [DBG] pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:10.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:10 vm00 bash[20701]: audit 2026-03-10T07:54:10.348838+0000 mon.c (mon.2) 452 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:54:10.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:10 vm00 bash[20701]: audit 2026-03-10T07:54:10.348838+0000 mon.c (mon.2) 452 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:54:11.377 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:54:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:54:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:54:12.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:12 vm03 bash[23382]: cluster 2026-03-10T07:54:10.952524+0000 mgr.y (mgr.24407) 1078 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:12.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:12 vm03 bash[23382]: cluster 2026-03-10T07:54:10.952524+0000 mgr.y (mgr.24407) 1078 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:12.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:12 vm00 bash[28005]: cluster 2026-03-10T07:54:10.952524+0000 mgr.y (mgr.24407) 1078 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:12.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:12 vm00 bash[28005]: cluster 2026-03-10T07:54:10.952524+0000 mgr.y (mgr.24407) 1078 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:12.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:12 vm00 bash[20701]: cluster 2026-03-10T07:54:10.952524+0000 mgr.y (mgr.24407) 1078 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:12.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:12 vm00 bash[20701]: cluster 2026-03-10T07:54:10.952524+0000 mgr.y (mgr.24407) 1078 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:14.477 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:54:14 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:54:14.479 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:14 vm00 
bash[28005]: cluster 2026-03-10T07:54:12.952818+0000 mgr.y (mgr.24407) 1079 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:14.479 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:14 vm00 bash[20701]: cluster 2026-03-10T07:54:12.952818+0000 mgr.y (mgr.24407) 1079 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:14.479 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:14 vm00 bash[20701]: cluster 2026-03-10T07:54:12.952818+0000 mgr.y (mgr.24407) 1079 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:14.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:14 vm03 bash[23382]: cluster 2026-03-10T07:54:12.952818+0000 mgr.y (mgr.24407) 1079 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:14.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:14 vm03 bash[23382]: cluster 2026-03-10T07:54:12.952818+0000 mgr.y (mgr.24407) 1079 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:14.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:14 vm00 bash[28005]: cluster 2026-03-10T07:54:12.952818+0000 mgr.y (mgr.24407) 1079 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:15.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:15 vm03 bash[23382]: audit 2026-03-10T07:54:14.145928+0000 mgr.y (mgr.24407) 1080 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:15.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:15 vm03 bash[23382]: audit 2026-03-10T07:54:14.145928+0000 mgr.y (mgr.24407) 1080 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:15.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:15 vm00 bash[28005]: audit 2026-03-10T07:54:14.145928+0000 mgr.y (mgr.24407) 1080 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:15.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:15 vm00 bash[28005]: audit 2026-03-10T07:54:14.145928+0000 mgr.y (mgr.24407) 1080 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:15.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:15 vm00 bash[20701]: audit 2026-03-10T07:54:14.145928+0000 mgr.y (mgr.24407) 1080 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:15.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:15 vm00 bash[20701]: audit 2026-03-10T07:54:14.145928+0000 mgr.y (mgr.24407) 1080 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:16.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:16 vm03 bash[23382]: cluster 2026-03-10T07:54:14.953348+0000 mgr.y (mgr.24407) 1081 : cluster [DBG] pgmap v1520: 
228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:16.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:16 vm03 bash[23382]: cluster 2026-03-10T07:54:14.953348+0000 mgr.y (mgr.24407) 1081 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:16.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:16 vm00 bash[28005]: cluster 2026-03-10T07:54:14.953348+0000 mgr.y (mgr.24407) 1081 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:16.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:16 vm00 bash[28005]: cluster 2026-03-10T07:54:14.953348+0000 mgr.y (mgr.24407) 1081 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:16.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:16 vm00 bash[20701]: cluster 2026-03-10T07:54:14.953348+0000 mgr.y (mgr.24407) 1081 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:16.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:16 vm00 bash[20701]: cluster 2026-03-10T07:54:14.953348+0000 mgr.y (mgr.24407) 1081 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:17.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:17 vm00 bash[28005]: cluster 2026-03-10T07:54:16.953723+0000 mgr.y (mgr.24407) 1082 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:17.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:17 vm00 bash[28005]: cluster 2026-03-10T07:54:16.953723+0000 mgr.y (mgr.24407) 1082 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:17.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:17 vm00 bash[20701]: cluster 2026-03-10T07:54:16.953723+0000 mgr.y (mgr.24407) 1082 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:17.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:17 vm00 bash[20701]: cluster 2026-03-10T07:54:16.953723+0000 mgr.y (mgr.24407) 1082 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:18.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:17 vm03 bash[23382]: cluster 2026-03-10T07:54:16.953723+0000 mgr.y (mgr.24407) 1082 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:18.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:17 vm03 bash[23382]: cluster 2026-03-10T07:54:16.953723+0000 mgr.y (mgr.24407) 1082 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:20.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:20 vm03 bash[23382]: cluster 2026-03-10T07:54:18.954130+0000 mgr.y (mgr.24407) 1083 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 
B/s rd, 0 op/s 2026-03-10T07:54:20.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:20 vm03 bash[23382]: cluster 2026-03-10T07:54:18.954130+0000 mgr.y (mgr.24407) 1083 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:20.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:20 vm00 bash[28005]: cluster 2026-03-10T07:54:18.954130+0000 mgr.y (mgr.24407) 1083 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:20.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:20 vm00 bash[28005]: cluster 2026-03-10T07:54:18.954130+0000 mgr.y (mgr.24407) 1083 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:20.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:20 vm00 bash[20701]: cluster 2026-03-10T07:54:18.954130+0000 mgr.y (mgr.24407) 1083 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:20.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:20 vm00 bash[20701]: cluster 2026-03-10T07:54:18.954130+0000 mgr.y (mgr.24407) 1083 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:21.377 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:54:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:54:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:54:22.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:22 vm00 bash[28005]: cluster 2026-03-10T07:54:20.954995+0000 mgr.y (mgr.24407) 1084 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:22.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:22 vm00 bash[28005]: cluster 2026-03-10T07:54:20.954995+0000 mgr.y (mgr.24407) 1084 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:22.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:22 vm00 bash[20701]: cluster 2026-03-10T07:54:20.954995+0000 mgr.y (mgr.24407) 1084 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:22.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:22 vm00 bash[20701]: cluster 2026-03-10T07:54:20.954995+0000 mgr.y (mgr.24407) 1084 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:22.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:22 vm03 bash[23382]: cluster 2026-03-10T07:54:20.954995+0000 mgr.y (mgr.24407) 1084 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:22.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:22 vm03 bash[23382]: cluster 2026-03-10T07:54:20.954995+0000 mgr.y (mgr.24407) 1084 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:24.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:24 vm00 bash[28005]: cluster 2026-03-10T07:54:22.955336+0000 mgr.y (mgr.24407) 
1085 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:24.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:24 vm00 bash[28005]: cluster 2026-03-10T07:54:22.955336+0000 mgr.y (mgr.24407) 1085 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:24.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:24 vm00 bash[20701]: cluster 2026-03-10T07:54:22.955336+0000 mgr.y (mgr.24407) 1085 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:24.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:24 vm00 bash[20701]: cluster 2026-03-10T07:54:22.955336+0000 mgr.y (mgr.24407) 1085 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:24.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:54:24 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:54:24.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:24 vm03 bash[23382]: cluster 2026-03-10T07:54:22.955336+0000 mgr.y (mgr.24407) 1085 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:24.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:24 vm03 bash[23382]: cluster 2026-03-10T07:54:22.955336+0000 mgr.y (mgr.24407) 1085 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:25.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:25 vm00 bash[28005]: audit 2026-03-10T07:54:24.150229+0000 mgr.y (mgr.24407) 1086 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:25.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:25 vm00 bash[28005]: audit 2026-03-10T07:54:24.150229+0000 mgr.y (mgr.24407) 1086 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:25.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:25 vm00 bash[20701]: audit 2026-03-10T07:54:24.150229+0000 mgr.y (mgr.24407) 1086 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:25.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:25 vm00 bash[20701]: audit 2026-03-10T07:54:24.150229+0000 mgr.y (mgr.24407) 1086 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:25.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:25 vm03 bash[23382]: audit 2026-03-10T07:54:24.150229+0000 mgr.y (mgr.24407) 1086 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:25.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:25 vm03 bash[23382]: audit 2026-03-10T07:54:24.150229+0000 mgr.y (mgr.24407) 1086 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:26.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:26 vm00 bash[20701]: cluster 
2026-03-10T07:54:24.956392+0000 mgr.y (mgr.24407) 1087 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:26.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:26 vm00 bash[20701]: cluster 2026-03-10T07:54:24.956392+0000 mgr.y (mgr.24407) 1087 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:26.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:26 vm00 bash[20701]: audit 2026-03-10T07:54:25.355015+0000 mon.c (mon.2) 453 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:54:26.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:26 vm00 bash[20701]: audit 2026-03-10T07:54:25.355015+0000 mon.c (mon.2) 453 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:54:26.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:26 vm00 bash[28005]: cluster 2026-03-10T07:54:24.956392+0000 mgr.y (mgr.24407) 1087 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:26.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:26 vm00 bash[28005]: cluster 2026-03-10T07:54:24.956392+0000 mgr.y (mgr.24407) 1087 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:26.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:26 vm00 bash[28005]: audit 2026-03-10T07:54:25.355015+0000 mon.c (mon.2) 453 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:54:26.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:26 vm00 bash[28005]: audit 2026-03-10T07:54:25.355015+0000 mon.c (mon.2) 453 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:54:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:26 vm03 bash[23382]: cluster 2026-03-10T07:54:24.956392+0000 mgr.y (mgr.24407) 1087 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:26 vm03 bash[23382]: cluster 2026-03-10T07:54:24.956392+0000 mgr.y (mgr.24407) 1087 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:26 vm03 bash[23382]: audit 2026-03-10T07:54:25.355015+0000 mon.c (mon.2) 453 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:54:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:26 vm03 bash[23382]: audit 2026-03-10T07:54:25.355015+0000 mon.c (mon.2) 453 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:54:28.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:28 vm00 bash[20701]: cluster 2026-03-10T07:54:26.956728+0000 mgr.y (mgr.24407) 1088 : 
cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:28.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:28 vm00 bash[20701]: cluster 2026-03-10T07:54:26.956728+0000 mgr.y (mgr.24407) 1088 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:28.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:28 vm00 bash[28005]: cluster 2026-03-10T07:54:26.956728+0000 mgr.y (mgr.24407) 1088 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:28.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:28 vm00 bash[28005]: cluster 2026-03-10T07:54:26.956728+0000 mgr.y (mgr.24407) 1088 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:28.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:28 vm03 bash[23382]: cluster 2026-03-10T07:54:26.956728+0000 mgr.y (mgr.24407) 1088 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:28.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:28 vm03 bash[23382]: cluster 2026-03-10T07:54:26.956728+0000 mgr.y (mgr.24407) 1088 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:30.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:30 vm00 bash[20701]: cluster 2026-03-10T07:54:28.957052+0000 mgr.y (mgr.24407) 1089 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:30.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:30 vm00 bash[20701]: cluster 2026-03-10T07:54:28.957052+0000 mgr.y (mgr.24407) 1089 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:30.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:30 vm00 bash[28005]: cluster 2026-03-10T07:54:28.957052+0000 mgr.y (mgr.24407) 1089 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:30.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:30 vm00 bash[28005]: cluster 2026-03-10T07:54:28.957052+0000 mgr.y (mgr.24407) 1089 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:30.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:30 vm03 bash[23382]: cluster 2026-03-10T07:54:28.957052+0000 mgr.y (mgr.24407) 1089 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:30.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:30 vm03 bash[23382]: cluster 2026-03-10T07:54:28.957052+0000 mgr.y (mgr.24407) 1089 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:31.377 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:54:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:54:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:54:32.377 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:32 vm00 bash[20701]: cluster 2026-03-10T07:54:30.957820+0000 mgr.y (mgr.24407) 1090 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:32.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:32 vm00 bash[20701]: cluster 2026-03-10T07:54:30.957820+0000 mgr.y (mgr.24407) 1090 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:32.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:32 vm00 bash[20701]: audit 2026-03-10T07:54:31.767751+0000 mon.c (mon.2) 454 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:54:32.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:32 vm00 bash[20701]: audit 2026-03-10T07:54:31.767751+0000 mon.c (mon.2) 454 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:54:32.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:32 vm00 bash[20701]: audit 2026-03-10T07:54:32.074631+0000 mon.c (mon.2) 455 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:54:32.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:32 vm00 bash[20701]: audit 2026-03-10T07:54:32.074631+0000 mon.c (mon.2) 455 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:54:32.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:32 vm00 bash[20701]: audit 2026-03-10T07:54:32.075682+0000 mon.c (mon.2) 456 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:54:32.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:32 vm00 bash[20701]: audit 2026-03-10T07:54:32.075682+0000 mon.c (mon.2) 456 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:54:32.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:32 vm00 bash[28005]: cluster 2026-03-10T07:54:30.957820+0000 mgr.y (mgr.24407) 1090 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:32.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:32 vm00 bash[28005]: cluster 2026-03-10T07:54:30.957820+0000 mgr.y (mgr.24407) 1090 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:32.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:32 vm00 bash[28005]: audit 2026-03-10T07:54:31.767751+0000 mon.c (mon.2) 454 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:54:32.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:32 vm00 bash[28005]: audit 2026-03-10T07:54:31.767751+0000 mon.c (mon.2) 454 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:54:32.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:32 vm00 bash[28005]: audit 
2026-03-10T07:54:32.074631+0000 mon.c (mon.2) 455 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:54:32.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:32 vm00 bash[28005]: audit 2026-03-10T07:54:32.074631+0000 mon.c (mon.2) 455 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:54:32.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:32 vm00 bash[28005]: audit 2026-03-10T07:54:32.075682+0000 mon.c (mon.2) 456 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:54:32.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:32 vm00 bash[28005]: audit 2026-03-10T07:54:32.075682+0000 mon.c (mon.2) 456 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:54:32.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:32 vm03 bash[23382]: cluster 2026-03-10T07:54:30.957820+0000 mgr.y (mgr.24407) 1090 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:32.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:32 vm03 bash[23382]: cluster 2026-03-10T07:54:30.957820+0000 mgr.y (mgr.24407) 1090 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:32.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:32 vm03 bash[23382]: audit 2026-03-10T07:54:31.767751+0000 mon.c (mon.2) 454 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:54:32.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:32 vm03 bash[23382]: audit 2026-03-10T07:54:31.767751+0000 mon.c (mon.2) 454 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:54:32.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:32 vm03 bash[23382]: audit 2026-03-10T07:54:32.074631+0000 mon.c (mon.2) 455 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:54:32.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:32 vm03 bash[23382]: audit 2026-03-10T07:54:32.074631+0000 mon.c (mon.2) 455 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:54:32.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:32 vm03 bash[23382]: audit 2026-03-10T07:54:32.075682+0000 mon.c (mon.2) 456 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:54:32.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:32 vm03 bash[23382]: audit 2026-03-10T07:54:32.075682+0000 mon.c (mon.2) 456 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:54:33.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:33 vm00 bash[20701]: audit 2026-03-10T07:54:32.089074+0000 mon.a (mon.0) 3565 : audit [INF] from='mgr.24407 ' 
entity='mgr.y' 2026-03-10T07:54:33.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:33 vm00 bash[20701]: audit 2026-03-10T07:54:32.089074+0000 mon.a (mon.0) 3565 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:54:33.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:33 vm00 bash[28005]: audit 2026-03-10T07:54:32.089074+0000 mon.a (mon.0) 3565 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:54:33.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:33 vm00 bash[28005]: audit 2026-03-10T07:54:32.089074+0000 mon.a (mon.0) 3565 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:54:33.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:33 vm03 bash[23382]: audit 2026-03-10T07:54:32.089074+0000 mon.a (mon.0) 3565 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:54:33.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:33 vm03 bash[23382]: audit 2026-03-10T07:54:32.089074+0000 mon.a (mon.0) 3565 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:54:34.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:34 vm00 bash[28005]: cluster 2026-03-10T07:54:32.958162+0000 mgr.y (mgr.24407) 1091 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:34.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:34 vm00 bash[28005]: cluster 2026-03-10T07:54:32.958162+0000 mgr.y (mgr.24407) 1091 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:34.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:34 vm00 bash[20701]: cluster 2026-03-10T07:54:32.958162+0000 mgr.y (mgr.24407) 1091 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:34.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:34 vm00 bash[20701]: cluster 2026-03-10T07:54:32.958162+0000 mgr.y (mgr.24407) 1091 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:34.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:54:34 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:54:34.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:34 vm03 bash[23382]: cluster 2026-03-10T07:54:32.958162+0000 mgr.y (mgr.24407) 1091 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:34.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:34 vm03 bash[23382]: cluster 2026-03-10T07:54:32.958162+0000 mgr.y (mgr.24407) 1091 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:35.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:35 vm03 bash[23382]: audit 2026-03-10T07:54:34.157558+0000 mgr.y (mgr.24407) 1092 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:35.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:35 vm03 bash[23382]: audit 2026-03-10T07:54:34.157558+0000 mgr.y (mgr.24407) 1092 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:35.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:35 
vm00 bash[20701]: audit 2026-03-10T07:54:34.157558+0000 mgr.y (mgr.24407) 1092 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:35.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:35 vm00 bash[20701]: audit 2026-03-10T07:54:34.157558+0000 mgr.y (mgr.24407) 1092 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:35.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:35 vm00 bash[28005]: audit 2026-03-10T07:54:34.157558+0000 mgr.y (mgr.24407) 1092 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:35.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:35 vm00 bash[28005]: audit 2026-03-10T07:54:34.157558+0000 mgr.y (mgr.24407) 1092 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:54:36.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:36 vm03 bash[23382]: cluster 2026-03-10T07:54:34.958910+0000 mgr.y (mgr.24407) 1093 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:36.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:36 vm03 bash[23382]: cluster 2026-03-10T07:54:34.958910+0000 mgr.y (mgr.24407) 1093 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:36.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:36 vm00 bash[20701]: cluster 2026-03-10T07:54:34.958910+0000 mgr.y (mgr.24407) 1093 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:36.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:36 vm00 bash[20701]: cluster 2026-03-10T07:54:34.958910+0000 mgr.y (mgr.24407) 1093 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:36.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:36 vm00 bash[28005]: cluster 2026-03-10T07:54:34.958910+0000 mgr.y (mgr.24407) 1093 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:36.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:36 vm00 bash[28005]: cluster 2026-03-10T07:54:34.958910+0000 mgr.y (mgr.24407) 1093 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:38.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:38 vm03 bash[23382]: cluster 2026-03-10T07:54:36.959239+0000 mgr.y (mgr.24407) 1094 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:38.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:38 vm03 bash[23382]: cluster 2026-03-10T07:54:36.959239+0000 mgr.y (mgr.24407) 1094 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:38.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:38 vm00 bash[20701]: cluster 2026-03-10T07:54:36.959239+0000 mgr.y (mgr.24407) 1094 : cluster 
[DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:38.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:38 vm00 bash[20701]: cluster 2026-03-10T07:54:36.959239+0000 mgr.y (mgr.24407) 1094 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:38.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:38 vm00 bash[28005]: cluster 2026-03-10T07:54:36.959239+0000 mgr.y (mgr.24407) 1094 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:38.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:38 vm00 bash[28005]: cluster 2026-03-10T07:54:36.959239+0000 mgr.y (mgr.24407) 1094 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:40.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:40 vm03 bash[23382]: cluster 2026-03-10T07:54:38.959565+0000 mgr.y (mgr.24407) 1095 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:40.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:40 vm03 bash[23382]: cluster 2026-03-10T07:54:38.959565+0000 mgr.y (mgr.24407) 1095 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:40.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:40 vm00 bash[28005]: cluster 2026-03-10T07:54:38.959565+0000 mgr.y (mgr.24407) 1095 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:40.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:40 vm00 bash[28005]: cluster 2026-03-10T07:54:38.959565+0000 mgr.y (mgr.24407) 1095 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:40.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:40 vm00 bash[20701]: cluster 2026-03-10T07:54:38.959565+0000 mgr.y (mgr.24407) 1095 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:40.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:40 vm00 bash[20701]: cluster 2026-03-10T07:54:38.959565+0000 mgr.y (mgr.24407) 1095 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:41.377 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:54:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:54:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:54:41.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:41 vm00 bash[20701]: audit 2026-03-10T07:54:40.361824+0000 mon.c (mon.2) 457 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:54:41.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:41 vm00 bash[20701]: audit 2026-03-10T07:54:40.361824+0000 mon.c (mon.2) 457 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:54:41.378 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:41 vm00 bash[28005]: audit 2026-03-10T07:54:40.361824+0000 mon.c (mon.2) 457 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:54:41.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:41 vm00 bash[28005]: audit 2026-03-10T07:54:40.361824+0000 mon.c (mon.2) 457 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:54:41.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:41 vm03 bash[23382]: audit 2026-03-10T07:54:40.361824+0000 mon.c (mon.2) 457 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:54:41.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:41 vm03 bash[23382]: audit 2026-03-10T07:54:40.361824+0000 mon.c (mon.2) 457 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:54:42.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:42 vm03 bash[23382]: cluster 2026-03-10T07:54:40.960282+0000 mgr.y (mgr.24407) 1096 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:42.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:42 vm03 bash[23382]: cluster 2026-03-10T07:54:40.960282+0000 mgr.y (mgr.24407) 1096 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:42.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:42 vm00 bash[28005]: cluster 2026-03-10T07:54:40.960282+0000 mgr.y (mgr.24407) 1096 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:42.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:42 vm00 bash[28005]: cluster 2026-03-10T07:54:40.960282+0000 mgr.y (mgr.24407) 1096 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:42.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:42 vm00 bash[20701]: cluster 2026-03-10T07:54:40.960282+0000 mgr.y (mgr.24407) 1096 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:42.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:54:42 vm00 bash[20701]: cluster 2026-03-10T07:54:40.960282+0000 mgr.y (mgr.24407) 1096 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:54:44.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:54:44 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:54:44.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:44 vm03 bash[23382]: cluster 2026-03-10T07:54:42.960622+0000 mgr.y (mgr.24407) 1097 : cluster [DBG] pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:54:44.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:44 vm03 bash[23382]: cluster 2026-03-10T07:54:42.960622+0000 mgr.y (mgr.24407) 1097 : cluster [DBG] pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 
1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:54:45.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:45 vm03 bash[23382]: audit 2026-03-10T07:54:44.164076+0000 mgr.y (mgr.24407) 1098 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:54:46.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:46 vm03 bash[23382]: cluster 2026-03-10T07:54:44.961358+0000 mgr.y (mgr.24407) 1099 : cluster [DBG] pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:54:48.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:48 vm03 bash[23382]: cluster 2026-03-10T07:54:46.961676+0000 mgr.y (mgr.24407) 1100 : cluster [DBG] pgmap v1536: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:54:50.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:50 vm03 bash[23382]: cluster 2026-03-10T07:54:48.962014+0000 mgr.y (mgr.24407) 1101 : cluster [DBG] pgmap v1537: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:54:51.377 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:54:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:54:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:54:52.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:52 vm03 bash[23382]: cluster 2026-03-10T07:54:50.962652+0000 mgr.y (mgr.24407) 1102 : cluster [DBG] pgmap v1538: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:54:54.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:54:54 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:54:54.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:54:54 vm03 bash[23382]: cluster 2026-03-10T07:54:52.962996+0000 mgr.y (mgr.24407) 1103 : cluster [DBG] pgmap v1539: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
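Note: every cluster-log event in this window is relayed by each monitor's journalctl follower (mon.a and mon.c on vm00, mon.b on vm03), so one pgmap or audit event can surface several times under different local timestamps; the entity and sequence number in the payload (e.g. `mgr.y (mgr.24407) 1099`) identify it uniquely. A minimal dedup sketch under that assumption — the regex and the stdin-filter framing are illustrative, not part of teuthology:

```python
import re
import sys

# The payload after "bash[<pid>]: " is identical across mon echoes, so the
# (entity, sequence) pair, e.g. "mgr.y (mgr.24407) 1099", is a stable key.
SEQ = re.compile(r"bash\[\d+\]: (?:cluster|audit) \S+ (\S+ \(\S+\) \d+) :")

def dedup(lines):
    seen = set()
    for line in lines:
        m = SEQ.search(line)
        key = m.group(1) if m else line  # non-relayed lines pass once as-is
        if key not in seen:
            seen.add(key)
            yield line

if __name__ == "__main__":
    sys.stdout.writelines(dedup(sys.stdin))
```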
2026-03-10T07:54:55.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:55 vm00 bash[28005]: audit 2026-03-10T07:54:54.172427+0000 mgr.y (mgr.24407) 1104 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:54:56.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:56 vm00 bash[28005]: cluster 2026-03-10T07:54:54.963577+0000 mgr.y (mgr.24407) 1105 : cluster [DBG] pgmap v1540: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:54:56.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:56 vm00 bash[28005]: audit 2026-03-10T07:54:55.368277+0000 mon.c (mon.2) 458 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:54:58.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:54:58 vm00 bash[28005]: cluster 2026-03-10T07:54:56.963967+0000 mgr.y (mgr.24407) 1106 : cluster [DBG] pgmap v1541: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:55:00.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:00 vm00 bash[28005]: cluster 2026-03-10T07:54:58.964293+0000 mgr.y (mgr.24407) 1107 : cluster [DBG] pgmap v1542: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:55:01.377 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:55:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:55:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:55:02.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:02 vm00 bash[28005]: cluster 2026-03-10T07:55:00.964993+0000 mgr.y (mgr.24407) 1108 : cluster [DBG] pgmap v1543: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
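The mgr's pgmap digest lines have a fixed shape — version, PG count and states, logical data, raw used/avail, and client I/O rates — which makes them easy to turn into a time series when checking whether the cluster stayed at 228 active+clean throughout the workunit. A small parsing sketch, assuming the line layout shown above is stable; the field names are my own:

```python
import re

# Matches the digest as relayed above, e.g.
# "pgmap v1543: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used,
#  159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s"
PGMAP = re.compile(
    r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
)

def parse_pgmap(line):
    m = PGMAP.search(line)
    return m.groupdict() if m else None

sample = ("pgmap v1543: 228 pgs: 228 active+clean; 455 KiB data, "
          "1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s")
print(parse_pgmap(sample))
# {'version': '1543', 'pgs': '228', 'states': '228 active+clean', ...}
```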
2026-03-10T07:55:04.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:55:04 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:55:04.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:04 vm03 bash[23382]: cluster 2026-03-10T07:55:02.965366+0000 mgr.y (mgr.24407) 1109 : cluster [DBG] pgmap v1544: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:55:05.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:05 vm00 bash[28005]: audit 2026-03-10T07:55:04.179938+0000 mgr.y (mgr.24407) 1110 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:55:06.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:06 vm00 bash[28005]: cluster 2026-03-10T07:55:04.966107+0000 mgr.y (mgr.24407) 1111 : cluster [DBG] pgmap v1545: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:55:08.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:08 vm00 bash[28005]: cluster 2026-03-10T07:55:06.966423+0000 mgr.y (mgr.24407) 1112 : cluster [DBG] pgmap v1546: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:55:10.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:10 vm00 bash[28005]: cluster 2026-03-10T07:55:08.966723+0000 mgr.y (mgr.24407) 1113 : cluster [DBG] pgmap v1547: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:55:11.356 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:55:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:55:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:55:11.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:11 vm00 bash[28005]: audit 2026-03-10T07:55:10.374534+0000 mon.c (mon.2) 459 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:55:12.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:12 vm00 bash[28005]: cluster 2026-03-10T07:55:10.967373+0000 mgr.y (mgr.24407) 1114 : cluster [DBG] pgmap v1548: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:55:14.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:55:14 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:55:14.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:14 vm03 bash[23382]: cluster 2026-03-10T07:55:12.967719+0000 mgr.y (mgr.24407) 1115 : cluster [DBG] pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
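Every ten seconds Prometheus scrapes mgr.y's /metrics endpoint and gets HTTP 503 with a 1621-byte body, which usually means the exporter process is reachable but not yet serving data (for example, the prometheus mgr module still initializing). A quick probe to inspect the error body; the vm00.local host and the mgr exporter's default port 9283 are assumptions for illustration:

```python
from urllib.request import urlopen
from urllib.error import HTTPError

# Hypothetical endpoint: mgr.y runs on vm00; 9283 is the prometheus
# mgr module's default port. The 503 body typically explains the failure.
URL = "http://vm00.local:9283/metrics"

try:
    with urlopen(URL, timeout=5) as resp:
        print(resp.status, resp.read(200))
except HTTPError as err:  # urlopen raises on 4xx/5xx responses
    print(err.code, err.read(200))
```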
2026-03-10T07:55:15.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:15 vm03 bash[23382]: audit 2026-03-10T07:55:14.187910+0000 mgr.y (mgr.24407) 1116 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:55:16.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:16 vm03 bash[23382]: cluster 2026-03-10T07:55:14.968397+0000 mgr.y (mgr.24407) 1117 : cluster [DBG] pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:55:18.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:18 vm03 bash[23382]: cluster 2026-03-10T07:55:16.968746+0000 mgr.y (mgr.24407) 1118 : cluster [DBG] pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:55:20.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:20 vm03 bash[23382]: cluster 2026-03-10T07:55:18.969078+0000 mgr.y (mgr.24407) 1119 : cluster [DBG] pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:55:21.377 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:55:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:55:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:55:22.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:22 vm03 bash[23382]: cluster 2026-03-10T07:55:20.969752+0000 mgr.y (mgr.24407) 1120 : cluster [DBG] pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:55:24.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:55:24 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:55:24.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:24 vm03 bash[23382]: cluster 2026-03-10T07:55:22.970075+0000 mgr.y (mgr.24407) 1121 : cluster [DBG] pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:55:25.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:25 vm03 bash[23382]: audit 2026-03-10T07:55:24.193977+0000 mgr.y (mgr.24407) 1122 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:55:25.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:25 vm03 bash[23382]: audit 2026-03-10T07:55:25.382058+0000 mon.c (mon.2) 460 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:55:26.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:26 vm03 bash[23382]: cluster 2026-03-10T07:55:24.970749+0000 mgr.y (mgr.24407) 1123 : cluster [DBG] pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:55:28.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:28 vm03 bash[23382]: cluster 2026-03-10T07:55:26.971141+0000 mgr.y (mgr.24407) 1124 : cluster [DBG] pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:55:30.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:30 vm03 bash[23382]: cluster 2026-03-10T07:55:28.971528+0000 mgr.y (mgr.24407) 1125 : cluster [DBG] pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:55:31.377 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:55:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:55:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:55:32.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:32 vm00 bash[28005]: cluster 2026-03-10T07:55:30.972385+0000 mgr.y (mgr.24407) 1126 : cluster [DBG] pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
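Beyond the pgmap heartbeat, the audit stream shows two fixed polls: client.iscsi.iscsi.a asks the mgr for "service status" every 10 s, and mgr.y asks the mons for "osd blocklist ls" every 15 s. Tallying command prefixes is a quick way to surface such periodic chatter when scanning a run; a sketch assuming the audit line shape shown above:

```python
import re
import sys
from collections import Counter

# Audit dispatch lines embed the command as JSON-ish text, e.g.
#   cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
CMD = re.compile(r'cmd=\[\{"prefix": "([^"]+)"')

counts = Counter(m.group(1) for line in sys.stdin
                 for m in CMD.finditer(line))
for prefix, n in counts.most_common():
    print(f"{n:6d}  {prefix}")
```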
2026-03-10T07:55:32.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:32 vm00 bash[28005]: audit 2026-03-10T07:55:32.128339+0000 mon.c (mon.2) 461 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:55:33.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:33 vm00 bash[28005]: cluster 2026-03-10T07:55:32.972802+0000 mgr.y (mgr.24407) 1127 : cluster [DBG] pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:55:34.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:55:34 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:55:34.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:34 vm00 bash[28005]: audit 2026-03-10T07:55:34.199173+0000 mgr.y (mgr.24407) 1128 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:55:35.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:35 vm00 bash[28005]: cluster 2026-03-10T07:55:34.973482+0000 mgr.y (mgr.24407) 1129 : cluster [DBG] pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:55:38.214 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:38 vm03 bash[23382]: cluster 2026-03-10T07:55:36.973796+0000 mgr.y (mgr.24407) 1130 : cluster [DBG] pgmap v1561: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:55:38.214 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:38 vm03 bash[23382]: audit 2026-03-10T07:55:38.015087+0000 mon.a (mon.0) 3566 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:55:38.214 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:38 vm03 bash[23382]: audit 2026-03-10T07:55:38.023789+0000 mon.a (mon.0) 3567 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:55:38.214 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:38 vm03 bash[23382]: audit 2026-03-10T07:55:38.026569+0000 mon.c (mon.2) 462 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:55:38.214 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:38 vm03 bash[23382]: audit 2026-03-10T07:55:38.027168+0000 mon.c (mon.2) 463 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:55:38.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:38 vm00 bash[20701]: audit 2026-03-10T07:55:38.015087+0000 mon.a (mon.0) 3566 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:55:38.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:38 vm00 bash[20701]: audit 2026-03-10T07:55:38.015087+0000 mon.a (mon.0) 3566 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:55:38.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:38 vm00 bash[20701]: audit 2026-03-10T07:55:38.023789+0000 mon.a (mon.0) 3567 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:55:38.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:38 vm00 bash[20701]: audit 2026-03-10T07:55:38.023789+0000 mon.a (mon.0) 3567 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:55:38.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:38 vm00 bash[20701]: audit 2026-03-10T07:55:38.026569+0000 mon.c (mon.2) 462 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:55:38.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:38 vm00 bash[20701]: audit 2026-03-10T07:55:38.026569+0000 mon.c (mon.2) 462 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:55:38.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:38 vm00 bash[20701]: audit 2026-03-10T07:55:38.027168+0000 mon.c (mon.2) 463 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:55:38.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:38 vm00 bash[20701]: audit 2026-03-10T07:55:38.027168+0000 mon.c (mon.2) 463 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:55:39.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:39 vm00 bash[28005]: audit 2026-03-10T07:55:38.037899+0000 mon.a (mon.0) 3568 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:55:39.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:39 vm00 bash[28005]: audit 2026-03-10T07:55:38.037899+0000 mon.a (mon.0) 3568 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:55:39.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:39 vm00 bash[20701]: audit 2026-03-10T07:55:38.037899+0000 mon.a (mon.0) 3568 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:55:39.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:39 vm00 bash[20701]: audit 2026-03-10T07:55:38.037899+0000 mon.a (mon.0) 3568 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:55:39.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:39 vm03 bash[23382]: audit 2026-03-10T07:55:38.037899+0000 mon.a (mon.0) 3568 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:55:39.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:39 vm03 bash[23382]: audit 2026-03-10T07:55:38.037899+0000 mon.a (mon.0) 3568 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:55:40.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:40 vm00 bash[28005]: cluster 2026-03-10T07:55:38.974120+0000 mgr.y (mgr.24407) 1131 : cluster [DBG] pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:55:40.377 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:40 vm00 bash[28005]: cluster 2026-03-10T07:55:38.974120+0000 mgr.y (mgr.24407) 1131 : cluster [DBG] pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:55:40.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:40 vm00 bash[20701]: cluster 2026-03-10T07:55:38.974120+0000 mgr.y (mgr.24407) 1131 : cluster [DBG] pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:55:40.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:40 vm00 bash[20701]: cluster 2026-03-10T07:55:38.974120+0000 mgr.y (mgr.24407) 1131 : cluster [DBG] pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:55:40.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:40 vm03 bash[23382]: cluster 2026-03-10T07:55:38.974120+0000 mgr.y (mgr.24407) 1131 : cluster [DBG] pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:55:40.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:40 vm03 bash[23382]: cluster 2026-03-10T07:55:38.974120+0000 mgr.y (mgr.24407) 1131 : cluster [DBG] pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:55:41.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:41 vm00 bash[28005]: audit 2026-03-10T07:55:40.389826+0000 mon.c (mon.2) 464 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:55:41.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:41 vm00 bash[28005]: audit 2026-03-10T07:55:40.389826+0000 mon.c (mon.2) 464 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:55:41.377 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:55:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:55:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:55:41.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:41 vm00 bash[20701]: audit 2026-03-10T07:55:40.389826+0000 mon.c (mon.2) 464 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:55:41.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:41 vm00 bash[20701]: audit 2026-03-10T07:55:40.389826+0000 mon.c (mon.2) 464 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:55:41.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:41 vm03 bash[23382]: audit 2026-03-10T07:55:40.389826+0000 mon.c (mon.2) 464 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:55:41.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:41 vm03 bash[23382]: audit 2026-03-10T07:55:40.389826+0000 mon.c (mon.2) 464 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:55:42.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:42 vm00 bash[28005]: cluster 2026-03-10T07:55:40.974800+0000 mgr.y (mgr.24407) 1132 : cluster [DBG] pgmap 
v1563: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:55:42.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:42 vm00 bash[28005]: cluster 2026-03-10T07:55:40.974800+0000 mgr.y (mgr.24407) 1132 : cluster [DBG] pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:55:42.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:42 vm00 bash[20701]: cluster 2026-03-10T07:55:40.974800+0000 mgr.y (mgr.24407) 1132 : cluster [DBG] pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:55:42.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:42 vm00 bash[20701]: cluster 2026-03-10T07:55:40.974800+0000 mgr.y (mgr.24407) 1132 : cluster [DBG] pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:55:42.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:42 vm03 bash[23382]: cluster 2026-03-10T07:55:40.974800+0000 mgr.y (mgr.24407) 1132 : cluster [DBG] pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:55:42.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:42 vm03 bash[23382]: cluster 2026-03-10T07:55:40.974800+0000 mgr.y (mgr.24407) 1132 : cluster [DBG] pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:55:44.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:44 vm00 bash[28005]: cluster 2026-03-10T07:55:42.975138+0000 mgr.y (mgr.24407) 1133 : cluster [DBG] pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:55:44.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:44 vm00 bash[28005]: cluster 2026-03-10T07:55:42.975138+0000 mgr.y (mgr.24407) 1133 : cluster [DBG] pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:55:44.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:44 vm00 bash[20701]: cluster 2026-03-10T07:55:42.975138+0000 mgr.y (mgr.24407) 1133 : cluster [DBG] pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:55:44.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:44 vm00 bash[20701]: cluster 2026-03-10T07:55:42.975138+0000 mgr.y (mgr.24407) 1133 : cluster [DBG] pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:55:44.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:55:44 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:55:44.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:44 vm03 bash[23382]: cluster 2026-03-10T07:55:42.975138+0000 mgr.y (mgr.24407) 1133 : cluster [DBG] pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:55:44.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:44 vm03 bash[23382]: cluster 2026-03-10T07:55:42.975138+0000 mgr.y (mgr.24407) 1133 : cluster [DBG] pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:55:45.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:45 vm00 bash[28005]: audit 
2026-03-10T07:55:44.208819+0000 mgr.y (mgr.24407) 1134 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:55:45.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:45 vm00 bash[28005]: audit 2026-03-10T07:55:44.208819+0000 mgr.y (mgr.24407) 1134 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:55:45.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:45 vm00 bash[20701]: audit 2026-03-10T07:55:44.208819+0000 mgr.y (mgr.24407) 1134 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:55:45.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:45 vm00 bash[20701]: audit 2026-03-10T07:55:44.208819+0000 mgr.y (mgr.24407) 1134 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:55:45.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:45 vm03 bash[23382]: audit 2026-03-10T07:55:44.208819+0000 mgr.y (mgr.24407) 1134 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:55:45.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:45 vm03 bash[23382]: audit 2026-03-10T07:55:44.208819+0000 mgr.y (mgr.24407) 1134 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:55:46.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:46 vm00 bash[28005]: cluster 2026-03-10T07:55:44.975879+0000 mgr.y (mgr.24407) 1135 : cluster [DBG] pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:55:46.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:46 vm00 bash[28005]: cluster 2026-03-10T07:55:44.975879+0000 mgr.y (mgr.24407) 1135 : cluster [DBG] pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:55:46.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:46 vm00 bash[20701]: cluster 2026-03-10T07:55:44.975879+0000 mgr.y (mgr.24407) 1135 : cluster [DBG] pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:55:46.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:46 vm00 bash[20701]: cluster 2026-03-10T07:55:44.975879+0000 mgr.y (mgr.24407) 1135 : cluster [DBG] pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:55:46.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:46 vm03 bash[23382]: cluster 2026-03-10T07:55:44.975879+0000 mgr.y (mgr.24407) 1135 : cluster [DBG] pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:55:46.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:46 vm03 bash[23382]: cluster 2026-03-10T07:55:44.975879+0000 mgr.y (mgr.24407) 1135 : cluster [DBG] pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:55:48.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:48 vm03 bash[23382]: cluster 
2026-03-10T07:55:46.976232+0000 mgr.y (mgr.24407) 1136 : cluster [DBG] pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:55:48.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:48 vm03 bash[23382]: cluster 2026-03-10T07:55:46.976232+0000 mgr.y (mgr.24407) 1136 : cluster [DBG] pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:55:48.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:48 vm00 bash[28005]: cluster 2026-03-10T07:55:46.976232+0000 mgr.y (mgr.24407) 1136 : cluster [DBG] pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:55:48.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:48 vm00 bash[28005]: cluster 2026-03-10T07:55:46.976232+0000 mgr.y (mgr.24407) 1136 : cluster [DBG] pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:55:48.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:48 vm00 bash[20701]: cluster 2026-03-10T07:55:46.976232+0000 mgr.y (mgr.24407) 1136 : cluster [DBG] pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:55:48.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:48 vm00 bash[20701]: cluster 2026-03-10T07:55:46.976232+0000 mgr.y (mgr.24407) 1136 : cluster [DBG] pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:55:50.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:50 vm03 bash[23382]: cluster 2026-03-10T07:55:48.976600+0000 mgr.y (mgr.24407) 1137 : cluster [DBG] pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:55:50.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:50 vm03 bash[23382]: cluster 2026-03-10T07:55:48.976600+0000 mgr.y (mgr.24407) 1137 : cluster [DBG] pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:55:50.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:50 vm00 bash[28005]: cluster 2026-03-10T07:55:48.976600+0000 mgr.y (mgr.24407) 1137 : cluster [DBG] pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:55:50.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:50 vm00 bash[28005]: cluster 2026-03-10T07:55:48.976600+0000 mgr.y (mgr.24407) 1137 : cluster [DBG] pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:55:50.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:50 vm00 bash[20701]: cluster 2026-03-10T07:55:48.976600+0000 mgr.y (mgr.24407) 1137 : cluster [DBG] pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:55:50.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:50 vm00 bash[20701]: cluster 2026-03-10T07:55:48.976600+0000 mgr.y (mgr.24407) 1137 : cluster [DBG] pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T07:55:51.376 
INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:55:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:55:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:55:52.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:52 vm03 bash[23382]: cluster 2026-03-10T07:55:50.977318+0000 mgr.y (mgr.24407) 1138 : cluster [DBG] pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T07:55:52.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:52 vm03 bash[23382]: cluster 2026-03-10T07:55:50.977318+0000 mgr.y (mgr.24407) 1138 : cluster [DBG] pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T07:55:52.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:52 vm00 bash[28005]: cluster 2026-03-10T07:55:50.977318+0000 mgr.y (mgr.24407) 1138 : cluster [DBG] pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T07:55:52.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:52 vm00 bash[28005]: cluster 2026-03-10T07:55:50.977318+0000 mgr.y (mgr.24407) 1138 : cluster [DBG] pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T07:55:52.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:52 vm00 bash[20701]: cluster 2026-03-10T07:55:50.977318+0000 mgr.y (mgr.24407) 1138 : cluster [DBG] pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T07:55:52.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:52 vm00 bash[20701]: cluster 2026-03-10T07:55:50.977318+0000 mgr.y (mgr.24407) 1138 : cluster [DBG] pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T07:55:54.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:55:54 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:55:54.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:54 vm03 bash[23382]: cluster 2026-03-10T07:55:52.977658+0000 mgr.y (mgr.24407) 1139 : cluster [DBG] pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T07:55:54.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:54 vm03 bash[23382]: cluster 2026-03-10T07:55:52.977658+0000 mgr.y (mgr.24407) 1139 : cluster [DBG] pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T07:55:54.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:54 vm00 bash[28005]: cluster 2026-03-10T07:55:52.977658+0000 mgr.y (mgr.24407) 1139 : cluster [DBG] pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T07:55:54.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:54 vm00 bash[28005]: cluster 2026-03-10T07:55:52.977658+0000 mgr.y (mgr.24407) 1139 : cluster [DBG] pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T07:55:54.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:54 vm00 bash[20701]: cluster 2026-03-10T07:55:52.977658+0000 mgr.y (mgr.24407) 1139 : cluster [DBG] pgmap v1569: 228 pgs: 228 active+clean; 455 KiB 
data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T07:55:54.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:54 vm00 bash[20701]: cluster 2026-03-10T07:55:52.977658+0000 mgr.y (mgr.24407) 1139 : cluster [DBG] pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T07:55:55.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:55 vm03 bash[23382]: audit 2026-03-10T07:55:54.218912+0000 mgr.y (mgr.24407) 1140 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:55:55.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:55 vm03 bash[23382]: audit 2026-03-10T07:55:54.218912+0000 mgr.y (mgr.24407) 1140 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:55:55.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:55 vm00 bash[28005]: audit 2026-03-10T07:55:54.218912+0000 mgr.y (mgr.24407) 1140 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:55:55.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:55 vm00 bash[28005]: audit 2026-03-10T07:55:54.218912+0000 mgr.y (mgr.24407) 1140 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:55:55.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:55 vm00 bash[20701]: audit 2026-03-10T07:55:54.218912+0000 mgr.y (mgr.24407) 1140 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:55:55.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:55 vm00 bash[20701]: audit 2026-03-10T07:55:54.218912+0000 mgr.y (mgr.24407) 1140 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:55:56.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:56 vm03 bash[23382]: cluster 2026-03-10T07:55:54.978427+0000 mgr.y (mgr.24407) 1141 : cluster [DBG] pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T07:55:56.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:56 vm03 bash[23382]: cluster 2026-03-10T07:55:54.978427+0000 mgr.y (mgr.24407) 1141 : cluster [DBG] pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T07:55:56.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:56 vm03 bash[23382]: audit 2026-03-10T07:55:55.396236+0000 mon.c (mon.2) 465 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:55:56.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:56 vm03 bash[23382]: audit 2026-03-10T07:55:55.396236+0000 mon.c (mon.2) 465 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:55:56.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:56 vm00 bash[28005]: cluster 2026-03-10T07:55:54.978427+0000 mgr.y (mgr.24407) 1141 : cluster [DBG] pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 
KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T07:55:56.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:56 vm00 bash[28005]: cluster 2026-03-10T07:55:54.978427+0000 mgr.y (mgr.24407) 1141 : cluster [DBG] pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T07:55:56.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:56 vm00 bash[28005]: audit 2026-03-10T07:55:55.396236+0000 mon.c (mon.2) 465 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:55:56.627 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:56 vm00 bash[28005]: audit 2026-03-10T07:55:55.396236+0000 mon.c (mon.2) 465 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:55:56.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:56 vm00 bash[20701]: cluster 2026-03-10T07:55:54.978427+0000 mgr.y (mgr.24407) 1141 : cluster [DBG] pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T07:55:56.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:56 vm00 bash[20701]: cluster 2026-03-10T07:55:54.978427+0000 mgr.y (mgr.24407) 1141 : cluster [DBG] pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T07:55:56.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:56 vm00 bash[20701]: audit 2026-03-10T07:55:55.396236+0000 mon.c (mon.2) 465 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:55:56.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:56 vm00 bash[20701]: audit 2026-03-10T07:55:55.396236+0000 mon.c (mon.2) 465 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:55:58.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:58 vm03 bash[23382]: cluster 2026-03-10T07:55:56.978750+0000 mgr.y (mgr.24407) 1142 : cluster [DBG] pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s 2026-03-10T07:55:58.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:55:58 vm03 bash[23382]: cluster 2026-03-10T07:55:56.978750+0000 mgr.y (mgr.24407) 1142 : cluster [DBG] pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s 2026-03-10T07:55:58.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:58 vm00 bash[28005]: cluster 2026-03-10T07:55:56.978750+0000 mgr.y (mgr.24407) 1142 : cluster [DBG] pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s 2026-03-10T07:55:58.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:55:58 vm00 bash[28005]: cluster 2026-03-10T07:55:56.978750+0000 mgr.y (mgr.24407) 1142 : cluster [DBG] pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s 2026-03-10T07:55:58.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:58 vm00 bash[20701]: cluster 2026-03-10T07:55:56.978750+0000 mgr.y (mgr.24407) 1142 : cluster [DBG] pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 
GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s 2026-03-10T07:55:58.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:55:58 vm00 bash[20701]: cluster 2026-03-10T07:55:56.978750+0000 mgr.y (mgr.24407) 1142 : cluster [DBG] pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s 2026-03-10T07:56:00.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:00 vm00 bash[28005]: cluster 2026-03-10T07:55:58.979147+0000 mgr.y (mgr.24407) 1143 : cluster [DBG] pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s 2026-03-10T07:56:00.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:00 vm00 bash[28005]: cluster 2026-03-10T07:55:58.979147+0000 mgr.y (mgr.24407) 1143 : cluster [DBG] pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s 2026-03-10T07:56:00.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:00 vm00 bash[20701]: cluster 2026-03-10T07:55:58.979147+0000 mgr.y (mgr.24407) 1143 : cluster [DBG] pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s 2026-03-10T07:56:00.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:00 vm00 bash[20701]: cluster 2026-03-10T07:55:58.979147+0000 mgr.y (mgr.24407) 1143 : cluster [DBG] pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s 2026-03-10T07:56:00.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:00 vm03 bash[23382]: cluster 2026-03-10T07:55:58.979147+0000 mgr.y (mgr.24407) 1143 : cluster [DBG] pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s 2026-03-10T07:56:00.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:00 vm03 bash[23382]: cluster 2026-03-10T07:55:58.979147+0000 mgr.y (mgr.24407) 1143 : cluster [DBG] pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s 2026-03-10T07:56:01.376 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:56:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:56:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:56:02.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:02 vm00 bash[28005]: cluster 2026-03-10T07:56:00.979899+0000 mgr.y (mgr.24407) 1144 : cluster [DBG] pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s 2026-03-10T07:56:02.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:02 vm00 bash[28005]: cluster 2026-03-10T07:56:00.979899+0000 mgr.y (mgr.24407) 1144 : cluster [DBG] pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s 2026-03-10T07:56:02.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:02 vm00 bash[20701]: cluster 2026-03-10T07:56:00.979899+0000 mgr.y (mgr.24407) 1144 : cluster [DBG] pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s 2026-03-10T07:56:02.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:02 vm00 bash[20701]: cluster 2026-03-10T07:56:00.979899+0000 mgr.y (mgr.24407) 1144 : cluster [DBG] pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s 
2026-03-10T07:56:02.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:02 vm03 bash[23382]: cluster 2026-03-10T07:56:00.979899+0000 mgr.y (mgr.24407) 1144 : cluster [DBG] pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s 2026-03-10T07:56:02.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:02 vm03 bash[23382]: cluster 2026-03-10T07:56:00.979899+0000 mgr.y (mgr.24407) 1144 : cluster [DBG] pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s 2026-03-10T07:56:04.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:56:04 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:56:04.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:04 vm03 bash[23382]: cluster 2026-03-10T07:56:02.980227+0000 mgr.y (mgr.24407) 1145 : cluster [DBG] pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:04.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:04 vm03 bash[23382]: cluster 2026-03-10T07:56:02.980227+0000 mgr.y (mgr.24407) 1145 : cluster [DBG] pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:04.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:04 vm00 bash[28005]: cluster 2026-03-10T07:56:02.980227+0000 mgr.y (mgr.24407) 1145 : cluster [DBG] pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:04.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:04 vm00 bash[28005]: cluster 2026-03-10T07:56:02.980227+0000 mgr.y (mgr.24407) 1145 : cluster [DBG] pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:04.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:04 vm00 bash[20701]: cluster 2026-03-10T07:56:02.980227+0000 mgr.y (mgr.24407) 1145 : cluster [DBG] pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:04.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:04 vm00 bash[20701]: cluster 2026-03-10T07:56:02.980227+0000 mgr.y (mgr.24407) 1145 : cluster [DBG] pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:05.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:05 vm00 bash[28005]: audit 2026-03-10T07:56:04.229304+0000 mgr.y (mgr.24407) 1146 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:56:05.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:05 vm00 bash[28005]: audit 2026-03-10T07:56:04.229304+0000 mgr.y (mgr.24407) 1146 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:56:05.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:05 vm00 bash[20701]: audit 2026-03-10T07:56:04.229304+0000 mgr.y (mgr.24407) 1146 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:56:05.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:05 vm00 bash[20701]: audit 2026-03-10T07:56:04.229304+0000 mgr.y (mgr.24407) 1146 : audit [DBG] from='client.24373 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:56:05.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:05 vm03 bash[23382]: audit 2026-03-10T07:56:04.229304+0000 mgr.y (mgr.24407) 1146 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:56:05.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:05 vm03 bash[23382]: audit 2026-03-10T07:56:04.229304+0000 mgr.y (mgr.24407) 1146 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:56:06.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:06 vm00 bash[28005]: cluster 2026-03-10T07:56:04.980858+0000 mgr.y (mgr.24407) 1147 : cluster [DBG] pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:06.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:06 vm00 bash[28005]: cluster 2026-03-10T07:56:04.980858+0000 mgr.y (mgr.24407) 1147 : cluster [DBG] pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:06.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:06 vm00 bash[20701]: cluster 2026-03-10T07:56:04.980858+0000 mgr.y (mgr.24407) 1147 : cluster [DBG] pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:06.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:06 vm00 bash[20701]: cluster 2026-03-10T07:56:04.980858+0000 mgr.y (mgr.24407) 1147 : cluster [DBG] pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:06.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:06 vm03 bash[23382]: cluster 2026-03-10T07:56:04.980858+0000 mgr.y (mgr.24407) 1147 : cluster [DBG] pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:06.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:06 vm03 bash[23382]: cluster 2026-03-10T07:56:04.980858+0000 mgr.y (mgr.24407) 1147 : cluster [DBG] pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:08.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:08 vm00 bash[28005]: cluster 2026-03-10T07:56:06.981219+0000 mgr.y (mgr.24407) 1148 : cluster [DBG] pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:08.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:08 vm00 bash[28005]: cluster 2026-03-10T07:56:06.981219+0000 mgr.y (mgr.24407) 1148 : cluster [DBG] pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:08.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:08 vm00 bash[20701]: cluster 2026-03-10T07:56:06.981219+0000 mgr.y (mgr.24407) 1148 : cluster [DBG] pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:08.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:08 vm00 bash[20701]: cluster 2026-03-10T07:56:06.981219+0000 mgr.y (mgr.24407) 1148 : cluster [DBG] pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 
op/s 2026-03-10T07:56:08.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:08 vm03 bash[23382]: cluster 2026-03-10T07:56:06.981219+0000 mgr.y (mgr.24407) 1148 : cluster [DBG] pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:08.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:08 vm03 bash[23382]: cluster 2026-03-10T07:56:06.981219+0000 mgr.y (mgr.24407) 1148 : cluster [DBG] pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:10.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:10 vm00 bash[28005]: cluster 2026-03-10T07:56:08.981554+0000 mgr.y (mgr.24407) 1149 : cluster [DBG] pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:10.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:10 vm00 bash[28005]: cluster 2026-03-10T07:56:08.981554+0000 mgr.y (mgr.24407) 1149 : cluster [DBG] pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:10.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:10 vm00 bash[20701]: cluster 2026-03-10T07:56:08.981554+0000 mgr.y (mgr.24407) 1149 : cluster [DBG] pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:10.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:10 vm00 bash[20701]: cluster 2026-03-10T07:56:08.981554+0000 mgr.y (mgr.24407) 1149 : cluster [DBG] pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:10.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:10 vm03 bash[23382]: cluster 2026-03-10T07:56:08.981554+0000 mgr.y (mgr.24407) 1149 : cluster [DBG] pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:10.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:10 vm03 bash[23382]: cluster 2026-03-10T07:56:08.981554+0000 mgr.y (mgr.24407) 1149 : cluster [DBG] pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:11.372 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:56:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:56:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:56:11.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:11 vm00 bash[28005]: audit 2026-03-10T07:56:10.401630+0000 mon.c (mon.2) 466 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:56:11.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:11 vm00 bash[28005]: audit 2026-03-10T07:56:10.401630+0000 mon.c (mon.2) 466 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:56:11.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:11 vm00 bash[20701]: audit 2026-03-10T07:56:10.401630+0000 mon.c (mon.2) 466 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:56:11.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:11 vm00 bash[20701]: audit 2026-03-10T07:56:10.401630+0000 mon.c (mon.2) 466 : audit [DBG] 
from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:56:11.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:11 vm03 bash[23382]: audit 2026-03-10T07:56:10.401630+0000 mon.c (mon.2) 466 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:56:11.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:11 vm03 bash[23382]: audit 2026-03-10T07:56:10.401630+0000 mon.c (mon.2) 466 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:56:12.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:12 vm00 bash[28005]: cluster 2026-03-10T07:56:10.982274+0000 mgr.y (mgr.24407) 1150 : cluster [DBG] pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:12.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:12 vm00 bash[28005]: cluster 2026-03-10T07:56:10.982274+0000 mgr.y (mgr.24407) 1150 : cluster [DBG] pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:12.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:12 vm00 bash[20701]: cluster 2026-03-10T07:56:10.982274+0000 mgr.y (mgr.24407) 1150 : cluster [DBG] pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:12.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:12 vm00 bash[20701]: cluster 2026-03-10T07:56:10.982274+0000 mgr.y (mgr.24407) 1150 : cluster [DBG] pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:12.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:12 vm03 bash[23382]: cluster 2026-03-10T07:56:10.982274+0000 mgr.y (mgr.24407) 1150 : cluster [DBG] pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:12.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:12 vm03 bash[23382]: cluster 2026-03-10T07:56:10.982274+0000 mgr.y (mgr.24407) 1150 : cluster [DBG] pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:14.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:56:14 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:56:14.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:14 vm03 bash[23382]: cluster 2026-03-10T07:56:12.982627+0000 mgr.y (mgr.24407) 1151 : cluster [DBG] pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:14.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:14 vm03 bash[23382]: cluster 2026-03-10T07:56:12.982627+0000 mgr.y (mgr.24407) 1151 : cluster [DBG] pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:14.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:14 vm00 bash[28005]: cluster 2026-03-10T07:56:12.982627+0000 mgr.y (mgr.24407) 1151 : cluster [DBG] pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:14.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:14 vm00 
bash[28005]: cluster 2026-03-10T07:56:12.982627+0000 mgr.y (mgr.24407) 1151 : cluster [DBG] pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:14.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:14 vm00 bash[20701]: cluster 2026-03-10T07:56:12.982627+0000 mgr.y (mgr.24407) 1151 : cluster [DBG] pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:14.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:14 vm00 bash[20701]: cluster 2026-03-10T07:56:12.982627+0000 mgr.y (mgr.24407) 1151 : cluster [DBG] pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:15.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:15 vm03 bash[23382]: audit 2026-03-10T07:56:14.239040+0000 mgr.y (mgr.24407) 1152 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:56:15.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:15 vm03 bash[23382]: audit 2026-03-10T07:56:14.239040+0000 mgr.y (mgr.24407) 1152 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:56:15.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:15 vm00 bash[28005]: audit 2026-03-10T07:56:14.239040+0000 mgr.y (mgr.24407) 1152 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:56:15.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:15 vm00 bash[28005]: audit 2026-03-10T07:56:14.239040+0000 mgr.y (mgr.24407) 1152 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:56:15.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:15 vm00 bash[20701]: audit 2026-03-10T07:56:14.239040+0000 mgr.y (mgr.24407) 1152 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:56:15.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:15 vm00 bash[20701]: audit 2026-03-10T07:56:14.239040+0000 mgr.y (mgr.24407) 1152 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:56:16.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:16 vm03 bash[23382]: cluster 2026-03-10T07:56:14.983378+0000 mgr.y (mgr.24407) 1153 : cluster [DBG] pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:16.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:16 vm03 bash[23382]: cluster 2026-03-10T07:56:14.983378+0000 mgr.y (mgr.24407) 1153 : cluster [DBG] pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:16.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:16 vm00 bash[28005]: cluster 2026-03-10T07:56:14.983378+0000 mgr.y (mgr.24407) 1153 : cluster [DBG] pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:16.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:16 vm00 bash[28005]: cluster 2026-03-10T07:56:14.983378+0000 mgr.y (mgr.24407) 1153 : cluster [DBG] pgmap 
v1580: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:16.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:16 vm00 bash[20701]: cluster 2026-03-10T07:56:14.983378+0000 mgr.y (mgr.24407) 1153 : cluster [DBG] pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:16.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:16 vm00 bash[20701]: cluster 2026-03-10T07:56:14.983378+0000 mgr.y (mgr.24407) 1153 : cluster [DBG] pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:18.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:18 vm00 bash[28005]: cluster 2026-03-10T07:56:16.983759+0000 mgr.y (mgr.24407) 1154 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:18.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:18 vm00 bash[28005]: cluster 2026-03-10T07:56:16.983759+0000 mgr.y (mgr.24407) 1154 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:18.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:18 vm00 bash[20701]: cluster 2026-03-10T07:56:16.983759+0000 mgr.y (mgr.24407) 1154 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:18.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:18 vm00 bash[20701]: cluster 2026-03-10T07:56:16.983759+0000 mgr.y (mgr.24407) 1154 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:18.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:18 vm03 bash[23382]: cluster 2026-03-10T07:56:16.983759+0000 mgr.y (mgr.24407) 1154 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:18.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:18 vm03 bash[23382]: cluster 2026-03-10T07:56:16.983759+0000 mgr.y (mgr.24407) 1154 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:20.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:20 vm03 bash[23382]: cluster 2026-03-10T07:56:18.984110+0000 mgr.y (mgr.24407) 1155 : cluster [DBG] pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:20.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:20 vm03 bash[23382]: cluster 2026-03-10T07:56:18.984110+0000 mgr.y (mgr.24407) 1155 : cluster [DBG] pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:20.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:20 vm00 bash[28005]: cluster 2026-03-10T07:56:18.984110+0000 mgr.y (mgr.24407) 1155 : cluster [DBG] pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:20.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:20 vm00 bash[28005]: cluster 2026-03-10T07:56:18.984110+0000 mgr.y (mgr.24407) 1155 : cluster [DBG] pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 
B/s rd, 0 op/s 2026-03-10T07:56:20.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:20 vm00 bash[20701]: cluster 2026-03-10T07:56:18.984110+0000 mgr.y (mgr.24407) 1155 : cluster [DBG] pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:20.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:20 vm00 bash[20701]: cluster 2026-03-10T07:56:18.984110+0000 mgr.y (mgr.24407) 1155 : cluster [DBG] pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:21.376 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:56:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:56:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:56:22.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:22 vm03 bash[23382]: cluster 2026-03-10T07:56:20.984963+0000 mgr.y (mgr.24407) 1156 : cluster [DBG] pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:22.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:22 vm03 bash[23382]: cluster 2026-03-10T07:56:20.984963+0000 mgr.y (mgr.24407) 1156 : cluster [DBG] pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:22.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:22 vm00 bash[28005]: cluster 2026-03-10T07:56:20.984963+0000 mgr.y (mgr.24407) 1156 : cluster [DBG] pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:22.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:22 vm00 bash[28005]: cluster 2026-03-10T07:56:20.984963+0000 mgr.y (mgr.24407) 1156 : cluster [DBG] pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:22.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:22 vm00 bash[20701]: cluster 2026-03-10T07:56:20.984963+0000 mgr.y (mgr.24407) 1156 : cluster [DBG] pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:22.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:56:22 vm00 bash[20701]: cluster 2026-03-10T07:56:20.984963+0000 mgr.y (mgr.24407) 1156 : cluster [DBG] pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:56:24.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:56:24 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:56:24.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:24 vm03 bash[23382]: cluster 2026-03-10T07:56:22.985283+0000 mgr.y (mgr.24407) 1157 : cluster [DBG] pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:24.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:24 vm03 bash[23382]: cluster 2026-03-10T07:56:22.985283+0000 mgr.y (mgr.24407) 1157 : cluster [DBG] pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:56:24.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:24 vm00 bash[28005]: cluster 2026-03-10T07:56:22.985283+0000 mgr.y (mgr.24407) 1157 : cluster [DBG] pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T07:56:25.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:25 vm03 bash[23382]: audit 2026-03-10T07:56:24.249242+0000 mgr.y (mgr.24407) 1158 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:56:25.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:25 vm03 bash[23382]: audit 2026-03-10T07:56:25.407775+0000 mon.c (mon.2) 467 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:56:26.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:26 vm03 bash[23382]: cluster 2026-03-10T07:56:24.985945+0000 mgr.y (mgr.24407) 1159 : cluster [DBG] pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:56:28.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:28 vm03 bash[23382]: cluster 2026-03-10T07:56:26.986243+0000 mgr.y (mgr.24407) 1160 : cluster [DBG] pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:56:30.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:30 vm03 bash[23382]: cluster 2026-03-10T07:56:28.986546+0000 mgr.y (mgr.24407) 1161 : cluster [DBG] pgmap v1587: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:56:31.376 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:56:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:56:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:56:32.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:32 vm03 bash[23382]: cluster 2026-03-10T07:56:30.987151+0000 mgr.y (mgr.24407) 1162 : cluster [DBG] pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
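The repeated "GET /metrics HTTP/1.1" 503 lines are Prometheus scraping the mgr's exporter and being told the module cannot serve metrics; the scrape fails with the same 503 every ten seconds for the rest of this window. A quick probe sketch, assuming the prometheus module's usual default port 9283 and the hostnames from this job; not part of the test itself:

    from urllib.request import urlopen
    from urllib.error import HTTPError

    def scrape_status(url="http://vm00.local:9283/metrics"):
        """Return the HTTP status of a single scrape attempt."""
        try:
            with urlopen(url, timeout=5) as resp:
                return resp.status
        except HTTPError as err:  # urlopen raises on 4xx/5xx responses
            return err.code

    print(scrape_status())  # 200 once the exporter is ready; 503 as seen here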
2026-03-10T07:56:33.511 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:56:33 vm03 bash[51371]: logger=cleanup t=2026-03-10T07:56:33.107795271Z level=info msg="Completed cleanup jobs" duration=1.303964ms
2026-03-10T07:56:33.511 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:56:33 vm03 bash[51371]: logger=plugins.update.checker t=2026-03-10T07:56:33.25707976Z level=info msg="Update check succeeded" duration=55.036415ms
2026-03-10T07:56:34.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:56:34 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:56:34.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:34 vm03 bash[23382]: cluster 2026-03-10T07:56:32.987507+0000 mgr.y (mgr.24407) 1163 : cluster [DBG] pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:56:35.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:35 vm03 bash[23382]: audit 2026-03-10T07:56:34.256064+0000 mgr.y (mgr.24407) 1164 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:56:36.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:36 vm03 bash[23382]: cluster 2026-03-10T07:56:34.988155+0000 mgr.y (mgr.24407) 1165 : cluster [DBG] pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:56:38.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:38 vm03 bash[23382]: cluster 2026-03-10T07:56:36.988500+0000 mgr.y (mgr.24407) 1166 : cluster [DBG] pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:56:38.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:38 vm03 bash[23382]: audit 2026-03-10T07:56:38.078490+0000 mon.c (mon.2) 468 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:56:40.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:39 vm03 bash[23382]: audit 2026-03-10T07:56:38.638844+0000 mon.a (mon.0) 3569 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:56:40.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:39 vm03 bash[23382]: audit 2026-03-10T07:56:38.643343+0000 mon.a (mon.0) 3570 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:56:40.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:39 vm03 bash[23382]: audit 2026-03-10T07:56:38.648117+0000 mon.a (mon.0) 3571 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:56:40.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:39 vm03 bash[23382]: audit 2026-03-10T07:56:38.652177+0000 mon.a (mon.0) 3572 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:56:40.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:39 vm03 bash[23382]: audit 2026-03-10T07:56:38.653254+0000 mon.c (mon.2) 469 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:56:40.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:39 vm03 bash[23382]: audit 2026-03-10T07:56:38.653867+0000 mon.c (mon.2) 470 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:56:40.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:39 vm03 bash[23382]: audit 2026-03-10T07:56:38.657816+0000 mon.a (mon.0) 3573 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:56:40.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:39 vm03 bash[23382]: cluster 2026-03-10T07:56:38.988842+0000 mgr.y (mgr.24407) 1167 : cluster [DBG] pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
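Every cluster and audit record above reaches the log several times: once through each mon journal that teuthology tails (mon.a and mon.c on vm00, mon.b on vm03). When reading a section like this it helps to collapse the relays by the payload after the "bash[<pid>]: " marker; a small sketch of that de-duplication, assuming the journalctl prefix format seen in these lines:

    import re

    PAYLOAD_RE = re.compile(r"bash\[\d+\]: (?P<payload>.*)$")

    def dedupe(lines):
        """Yield only the first relay of each distinct cluster/audit payload."""
        seen = set()
        for line in lines:
            m = PAYLOAD_RE.search(line)
            key = m.group("payload") if m else line
            if key not in seen:
                seen.add(key)
                yield line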
2026-03-10T07:56:41.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:40 vm03 bash[23382]: audit 2026-03-10T07:56:40.413781+0000 mon.c (mon.2) 471 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:56:41.376 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:56:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:56:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:56:42.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:56:41 vm03 bash[23382]: cluster 2026-03-10T07:56:40.989523+0000 mgr.y (mgr.24407) 1168 : cluster [DBG] pgmap v1593: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:56:44.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:44 vm00 bash[28005]: cluster 2026-03-10T07:56:42.989834+0000 mgr.y (mgr.24407) 1169 : cluster [DBG] pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:56:44.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:56:44 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:56:45.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:45 vm00 bash[28005]: audit 2026-03-10T07:56:44.265496+0000 mgr.y (mgr.24407) 1170 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:56:46.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:46 vm00 bash[28005]: cluster 2026-03-10T07:56:44.990468+0000 mgr.y (mgr.24407) 1171 : cluster [DBG] pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:56:48.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:48 vm00 bash[28005]: cluster 2026-03-10T07:56:46.990756+0000 mgr.y (mgr.24407) 1172 : cluster [DBG] pgmap v1596: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:56:50.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:50 vm00 bash[28005]: cluster 2026-03-10T07:56:48.991104+0000 mgr.y (mgr.24407) 1173 : cluster [DBG] pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:56:51.376 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:56:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:56:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:56:52.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:52 vm00 bash[28005]: cluster 2026-03-10T07:56:50.991855+0000 mgr.y (mgr.24407) 1174 : cluster [DBG] pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:56:54.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:54 vm00 bash[28005]: cluster 2026-03-10T07:56:52.992147+0000 mgr.y (mgr.24407) 1175 : cluster [DBG] pgmap v1599: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:56:54.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:56:54 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:56:55.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:55 vm00 bash[28005]: audit 2026-03-10T07:56:54.275850+0000 mgr.y (mgr.24407) 1176 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:56:56.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:56 vm00 bash[28005]: cluster 2026-03-10T07:56:54.992779+0000 mgr.y (mgr.24407) 1177 : cluster [DBG] pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:56:56.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:56 vm00 bash[28005]: audit 2026-03-10T07:56:55.419773+0000 mon.c (mon.2) 472 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:56:58.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:56:58 vm00 bash[28005]: cluster 2026-03-10T07:56:56.993170+0000 mgr.y (mgr.24407) 1178 : cluster [DBG] pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:57:00.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:00 vm00 bash[28005]: cluster 2026-03-10T07:56:58.993519+0000 mgr.y (mgr.24407) 1179 : cluster [DBG] pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
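The "osd blocklist ls" audits (mon.c seq 467, 471, 472) show the mgr polling the mons on a steady cadence. A sketch confirming the interval from the audit timestamps copied out of this section:

    from datetime import datetime

    stamps = [
        "2026-03-10T07:56:25.407775+0000",
        "2026-03-10T07:56:40.413781+0000",
        "2026-03-10T07:56:55.419773+0000",
    ]
    times = [datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%f%z") for s in stamps]
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    print(gaps)  # ~[15.006, 15.006]: one poll roughly every 15 seconds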
2026-03-10T07:57:01.376 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:57:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:57:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:57:02.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:02 vm00 bash[28005]: cluster 2026-03-10T07:57:00.994174+0000 mgr.y (mgr.24407) 1180 : cluster [DBG] pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:57:04.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:04 vm00 bash[28005]: cluster 2026-03-10T07:57:02.994506+0000 mgr.y (mgr.24407) 1181 : cluster [DBG] pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:57:04.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:57:04 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:57:05.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:05 vm00 bash[28005]: audit 2026-03-10T07:57:04.285798+0000 mgr.y (mgr.24407) 1182 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:57:06.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:06 vm00 bash[28005]: cluster 2026-03-10T07:57:04.996189+0000 mgr.y (mgr.24407) 1183 : cluster [DBG] pgmap v1605: 228 pgs: 228
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:06.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:06 vm03 bash[23382]: cluster 2026-03-10T07:57:04.996189+0000 mgr.y (mgr.24407) 1183 : cluster [DBG] pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:08.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:08 vm03 bash[23382]: cluster 2026-03-10T07:57:06.996437+0000 mgr.y (mgr.24407) 1184 : cluster [DBG] pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:08.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:08 vm03 bash[23382]: cluster 2026-03-10T07:57:06.996437+0000 mgr.y (mgr.24407) 1184 : cluster [DBG] pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:08.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:08 vm00 bash[28005]: cluster 2026-03-10T07:57:06.996437+0000 mgr.y (mgr.24407) 1184 : cluster [DBG] pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:08.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:08 vm00 bash[28005]: cluster 2026-03-10T07:57:06.996437+0000 mgr.y (mgr.24407) 1184 : cluster [DBG] pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:08.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:08 vm00 bash[20701]: cluster 2026-03-10T07:57:06.996437+0000 mgr.y (mgr.24407) 1184 : cluster [DBG] pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:08.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:08 vm00 bash[20701]: cluster 2026-03-10T07:57:06.996437+0000 mgr.y (mgr.24407) 1184 : cluster [DBG] pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:10.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:10 vm03 bash[23382]: cluster 2026-03-10T07:57:08.996906+0000 mgr.y (mgr.24407) 1185 : cluster [DBG] pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:10.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:10 vm03 bash[23382]: cluster 2026-03-10T07:57:08.996906+0000 mgr.y (mgr.24407) 1185 : cluster [DBG] pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:10.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:10 vm00 bash[28005]: cluster 2026-03-10T07:57:08.996906+0000 mgr.y (mgr.24407) 1185 : cluster [DBG] pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:10.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:10 vm00 bash[28005]: cluster 2026-03-10T07:57:08.996906+0000 mgr.y (mgr.24407) 1185 : cluster [DBG] pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:10.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:10 vm00 bash[20701]: cluster 2026-03-10T07:57:08.996906+0000 mgr.y (mgr.24407) 1185 : cluster [DBG] pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T07:57:10.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:10 vm00 bash[20701]: cluster 2026-03-10T07:57:08.996906+0000 mgr.y (mgr.24407) 1185 : cluster [DBG] pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:11.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:11 vm00 bash[28005]: audit 2026-03-10T07:57:10.425937+0000 mon.c (mon.2) 473 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:11.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:11 vm00 bash[28005]: audit 2026-03-10T07:57:10.425937+0000 mon.c (mon.2) 473 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:11.376 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:57:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:57:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:57:11.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:11 vm00 bash[20701]: audit 2026-03-10T07:57:10.425937+0000 mon.c (mon.2) 473 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:11.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:11 vm00 bash[20701]: audit 2026-03-10T07:57:10.425937+0000 mon.c (mon.2) 473 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:11.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:11 vm03 bash[23382]: audit 2026-03-10T07:57:10.425937+0000 mon.c (mon.2) 473 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:11.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:11 vm03 bash[23382]: audit 2026-03-10T07:57:10.425937+0000 mon.c (mon.2) 473 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:12.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:12 vm03 bash[23382]: cluster 2026-03-10T07:57:10.997796+0000 mgr.y (mgr.24407) 1186 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:12.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:12 vm03 bash[23382]: cluster 2026-03-10T07:57:10.997796+0000 mgr.y (mgr.24407) 1186 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:12.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:12 vm00 bash[28005]: cluster 2026-03-10T07:57:10.997796+0000 mgr.y (mgr.24407) 1186 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:12.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:12 vm00 bash[28005]: cluster 2026-03-10T07:57:10.997796+0000 mgr.y (mgr.24407) 1186 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:12.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:12 vm00 bash[20701]: cluster 2026-03-10T07:57:10.997796+0000 mgr.y 
(mgr.24407) 1186 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:12.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:12 vm00 bash[20701]: cluster 2026-03-10T07:57:10.997796+0000 mgr.y (mgr.24407) 1186 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:14.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:57:14 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:57:14.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:14 vm03 bash[23382]: cluster 2026-03-10T07:57:12.998534+0000 mgr.y (mgr.24407) 1187 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T07:57:14.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:14 vm03 bash[23382]: cluster 2026-03-10T07:57:12.998534+0000 mgr.y (mgr.24407) 1187 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T07:57:14.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:14 vm00 bash[28005]: cluster 2026-03-10T07:57:12.998534+0000 mgr.y (mgr.24407) 1187 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T07:57:14.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:14 vm00 bash[28005]: cluster 2026-03-10T07:57:12.998534+0000 mgr.y (mgr.24407) 1187 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T07:57:14.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:14 vm00 bash[20701]: cluster 2026-03-10T07:57:12.998534+0000 mgr.y (mgr.24407) 1187 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T07:57:14.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:14 vm00 bash[20701]: cluster 2026-03-10T07:57:12.998534+0000 mgr.y (mgr.24407) 1187 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T07:57:15.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:15 vm03 bash[23382]: audit 2026-03-10T07:57:14.294761+0000 mgr.y (mgr.24407) 1188 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:15.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:15 vm03 bash[23382]: audit 2026-03-10T07:57:14.294761+0000 mgr.y (mgr.24407) 1188 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:15.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:15 vm00 bash[28005]: audit 2026-03-10T07:57:14.294761+0000 mgr.y (mgr.24407) 1188 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:15.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:15 vm00 bash[28005]: audit 2026-03-10T07:57:14.294761+0000 mgr.y (mgr.24407) 1188 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:15.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:15 vm00 
bash[20701]: audit 2026-03-10T07:57:14.294761+0000 mgr.y (mgr.24407) 1188 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:15.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:15 vm00 bash[20701]: audit 2026-03-10T07:57:14.294761+0000 mgr.y (mgr.24407) 1188 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:16.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:16 vm03 bash[23382]: cluster 2026-03-10T07:57:14.999108+0000 mgr.y (mgr.24407) 1189 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:16.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:16 vm03 bash[23382]: cluster 2026-03-10T07:57:14.999108+0000 mgr.y (mgr.24407) 1189 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:16.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:16 vm00 bash[28005]: cluster 2026-03-10T07:57:14.999108+0000 mgr.y (mgr.24407) 1189 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:16.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:16 vm00 bash[28005]: cluster 2026-03-10T07:57:14.999108+0000 mgr.y (mgr.24407) 1189 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:16.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:16 vm00 bash[20701]: cluster 2026-03-10T07:57:14.999108+0000 mgr.y (mgr.24407) 1189 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:16.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:16 vm00 bash[20701]: cluster 2026-03-10T07:57:14.999108+0000 mgr.y (mgr.24407) 1189 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:18.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:18 vm03 bash[23382]: cluster 2026-03-10T07:57:16.999353+0000 mgr.y (mgr.24407) 1190 : cluster [DBG] pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:18.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:18 vm03 bash[23382]: cluster 2026-03-10T07:57:16.999353+0000 mgr.y (mgr.24407) 1190 : cluster [DBG] pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:18.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:18 vm00 bash[28005]: cluster 2026-03-10T07:57:16.999353+0000 mgr.y (mgr.24407) 1190 : cluster [DBG] pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:18.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:18 vm00 bash[28005]: cluster 2026-03-10T07:57:16.999353+0000 mgr.y (mgr.24407) 1190 : cluster [DBG] pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:18.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:18 vm00 bash[20701]: cluster 2026-03-10T07:57:16.999353+0000 mgr.y (mgr.24407) 1190 : cluster 
[DBG] pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:18.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:18 vm00 bash[20701]: cluster 2026-03-10T07:57:16.999353+0000 mgr.y (mgr.24407) 1190 : cluster [DBG] pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:20.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:20 vm03 bash[23382]: cluster 2026-03-10T07:57:18.999882+0000 mgr.y (mgr.24407) 1191 : cluster [DBG] pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:20.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:20 vm03 bash[23382]: cluster 2026-03-10T07:57:18.999882+0000 mgr.y (mgr.24407) 1191 : cluster [DBG] pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:20.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:20 vm00 bash[28005]: cluster 2026-03-10T07:57:18.999882+0000 mgr.y (mgr.24407) 1191 : cluster [DBG] pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:20.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:20 vm00 bash[28005]: cluster 2026-03-10T07:57:18.999882+0000 mgr.y (mgr.24407) 1191 : cluster [DBG] pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:20.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:20 vm00 bash[20701]: cluster 2026-03-10T07:57:18.999882+0000 mgr.y (mgr.24407) 1191 : cluster [DBG] pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:20.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:20 vm00 bash[20701]: cluster 2026-03-10T07:57:18.999882+0000 mgr.y (mgr.24407) 1191 : cluster [DBG] pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:21.376 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:57:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:57:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:57:22.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:22 vm03 bash[23382]: cluster 2026-03-10T07:57:21.000451+0000 mgr.y (mgr.24407) 1192 : cluster [DBG] pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:22.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:22 vm03 bash[23382]: cluster 2026-03-10T07:57:21.000451+0000 mgr.y (mgr.24407) 1192 : cluster [DBG] pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:22.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:22 vm00 bash[28005]: cluster 2026-03-10T07:57:21.000451+0000 mgr.y (mgr.24407) 1192 : cluster [DBG] pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:22.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:22 vm00 bash[28005]: cluster 2026-03-10T07:57:21.000451+0000 mgr.y (mgr.24407) 1192 : cluster [DBG] pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:22.626 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:22 vm00 bash[20701]: cluster 2026-03-10T07:57:21.000451+0000 mgr.y (mgr.24407) 1192 : cluster [DBG] pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:22.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:22 vm00 bash[20701]: cluster 2026-03-10T07:57:21.000451+0000 mgr.y (mgr.24407) 1192 : cluster [DBG] pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:24.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:57:24 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:57:24.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:24 vm03 bash[23382]: cluster 2026-03-10T07:57:23.001069+0000 mgr.y (mgr.24407) 1193 : cluster [DBG] pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:24.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:24 vm03 bash[23382]: cluster 2026-03-10T07:57:23.001069+0000 mgr.y (mgr.24407) 1193 : cluster [DBG] pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:24.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:24 vm00 bash[28005]: cluster 2026-03-10T07:57:23.001069+0000 mgr.y (mgr.24407) 1193 : cluster [DBG] pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:24.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:24 vm00 bash[28005]: cluster 2026-03-10T07:57:23.001069+0000 mgr.y (mgr.24407) 1193 : cluster [DBG] pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:24.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:24 vm00 bash[20701]: cluster 2026-03-10T07:57:23.001069+0000 mgr.y (mgr.24407) 1193 : cluster [DBG] pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:24.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:24 vm00 bash[20701]: cluster 2026-03-10T07:57:23.001069+0000 mgr.y (mgr.24407) 1193 : cluster [DBG] pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:25 vm03 bash[23382]: audit 2026-03-10T07:57:24.305432+0000 mgr.y (mgr.24407) 1194 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:25 vm03 bash[23382]: audit 2026-03-10T07:57:24.305432+0000 mgr.y (mgr.24407) 1194 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:25.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:25 vm00 bash[28005]: audit 2026-03-10T07:57:24.305432+0000 mgr.y (mgr.24407) 1194 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:25.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:25 vm00 bash[28005]: audit 2026-03-10T07:57:24.305432+0000 mgr.y (mgr.24407) 1194 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", 
"format": "json"}]: dispatch 2026-03-10T07:57:25.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:25 vm00 bash[20701]: audit 2026-03-10T07:57:24.305432+0000 mgr.y (mgr.24407) 1194 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:25.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:25 vm00 bash[20701]: audit 2026-03-10T07:57:24.305432+0000 mgr.y (mgr.24407) 1194 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:26.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:26 vm03 bash[23382]: cluster 2026-03-10T07:57:25.001998+0000 mgr.y (mgr.24407) 1195 : cluster [DBG] pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:26.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:26 vm03 bash[23382]: cluster 2026-03-10T07:57:25.001998+0000 mgr.y (mgr.24407) 1195 : cluster [DBG] pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:26.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:26 vm03 bash[23382]: audit 2026-03-10T07:57:25.432736+0000 mon.c (mon.2) 474 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:26.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:26 vm03 bash[23382]: audit 2026-03-10T07:57:25.432736+0000 mon.c (mon.2) 474 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:26.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:26 vm00 bash[28005]: cluster 2026-03-10T07:57:25.001998+0000 mgr.y (mgr.24407) 1195 : cluster [DBG] pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:26.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:26 vm00 bash[28005]: cluster 2026-03-10T07:57:25.001998+0000 mgr.y (mgr.24407) 1195 : cluster [DBG] pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:26.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:26 vm00 bash[28005]: audit 2026-03-10T07:57:25.432736+0000 mon.c (mon.2) 474 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:26.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:26 vm00 bash[28005]: audit 2026-03-10T07:57:25.432736+0000 mon.c (mon.2) 474 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:26.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:26 vm00 bash[20701]: cluster 2026-03-10T07:57:25.001998+0000 mgr.y (mgr.24407) 1195 : cluster [DBG] pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:26.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:26 vm00 bash[20701]: cluster 2026-03-10T07:57:25.001998+0000 mgr.y (mgr.24407) 1195 : cluster [DBG] pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:26.626 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:26 vm00 bash[20701]: audit 2026-03-10T07:57:25.432736+0000 mon.c (mon.2) 474 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:26.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:26 vm00 bash[20701]: audit 2026-03-10T07:57:25.432736+0000 mon.c (mon.2) 474 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:28.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:28 vm03 bash[23382]: cluster 2026-03-10T07:57:27.002299+0000 mgr.y (mgr.24407) 1196 : cluster [DBG] pgmap v1616: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:28.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:28 vm03 bash[23382]: cluster 2026-03-10T07:57:27.002299+0000 mgr.y (mgr.24407) 1196 : cluster [DBG] pgmap v1616: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:28.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:28 vm00 bash[28005]: cluster 2026-03-10T07:57:27.002299+0000 mgr.y (mgr.24407) 1196 : cluster [DBG] pgmap v1616: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:28.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:28 vm00 bash[28005]: cluster 2026-03-10T07:57:27.002299+0000 mgr.y (mgr.24407) 1196 : cluster [DBG] pgmap v1616: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:28.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:28 vm00 bash[20701]: cluster 2026-03-10T07:57:27.002299+0000 mgr.y (mgr.24407) 1196 : cluster [DBG] pgmap v1616: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:28.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:28 vm00 bash[20701]: cluster 2026-03-10T07:57:27.002299+0000 mgr.y (mgr.24407) 1196 : cluster [DBG] pgmap v1616: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:30.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:30 vm03 bash[23382]: cluster 2026-03-10T07:57:29.002612+0000 mgr.y (mgr.24407) 1197 : cluster [DBG] pgmap v1617: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:30.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:30 vm03 bash[23382]: cluster 2026-03-10T07:57:29.002612+0000 mgr.y (mgr.24407) 1197 : cluster [DBG] pgmap v1617: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:30.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:30 vm00 bash[28005]: cluster 2026-03-10T07:57:29.002612+0000 mgr.y (mgr.24407) 1197 : cluster [DBG] pgmap v1617: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:30.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:30 vm00 bash[28005]: cluster 2026-03-10T07:57:29.002612+0000 mgr.y (mgr.24407) 1197 : cluster [DBG] pgmap v1617: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:30.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:30 vm00 bash[20701]: cluster 
2026-03-10T07:57:29.002612+0000 mgr.y (mgr.24407) 1197 : cluster [DBG] pgmap v1617: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:30.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:30 vm00 bash[20701]: cluster 2026-03-10T07:57:29.002612+0000 mgr.y (mgr.24407) 1197 : cluster [DBG] pgmap v1617: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:31.375 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:57:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:57:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:57:32.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:32 vm03 bash[23382]: cluster 2026-03-10T07:57:31.003178+0000 mgr.y (mgr.24407) 1198 : cluster [DBG] pgmap v1618: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:32.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:32 vm03 bash[23382]: cluster 2026-03-10T07:57:31.003178+0000 mgr.y (mgr.24407) 1198 : cluster [DBG] pgmap v1618: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:32.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:32 vm00 bash[28005]: cluster 2026-03-10T07:57:31.003178+0000 mgr.y (mgr.24407) 1198 : cluster [DBG] pgmap v1618: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:32.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:32 vm00 bash[28005]: cluster 2026-03-10T07:57:31.003178+0000 mgr.y (mgr.24407) 1198 : cluster [DBG] pgmap v1618: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:32.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:32 vm00 bash[20701]: cluster 2026-03-10T07:57:31.003178+0000 mgr.y (mgr.24407) 1198 : cluster [DBG] pgmap v1618: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:32.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:32 vm00 bash[20701]: cluster 2026-03-10T07:57:31.003178+0000 mgr.y (mgr.24407) 1198 : cluster [DBG] pgmap v1618: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:34.511 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:57:34 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:57:34.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:34 vm03 bash[23382]: cluster 2026-03-10T07:57:33.003482+0000 mgr.y (mgr.24407) 1199 : cluster [DBG] pgmap v1619: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:34.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:34 vm03 bash[23382]: cluster 2026-03-10T07:57:33.003482+0000 mgr.y (mgr.24407) 1199 : cluster [DBG] pgmap v1619: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:34.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:34 vm00 bash[28005]: cluster 2026-03-10T07:57:33.003482+0000 mgr.y (mgr.24407) 1199 : cluster [DBG] pgmap v1619: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:34.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:34 vm00 bash[28005]: cluster 
2026-03-10T07:57:33.003482+0000 mgr.y (mgr.24407) 1199 : cluster [DBG] pgmap v1619: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:34.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:34 vm00 bash[20701]: cluster 2026-03-10T07:57:33.003482+0000 mgr.y (mgr.24407) 1199 : cluster [DBG] pgmap v1619: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:34.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:34 vm00 bash[20701]: cluster 2026-03-10T07:57:33.003482+0000 mgr.y (mgr.24407) 1199 : cluster [DBG] pgmap v1619: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:35.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:35 vm03 bash[23382]: audit 2026-03-10T07:57:34.313241+0000 mgr.y (mgr.24407) 1200 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:35.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:35 vm03 bash[23382]: audit 2026-03-10T07:57:34.313241+0000 mgr.y (mgr.24407) 1200 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:35.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:35 vm00 bash[28005]: audit 2026-03-10T07:57:34.313241+0000 mgr.y (mgr.24407) 1200 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:35.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:35 vm00 bash[28005]: audit 2026-03-10T07:57:34.313241+0000 mgr.y (mgr.24407) 1200 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:35.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:35 vm00 bash[20701]: audit 2026-03-10T07:57:34.313241+0000 mgr.y (mgr.24407) 1200 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:35.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:35 vm00 bash[20701]: audit 2026-03-10T07:57:34.313241+0000 mgr.y (mgr.24407) 1200 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:36.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:36 vm03 bash[23382]: cluster 2026-03-10T07:57:35.004148+0000 mgr.y (mgr.24407) 1201 : cluster [DBG] pgmap v1620: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:36.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:36 vm03 bash[23382]: cluster 2026-03-10T07:57:35.004148+0000 mgr.y (mgr.24407) 1201 : cluster [DBG] pgmap v1620: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:36.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:36 vm00 bash[28005]: cluster 2026-03-10T07:57:35.004148+0000 mgr.y (mgr.24407) 1201 : cluster [DBG] pgmap v1620: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:36.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:36 vm00 bash[28005]: cluster 2026-03-10T07:57:35.004148+0000 mgr.y (mgr.24407) 1201 : cluster [DBG] pgmap v1620: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:36.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:36 vm00 bash[20701]: cluster 2026-03-10T07:57:35.004148+0000 mgr.y (mgr.24407) 1201 : cluster [DBG] pgmap v1620: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:36.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:36 vm00 bash[20701]: cluster 2026-03-10T07:57:35.004148+0000 mgr.y (mgr.24407) 1201 : cluster [DBG] pgmap v1620: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:38.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:38 vm00 bash[28005]: cluster 2026-03-10T07:57:37.004463+0000 mgr.y (mgr.24407) 1202 : cluster [DBG] pgmap v1621: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:38.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:38 vm00 bash[28005]: cluster 2026-03-10T07:57:37.004463+0000 mgr.y (mgr.24407) 1202 : cluster [DBG] pgmap v1621: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:38.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:38 vm00 bash[20701]: cluster 2026-03-10T07:57:37.004463+0000 mgr.y (mgr.24407) 1202 : cluster [DBG] pgmap v1621: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:38.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:38 vm00 bash[20701]: cluster 2026-03-10T07:57:37.004463+0000 mgr.y (mgr.24407) 1202 : cluster [DBG] pgmap v1621: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:38.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:38 vm03 bash[23382]: cluster 2026-03-10T07:57:37.004463+0000 mgr.y (mgr.24407) 1202 : cluster [DBG] pgmap v1621: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:38.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:38 vm03 bash[23382]: cluster 2026-03-10T07:57:37.004463+0000 mgr.y (mgr.24407) 1202 : cluster [DBG] pgmap v1621: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:39.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:39 vm00 bash[28005]: audit 2026-03-10T07:57:38.694846+0000 mon.c (mon.2) 475 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:57:39.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:39 vm00 bash[28005]: audit 2026-03-10T07:57:38.694846+0000 mon.c (mon.2) 475 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:57:39.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:39 vm00 bash[28005]: audit 2026-03-10T07:57:39.011076+0000 mon.c (mon.2) 476 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:39 vm00 bash[28005]: audit 2026-03-10T07:57:39.011076+0000 mon.c (mon.2) 476 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": 
"osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:39 vm00 bash[28005]: audit 2026-03-10T07:57:39.011407+0000 mon.a (mon.0) 3574 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:39 vm00 bash[28005]: audit 2026-03-10T07:57:39.011407+0000 mon.a (mon.0) 3574 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:39 vm00 bash[28005]: audit 2026-03-10T07:57:39.013116+0000 mon.c (mon.2) 477 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:39 vm00 bash[28005]: audit 2026-03-10T07:57:39.013116+0000 mon.c (mon.2) 477 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:39 vm00 bash[28005]: audit 2026-03-10T07:57:39.013353+0000 mon.a (mon.0) 3575 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:39 vm00 bash[28005]: audit 2026-03-10T07:57:39.013353+0000 mon.a (mon.0) 3575 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:39 vm00 bash[28005]: audit 2026-03-10T07:57:39.014156+0000 mon.c (mon.2) 478 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:39 vm00 bash[28005]: audit 2026-03-10T07:57:39.014156+0000 mon.c (mon.2) 478 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:39 vm00 bash[28005]: audit 2026-03-10T07:57:39.014621+0000 mon.c (mon.2) 479 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:39 vm00 bash[28005]: audit 2026-03-10T07:57:39.014621+0000 mon.c (mon.2) 479 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:39 vm00 bash[28005]: audit 2026-03-10T07:57:39.021241+0000 mon.a (mon.0) 3576 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:39 vm00 bash[28005]: audit 2026-03-10T07:57:39.021241+0000 mon.a (mon.0) 3576 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:39 vm00 bash[20701]: audit 
2026-03-10T07:57:38.694846+0000 mon.c (mon.2) 475 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:39 vm00 bash[20701]: audit 2026-03-10T07:57:38.694846+0000 mon.c (mon.2) 475 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:39 vm00 bash[20701]: audit 2026-03-10T07:57:39.011076+0000 mon.c (mon.2) 476 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:39 vm00 bash[20701]: audit 2026-03-10T07:57:39.011076+0000 mon.c (mon.2) 476 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:39 vm00 bash[20701]: audit 2026-03-10T07:57:39.011407+0000 mon.a (mon.0) 3574 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:39 vm00 bash[20701]: audit 2026-03-10T07:57:39.011407+0000 mon.a (mon.0) 3574 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:39 vm00 bash[20701]: audit 2026-03-10T07:57:39.013116+0000 mon.c (mon.2) 477 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:39 vm00 bash[20701]: audit 2026-03-10T07:57:39.013116+0000 mon.c (mon.2) 477 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:39 vm00 bash[20701]: audit 2026-03-10T07:57:39.013353+0000 mon.a (mon.0) 3575 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:39 vm00 bash[20701]: audit 2026-03-10T07:57:39.013353+0000 mon.a (mon.0) 3575 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:39 vm00 bash[20701]: audit 2026-03-10T07:57:39.014156+0000 mon.c (mon.2) 478 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:39 vm00 bash[20701]: audit 2026-03-10T07:57:39.014156+0000 mon.c (mon.2) 478 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:57:39.626 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:39 vm00 bash[20701]: audit 2026-03-10T07:57:39.014621+0000 mon.c (mon.2) 479 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:39 vm00 bash[20701]: audit 2026-03-10T07:57:39.014621+0000 mon.c (mon.2) 479 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:39 vm00 bash[20701]: audit 2026-03-10T07:57:39.021241+0000 mon.a (mon.0) 3576 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:57:39.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:39 vm00 bash[20701]: audit 2026-03-10T07:57:39.021241+0000 mon.a (mon.0) 3576 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:57:39.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:39 vm03 bash[23382]: audit 2026-03-10T07:57:38.694846+0000 mon.c (mon.2) 475 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:57:39.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:39 vm03 bash[23382]: audit 2026-03-10T07:57:38.694846+0000 mon.c (mon.2) 475 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:57:39.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:39 vm03 bash[23382]: audit 2026-03-10T07:57:39.011076+0000 mon.c (mon.2) 476 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:39 vm03 bash[23382]: audit 2026-03-10T07:57:39.011076+0000 mon.c (mon.2) 476 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:39 vm03 bash[23382]: audit 2026-03-10T07:57:39.011407+0000 mon.a (mon.0) 3574 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:39 vm03 bash[23382]: audit 2026-03-10T07:57:39.011407+0000 mon.a (mon.0) 3574 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:39 vm03 bash[23382]: audit 2026-03-10T07:57:39.013116+0000 mon.c (mon.2) 477 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:39 vm03 bash[23382]: audit 2026-03-10T07:57:39.013116+0000 mon.c (mon.2) 477 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:39 vm03 bash[23382]: audit 2026-03-10T07:57:39.013353+0000 mon.a (mon.0) 
3575 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:39 vm03 bash[23382]: audit 2026-03-10T07:57:39.013353+0000 mon.a (mon.0) 3575 : audit [INF] from='mgr.24407 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:57:39.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:39 vm03 bash[23382]: audit 2026-03-10T07:57:39.014156+0000 mon.c (mon.2) 478 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:57:39.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:39 vm03 bash[23382]: audit 2026-03-10T07:57:39.014156+0000 mon.c (mon.2) 478 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:57:39.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:39 vm03 bash[23382]: audit 2026-03-10T07:57:39.014621+0000 mon.c (mon.2) 479 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:57:39.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:39 vm03 bash[23382]: audit 2026-03-10T07:57:39.014621+0000 mon.c (mon.2) 479 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:57:39.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:39 vm03 bash[23382]: audit 2026-03-10T07:57:39.021241+0000 mon.a (mon.0) 3576 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:57:39.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:39 vm03 bash[23382]: audit 2026-03-10T07:57:39.021241+0000 mon.a (mon.0) 3576 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T07:57:40.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:40 vm00 bash[28005]: cluster 2026-03-10T07:57:39.004989+0000 mgr.y (mgr.24407) 1203 : cluster [DBG] pgmap v1622: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:40.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:40 vm00 bash[28005]: cluster 2026-03-10T07:57:39.004989+0000 mgr.y (mgr.24407) 1203 : cluster [DBG] pgmap v1622: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:40.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:40 vm00 bash[20701]: cluster 2026-03-10T07:57:39.004989+0000 mgr.y (mgr.24407) 1203 : cluster [DBG] pgmap v1622: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:40.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:40 vm00 bash[20701]: cluster 2026-03-10T07:57:39.004989+0000 mgr.y (mgr.24407) 1203 : cluster [DBG] pgmap v1622: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:40.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:40 vm03 bash[23382]: cluster 2026-03-10T07:57:39.004989+0000 mgr.y (mgr.24407) 1203 : cluster [DBG] pgmap v1622: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:40.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:40 vm03 bash[23382]: cluster 
2026-03-10T07:57:39.004989+0000 mgr.y (mgr.24407) 1203 : cluster [DBG] pgmap v1622: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:41.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:41 vm00 bash[28005]: audit 2026-03-10T07:57:40.438766+0000 mon.c (mon.2) 480 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:41.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:41 vm00 bash[28005]: audit 2026-03-10T07:57:40.438766+0000 mon.c (mon.2) 480 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:41.375 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:57:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:57:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:57:41.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:41 vm00 bash[20701]: audit 2026-03-10T07:57:40.438766+0000 mon.c (mon.2) 480 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:41.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:41 vm00 bash[20701]: audit 2026-03-10T07:57:40.438766+0000 mon.c (mon.2) 480 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:41.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:41 vm03 bash[23382]: audit 2026-03-10T07:57:40.438766+0000 mon.c (mon.2) 480 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:41.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:41 vm03 bash[23382]: audit 2026-03-10T07:57:40.438766+0000 mon.c (mon.2) 480 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:42.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:42 vm00 bash[28005]: cluster 2026-03-10T07:57:41.005623+0000 mgr.y (mgr.24407) 1204 : cluster [DBG] pgmap v1623: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:42.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:42 vm00 bash[28005]: cluster 2026-03-10T07:57:41.005623+0000 mgr.y (mgr.24407) 1204 : cluster [DBG] pgmap v1623: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:42.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:42 vm00 bash[20701]: cluster 2026-03-10T07:57:41.005623+0000 mgr.y (mgr.24407) 1204 : cluster [DBG] pgmap v1623: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:42.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:42 vm00 bash[20701]: cluster 2026-03-10T07:57:41.005623+0000 mgr.y (mgr.24407) 1204 : cluster [DBG] pgmap v1623: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:42.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:42 vm03 bash[23382]: cluster 2026-03-10T07:57:41.005623+0000 mgr.y (mgr.24407) 1204 : cluster [DBG] pgmap v1623: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 
160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:42.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:42 vm03 bash[23382]: cluster 2026-03-10T07:57:41.005623+0000 mgr.y (mgr.24407) 1204 : cluster [DBG] pgmap v1623: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:44.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:44 vm00 bash[28005]: cluster 2026-03-10T07:57:43.005877+0000 mgr.y (mgr.24407) 1205 : cluster [DBG] pgmap v1624: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:44.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:44 vm00 bash[28005]: cluster 2026-03-10T07:57:43.005877+0000 mgr.y (mgr.24407) 1205 : cluster [DBG] pgmap v1624: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:44.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:44 vm00 bash[20701]: cluster 2026-03-10T07:57:43.005877+0000 mgr.y (mgr.24407) 1205 : cluster [DBG] pgmap v1624: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:44.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:44 vm00 bash[20701]: cluster 2026-03-10T07:57:43.005877+0000 mgr.y (mgr.24407) 1205 : cluster [DBG] pgmap v1624: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:44.762 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:57:44 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:57:44.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:44 vm03 bash[23382]: cluster 2026-03-10T07:57:43.005877+0000 mgr.y (mgr.24407) 1205 : cluster [DBG] pgmap v1624: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:44.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:44 vm03 bash[23382]: cluster 2026-03-10T07:57:43.005877+0000 mgr.y (mgr.24407) 1205 : cluster [DBG] pgmap v1624: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:45.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:45 vm00 bash[28005]: audit 2026-03-10T07:57:44.322411+0000 mgr.y (mgr.24407) 1206 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:45.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:45 vm00 bash[28005]: audit 2026-03-10T07:57:44.322411+0000 mgr.y (mgr.24407) 1206 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:45.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:45 vm00 bash[20701]: audit 2026-03-10T07:57:44.322411+0000 mgr.y (mgr.24407) 1206 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:45.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:45 vm00 bash[20701]: audit 2026-03-10T07:57:44.322411+0000 mgr.y (mgr.24407) 1206 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:45.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:45 vm03 bash[23382]: audit 2026-03-10T07:57:44.322411+0000 mgr.y (mgr.24407) 1206 : audit [DBG] from='client.24373 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:45.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:45 vm03 bash[23382]: audit 2026-03-10T07:57:44.322411+0000 mgr.y (mgr.24407) 1206 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:46.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:46 vm00 bash[28005]: cluster 2026-03-10T07:57:45.006716+0000 mgr.y (mgr.24407) 1207 : cluster [DBG] pgmap v1625: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:46.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:46 vm00 bash[28005]: cluster 2026-03-10T07:57:45.006716+0000 mgr.y (mgr.24407) 1207 : cluster [DBG] pgmap v1625: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:46.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:46 vm00 bash[20701]: cluster 2026-03-10T07:57:45.006716+0000 mgr.y (mgr.24407) 1207 : cluster [DBG] pgmap v1625: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:46.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:46 vm00 bash[20701]: cluster 2026-03-10T07:57:45.006716+0000 mgr.y (mgr.24407) 1207 : cluster [DBG] pgmap v1625: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:46.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:46 vm03 bash[23382]: cluster 2026-03-10T07:57:45.006716+0000 mgr.y (mgr.24407) 1207 : cluster [DBG] pgmap v1625: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:46.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:46 vm03 bash[23382]: cluster 2026-03-10T07:57:45.006716+0000 mgr.y (mgr.24407) 1207 : cluster [DBG] pgmap v1625: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:48.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:48 vm00 bash[28005]: cluster 2026-03-10T07:57:47.006995+0000 mgr.y (mgr.24407) 1208 : cluster [DBG] pgmap v1626: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:48.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:48 vm00 bash[28005]: cluster 2026-03-10T07:57:47.006995+0000 mgr.y (mgr.24407) 1208 : cluster [DBG] pgmap v1626: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:48.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:48 vm00 bash[20701]: cluster 2026-03-10T07:57:47.006995+0000 mgr.y (mgr.24407) 1208 : cluster [DBG] pgmap v1626: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:48.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:48 vm00 bash[20701]: cluster 2026-03-10T07:57:47.006995+0000 mgr.y (mgr.24407) 1208 : cluster [DBG] pgmap v1626: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:48.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:48 vm03 bash[23382]: cluster 2026-03-10T07:57:47.006995+0000 mgr.y (mgr.24407) 1208 : cluster [DBG] pgmap v1626: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 
op/s 2026-03-10T07:57:48.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:48 vm03 bash[23382]: cluster 2026-03-10T07:57:47.006995+0000 mgr.y (mgr.24407) 1208 : cluster [DBG] pgmap v1626: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:49.511 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 07:57:49 vm03 bash[51371]: logger=infra.usagestats t=2026-03-10T07:57:49.133660618Z level=info msg="Usage stats are ready to report" 2026-03-10T07:57:50.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:50 vm00 bash[28005]: cluster 2026-03-10T07:57:49.007590+0000 mgr.y (mgr.24407) 1209 : cluster [DBG] pgmap v1627: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:50.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:50 vm00 bash[28005]: cluster 2026-03-10T07:57:49.007590+0000 mgr.y (mgr.24407) 1209 : cluster [DBG] pgmap v1627: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:50.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:50 vm00 bash[20701]: cluster 2026-03-10T07:57:49.007590+0000 mgr.y (mgr.24407) 1209 : cluster [DBG] pgmap v1627: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:50.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:50 vm00 bash[20701]: cluster 2026-03-10T07:57:49.007590+0000 mgr.y (mgr.24407) 1209 : cluster [DBG] pgmap v1627: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:50.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:50 vm03 bash[23382]: cluster 2026-03-10T07:57:49.007590+0000 mgr.y (mgr.24407) 1209 : cluster [DBG] pgmap v1627: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:50.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:50 vm03 bash[23382]: cluster 2026-03-10T07:57:49.007590+0000 mgr.y (mgr.24407) 1209 : cluster [DBG] pgmap v1627: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:51.375 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:57:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:57:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:57:52.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:52 vm00 bash[28005]: cluster 2026-03-10T07:57:51.008177+0000 mgr.y (mgr.24407) 1210 : cluster [DBG] pgmap v1628: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:52.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:52 vm00 bash[28005]: cluster 2026-03-10T07:57:51.008177+0000 mgr.y (mgr.24407) 1210 : cluster [DBG] pgmap v1628: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:52.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:52 vm00 bash[20701]: cluster 2026-03-10T07:57:51.008177+0000 mgr.y (mgr.24407) 1210 : cluster [DBG] pgmap v1628: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:52.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:52 vm00 bash[20701]: cluster 2026-03-10T07:57:51.008177+0000 mgr.y (mgr.24407) 1210 : cluster [DBG] pgmap v1628: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 
160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:52.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:52 vm03 bash[23382]: cluster 2026-03-10T07:57:51.008177+0000 mgr.y (mgr.24407) 1210 : cluster [DBG] pgmap v1628: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:52.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:52 vm03 bash[23382]: cluster 2026-03-10T07:57:51.008177+0000 mgr.y (mgr.24407) 1210 : cluster [DBG] pgmap v1628: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:54.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:54 vm00 bash[28005]: cluster 2026-03-10T07:57:53.008436+0000 mgr.y (mgr.24407) 1211 : cluster [DBG] pgmap v1629: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:54.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:54 vm00 bash[28005]: cluster 2026-03-10T07:57:53.008436+0000 mgr.y (mgr.24407) 1211 : cluster [DBG] pgmap v1629: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:54.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:54 vm00 bash[20701]: cluster 2026-03-10T07:57:53.008436+0000 mgr.y (mgr.24407) 1211 : cluster [DBG] pgmap v1629: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:54.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:54 vm00 bash[20701]: cluster 2026-03-10T07:57:53.008436+0000 mgr.y (mgr.24407) 1211 : cluster [DBG] pgmap v1629: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:54.761 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:57:54 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:57:54.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:54 vm03 bash[23382]: cluster 2026-03-10T07:57:53.008436+0000 mgr.y (mgr.24407) 1211 : cluster [DBG] pgmap v1629: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:54.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:54 vm03 bash[23382]: cluster 2026-03-10T07:57:53.008436+0000 mgr.y (mgr.24407) 1211 : cluster [DBG] pgmap v1629: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:55.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:55 vm00 bash[28005]: audit 2026-03-10T07:57:54.332797+0000 mgr.y (mgr.24407) 1212 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:55.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:55 vm00 bash[28005]: audit 2026-03-10T07:57:54.332797+0000 mgr.y (mgr.24407) 1212 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:55.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:55 vm00 bash[20701]: audit 2026-03-10T07:57:54.332797+0000 mgr.y (mgr.24407) 1212 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:55.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:55 vm00 bash[20701]: audit 2026-03-10T07:57:54.332797+0000 mgr.y (mgr.24407) 1212 : audit [DBG] from='client.24373 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:55.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:55 vm03 bash[23382]: audit 2026-03-10T07:57:54.332797+0000 mgr.y (mgr.24407) 1212 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:55.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:55 vm03 bash[23382]: audit 2026-03-10T07:57:54.332797+0000 mgr.y (mgr.24407) 1212 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:57:56.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:56 vm03 bash[23382]: cluster 2026-03-10T07:57:55.008983+0000 mgr.y (mgr.24407) 1213 : cluster [DBG] pgmap v1630: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:56.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:56 vm03 bash[23382]: cluster 2026-03-10T07:57:55.008983+0000 mgr.y (mgr.24407) 1213 : cluster [DBG] pgmap v1630: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:56.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:56 vm03 bash[23382]: audit 2026-03-10T07:57:55.444568+0000 mon.c (mon.2) 481 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:56.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:56 vm03 bash[23382]: audit 2026-03-10T07:57:55.444568+0000 mon.c (mon.2) 481 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:56.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:56 vm00 bash[28005]: cluster 2026-03-10T07:57:55.008983+0000 mgr.y (mgr.24407) 1213 : cluster [DBG] pgmap v1630: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:56.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:56 vm00 bash[28005]: cluster 2026-03-10T07:57:55.008983+0000 mgr.y (mgr.24407) 1213 : cluster [DBG] pgmap v1630: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:56.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:56 vm00 bash[28005]: audit 2026-03-10T07:57:55.444568+0000 mon.c (mon.2) 481 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:56.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:56 vm00 bash[28005]: audit 2026-03-10T07:57:55.444568+0000 mon.c (mon.2) 481 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:56.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:56 vm00 bash[20701]: cluster 2026-03-10T07:57:55.008983+0000 mgr.y (mgr.24407) 1213 : cluster [DBG] pgmap v1630: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:56.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:56 vm00 bash[20701]: cluster 2026-03-10T07:57:55.008983+0000 mgr.y (mgr.24407) 1213 : cluster [DBG] pgmap v1630: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 
1.2 KiB/s rd, 1 op/s 2026-03-10T07:57:56.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:56 vm00 bash[20701]: audit 2026-03-10T07:57:55.444568+0000 mon.c (mon.2) 481 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:56.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:56 vm00 bash[20701]: audit 2026-03-10T07:57:55.444568+0000 mon.c (mon.2) 481 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:57:58.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:58 vm03 bash[23382]: cluster 2026-03-10T07:57:57.009261+0000 mgr.y (mgr.24407) 1214 : cluster [DBG] pgmap v1631: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:58.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:57:58 vm03 bash[23382]: cluster 2026-03-10T07:57:57.009261+0000 mgr.y (mgr.24407) 1214 : cluster [DBG] pgmap v1631: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:58.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:58 vm00 bash[28005]: cluster 2026-03-10T07:57:57.009261+0000 mgr.y (mgr.24407) 1214 : cluster [DBG] pgmap v1631: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:58.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:57:58 vm00 bash[28005]: cluster 2026-03-10T07:57:57.009261+0000 mgr.y (mgr.24407) 1214 : cluster [DBG] pgmap v1631: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:58.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:58 vm00 bash[20701]: cluster 2026-03-10T07:57:57.009261+0000 mgr.y (mgr.24407) 1214 : cluster [DBG] pgmap v1631: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:57:58.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:57:58 vm00 bash[20701]: cluster 2026-03-10T07:57:57.009261+0000 mgr.y (mgr.24407) 1214 : cluster [DBG] pgmap v1631: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:58:00.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:00 vm03 bash[23382]: cluster 2026-03-10T07:57:59.009535+0000 mgr.y (mgr.24407) 1215 : cluster [DBG] pgmap v1632: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:58:00.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:00 vm03 bash[23382]: cluster 2026-03-10T07:57:59.009535+0000 mgr.y (mgr.24407) 1215 : cluster [DBG] pgmap v1632: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:58:00.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:00 vm00 bash[28005]: cluster 2026-03-10T07:57:59.009535+0000 mgr.y (mgr.24407) 1215 : cluster [DBG] pgmap v1632: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:58:00.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:00 vm00 bash[28005]: cluster 2026-03-10T07:57:59.009535+0000 mgr.y (mgr.24407) 1215 : cluster [DBG] pgmap v1632: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:58:00.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
07:58:00 vm00 bash[20701]: cluster 2026-03-10T07:57:59.009535+0000 mgr.y (mgr.24407) 1215 : cluster [DBG] pgmap v1632: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:58:00.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:00 vm00 bash[20701]: cluster 2026-03-10T07:57:59.009535+0000 mgr.y (mgr.24407) 1215 : cluster [DBG] pgmap v1632: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:58:01.375 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:58:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:58:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:58:02.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:02 vm03 bash[23382]: cluster 2026-03-10T07:58:01.010566+0000 mgr.y (mgr.24407) 1216 : cluster [DBG] pgmap v1633: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:58:02.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:02 vm03 bash[23382]: cluster 2026-03-10T07:58:01.010566+0000 mgr.y (mgr.24407) 1216 : cluster [DBG] pgmap v1633: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:58:02.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:02 vm00 bash[28005]: cluster 2026-03-10T07:58:01.010566+0000 mgr.y (mgr.24407) 1216 : cluster [DBG] pgmap v1633: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:58:02.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:02 vm00 bash[28005]: cluster 2026-03-10T07:58:01.010566+0000 mgr.y (mgr.24407) 1216 : cluster [DBG] pgmap v1633: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:58:02.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:02 vm00 bash[20701]: cluster 2026-03-10T07:58:01.010566+0000 mgr.y (mgr.24407) 1216 : cluster [DBG] pgmap v1633: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:58:02.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:02 vm00 bash[20701]: cluster 2026-03-10T07:58:01.010566+0000 mgr.y (mgr.24407) 1216 : cluster [DBG] pgmap v1633: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:58:04.762 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:58:04 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:58:04.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:04 vm03 bash[23382]: cluster 2026-03-10T07:58:03.010821+0000 mgr.y (mgr.24407) 1217 : cluster [DBG] pgmap v1634: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:58:04.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:04 vm03 bash[23382]: cluster 2026-03-10T07:58:03.010821+0000 mgr.y (mgr.24407) 1217 : cluster [DBG] pgmap v1634: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:58:04.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:04 vm00 bash[28005]: cluster 2026-03-10T07:58:03.010821+0000 mgr.y (mgr.24407) 1217 : cluster [DBG] pgmap v1634: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:58:04.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:04 vm00 
bash[28005]: cluster 2026-03-10T07:58:03.010821+0000 mgr.y (mgr.24407) 1217 : cluster [DBG] pgmap v1634: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:58:04.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:04 vm00 bash[20701]: cluster 2026-03-10T07:58:03.010821+0000 mgr.y (mgr.24407) 1217 : cluster [DBG] pgmap v1634: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:58:04.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:04 vm00 bash[20701]: cluster 2026-03-10T07:58:03.010821+0000 mgr.y (mgr.24407) 1217 : cluster [DBG] pgmap v1634: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:58:05.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:05 vm03 bash[23382]: audit 2026-03-10T07:58:04.343141+0000 mgr.y (mgr.24407) 1218 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:58:05.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:05 vm03 bash[23382]: audit 2026-03-10T07:58:04.343141+0000 mgr.y (mgr.24407) 1218 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:58:05.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:05 vm00 bash[28005]: audit 2026-03-10T07:58:04.343141+0000 mgr.y (mgr.24407) 1218 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:58:05.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:05 vm00 bash[28005]: audit 2026-03-10T07:58:04.343141+0000 mgr.y (mgr.24407) 1218 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:58:05.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:05 vm00 bash[20701]: audit 2026-03-10T07:58:04.343141+0000 mgr.y (mgr.24407) 1218 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:58:05.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:05 vm00 bash[20701]: audit 2026-03-10T07:58:04.343141+0000 mgr.y (mgr.24407) 1218 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:58:06.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:06 vm03 bash[23382]: cluster 2026-03-10T07:58:05.011388+0000 mgr.y (mgr.24407) 1219 : cluster [DBG] pgmap v1635: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:58:06.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:06 vm03 bash[23382]: cluster 2026-03-10T07:58:05.011388+0000 mgr.y (mgr.24407) 1219 : cluster [DBG] pgmap v1635: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:58:06.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:06 vm00 bash[28005]: cluster 2026-03-10T07:58:05.011388+0000 mgr.y (mgr.24407) 1219 : cluster [DBG] pgmap v1635: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:58:06.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:06 vm00 bash[28005]: cluster 2026-03-10T07:58:05.011388+0000 mgr.y (mgr.24407) 1219 : cluster [DBG] pgmap 
v1635: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:58:06.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:06 vm00 bash[20701]: cluster 2026-03-10T07:58:05.011388+0000 mgr.y (mgr.24407) 1219 : cluster [DBG] pgmap v1635: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:58:06.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:06 vm00 bash[20701]: cluster 2026-03-10T07:58:05.011388+0000 mgr.y (mgr.24407) 1219 : cluster [DBG] pgmap v1635: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:58:07.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:07 vm03 bash[23382]: cluster 2026-03-10T07:58:06.427905+0000 mon.a (mon.0) 3577 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-10T07:58:07.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:07 vm03 bash[23382]: cluster 2026-03-10T07:58:06.427905+0000 mon.a (mon.0) 3577 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-10T07:58:07.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:07 vm00 bash[28005]: cluster 2026-03-10T07:58:06.427905+0000 mon.a (mon.0) 3577 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-10T07:58:07.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:07 vm00 bash[28005]: cluster 2026-03-10T07:58:06.427905+0000 mon.a (mon.0) 3577 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-10T07:58:07.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:07 vm00 bash[20701]: cluster 2026-03-10T07:58:06.427905+0000 mon.a (mon.0) 3577 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-10T07:58:07.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:07 vm00 bash[20701]: cluster 2026-03-10T07:58:06.427905+0000 mon.a (mon.0) 3577 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-10T07:58:08.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:08 vm03 bash[23382]: cluster 2026-03-10T07:58:07.011714+0000 mgr.y (mgr.24407) 1220 : cluster [DBG] pgmap v1637: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:58:08.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:08 vm03 bash[23382]: cluster 2026-03-10T07:58:07.011714+0000 mgr.y (mgr.24407) 1220 : cluster [DBG] pgmap v1637: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:58:08.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:08 vm03 bash[23382]: cluster 2026-03-10T07:58:07.425528+0000 mon.a (mon.0) 3578 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:58:08.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:08 vm03 bash[23382]: cluster 2026-03-10T07:58:07.425528+0000 mon.a (mon.0) 3578 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:58:08.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:08 vm03 bash[23382]: cluster 2026-03-10T07:58:07.448326+0000 mon.a (mon.0) 3579 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-10T07:58:08.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:08 vm03 bash[23382]: cluster 2026-03-10T07:58:07.448326+0000 mon.a (mon.0) 3579 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-10T07:58:08.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:08 vm00 bash[28005]: cluster 
2026-03-10T07:58:07.011714+0000 mgr.y (mgr.24407) 1220 : cluster [DBG] pgmap v1637: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:58:08.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:08 vm00 bash[28005]: cluster 2026-03-10T07:58:07.011714+0000 mgr.y (mgr.24407) 1220 : cluster [DBG] pgmap v1637: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:58:08.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:08 vm00 bash[28005]: cluster 2026-03-10T07:58:07.425528+0000 mon.a (mon.0) 3578 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:58:08.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:08 vm00 bash[28005]: cluster 2026-03-10T07:58:07.425528+0000 mon.a (mon.0) 3578 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:58:08.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:08 vm00 bash[28005]: cluster 2026-03-10T07:58:07.448326+0000 mon.a (mon.0) 3579 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-10T07:58:08.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:08 vm00 bash[28005]: cluster 2026-03-10T07:58:07.448326+0000 mon.a (mon.0) 3579 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-10T07:58:08.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:08 vm00 bash[20701]: cluster 2026-03-10T07:58:07.011714+0000 mgr.y (mgr.24407) 1220 : cluster [DBG] pgmap v1637: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:58:08.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:08 vm00 bash[20701]: cluster 2026-03-10T07:58:07.011714+0000 mgr.y (mgr.24407) 1220 : cluster [DBG] pgmap v1637: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:58:08.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:08 vm00 bash[20701]: cluster 2026-03-10T07:58:07.425528+0000 mon.a (mon.0) 3578 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:58:08.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:08 vm00 bash[20701]: cluster 2026-03-10T07:58:07.425528+0000 mon.a (mon.0) 3578 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:58:08.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:08 vm00 bash[20701]: cluster 2026-03-10T07:58:07.448326+0000 mon.a (mon.0) 3579 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-10T07:58:08.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:08 vm00 bash[20701]: cluster 2026-03-10T07:58:07.448326+0000 mon.a (mon.0) 3579 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-10T07:58:09.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:09 vm03 bash[23382]: cluster 2026-03-10T07:58:08.454000+0000 mon.a (mon.0) 3580 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-10T07:58:09.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:09 vm03 bash[23382]: cluster 2026-03-10T07:58:08.454000+0000 mon.a (mon.0) 3580 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-10T07:58:09.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:09 vm00 bash[28005]: cluster 2026-03-10T07:58:08.454000+0000 mon.a (mon.0) 3580 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-10T07:58:09.875 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:09 vm00 bash[28005]: cluster 2026-03-10T07:58:08.454000+0000 mon.a (mon.0) 3580 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-10T07:58:09.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:09 vm00 bash[20701]: cluster 2026-03-10T07:58:08.454000+0000 mon.a (mon.0) 3580 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-10T07:58:09.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:09 vm00 bash[20701]: cluster 2026-03-10T07:58:08.454000+0000 mon.a (mon.0) 3580 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-10T07:58:10.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:10 vm03 bash[23382]: cluster 2026-03-10T07:58:09.011949+0000 mgr.y (mgr.24407) 1221 : cluster [DBG] pgmap v1640: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:58:10.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:10 vm03 bash[23382]: cluster 2026-03-10T07:58:09.011949+0000 mgr.y (mgr.24407) 1221 : cluster [DBG] pgmap v1640: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:58:10.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:10 vm03 bash[23382]: cluster 2026-03-10T07:58:09.466116+0000 mon.a (mon.0) 3581 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-10T07:58:10.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:10 vm03 bash[23382]: cluster 2026-03-10T07:58:09.466116+0000 mon.a (mon.0) 3581 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-10T07:58:10.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:10 vm03 bash[23382]: audit 2026-03-10T07:58:10.450238+0000 mon.c (mon.2) 482 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:58:10.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:10 vm03 bash[23382]: audit 2026-03-10T07:58:10.450238+0000 mon.c (mon.2) 482 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:58:10.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:10 vm00 bash[28005]: cluster 2026-03-10T07:58:09.011949+0000 mgr.y (mgr.24407) 1221 : cluster [DBG] pgmap v1640: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:58:10.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:10 vm00 bash[28005]: cluster 2026-03-10T07:58:09.011949+0000 mgr.y (mgr.24407) 1221 : cluster [DBG] pgmap v1640: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:58:10.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:10 vm00 bash[28005]: cluster 2026-03-10T07:58:09.466116+0000 mon.a (mon.0) 3581 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-10T07:58:10.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:10 vm00 bash[28005]: cluster 2026-03-10T07:58:09.466116+0000 mon.a (mon.0) 3581 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-10T07:58:10.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:10 vm00 bash[28005]: audit 2026-03-10T07:58:10.450238+0000 mon.c (mon.2) 482 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:58:10.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 
10 07:58:10 vm00 bash[28005]: audit 2026-03-10T07:58:10.450238+0000 mon.c (mon.2) 482 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:58:10.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:10 vm00 bash[20701]: cluster 2026-03-10T07:58:09.011949+0000 mgr.y (mgr.24407) 1221 : cluster [DBG] pgmap v1640: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:58:10.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:10 vm00 bash[20701]: cluster 2026-03-10T07:58:09.011949+0000 mgr.y (mgr.24407) 1221 : cluster [DBG] pgmap v1640: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:58:10.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:10 vm00 bash[20701]: cluster 2026-03-10T07:58:09.466116+0000 mon.a (mon.0) 3581 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-10T07:58:10.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:10 vm00 bash[20701]: cluster 2026-03-10T07:58:09.466116+0000 mon.a (mon.0) 3581 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-10T07:58:10.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:10 vm00 bash[20701]: audit 2026-03-10T07:58:10.450238+0000 mon.c (mon.2) 482 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:58:10.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:10 vm00 bash[20701]: audit 2026-03-10T07:58:10.450238+0000 mon.c (mon.2) 482 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:58:11.375 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:58:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:58:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T07:58:11.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:11 vm03 bash[23382]: cluster 2026-03-10T07:58:10.484636+0000 mon.a (mon.0) 3582 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-10T07:58:11.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:11 vm03 bash[23382]: cluster 2026-03-10T07:58:10.484636+0000 mon.a (mon.0) 3582 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-10T07:58:11.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:11 vm00 bash[28005]: cluster 2026-03-10T07:58:10.484636+0000 mon.a (mon.0) 3582 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-10T07:58:11.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:11 vm00 bash[28005]: cluster 2026-03-10T07:58:10.484636+0000 mon.a (mon.0) 3582 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-10T07:58:11.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:11 vm00 bash[20701]: cluster 2026-03-10T07:58:10.484636+0000 mon.a (mon.0) 3582 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-10T07:58:11.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:11 vm00 bash[20701]: cluster 2026-03-10T07:58:10.484636+0000 mon.a (mon.0) 3582 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-10T07:58:12.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:12 vm00 bash[28005]: cluster 2026-03-10T07:58:11.012206+0000 mgr.y (mgr.24407) 1222 : cluster [DBG] pgmap v1643: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
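The records in this stretch are steady-state polling while the workunit runs: mgr.y dispatches "osd blocklist ls" to the mons roughly every 15 seconds (07:57:40, 07:57:55, 07:58:10), the iscsi gateway polls "service status" every 10 seconds, and mon.a logs pgmap/osdmap updates plus a POOL_APP_NOT_ENABLED health warning as the test creates and deletes pools. A minimal shell sketch for replaying the same queries by hand, assuming an admin keyring and ceph.conf are available on the node (for example inside "cephadm shell"); the commands themselves are the ones visible in the audit records above:

    # blocklist poll, as dispatched by mgr.y in the audit log
    ceph osd blocklist ls --format json
    # expand the POOL_APP_NOT_ENABLED warning raised by mon.a
    ceph health detail
    # report the current osdmap epoch (e743..e748 in the surrounding records)
    ceph osd dump | head -n 1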
2026-03-10T07:58:12.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:12 vm00 bash[28005]: cluster 2026-03-10T07:58:11.012206+0000 mgr.y (mgr.24407) 1222 : cluster [DBG] pgmap v1643: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:58:12.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:12 vm00 bash[28005]: cluster 2026-03-10T07:58:11.475528+0000 mon.a (mon.0) 3583 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-10T07:58:12.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:12 vm00 bash[28005]: cluster 2026-03-10T07:58:11.475528+0000 mon.a (mon.0) 3583 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-10T07:58:12.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:12 vm00 bash[20701]: cluster 2026-03-10T07:58:11.012206+0000 mgr.y (mgr.24407) 1222 : cluster [DBG] pgmap v1643: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:58:12.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:12 vm00 bash[20701]: cluster 2026-03-10T07:58:11.012206+0000 mgr.y (mgr.24407) 1222 : cluster [DBG] pgmap v1643: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:58:12.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:12 vm00 bash[20701]: cluster 2026-03-10T07:58:11.475528+0000 mon.a (mon.0) 3583 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-10T07:58:12.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:12 vm00 bash[20701]: cluster 2026-03-10T07:58:11.475528+0000 mon.a (mon.0) 3583 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-10T07:58:13.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:12 vm03 bash[23382]: cluster 2026-03-10T07:58:11.012206+0000 mgr.y (mgr.24407) 1222 : cluster [DBG] pgmap v1643: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:58:13.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:12 vm03 bash[23382]: cluster 2026-03-10T07:58:11.012206+0000 mgr.y (mgr.24407) 1222 : cluster [DBG] pgmap v1643: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:58:13.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:12 vm03 bash[23382]: cluster 2026-03-10T07:58:11.475528+0000 mon.a (mon.0) 3583 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-10T07:58:13.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:12 vm03 bash[23382]: cluster 2026-03-10T07:58:11.475528+0000 mon.a (mon.0) 3583 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-10T07:58:13.575 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: Running main() from gmock_main.cc 2026-03-10T07:58:13.576 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [==========] Running 2 tests from 1 test suite. 2026-03-10T07:58:13.576 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [----------] Global test environment set-up. 
2026-03-10T07:58:13.576 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [----------] 2 tests from NeoRadosWatchNotify 2026-03-10T07:58:13.576 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [ RUN ] NeoRadosWatchNotify.WatchNotify 2026-03-10T07:58:13.576 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: handle_notify cookie 94077594635664 notify_id 3186865733637 notifier_gid 24770 2026-03-10T07:58:13.576 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [ OK ] NeoRadosWatchNotify.WatchNotify (1801532 ms) 2026-03-10T07:58:13.576 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [ RUN ] NeoRadosWatchNotify.WatchNotifyTimeout 2026-03-10T07:58:13.576 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: Trying... 2026-03-10T07:58:13.576 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: handle_notify cookie 94077607499056 notify_id 3199750635525 notifier_gid 44423 2026-03-10T07:58:13.576 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: Waiting for 3.000000000s 2026-03-10T07:58:13.576 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: Timed out. 2026-03-10T07:58:13.576 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: Flushing... 2026-03-10T07:58:13.576 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: Flushed... 2026-03-10T07:58:13.576 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [ OK ] NeoRadosWatchNotify.WatchNotifyTimeout (7138 ms) 2026-03-10T07:58:13.576 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [----------] 2 tests from NeoRadosWatchNotify (1808670 ms total) 2026-03-10T07:58:13.576 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: 2026-03-10T07:58:13.576 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [----------] Global test environment tear-down 2026-03-10T07:58:13.576 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [==========] 2 tests from 1 test suite ran. (1808670 ms total) 2026-03-10T07:58:13.576 INFO:tasks.workunit.client.0.vm00.stdout: watch_notify: [ PASSED ] 2 tests. 
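The workunit stdout above is the googletest summary for the NeoRadosWatchNotify suite: WatchNotify and WatchNotifyTimeout, both passing. The same watch/notify round trip can be exercised with the stock rados CLI; a sketch assuming a reachable pool named rbd and a throwaway object wn_obj (both names hypothetical, not taken from this run):

    # create an empty object, then register a watch on it (blocks, printing incoming notifies)
    rados -p rbd put wn_obj /dev/null
    rados -p rbd watch wn_obj
    # from a second shell: send a notify; the watcher prints the payload
    rados -p rbd notify wn_obj hello

WatchNotifyTimeout drives the same machinery but lets a notify expire, which is the "Waiting for 3.000000000s ... Timed out." sequence above, before flushing and tearing down.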
2026-03-10T07:58:13.576 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59959 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59959 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60200 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60200 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60535 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60535 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60395 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60395 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60591 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60591 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60152 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60152 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59787 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59787 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60619 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60619 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60035 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60035 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59624 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59624 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59696 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59696 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60012 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60012 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=59729 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 59729 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60086 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60086 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:58:13.577 
INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60111 2026-03-10T07:58:13.577 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60111 2026-03-10T07:58:13.578 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:58:13.578 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60513 2026-03-10T07:58:13.578 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60513 2026-03-10T07:58:13.578 INFO:tasks.workunit.client.0.vm00.stderr:+ for t in "${!pids[@]}" 2026-03-10T07:58:13.578 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=60671 2026-03-10T07:58:13.578 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 60671 2026-03-10T07:58:13.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:13 vm00 bash[28005]: cluster 2026-03-10T07:58:12.573324+0000 mon.a (mon.0) 3584 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-10T07:58:13.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:13 vm00 bash[28005]: cluster 2026-03-10T07:58:12.573324+0000 mon.a (mon.0) 3584 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-10T07:58:13.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:13 vm00 bash[28005]: cluster 2026-03-10T07:58:13.012434+0000 mgr.y (mgr.24407) 1223 : cluster [DBG] pgmap v1646: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:58:13.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:13 vm00 bash[28005]: cluster 2026-03-10T07:58:13.012434+0000 mgr.y (mgr.24407) 1223 : cluster [DBG] pgmap v1646: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:58:13.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:13 vm00 bash[28005]: cluster 2026-03-10T07:58:13.579376+0000 mon.a (mon.0) 3585 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in 2026-03-10T07:58:13.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:13 vm00 bash[28005]: cluster 2026-03-10T07:58:13.579376+0000 mon.a (mon.0) 3585 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in 2026-03-10T07:58:13.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:13 vm00 bash[20701]: cluster 2026-03-10T07:58:12.573324+0000 mon.a (mon.0) 3584 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-10T07:58:13.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:13 vm00 bash[20701]: cluster 2026-03-10T07:58:12.573324+0000 mon.a (mon.0) 3584 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-10T07:58:13.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:13 vm00 bash[20701]: cluster 2026-03-10T07:58:13.012434+0000 mgr.y (mgr.24407) 1223 : cluster [DBG] pgmap v1646: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:58:13.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:13 vm00 bash[20701]: cluster 2026-03-10T07:58:13.012434+0000 mgr.y (mgr.24407) 1223 : cluster [DBG] pgmap v1646: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:58:13.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:13 vm00 bash[20701]: cluster 2026-03-10T07:58:13.579376+0000 mon.a (mon.0) 3585 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in 2026-03-10T07:58:13.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:13 vm00 bash[20701]: cluster 2026-03-10T07:58:13.579376+0000 mon.a (mon.0) 3585 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in 2026-03-10T07:58:14.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:13 vm03 bash[23382]: cluster 
2026-03-10T07:58:14.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:13 vm03 bash[23382]: cluster 2026-03-10T07:58:12.573324+0000 mon.a (mon.0) 3584 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in
2026-03-10T07:58:14.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:13 vm03 bash[23382]: cluster 2026-03-10T07:58:13.012434+0000 mgr.y (mgr.24407) 1223 : cluster [DBG] pgmap v1646: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:58:14.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:13 vm03 bash[23382]: cluster 2026-03-10T07:58:13.579376+0000 mon.a (mon.0) 3585 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in
2026-03-10T07:58:14.618 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:58:14 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:58:14.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:14 vm00 bash[28005]: cluster 2026-03-10T07:58:13.593019+0000 mon.a (mon.0) 3586 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:58:14.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:14 vm00 bash[28005]: audit 2026-03-10T07:58:14.352918+0000 mgr.y (mgr.24407) 1224 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:58:14.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:14 vm00 bash[28005]: cluster 2026-03-10T07:58:14.578932+0000 mon.a (mon.0) 3587 : cluster [DBG] osdmap e751: 8 total, 8 up, 8 in
2026-03-10T07:58:14.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:14 vm00 bash[20701]: cluster 2026-03-10T07:58:13.593019+0000 mon.a (mon.0) 3586 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:58:14.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:14 vm00 bash[20701]: audit 2026-03-10T07:58:14.352918+0000 mgr.y (mgr.24407) 1224 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:58:14.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:14 vm00 bash[20701]: cluster 2026-03-10T07:58:14.578932+0000 mon.a (mon.0) 3587 : cluster [DBG] osdmap e751: 8 total, 8 up, 8 in
2026-03-10T07:58:15.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:14 vm03 bash[23382]: cluster 2026-03-10T07:58:13.593019+0000 mon.a (mon.0) 3586 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:58:15.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:14 vm03 bash[23382]: audit 2026-03-10T07:58:14.352918+0000 mgr.y (mgr.24407) 1224 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:58:15.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:14 vm03 bash[23382]: cluster 2026-03-10T07:58:14.578932+0000 mon.a (mon.0) 3587 : cluster [DBG] osdmap e751: 8 total, 8 up, 8 in
2026-03-10T07:58:15.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:15 vm00 bash[28005]: cluster 2026-03-10T07:58:15.013059+0000 mgr.y (mgr.24407) 1225 : cluster [DBG] pgmap v1649: 196 pgs: 14 creating+peering, 16 creating+activating, 166 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s
2026-03-10T07:58:15.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:15 vm00 bash[28005]: cluster 2026-03-10T07:58:15.583319+0000 mon.a (mon.0) 3588 : cluster [DBG] osdmap e752: 8 total, 8 up, 8 in
2026-03-10T07:58:15.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:15 vm00 bash[20701]: cluster 2026-03-10T07:58:15.013059+0000 mgr.y (mgr.24407) 1225 : cluster [DBG] pgmap v1649: 196 pgs: 14 creating+peering, 16 creating+activating, 166 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s
2026-03-10T07:58:15.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:15 vm00 bash[20701]: cluster 2026-03-10T07:58:15.583319+0000 mon.a (mon.0) 3588 : cluster [DBG] osdmap e752: 8 total, 8 up, 8 in
2026-03-10T07:58:16.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:15 vm03 bash[23382]: cluster 2026-03-10T07:58:15.013059+0000 mgr.y (mgr.24407) 1225 : cluster [DBG] pgmap v1649: 196 pgs: 14 creating+peering, 16 creating+activating, 166 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s
2026-03-10T07:58:16.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:15 vm03 bash[23382]: cluster 2026-03-10T07:58:15.583319+0000 mon.a (mon.0) 3588 : cluster [DBG] osdmap e752: 8 total, 8 up, 8 in
2026-03-10T07:58:17.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:17 vm00 bash[28005]: cluster 2026-03-10T07:58:16.588965+0000 mon.a (mon.0) 3589 : cluster [DBG] osdmap e753: 8 total, 8 up, 8 in
2026-03-10T07:58:17.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:17 vm00 bash[28005]: cluster 2026-03-10T07:58:17.013296+0000 mgr.y (mgr.24407) 1226 : cluster [DBG] pgmap v1652: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s
2026-03-10T07:58:17.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:17 vm00 bash[20701]: cluster 2026-03-10T07:58:16.588965+0000 mon.a (mon.0) 3589 : cluster [DBG] osdmap e753: 8 total, 8 up, 8 in
2026-03-10T07:58:17.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:17 vm00 bash[20701]: cluster 2026-03-10T07:58:17.013296+0000 mgr.y (mgr.24407) 1226 : cluster [DBG] pgmap v1652: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s
2026-03-10T07:58:18.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:17 vm03 bash[23382]: cluster 2026-03-10T07:58:16.588965+0000 mon.a (mon.0) 3589 : cluster [DBG] osdmap e753: 8 total, 8 up, 8 in
2026-03-10T07:58:18.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:17 vm03 bash[23382]: cluster 2026-03-10T07:58:17.013296+0000 mgr.y (mgr.24407) 1226 : cluster [DBG] pgmap v1652: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s
2026-03-10T07:58:18.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:18 vm00 bash[28005]: cluster 2026-03-10T07:58:17.594144+0000 mon.a (mon.0) 3590 : cluster [DBG] osdmap e754: 8 total, 8 up, 8 in
2026-03-10T07:58:18.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:18 vm00 bash[20701]: cluster 2026-03-10T07:58:17.594144+0000 mon.a (mon.0) 3590 : cluster [DBG] osdmap e754: 8 total, 8 up, 8 in
2026-03-10T07:58:19.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:18 vm03 bash[23382]: cluster 2026-03-10T07:58:17.594144+0000 mon.a (mon.0) 3590 : cluster [DBG] osdmap e754: 8 total, 8 up, 8 in
2026-03-10T07:58:19.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:19 vm00 bash[28005]: cluster 2026-03-10T07:58:18.601043+0000 mon.a (mon.0) 3591 : cluster [DBG] osdmap e755: 8 total, 8 up, 8 in
2026-03-10T07:58:19.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:19 vm00 bash[28005]: cluster 2026-03-10T07:58:19.013534+0000 mgr.y (mgr.24407) 1227 : cluster [DBG] pgmap v1655: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:58:19.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:19 vm00 bash[20701]: cluster 2026-03-10T07:58:18.601043+0000 mon.a (mon.0) 3591 : cluster [DBG] osdmap e755: 8 total, 8 up, 8 in
2026-03-10T07:58:19.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:19 vm00 bash[20701]: cluster 2026-03-10T07:58:19.013534+0000 mgr.y (mgr.24407) 1227 : cluster [DBG] pgmap v1655: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:58:20.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:19 vm03 bash[23382]: cluster 2026-03-10T07:58:18.601043+0000 mon.a (mon.0) 3591 : cluster [DBG] osdmap e755: 8 total, 8 up, 8 in
2026-03-10T07:58:20.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:19 vm03 bash[23382]: cluster 2026-03-10T07:58:19.013534+0000 mgr.y (mgr.24407) 1227 : cluster [DBG] pgmap v1655: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:58:20.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:20 vm00 bash[28005]: cluster 2026-03-10T07:58:19.597138+0000 mon.a (mon.0) 3592 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:58:20.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:20 vm00 bash[28005]: cluster 2026-03-10T07:58:19.618373+0000 mon.a (mon.0) 3593 : cluster [DBG] osdmap e756: 8 total, 8 up, 8 in
2026-03-10T07:58:20.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:20 vm00 bash[20701]: cluster 2026-03-10T07:58:19.597138+0000 mon.a (mon.0) 3592 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:58:20.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:20 vm00 bash[20701]: cluster 2026-03-10T07:58:19.618373+0000 mon.a (mon.0) 3593 : cluster [DBG] osdmap e756: 8 total, 8 up, 8 in
2026-03-10T07:58:21.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:20 vm03 bash[23382]: cluster 2026-03-10T07:58:19.597138+0000 mon.a (mon.0) 3592 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:58:21.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:20 vm03 bash[23382]: cluster 2026-03-10T07:58:19.618373+0000 mon.a (mon.0) 3593 : cluster [DBG] osdmap e756: 8 total, 8 up, 8 in
2026-03-10T07:58:21.375 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:58:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:58:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:58:21.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:21 vm00 bash[28005]: cluster 2026-03-10T07:58:20.620239+0000 mon.a (mon.0) 3594 : cluster [DBG] osdmap e757: 8 total, 8 up, 8 in
2026-03-10T07:58:21.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:21 vm00 bash[28005]: cluster 2026-03-10T07:58:21.013853+0000 mgr.y (mgr.24407) 1228 : cluster [DBG] pgmap v1658: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:58:21.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:21 vm00 bash[20701]: cluster 2026-03-10T07:58:20.620239+0000 mon.a (mon.0) 3594 : cluster [DBG] osdmap e757: 8 total, 8 up, 8 in
2026-03-10T07:58:21.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:21 vm00 bash[20701]: cluster 2026-03-10T07:58:21.013853+0000 mgr.y (mgr.24407) 1228 : cluster [DBG] pgmap v1658: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:58:22.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:21 vm03 bash[23382]: cluster 2026-03-10T07:58:20.620239+0000 mon.a (mon.0) 3594 : cluster [DBG] osdmap e757: 8 total, 8 up, 8 in
2026-03-10T07:58:22.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:21 vm03 bash[23382]: cluster 2026-03-10T07:58:21.013853+0000 mgr.y (mgr.24407) 1228 : cluster [DBG] pgmap v1658: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:58:22.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:22 vm00 bash[28005]: cluster 2026-03-10T07:58:21.641915+0000 mon.a (mon.0) 3595 : cluster [DBG] osdmap e758: 8 total, 8 up, 8 in
2026-03-10T07:58:22.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:22 vm00 bash[20701]: cluster 2026-03-10T07:58:21.641915+0000 mon.a (mon.0) 3595 : cluster [DBG] osdmap e758: 8 total, 8 up, 8 in
2026-03-10T07:58:23.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:22 vm03 bash[23382]: cluster 2026-03-10T07:58:21.641915+0000 mon.a (mon.0) 3595 : cluster [DBG] osdmap e758: 8 total, 8 up, 8 in
2026-03-10T07:58:24.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:23 vm03 bash[23382]: cluster 2026-03-10T07:58:22.645044+0000 mon.a (mon.0) 3596 : cluster [DBG] osdmap e759: 8 total, 8 up, 8 in
2026-03-10T07:58:24.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:23 vm03 bash[23382]: cluster 2026-03-10T07:58:23.014152+0000 mgr.y (mgr.24407) 1229 : cluster [DBG] pgmap v1661: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:58:24.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:23 vm00 bash[28005]: cluster 2026-03-10T07:58:22.645044+0000 mon.a (mon.0) 3596 : cluster [DBG] osdmap e759: 8 total, 8 up, 8 in
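[editor's note: the recurring POOL_APP_NOT_ENABLED warning above is transient noise here -- the workunit keeps creating fresh pools that have no application tag yet. On a real cluster the warning is cleared by tagging the pool with the application that uses it, e.g.:

  ceph osd pool application enable <pool-name> rados
]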
2026-03-10T07:58:24.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:23 vm00 bash[28005]: cluster 2026-03-10T07:58:23.014152+0000 mgr.y (mgr.24407) 1229 : cluster [DBG] pgmap v1661: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:58:24.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:23 vm00 bash[20701]: cluster 2026-03-10T07:58:22.645044+0000 mon.a (mon.0) 3596 : cluster [DBG] osdmap e759: 8 total, 8 up, 8 in
2026-03-10T07:58:24.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:23 vm00 bash[20701]: cluster 2026-03-10T07:58:23.014152+0000 mgr.y (mgr.24407) 1229 : cluster [DBG] pgmap v1661: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:58:24.660 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: Running main() from gmock_main.cc
2026-03-10T07:58:24.660 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [==========] Running 7 tests from 1 test suite.
2026-03-10T07:58:24.660 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [----------] Global test environment set-up.
2026-03-10T07:58:24.660 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [----------] 7 tests from NeoRadosWriteOps
2026-03-10T07:58:24.660 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ RUN ] NeoRadosWriteOps.AssertExists
2026-03-10T07:58:24.660 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ OK ] NeoRadosWriteOps.AssertExists (1801531 ms)
2026-03-10T07:58:24.660 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ RUN ] NeoRadosWriteOps.AssertVersion
2026-03-10T07:58:24.660 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ OK ] NeoRadosWriteOps.AssertVersion (3016 ms)
2026-03-10T07:58:24.660 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ RUN ] NeoRadosWriteOps.Xattrs
2026-03-10T07:58:24.660 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ OK ] NeoRadosWriteOps.Xattrs (3109 ms)
2026-03-10T07:58:24.660 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ RUN ] NeoRadosWriteOps.Write
2026-03-10T07:58:24.660 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ OK ] NeoRadosWriteOps.Write (3012 ms)
2026-03-10T07:58:24.661 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ RUN ] NeoRadosWriteOps.Exec
2026-03-10T07:58:24.661 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ OK ] NeoRadosWriteOps.Exec (3029 ms)
2026-03-10T07:58:24.661 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ RUN ] NeoRadosWriteOps.WriteSame
2026-03-10T07:58:24.661 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ OK ] NeoRadosWriteOps.WriteSame (3027 ms)
2026-03-10T07:58:24.661 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ RUN ] NeoRadosWriteOps.CmpExt
2026-03-10T07:58:24.661 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ OK ] NeoRadosWriteOps.CmpExt (3025 ms)
2026-03-10T07:58:24.661 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [----------] 7 tests from NeoRadosWriteOps (1819749 ms total)
2026-03-10T07:58:24.661 INFO:tasks.workunit.client.0.vm00.stdout: write_operations:
2026-03-10T07:58:24.661 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [----------] Global test environment tear-down
2026-03-10T07:58:24.661 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [==========] 7 tests from 1 test suite ran. (1819749 ms total)
2026-03-10T07:58:24.661 INFO:tasks.workunit.client.0.vm00.stdout: write_operations: [ PASSED ] 7 tests.
2026-03-10T07:58:24.661 INFO:tasks.workunit.client.0.vm00.stderr:+ exit 0
2026-03-10T07:58:24.661 INFO:tasks.workunit.client.0.vm00.stderr:+ cleanup
2026-03-10T07:58:24.661 INFO:tasks.workunit.client.0.vm00.stderr:+ pkill -P 59618
2026-03-10T07:58:24.665 INFO:tasks.workunit.client.0.vm00.stderr:+ true
2026-03-10T07:58:24.666 INFO:teuthology.orchestra.run:Running command with timeout 3600
2026-03-10T07:58:24.666 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp
2026-03-10T07:58:24.668 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:58:24 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:58:24.675 INFO:tasks.workunit:Running workunits matching rados/test_pool_quota.sh on client.0...
2026-03-10T07:58:24.675 INFO:tasks.workunit:Running workunit rados/test_pool_quota.sh...
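[editor's note: NeoRadosWriteOps.AssertExists alone accounts for 1801531 ms of the 1819749 ms suite total; the other six cases each finish in about 3 s. These workunit tests are plain Google Test binaries, so a slow case can be rerun in isolation with a standard gtest filter -- the binary name here is assumed from the `write_operations:` output prefix, not confirmed by the log:

  ceph_test_neorados_write_operations --gtest_filter='NeoRadosWriteOps.AssertExists'
]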
2026-03-10T07:58:24.676 DEBUG:teuthology.orchestra.run.vm00:workunit test rados/test_pool_quota.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_pool_quota.sh
2026-03-10T07:58:24.723 INFO:tasks.workunit.client.0.vm00.stderr:+ uuidgen
2026-03-10T07:58:24.724 INFO:tasks.workunit.client.0.vm00.stderr:+ p=848a4d04-314a-4289-950b-2472b7cc83f9
2026-03-10T07:58:24.724 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool create 848a4d04-314a-4289-950b-2472b7cc83f9 12
2026-03-10T07:58:24.782 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.788+0000 7f9b422ea640 1 -- 192.168.123.100:0/174127598 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9b3c102390 msgr2=0x7f9b3c102810 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:58:24.782 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.788+0000 7f9b422ea640 1 --2- 192.168.123.100:0/174127598 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9b3c102390 0x7f9b3c102810 secure :-1 s=READY pgs=3115 cs=0 l=1 rev1=1 crypto rx=0x7f9b2c0099d0 tx=0x7f9b2c01c890 comp rx=0 tx=0).stop
2026-03-10T07:58:24.782 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b422ea640 1 -- 192.168.123.100:0/174127598 shutdown_connections
2026-03-10T07:58:24.782 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b422ea640 1 --2- 192.168.123.100:0/174127598 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9b3c102d50 0x7f9b3c10e6b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:58:24.782 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b422ea640 1 --2- 192.168.123.100:0/174127598 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9b3c102390 0x7f9b3c102810 unknown :-1 s=CLOSED pgs=3115 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:58:24.782 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b422ea640 1 --2- 192.168.123.100:0/174127598 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9b3c106810 0x7f9b3c106bf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:58:24.782 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b422ea640 1 -- 192.168.123.100:0/174127598 >> 192.168.123.100:0/174127598 conn(0x7f9b3c0fc820 msgr2=0x7f9b3c0fec40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:58:24.782 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b422ea640 1 -- 192.168.123.100:0/174127598 shutdown_connections
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b422ea640 1 -- 192.168.123.100:0/174127598 wait complete.
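[editor's note: the wrapper command above pins down the workunit environment: the script runs from the suite clone under adjust-ulimits, ceph-coverage and a 3-hour timeout, with CEPH_CLI_TEST_DUP_COMMAND=1, which makes the ceph CLI submit every command twice so the mons' duplicate-command handling is exercised -- that is why the second `osd pool create` below is acknowledged with "already exists". Rerunning the same workunit by hand on the test node would look roughly like this (paths taken from the log; a sketch, not teuthology's exact code path):

  cd /home/ubuntu/cephtest/mnt.0/client.0/tmp
  export CEPH_CLI_TEST_DUP_COMMAND=1      # each ceph CLI command is issued twice
  export CEPH_ARGS="--cluster ceph" CEPH_ID=0 TESTDIR=/home/ubuntu/cephtest
  timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_pool_quota.sh
]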
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b422ea640 1 Processor -- start
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b422ea640 1 -- start start
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b422ea640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9b3c102390 0x7f9b3c1a0800 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b422ea640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9b3c102d50 0x7f9b3c1a0d40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b422ea640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9b3c106810 0x7f9b3c19a9c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b422ea640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f9b3c112730 con 0x7f9b3c102d50
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b422ea640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f9b3c1125b0 con 0x7f9b3c102390
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b422ea640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f9b3c1128b0 con 0x7f9b3c106810
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b40860640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9b3c106810 0x7f9b3c19a9c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b40860640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9b3c106810 0x7f9b3c19a9c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:44562/0 (socket says 192.168.123.100:44562)
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b40860640 1 -- 192.168.123.100:0/73512458 learned_addr learned my addr 192.168.123.100:0/73512458 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b3b7fe640 1 --2- 192.168.123.100:0/73512458 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9b3c102d50 0x7f9b3c1a0d40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b3bfff640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9b3c102390 0x7f9b3c1a0800 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b3bfff640 1 -- 192.168.123.100:0/73512458 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9b3c106810 msgr2=0x7f9b3c19a9c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b3bfff640 1 --2- 192.168.123.100:0/73512458 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9b3c106810 0x7f9b3c19a9c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b3bfff640 1 -- 192.168.123.100:0/73512458 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9b3c102d50 msgr2=0x7f9b3c1a0d40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b3bfff640 1 --2- 192.168.123.100:0/73512458 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9b3c102d50 0x7f9b3c1a0d40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b3bfff640 1 -- 192.168.123.100:0/73512458 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f9b3c19b190 con 0x7f9b3c102390
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b3b7fe640 1 --2- 192.168.123.100:0/73512458 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9b3c102d50 0x7f9b3c1a0d40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b40860640 1 --2- 192.168.123.100:0/73512458 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9b3c106810 0x7f9b3c19a9c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-10T07:58:24.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b3bfff640 1 --2- 192.168.123.100:0/73512458 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9b3c102390 0x7f9b3c1a0800 secure :-1 s=READY pgs=2912 cs=0 l=1 rev1=1 crypto rx=0x7f9b2800bcb0 tx=0x7f9b28007590 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:58:24.784 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b397fa640 1 -- 192.168.123.100:0/73512458 <== mon.1 v2:192.168.123.103:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f9b28007e40 con 0x7f9b3c102390
2026-03-10T07:58:24.784 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b422ea640 1 -- 192.168.123.100:0/73512458 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f9b3c19b420 con 0x7f9b3c102390
2026-03-10T07:58:24.784 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b422ea640 1 -- 192.168.123.100:0/73512458 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f9b3c1a76d0 con 0x7f9b3c102390
2026-03-10T07:58:24.784 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b397fa640 1 -- 192.168.123.100:0/73512458 <== mon.1 v2:192.168.123.103:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f9b28004510 con 0x7f9b3c102390
2026-03-10T07:58:24.784 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b397fa640 1 -- 192.168.123.100:0/73512458 <== mon.1 v2:192.168.123.103:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f9b28002e00 con 0x7f9b3c102390
2026-03-10T07:58:24.784 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b422ea640 1 -- 192.168.123.100:0/73512458 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f9b3c102810 con 0x7f9b3c102390
2026-03-10T07:58:24.786 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b397fa640 1 -- 192.168.123.100:0/73512458 <== mon.1 v2:192.168.123.103:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f9b280040d0 con 0x7f9b3c102390
2026-03-10T07:58:24.786 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b397fa640 1 --2- 192.168.123.100:0/73512458 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f9b10077710 0x7f9b10079bd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:58:24.786 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.792+0000 7f9b397fa640 1 -- 192.168.123.100:0/73512458 <== mon.1 v2:192.168.123.103:3300/0 5 ==== osd_map(761..761 src has 258..761) ==== 8685+0+0 (secure 0 0 0) 0x7f9b2809e8f0 con 0x7f9b3c102390
2026-03-10T07:58:24.786 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.796+0000 7f9b3b7fe640 1 --2- 192.168.123.100:0/73512458 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f9b10077710 0x7f9b10079bd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:58:24.786 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.796+0000 7f9b3b7fe640 1 --2- 192.168.123.100:0/73512458 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f9b10077710 0x7f9b10079bd0 secure :-1 s=READY pgs=4304 cs=0 l=1 rev1=1 crypto rx=0x7f9b2c0098f0 tx=0x7f9b2c01b0d0 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:58:24.788 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.796+0000 7f9b397fa640 1 -- 192.168.123.100:0/73512458 <== mon.1 v2:192.168.123.103:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f9b2806ad80 con 0x7f9b3c102390
2026-03-10T07:58:24.877 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:24.884+0000 7f9b422ea640 1 -- 192.168.123.100:0/73512458 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_command({"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12} v 0) -- 0x7f9b3c109710 con 0x7f9b3c102390
2026-03-10T07:58:25.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:24 vm03 bash[23382]: cluster 2026-03-10T07:58:23.652560+0000 mon.a (mon.0) 3597 : cluster [DBG] osdmap e760: 8 total, 8 up, 8 in
2026-03-10T07:58:25.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:24 vm03 bash[23382]: audit 2026-03-10T07:58:24.360595+0000 mgr.y (mgr.24407) 1230 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:58:25.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:24 vm00 bash[28005]: cluster 2026-03-10T07:58:23.652560+0000 mon.a (mon.0) 3597 : cluster [DBG] osdmap e760: 8 total, 8 up, 8 in
2026-03-10T07:58:25.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:24 vm00 bash[28005]: audit 2026-03-10T07:58:24.360595+0000 mgr.y (mgr.24407) 1230 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:58:25.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:24 vm00 bash[20701]: cluster 2026-03-10T07:58:23.652560+0000 mon.a (mon.0) 3597 : cluster [DBG] osdmap e760: 8 total, 8 up, 8 in
2026-03-10T07:58:25.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:24 vm00 bash[20701]: audit 2026-03-10T07:58:24.360595+0000 mgr.y (mgr.24407) 1230 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:58:25.693 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.700+0000 7f9b397fa640 1 -- 192.168.123.100:0/73512458 <== mon.1 v2:192.168.123.103:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]=0 pool '848a4d04-314a-4289-950b-2472b7cc83f9' created v762) ==== 176+0+0 (secure 0 0 0) 0x7f9b2806fc30 con 0x7f9b3c102390
2026-03-10T07:58:25.741 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.748+0000 7f9b422ea640 1 -- 192.168.123.100:0/73512458 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_command({"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12} v 0) -- 0x7f9b3c109a30 con 0x7f9b3c102390
2026-03-10T07:58:25.743 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.752+0000 7f9b397fa640 1 -- 192.168.123.100:0/73512458 <== mon.1 v2:192.168.123.103:3300/0 8 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]=0 pool '848a4d04-314a-4289-950b-2472b7cc83f9' already exists v762) ==== 183+0+0 (secure 0 0 0) 0x7f9b28062e20 con 0x7f9b3c102390
2026-03-10T07:58:25.743 INFO:tasks.workunit.client.0.vm00.stderr:pool '848a4d04-314a-4289-950b-2472b7cc83f9' already exists
2026-03-10T07:58:25.744 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.752+0000 7f9b422ea640 1 -- 192.168.123.100:0/73512458 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f9b10077710 msgr2=0x7f9b10079bd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:58:25.744 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.752+0000 7f9b422ea640 1 --2- 192.168.123.100:0/73512458 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f9b10077710 0x7f9b10079bd0 secure :-1 s=READY pgs=4304 cs=0 l=1 rev1=1 crypto rx=0x7f9b2c0098f0 tx=0x7f9b2c01b0d0 comp rx=0 tx=0).stop
2026-03-10T07:58:25.744 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.752+0000 7f9b422ea640 1 -- 192.168.123.100:0/73512458 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9b3c102390 msgr2=0x7f9b3c1a0800 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:58:25.744 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.752+0000 7f9b422ea640 1 --2- 192.168.123.100:0/73512458 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9b3c102390 0x7f9b3c1a0800 secure :-1 s=READY pgs=2912 cs=0 l=1 rev1=1 crypto rx=0x7f9b2800bcb0 tx=0x7f9b28007590 comp rx=0 tx=0).stop
2026-03-10T07:58:25.744 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.752+0000 7f9b422ea640 1 -- 192.168.123.100:0/73512458 shutdown_connections
2026-03-10T07:58:25.744 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.752+0000 7f9b422ea640 1 --2- 192.168.123.100:0/73512458 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f9b10077710 0x7f9b10079bd0 unknown :-1 s=CLOSED pgs=4304 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:58:25.744 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.752+0000 7f9b422ea640 1 --2- 192.168.123.100:0/73512458 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9b3c106810 0x7f9b3c19a9c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:58:25.744 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.752+0000 7f9b422ea640 1 --2- 192.168.123.100:0/73512458 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9b3c102d50 0x7f9b3c1a0d40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:58:25.744 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.752+0000 7f9b422ea640 1 --2- 192.168.123.100:0/73512458 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9b3c102390 0x7f9b3c1a0800 unknown :-1 s=CLOSED pgs=2912 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:58:25.744 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.752+0000 7f9b422ea640 1 -- 192.168.123.100:0/73512458 >> 192.168.123.100:0/73512458 conn(0x7f9b3c0fc820 msgr2=0x7f9b3c0fe1d0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:58:25.745 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.752+0000 7f9b422ea640 1 -- 192.168.123.100:0/73512458 shutdown_connections
2026-03-10T07:58:25.745 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.752+0000 7f9b422ea640 1 -- 192.168.123.100:0/73512458 wait complete.
2026-03-10T07:58:25.755 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool set-quota 848a4d04-314a-4289-950b-2472b7cc83f9 max_objects 10
2026-03-10T07:58:25.815 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7debfff640 1 -- 192.168.123.100:0/3946948218 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7dec1018b0 msgr2=0x7f7dec1118d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:58:25.815 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7debfff640 1 --2- 192.168.123.100:0/3946948218 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7dec1018b0 0x7f7dec1118d0 secure :-1 s=READY pgs=3116 cs=0 l=1 rev1=1 crypto rx=0x7f7ddc00b0a0 tx=0x7f7ddc01cb30 comp rx=0 tx=0).stop
2026-03-10T07:58:25.815 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7debfff640 1 -- 192.168.123.100:0/3946948218 shutdown_connections
2026-03-10T07:58:25.815 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7debfff640 1 --2- 192.168.123.100:0/3946948218 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7dec1018b0 0x7f7dec1118d0 unknown :-1 s=CLOSED pgs=3116 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:58:25.815 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7debfff640 1 --2- 192.168.123.100:0/3946948218 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f7dec100ef0 0x7f7dec101370 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:58:25.815 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7debfff640 1 --2- 192.168.123.100:0/3946948218 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7dec1034e0 0x7f7dec1038c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:58:25.815 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7debfff640 1 -- 192.168.123.100:0/3946948218 >> 192.168.123.100:0/3946948218 conn(0x7f7dec078070 msgr2=0x7f7dec078480 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:58:25.815 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7debfff640 1 -- 192.168.123.100:0/3946948218 shutdown_connections
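[editor's note: the `set-quota` step above is the heart of this workunit: cap the pool at 10 objects, write past the cap, and expect a pool-quota health warning once the mon notices. The gist in CLI form -- a sketch with an assumed payload file, not the verbatim test_pool_quota.sh:

  p=$(uuidgen)
  ceph osd pool create "$p" 12
  ceph osd pool set-quota "$p" max_objects 10
  for i in $(seq 1 10); do                       # fill the pool up to the object quota
    rados -p "$p" put "obj-$i" /etc/hostname     # assumed payload file
  done
  ceph health detail | grep -i quota             # expect a pool quota warning
]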
2026-03-10T07:58:25.815 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7debfff640 1 -- 192.168.123.100:0/3946948218 wait complete.
2026-03-10T07:58:25.815 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7debfff640 1 Processor -- start
2026-03-10T07:58:25.816 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7debfff640 1 -- start start
2026-03-10T07:58:25.816 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7debfff640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7dec100ef0 0x7f7dec19ede0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:58:25.816 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7debfff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7dec1018b0 0x7f7dec19f320 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:58:25.816 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7debfff640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f7dec1034e0 0x7f7dec1a36b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:58:25.816 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7debfff640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f7dec1168e0 con 0x7f7dec1018b0
2026-03-10T07:58:25.816 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7debfff640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f7dec116760 con 0x7f7dec1034e0
2026-03-10T07:58:25.816 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7debfff640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f7dec116a60 con 0x7f7dec100ef0
2026-03-10T07:58:25.816 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7deb7fe640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f7dec1034e0 0x7f7dec1a36b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:58:25.816 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7dea7fc640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7dec1018b0 0x7f7dec19f320 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:58:25.816 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7deb7fe640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f7dec1034e0 0x7f7dec1a36b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.103:3300/0 says I am v2:192.168.123.100:34340/0 (socket says 192.168.123.100:34340)
2026-03-10T07:58:25.816 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7deb7fe640 1 -- 192.168.123.100:0/1368685176 learned_addr learned my addr 192.168.123.100:0/1368685176 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:58:25.816 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7dea7fc640 1 -- 192.168.123.100:0/1368685176 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7dec100ef0 msgr2=0x7f7dec19ede0 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T07:58:25.816 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7dea7fc640 1 --2- 192.168.123.100:0/1368685176 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7dec100ef0 0x7f7dec19ede0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:58:25.816 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7dea7fc640 1 -- 192.168.123.100:0/1368685176 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f7dec1034e0 msgr2=0x7f7dec1a36b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:58:25.816 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7dea7fc640 1 --2- 192.168.123.100:0/1368685176 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f7dec1034e0 0x7f7dec1a36b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:58:25.816 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7dea7fc640 1 -- 192.168.123.100:0/1368685176 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f7dec1a3d90 con 0x7f7dec1018b0
2026-03-10T07:58:25.816 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7dea7fc640 1 --2- 192.168.123.100:0/1368685176 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7dec1018b0 0x7f7dec19f320 secure :-1 s=READY pgs=3117 cs=0 l=1 rev1=1 crypto rx=0x7f7de000dac0 tx=0x7f7de000df80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:58:25.817 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7df0a49640 1 -- 192.168.123.100:0/1368685176 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f7de0014070 con 0x7f7dec1018b0
2026-03-10T07:58:25.817 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7df0a49640 1 -- 192.168.123.100:0/1368685176 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f7de00040d0 con 0x7f7dec1018b0
2026-03-10T07:58:25.817 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7df0a49640 1 -- 192.168.123.100:0/1368685176 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f7de0004e00 con 0x7f7dec1018b0
2026-03-10T07:58:25.817 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7debfff640 1 -- 192.168.123.100:0/1368685176 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f7dec1a4080 con 0x7f7dec1018b0
2026-03-10T07:58:25.817 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.824+0000 7f7debfff640 1 -- 192.168.123.100:0/1368685176 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f7dec1ab960 con 0x7f7dec1018b0
2026-03-10T07:58:25.818 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.828+0000 7f7df0a49640 1 -- 192.168.123.100:0/1368685176 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f7de0020020 con 0x7f7dec1018b0
2026-03-10T07:58:25.821 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.828+0000 7f7debfff640 1 -- 192.168.123.100:0/1368685176 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f7db0005190 con 0x7f7dec1018b0
2026-03-10T07:58:25.821 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.828+0000 7f7df0a49640 1 --2- 192.168.123.100:0/1368685176 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f7dc0077710 0x7f7dc0079bd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:58:25.821 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.828+0000 7f7df0a49640 1 -- 192.168.123.100:0/1368685176 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(762..762 src has 258..762) ==== 9060+0+0 (secure 0 0 0) 0x7f7de009a080 con 0x7f7dec1018b0
2026-03-10T07:58:25.821 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.828+0000 7f7df0a49640 1 -- 192.168.123.100:0/1368685176 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f7de0066310 con 0x7f7dec1018b0
2026-03-10T07:58:25.821 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.828+0000 7f7deaffd640 1 --2- 192.168.123.100:0/1368685176 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f7dc0077710 0x7f7dc0079bd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:58:25.822 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.828+0000 7f7deaffd640 1 --2- 192.168.123.100:0/1368685176 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f7dc0077710 0x7f7dc0079bd0 secure :-1 s=READY pgs=4305 cs=0 l=1 rev1=1 crypto rx=0x7f7dd4005a50 tx=0x7f7dd4005ca0 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:58:25.903 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:25.912+0000 7f7debfff640 1 -- 192.168.123.100:0/1368685176 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"} v 0) -- 0x7f7db0005480 con 0x7f7dec1018b0
2026-03-10T07:58:26.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:25 vm03 bash[23382]: cluster 2026-03-10T07:58:24.666792+0000 mon.a (mon.0) 3598 : cluster [DBG] osdmap e761: 8 total, 8 up, 8 in
2026-03-10T07:58:26.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:25 vm03 bash[23382]: audit 2026-03-10T07:58:24.885012+0000 mon.b (mon.1) 681 : audit [INF] from='client.? 192.168.123.100:0/73512458' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]: dispatch
2026-03-10T07:58:26.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:25 vm03 bash[23382]: audit 2026-03-10T07:58:24.889570+0000 mon.a (mon.0) 3599 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]: dispatch
2026-03-10T07:58:26.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:25 vm03 bash[23382]: cluster 2026-03-10T07:58:25.014377+0000 mgr.y (mgr.24407) 1231 : cluster [DBG] pgmap v1664: 164 pgs: 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:58:26.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:25 vm03 bash[23382]: audit 2026-03-10T07:58:25.455974+0000 mon.c (mon.2) 483 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:58:26.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:25 vm00 bash[28005]: cluster 2026-03-10T07:58:24.666792+0000 mon.a (mon.0) 3598 : cluster [DBG] osdmap e761: 8 total, 8 up, 8 in
2026-03-10T07:58:26.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:25 vm00 bash[28005]: audit 2026-03-10T07:58:24.885012+0000 mon.b (mon.1) 681 : audit [INF] from='client.? 192.168.123.100:0/73512458' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]: dispatch
2026-03-10T07:58:26.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:25 vm00 bash[28005]: audit 2026-03-10T07:58:24.889570+0000 mon.a (mon.0) 3599 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]: dispatch
2026-03-10T07:58:26.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:25 vm00 bash[28005]: audit 2026-03-10T07:58:24.889570+0000 mon.a (mon.0) 3599 : audit [INF] from='client.?
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]: dispatch 2026-03-10T07:58:26.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:25 vm00 bash[28005]: cluster 2026-03-10T07:58:25.014377+0000 mgr.y (mgr.24407) 1231 : cluster [DBG] pgmap v1664: 164 pgs: 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:58:26.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:25 vm00 bash[28005]: cluster 2026-03-10T07:58:25.014377+0000 mgr.y (mgr.24407) 1231 : cluster [DBG] pgmap v1664: 164 pgs: 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:58:26.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:25 vm00 bash[28005]: audit 2026-03-10T07:58:25.455974+0000 mon.c (mon.2) 483 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:58:26.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:25 vm00 bash[28005]: audit 2026-03-10T07:58:25.455974+0000 mon.c (mon.2) 483 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:58:26.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:25 vm00 bash[20701]: cluster 2026-03-10T07:58:24.666792+0000 mon.a (mon.0) 3598 : cluster [DBG] osdmap e761: 8 total, 8 up, 8 in 2026-03-10T07:58:26.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:25 vm00 bash[20701]: cluster 2026-03-10T07:58:24.666792+0000 mon.a (mon.0) 3598 : cluster [DBG] osdmap e761: 8 total, 8 up, 8 in 2026-03-10T07:58:26.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:25 vm00 bash[20701]: audit 2026-03-10T07:58:24.885012+0000 mon.b (mon.1) 681 : audit [INF] from='client.? 192.168.123.100:0/73512458' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]: dispatch 2026-03-10T07:58:26.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:25 vm00 bash[20701]: audit 2026-03-10T07:58:24.885012+0000 mon.b (mon.1) 681 : audit [INF] from='client.? 192.168.123.100:0/73512458' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]: dispatch 2026-03-10T07:58:26.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:25 vm00 bash[20701]: audit 2026-03-10T07:58:24.889570+0000 mon.a (mon.0) 3599 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]: dispatch 2026-03-10T07:58:26.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:25 vm00 bash[20701]: audit 2026-03-10T07:58:24.889570+0000 mon.a (mon.0) 3599 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]: dispatch 2026-03-10T07:58:26.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:25 vm00 bash[20701]: cluster 2026-03-10T07:58:25.014377+0000 mgr.y (mgr.24407) 1231 : cluster [DBG] pgmap v1664: 164 pgs: 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:58:26.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:25 vm00 bash[20701]: cluster 2026-03-10T07:58:25.014377+0000 mgr.y (mgr.24407) 1231 : cluster [DBG] pgmap v1664: 164 pgs: 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:58:26.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:25 vm00 bash[20701]: audit 2026-03-10T07:58:25.455974+0000 mon.c (mon.2) 483 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:58:26.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:25 vm00 bash[20701]: audit 2026-03-10T07:58:25.455974+0000 mon.c (mon.2) 483 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:58:26.682 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:26.692+0000 7f7df0a49640 1 -- 192.168.123.100:0/1368685176 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool 848a4d04-314a-4289-950b-2472b7cc83f9 v763) ==== 223+0+0 (secure 0 0 0) 0x7f7de006b1c0 con 0x7f7dec1018b0 2026-03-10T07:58:26.738 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:26.748+0000 7f7debfff640 1 -- 192.168.123.100:0/1368685176 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"} v 0) -- 0x7f7db0004910 con 0x7f7dec1018b0 2026-03-10T07:58:27.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:26 vm03 bash[23382]: cluster 2026-03-10T07:58:25.678503+0000 mon.a (mon.0) 3600 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:58:27.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:26 vm03 bash[23382]: cluster 2026-03-10T07:58:25.678503+0000 mon.a (mon.0) 3600 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:58:27.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:26 vm03 bash[23382]: audit 2026-03-10T07:58:25.689069+0000 mon.a (mon.0) 3601 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]': finished 2026-03-10T07:58:27.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:26 vm03 bash[23382]: audit 2026-03-10T07:58:25.689069+0000 mon.a (mon.0) 3601 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]': finished 2026-03-10T07:58:27.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:26 vm03 bash[23382]: cluster 2026-03-10T07:58:25.699006+0000 mon.a (mon.0) 3602 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-10T07:58:27.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:26 vm03 bash[23382]: cluster 2026-03-10T07:58:25.699006+0000 mon.a (mon.0) 3602 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-10T07:58:27.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:26 vm03 bash[23382]: audit 2026-03-10T07:58:25.749497+0000 mon.b (mon.1) 682 : audit [INF] from='client.? 192.168.123.100:0/73512458' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]: dispatch 2026-03-10T07:58:27.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:26 vm03 bash[23382]: audit 2026-03-10T07:58:25.749497+0000 mon.b (mon.1) 682 : audit [INF] from='client.? 192.168.123.100:0/73512458' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]: dispatch 2026-03-10T07:58:27.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:26 vm03 bash[23382]: audit 2026-03-10T07:58:25.753856+0000 mon.a (mon.0) 3603 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]: dispatch 2026-03-10T07:58:27.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:26 vm03 bash[23382]: audit 2026-03-10T07:58:25.753856+0000 mon.a (mon.0) 3603 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]: dispatch 2026-03-10T07:58:27.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:26 vm03 bash[23382]: audit 2026-03-10T07:58:25.914829+0000 mon.a (mon.0) 3604 : audit [INF] from='client.? 192.168.123.100:0/1368685176' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T07:58:27.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:26 vm03 bash[23382]: audit 2026-03-10T07:58:25.914829+0000 mon.a (mon.0) 3604 : audit [INF] from='client.? 192.168.123.100:0/1368685176' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:26 vm00 bash[28005]: cluster 2026-03-10T07:58:25.678503+0000 mon.a (mon.0) 3600 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:26 vm00 bash[28005]: cluster 2026-03-10T07:58:25.678503+0000 mon.a (mon.0) 3600 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:26 vm00 bash[28005]: audit 2026-03-10T07:58:25.689069+0000 mon.a (mon.0) 3601 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]': finished 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:26 vm00 bash[28005]: audit 2026-03-10T07:58:25.689069+0000 mon.a (mon.0) 3601 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]': finished 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:26 vm00 bash[28005]: cluster 2026-03-10T07:58:25.699006+0000 mon.a (mon.0) 3602 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:26 vm00 bash[28005]: cluster 2026-03-10T07:58:25.699006+0000 mon.a (mon.0) 3602 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:26 vm00 bash[28005]: audit 2026-03-10T07:58:25.749497+0000 mon.b (mon.1) 682 : audit [INF] from='client.? 192.168.123.100:0/73512458' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]: dispatch 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:26 vm00 bash[28005]: audit 2026-03-10T07:58:25.749497+0000 mon.b (mon.1) 682 : audit [INF] from='client.? 192.168.123.100:0/73512458' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]: dispatch 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:26 vm00 bash[28005]: audit 2026-03-10T07:58:25.753856+0000 mon.a (mon.0) 3603 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]: dispatch 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:26 vm00 bash[28005]: audit 2026-03-10T07:58:25.753856+0000 mon.a (mon.0) 3603 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]: dispatch 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:26 vm00 bash[28005]: audit 2026-03-10T07:58:25.914829+0000 mon.a (mon.0) 3604 : audit [INF] from='client.? 192.168.123.100:0/1368685176' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:26 vm00 bash[28005]: audit 2026-03-10T07:58:25.914829+0000 mon.a (mon.0) 3604 : audit [INF] from='client.? 
192.168.123.100:0/1368685176' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:26 vm00 bash[20701]: cluster 2026-03-10T07:58:25.678503+0000 mon.a (mon.0) 3600 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:26 vm00 bash[20701]: cluster 2026-03-10T07:58:25.678503+0000 mon.a (mon.0) 3600 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:26 vm00 bash[20701]: audit 2026-03-10T07:58:25.689069+0000 mon.a (mon.0) 3601 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]': finished 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:26 vm00 bash[20701]: audit 2026-03-10T07:58:25.689069+0000 mon.a (mon.0) 3601 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]': finished 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:26 vm00 bash[20701]: cluster 2026-03-10T07:58:25.699006+0000 mon.a (mon.0) 3602 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:26 vm00 bash[20701]: cluster 2026-03-10T07:58:25.699006+0000 mon.a (mon.0) 3602 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:26 vm00 bash[20701]: audit 2026-03-10T07:58:25.749497+0000 mon.b (mon.1) 682 : audit [INF] from='client.? 192.168.123.100:0/73512458' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]: dispatch 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:26 vm00 bash[20701]: audit 2026-03-10T07:58:25.749497+0000 mon.b (mon.1) 682 : audit [INF] from='client.? 192.168.123.100:0/73512458' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]: dispatch 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:26 vm00 bash[20701]: audit 2026-03-10T07:58:25.753856+0000 mon.a (mon.0) 3603 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]: dispatch 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:26 vm00 bash[20701]: audit 2026-03-10T07:58:25.753856+0000 mon.a (mon.0) 3603 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pg_num": 12}]: dispatch 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:26 vm00 bash[20701]: audit 2026-03-10T07:58:25.914829+0000 mon.a (mon.0) 3604 : audit [INF] from='client.? 
192.168.123.100:0/1368685176' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T07:58:27.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:26 vm00 bash[20701]: audit 2026-03-10T07:58:25.914829+0000 mon.a (mon.0) 3604 : audit [INF] from='client.? 192.168.123.100:0/1368685176' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T07:58:27.777 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.784+0000 7f7df0a49640 1 -- 192.168.123.100:0/1368685176 <== mon.0 v2:192.168.123.100:3300/0 8 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool 848a4d04-314a-4289-950b-2472b7cc83f9 v764) ==== 223+0+0 (secure 0 0 0) 0x7f7de005e3b0 con 0x7f7dec1018b0 2026-03-10T07:58:27.777 INFO:tasks.workunit.client.0.vm00.stderr:set-quota max_objects = 10 for pool 848a4d04-314a-4289-950b-2472b7cc83f9 2026-03-10T07:58:27.779 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.788+0000 7f7debfff640 1 -- 192.168.123.100:0/1368685176 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f7dc0077710 msgr2=0x7f7dc0079bd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:58:27.779 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.788+0000 7f7debfff640 1 --2- 192.168.123.100:0/1368685176 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f7dc0077710 0x7f7dc0079bd0 secure :-1 s=READY pgs=4305 cs=0 l=1 rev1=1 crypto rx=0x7f7dd4005a50 tx=0x7f7dd4005ca0 comp rx=0 tx=0).stop 2026-03-10T07:58:27.779 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.788+0000 7f7debfff640 1 -- 192.168.123.100:0/1368685176 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7dec1018b0 msgr2=0x7f7dec19f320 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:58:27.779 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.788+0000 7f7debfff640 1 --2- 192.168.123.100:0/1368685176 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7dec1018b0 0x7f7dec19f320 secure :-1 s=READY pgs=3117 cs=0 l=1 rev1=1 crypto rx=0x7f7de000dac0 tx=0x7f7de000df80 comp rx=0 tx=0).stop 2026-03-10T07:58:27.779 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.788+0000 7f7debfff640 1 -- 192.168.123.100:0/1368685176 shutdown_connections 2026-03-10T07:58:27.779 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.788+0000 7f7debfff640 1 --2- 192.168.123.100:0/1368685176 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f7dc0077710 0x7f7dc0079bd0 unknown :-1 s=CLOSED pgs=4305 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:58:27.779 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.788+0000 7f7debfff640 1 --2- 192.168.123.100:0/1368685176 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f7dec1034e0 0x7f7dec1a36b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:58:27.779 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.788+0000 7f7debfff640 1 --2- 192.168.123.100:0/1368685176 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7dec1018b0 0x7f7dec19f320 unknown :-1 s=CLOSED pgs=3117 cs=0 
l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:58:27.780 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.788+0000 7f7debfff640 1 --2- 192.168.123.100:0/1368685176 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7dec100ef0 0x7f7dec19ede0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:58:27.780 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.788+0000 7f7debfff640 1 -- 192.168.123.100:0/1368685176 >> 192.168.123.100:0/1368685176 conn(0x7f7dec078070 msgr2=0x7f7dec0fef40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:58:27.780 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.788+0000 7f7debfff640 1 -- 192.168.123.100:0/1368685176 shutdown_connections 2026-03-10T07:58:27.780 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.788+0000 7f7debfff640 1 -- 192.168.123.100:0/1368685176 wait complete. 2026-03-10T07:58:27.795 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool application enable 848a4d04-314a-4289-950b-2472b7cc83f9 rados 2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a74c4f640 1 -- 192.168.123.100:0/1278781600 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a70113a60 msgr2=0x7f9a70115e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a74c4f640 1 --2- 192.168.123.100:0/1278781600 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a70113a60 0x7f9a70115e50 secure :-1 s=READY pgs=3118 cs=0 l=1 rev1=1 crypto rx=0x7f9a6000b0a0 tx=0x7f9a6001cb30 comp rx=0 tx=0).stop 2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a74c4f640 1 -- 192.168.123.100:0/1278781600 shutdown_connections 2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a74c4f640 1 --2- 192.168.123.100:0/1278781600 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a70113a60 0x7f9a70115e50 unknown :-1 s=CLOSED pgs=3118 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a74c4f640 1 --2- 192.168.123.100:0/1278781600 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9a70077f50 0x7f9a70113520 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a74c4f640 1 --2- 192.168.123.100:0/1278781600 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9a70077630 0x7f9a70077a10 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a74c4f640 1 -- 192.168.123.100:0/1278781600 >> 192.168.123.100:0/1278781600 conn(0x7f9a70100880 msgr2=0x7f9a70102ca0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a74c4f640 1 -- 192.168.123.100:0/1278781600 shutdown_connections 2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a74c4f640 1 -- 192.168.123.100:0/1278781600 wait complete. 
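
Up to here the workunit (rados/test_pool_quota.sh) has created the pool, capped it at ten objects via osd pool set-quota, and has just started enabling the rados application on it; further down it fills the pool to the quota with ten rados put calls and then sleeps while the cluster state catches up. Condensed into a standalone sketch, assuming only a placeholder pool name ($POOL below is illustrative; the test names the pool with a random UUID) and an admin keyring on the client:

#!/usr/bin/env bash
set -ex
POOL=$(uuidgen)                                  # the workunit uses a random UUID as the pool name
ceph osd pool create "$POOL" 12                  # pg_num 12, matching the pool-create audit records above
ceph osd pool set-quota "$POOL" max_objects 10   # the quota whose mon_command round-trips appear above
ceph osd pool application enable "$POOL" rados   # clears the POOL_APP_NOT_ENABLED health warning
for i in $(seq 1 10); do
  rados -p "$POOL" put "obj$i" /etc/passwd       # each put consumes one object of the quota
done
sleep 30                                         # give the mons time to flag the pool as full
ceph osd pool get-quota "$POOL"                  # expect the max_objects limit of 10 to be reported
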
2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a74c4f640 1 Processor -- start
2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a74c4f640 1 -- start start
2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a74c4f640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a70077630 0x7f9a701a4060 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a74c4f640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9a70077f50 0x7f9a701a45a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a74c4f640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9a70113a60 0x7f9a701a8930 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a74c4f640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f9a7011bb40 con 0x7f9a70077630
2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a74c4f640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f9a7011b9c0 con 0x7f9a70113a60
2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a74c4f640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f9a7011bcc0 con 0x7f9a70077f50
2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a6e575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a70077630 0x7f9a701a4060 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a6e575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a70077630 0x7f9a701a4060 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:40618/0 (socket says 192.168.123.100:40618)
2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a6e575640 1 -- 192.168.123.100:0/3378897077 learned_addr learned my addr 192.168.123.100:0/3378897077 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a6ed76640 1 --2- 192.168.123.100:0/3378897077 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9a70113a60 0x7f9a701a8930 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:58:27.851 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a6e575640 1 -- 192.168.123.100:0/3378897077 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9a70077f50 msgr2=0x7f9a701a45a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:58:27.852 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a6dd74640 1 --2- 192.168.123.100:0/3378897077 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9a70077f50 0x7f9a701a45a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:58:27.852 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a6e575640 1 --2- 192.168.123.100:0/3378897077 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9a70077f50 0x7f9a701a45a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:58:27.852 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a6e575640 1 -- 192.168.123.100:0/3378897077 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9a70113a60 msgr2=0x7f9a701a8930 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:58:27.852 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a6e575640 1 --2- 192.168.123.100:0/3378897077 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9a70113a60 0x7f9a701a8930 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:58:27.852 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a6e575640 1 -- 192.168.123.100:0/3378897077 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f9a701a9010 con 0x7f9a70077630
2026-03-10T07:58:27.852 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a6ed76640 1 --2- 192.168.123.100:0/3378897077 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9a70113a60 0x7f9a701a8930 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-10T07:58:27.852 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a6e575640 1 --2- 192.168.123.100:0/3378897077 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a70077630 0x7f9a701a4060 secure :-1 s=READY pgs=3119 cs=0 l=1 rev1=1 crypto rx=0x7f9a5800d980 tx=0x7f9a5800dd80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:58:27.852 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a577fe640 1 -- 192.168.123.100:0/3378897077 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f9a58014070 con 0x7f9a70077630
2026-03-10T07:58:27.852 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a6dd74640 1 --2- 192.168.123.100:0/3378897077 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9a70077f50 0x7f9a701a45a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
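
The trace above is a fresh CLI process hunting for a monitor: it opens msgr2 connections to all three mon addresses, walks each through BANNER_CONNECTING, HELLO_CONNECTING and AUTH_CONNECTING, keeps the first to reach READY (mon.0 here), and marks the losing probes down. This verbosity comes from the client running with a raised messenger debug level (debug ms = 1); the same trace can be pulled from any one-off command by setting it explicitly, e.g. (illustrative invocation, reusing the $POOL placeholder from the sketch above):

rados --debug-ms 1 -p "$POOL" ls   # prints each msgr2 state transition to stderr, as in the log above
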
2026-03-10T07:58:27.852 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a577fe640 1 -- 192.168.123.100:0/3378897077 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f9a580044e0 con 0x7f9a70077630 2026-03-10T07:58:27.853 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a577fe640 1 -- 192.168.123.100:0/3378897077 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f9a58005020 con 0x7f9a70077630 2026-03-10T07:58:27.853 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a74c4f640 1 -- 192.168.123.100:0/3378897077 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f9a701a9300 con 0x7f9a70077630 2026-03-10T07:58:27.853 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a74c4f640 1 -- 192.168.123.100:0/3378897077 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f9a701b0be0 con 0x7f9a70077630 2026-03-10T07:58:27.853 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.860+0000 7f9a577fe640 1 -- 192.168.123.100:0/3378897077 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f9a58004790 con 0x7f9a70077630 2026-03-10T07:58:27.854 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.864+0000 7f9a74c4f640 1 -- 192.168.123.100:0/3378897077 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f9a70078d70 con 0x7f9a70077630 2026-03-10T07:58:27.856 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.864+0000 7f9a577fe640 1 --2- 192.168.123.100:0/3378897077 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f9a40077680 0x7f9a40079b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:58:27.856 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.864+0000 7f9a577fe640 1 -- 192.168.123.100:0/3378897077 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(764..764 src has 258..764) ==== 9060+0+0 (secure 0 0 0) 0x7f9a5809d170 con 0x7f9a70077630 2026-03-10T07:58:27.856 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.864+0000 7f9a6dd74640 1 --2- 192.168.123.100:0/3378897077 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f9a40077680 0x7f9a40079b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:58:27.857 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.864+0000 7f9a577fe640 1 -- 192.168.123.100:0/3378897077 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f9a58065a70 con 0x7f9a70077630 2026-03-10T07:58:27.857 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.864+0000 7f9a6dd74640 1 --2- 192.168.123.100:0/3378897077 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f9a40077680 0x7f9a40079b40 secure :-1 s=READY pgs=4306 cs=0 l=1 rev1=1 crypto rx=0x7f9a70102d50 tx=0x7f9a64005e30 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:58:27.938 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:27.948+0000 7f9a74c4f640 1 -- 192.168.123.100:0/3378897077 --> 
[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool application enable", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "app": "rados"} v 0) -- 0x7f9a70077a10 con 0x7f9a70077630 2026-03-10T07:58:28.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:27 vm00 bash[28005]: audit 2026-03-10T07:58:26.693695+0000 mon.a (mon.0) 3605 : audit [INF] from='client.? 192.168.123.100:0/1368685176' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]': finished 2026-03-10T07:58:28.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:27 vm00 bash[28005]: audit 2026-03-10T07:58:26.693695+0000 mon.a (mon.0) 3605 : audit [INF] from='client.? 192.168.123.100:0/1368685176' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]': finished 2026-03-10T07:58:28.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:27 vm00 bash[28005]: cluster 2026-03-10T07:58:26.700097+0000 mon.a (mon.0) 3606 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-10T07:58:28.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:27 vm00 bash[28005]: cluster 2026-03-10T07:58:26.700097+0000 mon.a (mon.0) 3606 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-10T07:58:28.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:27 vm00 bash[28005]: audit 2026-03-10T07:58:26.750232+0000 mon.a (mon.0) 3607 : audit [INF] from='client.? 192.168.123.100:0/1368685176' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T07:58:28.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:27 vm00 bash[28005]: audit 2026-03-10T07:58:26.750232+0000 mon.a (mon.0) 3607 : audit [INF] from='client.? 192.168.123.100:0/1368685176' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T07:58:28.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:27 vm00 bash[28005]: cluster 2026-03-10T07:58:27.014599+0000 mgr.y (mgr.24407) 1232 : cluster [DBG] pgmap v1667: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:58:28.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:27 vm00 bash[28005]: cluster 2026-03-10T07:58:27.014599+0000 mgr.y (mgr.24407) 1232 : cluster [DBG] pgmap v1667: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:58:28.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:27 vm00 bash[20701]: audit 2026-03-10T07:58:26.693695+0000 mon.a (mon.0) 3605 : audit [INF] from='client.? 192.168.123.100:0/1368685176' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]': finished 2026-03-10T07:58:28.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:27 vm00 bash[20701]: audit 2026-03-10T07:58:26.693695+0000 mon.a (mon.0) 3605 : audit [INF] from='client.? 
192.168.123.100:0/1368685176' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]': finished 2026-03-10T07:58:28.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:27 vm00 bash[20701]: cluster 2026-03-10T07:58:26.700097+0000 mon.a (mon.0) 3606 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-10T07:58:28.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:27 vm00 bash[20701]: cluster 2026-03-10T07:58:26.700097+0000 mon.a (mon.0) 3606 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-10T07:58:28.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:27 vm00 bash[20701]: audit 2026-03-10T07:58:26.750232+0000 mon.a (mon.0) 3607 : audit [INF] from='client.? 192.168.123.100:0/1368685176' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T07:58:28.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:27 vm00 bash[20701]: audit 2026-03-10T07:58:26.750232+0000 mon.a (mon.0) 3607 : audit [INF] from='client.? 192.168.123.100:0/1368685176' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T07:58:28.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:27 vm00 bash[20701]: cluster 2026-03-10T07:58:27.014599+0000 mgr.y (mgr.24407) 1232 : cluster [DBG] pgmap v1667: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:58:28.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:27 vm00 bash[20701]: cluster 2026-03-10T07:58:27.014599+0000 mgr.y (mgr.24407) 1232 : cluster [DBG] pgmap v1667: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:58:28.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:27 vm03 bash[23382]: audit 2026-03-10T07:58:26.693695+0000 mon.a (mon.0) 3605 : audit [INF] from='client.? 192.168.123.100:0/1368685176' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]': finished 2026-03-10T07:58:28.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:27 vm03 bash[23382]: audit 2026-03-10T07:58:26.693695+0000 mon.a (mon.0) 3605 : audit [INF] from='client.? 192.168.123.100:0/1368685176' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]': finished 2026-03-10T07:58:28.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:27 vm03 bash[23382]: cluster 2026-03-10T07:58:26.700097+0000 mon.a (mon.0) 3606 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-10T07:58:28.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:27 vm03 bash[23382]: cluster 2026-03-10T07:58:26.700097+0000 mon.a (mon.0) 3606 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-10T07:58:28.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:27 vm03 bash[23382]: audit 2026-03-10T07:58:26.750232+0000 mon.a (mon.0) 3607 : audit [INF] from='client.? 
192.168.123.100:0/1368685176' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T07:58:28.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:27 vm03 bash[23382]: audit 2026-03-10T07:58:26.750232+0000 mon.a (mon.0) 3607 : audit [INF] from='client.? 192.168.123.100:0/1368685176' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T07:58:28.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:27 vm03 bash[23382]: cluster 2026-03-10T07:58:27.014599+0000 mgr.y (mgr.24407) 1232 : cluster [DBG] pgmap v1667: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:58:28.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:27 vm03 bash[23382]: cluster 2026-03-10T07:58:27.014599+0000 mgr.y (mgr.24407) 1232 : cluster [DBG] pgmap v1667: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T07:58:28.810 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:28.820+0000 7f9a577fe640 1 -- 192.168.123.100:0/3378897077 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "app": "rados"}]=0 enabled application 'rados' on pool '848a4d04-314a-4289-950b-2472b7cc83f9' v765) ==== 213+0+0 (secure 0 0 0) 0x7f9a5806a920 con 0x7f9a70077630 2026-03-10T07:58:28.862 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:28.872+0000 7f9a74c4f640 1 -- 192.168.123.100:0/3378897077 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool application enable", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "app": "rados"} v 0) -- 0x7f9a7010e660 con 0x7f9a70077630 2026-03-10T07:58:29.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:28 vm00 bash[28005]: audit 2026-03-10T07:58:27.788395+0000 mon.a (mon.0) 3608 : audit [INF] from='client.? 192.168.123.100:0/1368685176' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]': finished 2026-03-10T07:58:29.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:28 vm00 bash[28005]: audit 2026-03-10T07:58:27.788395+0000 mon.a (mon.0) 3608 : audit [INF] from='client.? 192.168.123.100:0/1368685176' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]': finished 2026-03-10T07:58:29.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:28 vm00 bash[28005]: cluster 2026-03-10T07:58:27.795448+0000 mon.a (mon.0) 3609 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-10T07:58:29.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:28 vm00 bash[28005]: cluster 2026-03-10T07:58:27.795448+0000 mon.a (mon.0) 3609 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-10T07:58:29.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:28 vm00 bash[28005]: audit 2026-03-10T07:58:27.949846+0000 mon.a (mon.0) 3610 : audit [INF] from='client.? 
192.168.123.100:0/3378897077' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "app": "rados"}]: dispatch 2026-03-10T07:58:29.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:28 vm00 bash[28005]: audit 2026-03-10T07:58:27.949846+0000 mon.a (mon.0) 3610 : audit [INF] from='client.? 192.168.123.100:0/3378897077' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "app": "rados"}]: dispatch 2026-03-10T07:58:29.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:28 vm00 bash[20701]: audit 2026-03-10T07:58:27.788395+0000 mon.a (mon.0) 3608 : audit [INF] from='client.? 192.168.123.100:0/1368685176' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]': finished 2026-03-10T07:58:29.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:28 vm00 bash[20701]: audit 2026-03-10T07:58:27.788395+0000 mon.a (mon.0) 3608 : audit [INF] from='client.? 192.168.123.100:0/1368685176' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]': finished 2026-03-10T07:58:29.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:28 vm00 bash[20701]: cluster 2026-03-10T07:58:27.795448+0000 mon.a (mon.0) 3609 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-10T07:58:29.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:28 vm00 bash[20701]: cluster 2026-03-10T07:58:27.795448+0000 mon.a (mon.0) 3609 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-10T07:58:29.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:28 vm00 bash[20701]: audit 2026-03-10T07:58:27.949846+0000 mon.a (mon.0) 3610 : audit [INF] from='client.? 192.168.123.100:0/3378897077' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "app": "rados"}]: dispatch 2026-03-10T07:58:29.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:28 vm00 bash[20701]: audit 2026-03-10T07:58:27.949846+0000 mon.a (mon.0) 3610 : audit [INF] from='client.? 192.168.123.100:0/3378897077' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "app": "rados"}]: dispatch 2026-03-10T07:58:29.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:28 vm03 bash[23382]: audit 2026-03-10T07:58:27.788395+0000 mon.a (mon.0) 3608 : audit [INF] from='client.? 192.168.123.100:0/1368685176' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]': finished 2026-03-10T07:58:29.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:28 vm03 bash[23382]: audit 2026-03-10T07:58:27.788395+0000 mon.a (mon.0) 3608 : audit [INF] from='client.? 
192.168.123.100:0/1368685176' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "10"}]': finished 2026-03-10T07:58:29.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:28 vm03 bash[23382]: cluster 2026-03-10T07:58:27.795448+0000 mon.a (mon.0) 3609 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-10T07:58:29.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:28 vm03 bash[23382]: cluster 2026-03-10T07:58:27.795448+0000 mon.a (mon.0) 3609 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-10T07:58:29.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:28 vm03 bash[23382]: audit 2026-03-10T07:58:27.949846+0000 mon.a (mon.0) 3610 : audit [INF] from='client.? 192.168.123.100:0/3378897077' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "app": "rados"}]: dispatch 2026-03-10T07:58:29.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:28 vm03 bash[23382]: audit 2026-03-10T07:58:27.949846+0000 mon.a (mon.0) 3610 : audit [INF] from='client.? 192.168.123.100:0/3378897077' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "app": "rados"}]: dispatch 2026-03-10T07:58:29.836 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:29.844+0000 7f9a577fe640 1 -- 192.168.123.100:0/3378897077 <== mon.0 v2:192.168.123.100:3300/0 8 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "app": "rados"}]=0 enabled application 'rados' on pool '848a4d04-314a-4289-950b-2472b7cc83f9' v766) ==== 213+0+0 (secure 0 0 0) 0x7f9a5805da60 con 0x7f9a70077630 2026-03-10T07:58:29.836 INFO:tasks.workunit.client.0.vm00.stderr:enabled application 'rados' on pool '848a4d04-314a-4289-950b-2472b7cc83f9' 2026-03-10T07:58:29.838 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:29.848+0000 7f9a74c4f640 1 -- 192.168.123.100:0/3378897077 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f9a40077680 msgr2=0x7f9a40079b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:58:29.838 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:29.848+0000 7f9a74c4f640 1 --2- 192.168.123.100:0/3378897077 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f9a40077680 0x7f9a40079b40 secure :-1 s=READY pgs=4306 cs=0 l=1 rev1=1 crypto rx=0x7f9a70102d50 tx=0x7f9a64005e30 comp rx=0 tx=0).stop 2026-03-10T07:58:29.838 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:29.848+0000 7f9a74c4f640 1 -- 192.168.123.100:0/3378897077 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a70077630 msgr2=0x7f9a701a4060 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:58:29.838 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:29.848+0000 7f9a74c4f640 1 --2- 192.168.123.100:0/3378897077 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a70077630 0x7f9a701a4060 secure :-1 s=READY pgs=3119 cs=0 l=1 rev1=1 crypto rx=0x7f9a5800d980 tx=0x7f9a5800dd80 comp rx=0 tx=0).stop 2026-03-10T07:58:29.838 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:29.848+0000 7f9a74c4f640 1 -- 192.168.123.100:0/3378897077 shutdown_connections 2026-03-10T07:58:29.838 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:29.848+0000 7f9a74c4f640 1 --2- 192.168.123.100:0/3378897077 >> 
[v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f9a40077680 0x7f9a40079b40 unknown :-1 s=CLOSED pgs=4306 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:58:29.838 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:29.848+0000 7f9a74c4f640 1 --2- 192.168.123.100:0/3378897077 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f9a70113a60 0x7f9a701a8930 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:58:29.838 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:29.848+0000 7f9a74c4f640 1 --2- 192.168.123.100:0/3378897077 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f9a70077f50 0x7f9a701a45a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:58:29.838 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:29.848+0000 7f9a74c4f640 1 --2- 192.168.123.100:0/3378897077 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f9a70077630 0x7f9a701a4060 unknown :-1 s=CLOSED pgs=3119 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:58:29.838 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:29.848+0000 7f9a74c4f640 1 -- 192.168.123.100:0/3378897077 >> 192.168.123.100:0/3378897077 conn(0x7f9a70100880 msgr2=0x7f9a701140a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:58:29.838 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:29.848+0000 7f9a74c4f640 1 -- 192.168.123.100:0/3378897077 shutdown_connections 2026-03-10T07:58:29.838 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:58:29.848+0000 7f9a74c4f640 1 -- 192.168.123.100:0/3378897077 wait complete. 2026-03-10T07:58:29.853 INFO:tasks.workunit.client.0.vm00.stderr:+ seq 1 10 2026-03-10T07:58:29.854 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p 848a4d04-314a-4289-950b-2472b7cc83f9 put obj1 /etc/passwd 2026-03-10T07:58:29.884 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p 848a4d04-314a-4289-950b-2472b7cc83f9 put obj2 /etc/passwd 2026-03-10T07:58:29.910 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p 848a4d04-314a-4289-950b-2472b7cc83f9 put obj3 /etc/passwd 2026-03-10T07:58:29.933 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p 848a4d04-314a-4289-950b-2472b7cc83f9 put obj4 /etc/passwd 2026-03-10T07:58:29.955 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p 848a4d04-314a-4289-950b-2472b7cc83f9 put obj5 /etc/passwd 2026-03-10T07:58:29.977 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p 848a4d04-314a-4289-950b-2472b7cc83f9 put obj6 /etc/passwd 2026-03-10T07:58:30.000 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p 848a4d04-314a-4289-950b-2472b7cc83f9 put obj7 /etc/passwd 2026-03-10T07:58:30.023 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p 848a4d04-314a-4289-950b-2472b7cc83f9 put obj8 /etc/passwd 2026-03-10T07:58:30.048 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p 848a4d04-314a-4289-950b-2472b7cc83f9 put obj9 /etc/passwd 2026-03-10T07:58:30.071 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p 848a4d04-314a-4289-950b-2472b7cc83f9 put obj10 /etc/passwd 2026-03-10T07:58:30.095 INFO:tasks.workunit.client.0.vm00.stderr:+ sleep 30 2026-03-10T07:58:30.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:29 vm00 bash[28005]: audit 2026-03-10T07:58:28.821836+0000 mon.a (mon.0) 3611 : audit [INF] from='client.? 
2026-03-10T07:58:30.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:29 vm00 bash[28005]: cluster 2026-03-10T07:58:28.830979+0000 mon.a (mon.0) 3612 : cluster [DBG] osdmap e765: 8 total, 8 up, 8 in
2026-03-10T07:58:30.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:29 vm00 bash[28005]: audit 2026-03-10T07:58:28.873969+0000 mon.a (mon.0) 3613 : audit [INF] from='client.? 192.168.123.100:0/3378897077' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "app": "rados"}]: dispatch
2026-03-10T07:58:30.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:29 vm00 bash[28005]: cluster 2026-03-10T07:58:29.014888+0000 mgr.y (mgr.24407) 1233 : cluster [DBG] pgmap v1670: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:58:30.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:29 vm00 bash[20701]: audit 2026-03-10T07:58:28.821836+0000 mon.a (mon.0) 3611 : audit [INF] from='client.? 192.168.123.100:0/3378897077' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "app": "rados"}]': finished
2026-03-10T07:58:30.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:29 vm00 bash[20701]: cluster 2026-03-10T07:58:28.830979+0000 mon.a (mon.0) 3612 : cluster [DBG] osdmap e765: 8 total, 8 up, 8 in
2026-03-10T07:58:30.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:29 vm00 bash[20701]: audit 2026-03-10T07:58:28.873969+0000 mon.a (mon.0) 3613 : audit [INF] from='client.? 192.168.123.100:0/3378897077' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "app": "rados"}]: dispatch
2026-03-10T07:58:30.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:29 vm00 bash[20701]: cluster 2026-03-10T07:58:29.014888+0000 mgr.y (mgr.24407) 1233 : cluster [DBG] pgmap v1670: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:58:30.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:29 vm03 bash[23382]: audit 2026-03-10T07:58:28.821836+0000 mon.a (mon.0) 3611 : audit [INF] from='client.? 192.168.123.100:0/3378897077' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "app": "rados"}]': finished
2026-03-10T07:58:30.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:29 vm03 bash[23382]: cluster 2026-03-10T07:58:28.830979+0000 mon.a (mon.0) 3612 : cluster [DBG] osdmap e765: 8 total, 8 up, 8 in
2026-03-10T07:58:30.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:29 vm03 bash[23382]: audit 2026-03-10T07:58:28.873969+0000 mon.a (mon.0) 3613 : audit [INF] from='client.? 192.168.123.100:0/3378897077' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "app": "rados"}]: dispatch
2026-03-10T07:58:30.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:29 vm03 bash[23382]: cluster 2026-03-10T07:58:29.014888+0000 mgr.y (mgr.24407) 1233 : cluster [DBG] pgmap v1670: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail
2026-03-10T07:58:31.125 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:58:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:58:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:58:31.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:30 vm00 bash[28005]: audit 2026-03-10T07:58:29.847089+0000 mon.a (mon.0) 3614 : audit [INF] from='client.? 192.168.123.100:0/3378897077' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "app": "rados"}]': finished
2026-03-10T07:58:31.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:30 vm00 bash[28005]: cluster 2026-03-10T07:58:29.865974+0000 mon.a (mon.0) 3615 : cluster [DBG] osdmap e766: 8 total, 8 up, 8 in
2026-03-10T07:58:31.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:30 vm00 bash[20701]: audit 2026-03-10T07:58:29.847089+0000 mon.a (mon.0) 3614 : audit [INF] from='client.? 192.168.123.100:0/3378897077' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "app": "rados"}]': finished
2026-03-10T07:58:31.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:30 vm00 bash[20701]: cluster 2026-03-10T07:58:29.865974+0000 mon.a (mon.0) 3615 : cluster [DBG] osdmap e766: 8 total, 8 up, 8 in
2026-03-10T07:58:31.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:30 vm03 bash[23382]: audit 2026-03-10T07:58:29.847089+0000 mon.a (mon.0) 3614 : audit [INF] from='client.? 192.168.123.100:0/3378897077' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "app": "rados"}]': finished
2026-03-10T07:58:31.262 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:30 vm03 bash[23382]: cluster 2026-03-10T07:58:29.865974+0000 mon.a (mon.0) 3615 : cluster [DBG] osdmap e766: 8 total, 8 up, 8 in
2026-03-10T07:58:32.125 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:31 vm00 bash[20701]: cluster 2026-03-10T07:58:31.015259+0000 mgr.y (mgr.24407) 1234 : cluster [DBG] pgmap v1672: 176 pgs: 176 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:58:32.125 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:31 vm00 bash[28005]: cluster 2026-03-10T07:58:31.015259+0000 mgr.y (mgr.24407) 1234 : cluster [DBG] pgmap v1672: 176 pgs: 176 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:58:32.261 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:31 vm03 bash[23382]: cluster 2026-03-10T07:58:31.015259+0000 mgr.y (mgr.24407) 1234 : cluster [DBG] pgmap v1672: 176 pgs: 176 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:58:34.359 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:34 vm03 bash[23382]: cluster 2026-03-10T07:58:33.015508+0000 mgr.y (mgr.24407) 1235 : cluster [DBG] pgmap v1673: 176 pgs: 176 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:58:34.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:34 vm00 bash[28005]: cluster 2026-03-10T07:58:33.015508+0000 mgr.y (mgr.24407) 1235 : cluster [DBG] pgmap v1673: 176 pgs: 176 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:58:34.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:34 vm00 bash[20701]: cluster 2026-03-10T07:58:33.015508+0000 mgr.y (mgr.24407) 1235 : cluster [DBG] pgmap v1673: 176 pgs: 176 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:58:34.762 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:58:34 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:58:35.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:35 vm00 bash[28005]: audit 2026-03-10T07:58:34.370635+0000 mgr.y (mgr.24407) 1236 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:58:35.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:35 vm00 bash[20701]: audit 2026-03-10T07:58:34.370635+0000 mgr.y (mgr.24407) 1236 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:58:35.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:35 vm03 bash[23382]: audit 2026-03-10T07:58:34.370635+0000 mgr.y (mgr.24407) 1236 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:58:36.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:36 vm00 bash[28005]: cluster 2026-03-10T07:58:35.016002+0000 mgr.y (mgr.24407) 1237 : cluster [DBG] pgmap v1674: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 4.2 KiB/s wr, 2 op/s
2026-03-10T07:58:36.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:36 vm00 bash[20701]: cluster 2026-03-10T07:58:35.016002+0000 mgr.y (mgr.24407) 1237 : cluster [DBG] pgmap v1674: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 4.2 KiB/s wr, 2 op/s
2026-03-10T07:58:36.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:36 vm03 bash[23382]: cluster 2026-03-10T07:58:35.016002+0000 mgr.y (mgr.24407) 1237 : cluster [DBG] pgmap v1674: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 4.2 KiB/s wr, 2 op/s
2026-03-10T07:58:37.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:37 vm00 bash[28005]: cluster 2026-03-10T07:58:37.043165+0000 mon.a (mon.0) 3616 : cluster [WRN] pool '848a4d04-314a-4289-950b-2472b7cc83f9' is full (reached quota's max_objects: 10)
2026-03-10T07:58:37.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:37 vm00 bash[28005]: cluster 2026-03-10T07:58:37.043326+0000 mon.a (mon.0) 3617 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)
2026-03-10T07:58:37.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:37 vm00 bash[28005]: cluster 2026-03-10T07:58:37.056847+0000 mon.a (mon.0) 3618 : cluster [DBG] osdmap e767: 8 total, 8 up, 8 in
2026-03-10T07:58:37.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:37 vm00 bash[20701]: cluster 2026-03-10T07:58:37.043165+0000 mon.a (mon.0) 3616 : cluster [WRN] pool '848a4d04-314a-4289-950b-2472b7cc83f9' is full (reached quota's max_objects: 10)
2026-03-10T07:58:37.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:37 vm00 bash[20701]: cluster 2026-03-10T07:58:37.043326+0000 mon.a (mon.0) 3617 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)
2026-03-10T07:58:37.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:37 vm00 bash[20701]: cluster 2026-03-10T07:58:37.056847+0000 mon.a (mon.0) 3618 : cluster [DBG] osdmap e767: 8 total, 8 up, 8 in
2026-03-10T07:58:37.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:37 vm03 bash[23382]: cluster 2026-03-10T07:58:37.043165+0000 mon.a (mon.0) 3616 : cluster [WRN] pool '848a4d04-314a-4289-950b-2472b7cc83f9' is full (reached quota's max_objects: 10)
2026-03-10T07:58:37.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:37 vm03 bash[23382]: cluster 2026-03-10T07:58:37.043326+0000 mon.a (mon.0) 3617 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)
2026-03-10T07:58:37.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:37 vm03 bash[23382]: cluster 2026-03-10T07:58:37.056847+0000 mon.a (mon.0) 3618 : cluster [DBG] osdmap e767: 8 total, 8 up, 8 in
2026-03-10T07:58:38.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:38 vm03 bash[23382]: cluster 2026-03-10T07:58:37.016616+0000 mgr.y (mgr.24407) 1238 : cluster [DBG] pgmap v1675: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.7 KiB/s wr, 2 op/s
2026-03-10T07:58:38.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:38 vm00 bash[28005]: cluster 2026-03-10T07:58:37.016616+0000 mgr.y (mgr.24407) 1238 : cluster [DBG] pgmap v1675: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.7 KiB/s wr, 2 op/s
2026-03-10T07:58:38.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:38 vm00 bash[20701]: cluster 2026-03-10T07:58:37.016616+0000 mgr.y (mgr.24407) 1238 : cluster [DBG] pgmap v1675: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.7 KiB/s wr, 2 op/s
2026-03-10T07:58:39.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:39 vm03 bash[23382]: audit 2026-03-10T07:58:39.063408+0000 mon.c (mon.2) 484 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
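At 07:58:37 the quota trip the workunit was sleeping for shows up in the cluster log: mon.a reports the pool full (reached quota's max_objects: 10) and raises the POOL_FULL health check. For this workunit the warning is the expected outcome of the test, not a failure. A hedged sketch of how a script could wait for that state; the polling loop is illustrative, not the workunit's exact code:

  # Illustrative only: poll until the cluster reports POOL_FULL for the test pool.
  until ceph health detail | grep -q POOL_FULL; do
      sleep 5
  done
  ceph osd pool get-quota $POOL   # should report a max objects quota of 10 while the pool is full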
2026-03-10T07:58:39.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:39 vm00 bash[28005]: audit 2026-03-10T07:58:39.063408+0000 mon.c (mon.2) 484 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:58:39.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:39 vm00 bash[20701]: audit 2026-03-10T07:58:39.063408+0000 mon.c (mon.2) 484 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:58:40.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:40 vm03 bash[23382]: cluster 2026-03-10T07:58:39.017087+0000 mgr.y (mgr.24407) 1239 : cluster [DBG] pgmap v1677: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 558 B/s rd, 3.3 KiB/s wr, 1 op/s
2026-03-10T07:58:40.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:40 vm00 bash[28005]: cluster 2026-03-10T07:58:39.017087+0000 mgr.y (mgr.24407) 1239 : cluster [DBG] pgmap v1677: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 558 B/s rd, 3.3 KiB/s wr, 1 op/s
2026-03-10T07:58:40.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:40 vm00 bash[20701]: cluster 2026-03-10T07:58:39.017087+0000 mgr.y (mgr.24407) 1239 : cluster [DBG] pgmap v1677: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 558 B/s rd, 3.3 KiB/s wr, 1 op/s
2026-03-10T07:58:41.375 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:58:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:58:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:58:41.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:41 vm03 bash[23382]: audit 2026-03-10T07:58:40.466726+0000 mon.a (mon.0) 3619 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:58:41.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:41 vm03 bash[23382]: audit 2026-03-10T07:58:40.467332+0000 mon.c (mon.2) 485 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:58:41.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:41 vm00 bash[28005]: audit 2026-03-10T07:58:40.466726+0000 mon.a (mon.0) 3619 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:58:41.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:41 vm00 bash[28005]: audit 2026-03-10T07:58:40.467332+0000 mon.c (mon.2) 485 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:58:41.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:41 vm00 bash[20701]: audit 2026-03-10T07:58:40.466726+0000 mon.a (mon.0) 3619 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:58:41.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:41 vm00 bash[20701]: audit 2026-03-10T07:58:40.467332+0000 mon.c (mon.2) 485 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:58:42.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:42 vm03 bash[23382]: cluster 2026-03-10T07:58:41.017689+0000 mgr.y (mgr.24407) 1240 : cluster [DBG] pgmap v1678: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s
2026-03-10T07:58:42.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:42 vm00 bash[28005]: cluster 2026-03-10T07:58:41.017689+0000 mgr.y (mgr.24407) 1240 : cluster [DBG] pgmap v1678: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s
2026-03-10T07:58:42.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:42 vm00 bash[20701]: cluster 2026-03-10T07:58:41.017689+0000 mgr.y (mgr.24407) 1240 : cluster [DBG] pgmap v1678: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s
2026-03-10T07:58:44.762 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:58:44 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:58:44.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:44 vm03 bash[23382]: cluster 2026-03-10T07:58:43.017935+0000 mgr.y (mgr.24407) 1241 : cluster [DBG] pgmap v1679: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s
2026-03-10T07:58:44.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:44 vm03 bash[23382]: audit 2026-03-10T07:58:44.165312+0000 mon.a (mon.0) 3620 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:58:44.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:44 vm03 bash[23382]: audit 2026-03-10T07:58:44.171669+0000 mon.a (mon.0) 3621 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:58:44.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:44 vm03 bash[23382]: audit 2026-03-10T07:58:44.332533+0000 mon.a (mon.0) 3622 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:58:44.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:44 vm03 bash[23382]: audit 2026-03-10T07:58:44.341071+0000 mon.a (mon.0) 3623 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:58:44.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:44 vm00 bash[28005]: cluster 2026-03-10T07:58:43.017935+0000 mgr.y (mgr.24407) 1241 : cluster [DBG] pgmap v1679: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s
2026-03-10T07:58:44.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:44 vm00 bash[28005]: audit 2026-03-10T07:58:44.165312+0000 mon.a (mon.0) 3620 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:58:44.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:44 vm00 bash[28005]: audit 2026-03-10T07:58:44.171669+0000 mon.a (mon.0) 3621 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:58:44.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:44 vm00 bash[28005]: audit 2026-03-10T07:58:44.332533+0000 mon.a (mon.0) 3622 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:58:44.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:44 vm00 bash[28005]: audit 2026-03-10T07:58:44.341071+0000 mon.a (mon.0) 3623 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:58:44.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:44 vm00 bash[20701]: cluster 2026-03-10T07:58:43.017935+0000 mgr.y (mgr.24407) 1241 : cluster [DBG] pgmap v1679: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s
2026-03-10T07:58:44.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:44 vm00 bash[20701]: audit 2026-03-10T07:58:44.165312+0000 mon.a (mon.0) 3620 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:58:44.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:44 vm00 bash[20701]: audit 2026-03-10T07:58:44.171669+0000 mon.a (mon.0) 3621 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:58:44.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:44 vm00 bash[20701]: audit 2026-03-10T07:58:44.332533+0000 mon.a (mon.0) 3622 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:58:44.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:44 vm00 bash[20701]: audit 2026-03-10T07:58:44.341071+0000 mon.a (mon.0) 3623 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:58:45.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:45 vm03 bash[23382]: audit 2026-03-10T07:58:44.380859+0000 mgr.y (mgr.24407) 1242 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:58:45.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:45 vm03 bash[23382]: audit 2026-03-10T07:58:44.621074+0000 mon.c (mon.2) 486 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:58:45.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:45 vm03 bash[23382]: audit 2026-03-10T07:58:44.621959+0000 mon.c (mon.2) 487 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:58:45.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:45 vm03 bash[23382]: audit 2026-03-10T07:58:44.627503+0000 mon.a (mon.0) 3624 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:58:45.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:45 vm00 bash[28005]: audit 2026-03-10T07:58:44.380859+0000 mgr.y (mgr.24407) 1242 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:58:45.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:45 vm00 bash[28005]: audit 2026-03-10T07:58:44.621074+0000 mon.c (mon.2) 486 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:58:45.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:45 vm00 bash[28005]: audit 2026-03-10T07:58:44.621959+0000 mon.c (mon.2) 487 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:58:45.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:45 vm00 bash[28005]: audit 2026-03-10T07:58:44.627503+0000 mon.a (mon.0) 3624 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:58:45.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:45 vm00 bash[20701]: audit 2026-03-10T07:58:44.380859+0000 mgr.y (mgr.24407) 1242 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:58:45.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:45 vm00 bash[20701]: audit 2026-03-10T07:58:44.621074+0000 mon.c (mon.2) 486 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:58:45.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:45 vm00 bash[20701]: audit 2026-03-10T07:58:44.621959+0000 mon.c (mon.2) 487 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:58:45.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:45 vm00 bash[20701]: audit 2026-03-10T07:58:44.627503+0000 mon.a (mon.0) 3624 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:58:46.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:46 vm03 bash[23382]: cluster 2026-03-10T07:58:45.018754+0000 mgr.y (mgr.24407) 1243 : cluster [DBG] pgmap v1680: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:58:46.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:46 vm00 bash[28005]: cluster 2026-03-10T07:58:45.018754+0000 mgr.y (mgr.24407) 1243 : cluster [DBG] pgmap v1680: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:58:46.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:46 vm00 bash[20701]: cluster 2026-03-10T07:58:45.018754+0000 mgr.y (mgr.24407) 1243 : cluster [DBG] pgmap v1680: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:58:48.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:48 vm03 bash[23382]: cluster 2026-03-10T07:58:47.019002+0000 mgr.y (mgr.24407) 1244 : cluster [DBG] pgmap v1681: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:58:48.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:48 vm00 bash[28005]: cluster 2026-03-10T07:58:47.019002+0000 mgr.y (mgr.24407) 1244 : cluster [DBG] pgmap v1681: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:58:48.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:48 vm00 bash[20701]: cluster 2026-03-10T07:58:47.019002+0000 mgr.y (mgr.24407) 1244 : cluster [DBG] pgmap v1681: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:58:50.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:50 vm03 bash[23382]: cluster 2026-03-10T07:58:49.019271+0000 mgr.y (mgr.24407) 1245 : cluster [DBG] pgmap v1682: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 855 B/s rd, 0 op/s
2026-03-10T07:58:50.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:50 vm00 bash[28005]: cluster 2026-03-10T07:58:49.019271+0000 mgr.y (mgr.24407) 1245 : cluster [DBG] pgmap v1682: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 855 B/s rd, 0 op/s
2026-03-10T07:58:50.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:50 vm00 bash[20701]: cluster 2026-03-10T07:58:49.019271+0000 mgr.y (mgr.24407) 1245 : cluster [DBG] pgmap v1682: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 855 B/s rd, 0 op/s
2026-03-10T07:58:51.375 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:58:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:58:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:58:52.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:52 vm03 bash[23382]: cluster 2026-03-10T07:58:51.019785+0000 mgr.y (mgr.24407) 1246 : cluster [DBG] pgmap v1683: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:58:52.874 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:52 vm00 bash[28005]: cluster 2026-03-10T07:58:51.019785+0000 mgr.y (mgr.24407) 1246 : cluster [DBG] pgmap v1683: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:58:52.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:52 vm00 bash[20701]: cluster 2026-03-10T07:58:51.019785+0000 mgr.y (mgr.24407) 1246 : cluster [DBG] pgmap v1683: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:58:54.762 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:58:54 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:58:54.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:54 vm03 bash[23382]: cluster 2026-03-10T07:58:53.020096+0000 mgr.y (mgr.24407) 1247 : cluster [DBG] pgmap v1684: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:58:54.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:54 vm00 bash[28005]: cluster 2026-03-10T07:58:53.020096+0000 mgr.y (mgr.24407) 1247 : cluster [DBG] pgmap v1684: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:58:54.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:54 vm00 bash[20701]: cluster 2026-03-10T07:58:53.020096+0000 mgr.y (mgr.24407) 1247 : cluster [DBG] pgmap v1684: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:58:55.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:55 vm03 bash[23382]: audit 2026-03-10T07:58:54.390453+0000 mgr.y (mgr.24407) 1248 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:58:55.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:55 vm03 bash[23382]: audit 2026-03-10T07:58:55.473185+0000 mon.c (mon.2) 488 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:58:55.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:55 vm00 bash[28005]: audit 2026-03-10T07:58:54.390453+0000 mgr.y (mgr.24407) 1248 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:58:55.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:55 vm00 bash[28005]: audit 2026-03-10T07:58:55.473185+0000 mon.c (mon.2) 488 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:58:55.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:55 vm00 bash[20701]: audit 2026-03-10T07:58:54.390453+0000 mgr.y (mgr.24407) 1248 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:58:55.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:55 vm00 bash[20701]: audit 2026-03-10T07:58:55.473185+0000 mon.c (mon.2) 488 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:58:56.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:56 vm00 bash[28005]: cluster 2026-03-10T07:58:55.020923+0000 mgr.y (mgr.24407) 1249 : cluster [DBG] pgmap v1685: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:58:56.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:56 vm00 bash[20701]: cluster 2026-03-10T07:58:55.020923+0000 mgr.y (mgr.24407) 1249 : cluster [DBG] pgmap v1685: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:58:57.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:56 vm03 bash[23382]: cluster 2026-03-10T07:58:55.020923+0000 mgr.y (mgr.24407) 1249 : cluster [DBG] pgmap v1685: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:58:57.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:58:57 vm00 bash[28005]: cluster 2026-03-10T07:58:57.021153+0000 mgr.y (mgr.24407) 1250 : cluster [DBG] pgmap v1686: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:58:57.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:58:57 vm00 bash[20701]: cluster 2026-03-10T07:58:57.021153+0000 mgr.y (mgr.24407) 1250 : cluster [DBG] pgmap v1686: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:58:58.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:58:57 vm03 bash[23382]: cluster 2026-03-10T07:58:57.021153+0000 mgr.y (mgr.24407) 1250 : cluster [DBG] pgmap v1686: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:59:00.096 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=131391
2026-03-10T07:59:00.096 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool set-quota 848a4d04-314a-4289-950b-2472b7cc83f9 max_objects 100
2026-03-10T07:59:00.096 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p 848a4d04-314a-4289-950b-2472b7cc83f9 put onemore /etc/passwd
2026-03-10T07:59:00.153 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.160+0000 7fde45075640 1 -- 192.168.123.100:0/3600146319 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fde40108150 msgr2=0x7fde40106450 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:59:00.154 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.160+0000 7fde45075640 1 --2- 192.168.123.100:0/3600146319 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fde40108150 0x7fde40106450 secure :-1 s=READY pgs=3132 cs=0 l=1 rev1=1 crypto rx=0x7fde2800fcb0 tx=0x7fde28020bf0 comp rx=0 tx=0).stop
2026-03-10T07:59:00.154 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.160+0000 7fde45075640 1 -- 192.168.123.100:0/3600146319 shutdown_connections
2026-03-10T07:59:00.154 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.160+0000 7fde45075640 1 --2- 192.168.123.100:0/3600146319 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fde40106b80 0x7fde4010e700 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:59:00.154 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.160+0000 7fde45075640 1 --2- 192.168.123.100:0/3600146319 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fde40108150 0x7fde40106450 unknown :-1 s=CLOSED pgs=3132 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:59:00.154 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.160+0000 7fde45075640 1 --2- 192.168.123.100:0/3600146319 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fde401077a0 0x7fde40107b80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:59:00.154 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.160+0000 7fde45075640 1 -- 192.168.123.100:0/3600146319 >> 192.168.123.100:0/3600146319 conn(0x7fde400fc820 msgr2=0x7fde400fec40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:59:00.154 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.160+0000 7fde45075640 1 -- 192.168.123.100:0/3600146319 shutdown_connections
2026-03-10T07:59:00.154 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde45075640 1 -- 192.168.123.100:0/3600146319 wait complete.
2026-03-10T07:59:00.154 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde45075640 1 Processor -- start
2026-03-10T07:59:00.154 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde45075640 1 -- start start
2026-03-10T07:59:00.154 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde45075640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fde40106b80 0x7fde401a08f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:59:00.154 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde45075640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fde401077a0 0x7fde401a0e30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:59:00.154 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde45075640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fde40108150 0x7fde4019aab0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:59:00.154 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde45075640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fde401126e0 con 0x7fde40108150
2026-03-10T07:59:00.154 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde45075640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7fde40112560 con 0x7fde401077a0
2026-03-10T07:59:00.154 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde45075640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fde40112860 con 0x7fde40106b80
2026-03-10T07:59:00.154 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde3f577640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fde40108150 0x7fde4019aab0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:59:00.154 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde3ed76640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fde40106b80
0x7fde401a08f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:59:00.154 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde3ed76640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fde40106b80 0x7fde401a08f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:57066/0 (socket says 192.168.123.100:57066) 2026-03-10T07:59:00.154 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde3ed76640 1 -- 192.168.123.100:0/12024472 learned_addr learned my addr 192.168.123.100:0/12024472 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:59:00.154 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde3e575640 1 --2- 192.168.123.100:0/12024472 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fde401077a0 0x7fde401a0e30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:59:00.155 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde3f577640 1 -- 192.168.123.100:0/12024472 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fde40106b80 msgr2=0x7fde401a08f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:00.155 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde3f577640 1 --2- 192.168.123.100:0/12024472 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fde40106b80 0x7fde401a08f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:00.155 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde3f577640 1 -- 192.168.123.100:0/12024472 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fde401077a0 msgr2=0x7fde401a0e30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:00.155 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde3f577640 1 --2- 192.168.123.100:0/12024472 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fde401077a0 0x7fde401a0e30 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:00.155 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde3f577640 1 -- 192.168.123.100:0/12024472 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fde4019b280 con 0x7fde40108150 2026-03-10T07:59:00.155 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde3f577640 1 --2- 192.168.123.100:0/12024472 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fde40108150 0x7fde4019aab0 secure :-1 s=READY pgs=3133 cs=0 l=1 rev1=1 crypto rx=0x7fde3000ad90 tx=0x7fde30004420 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:59:00.155 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde1ffff640 1 -- 192.168.123.100:0/12024472 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fde30014620 con 0x7fde40108150 2026-03-10T07:59:00.155 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde1ffff640 1 -- 192.168.123.100:0/12024472 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 
978+0+0 (secure 0 0 0) 0x7fde30019070 con 0x7fde40108150 2026-03-10T07:59:00.155 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde45075640 1 -- 192.168.123.100:0/12024472 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fde4019b570 con 0x7fde40108150 2026-03-10T07:59:00.155 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde1ffff640 1 -- 192.168.123.100:0/12024472 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fde300079c0 con 0x7fde40108150 2026-03-10T07:59:00.155 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde45075640 1 -- 192.168.123.100:0/12024472 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fde401a77c0 con 0x7fde40108150 2026-03-10T07:59:00.155 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde3ed76640 1 --2- 192.168.123.100:0/12024472 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fde40106b80 0x7fde401a08f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-10T07:59:00.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde1ffff640 1 -- 192.168.123.100:0/12024472 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7fde30005ce0 con 0x7fde40108150 2026-03-10T07:59:00.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde1ffff640 1 --2- 192.168.123.100:0/12024472 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fde140777e0 0x7fde14079ca0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:59:00.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde1ffff640 1 -- 192.168.123.100:0/12024472 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(767..767 src has 258..767) ==== 9073+0+0 (secure 0 0 0) 0x7fde3009a410 con 0x7fde40108150 2026-03-10T07:59:00.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde3ed76640 1 --2- 192.168.123.100:0/12024472 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fde140777e0 0x7fde14079ca0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:59:00.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.164+0000 7fde3ed76640 1 --2- 192.168.123.100:0/12024472 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fde140777e0 0x7fde14079ca0 secure :-1 s=READY pgs=4318 cs=0 l=1 rev1=1 crypto rx=0x7fde4019bbf0 tx=0x7fde340074e0 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:59:00.158 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.168+0000 7fde45075640 1 -- 192.168.123.100:0/12024472 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fde04005190 con 0x7fde40108150 2026-03-10T07:59:00.160 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.168+0000 7fde1ffff640 1 -- 192.168.123.100:0/12024472 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=768}) -- 0x7fde140833e0 con 0x7fde40108150 2026-03-10T07:59:00.161 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.168+0000 7fde1ffff640 1 -- 
192.168.123.100:0/12024472 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fde300667d0 con 0x7fde40108150
2026-03-10T07:59:00.246 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:00.256+0000 7fde45075640 1 -- 192.168.123.100:0/12024472 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "100"} v 0) -- 0x7fde04005480 con 0x7fde40108150
2026-03-10T07:59:00.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:00 vm00 bash[28005]: cluster 2026-03-10T07:58:59.021445+0000 mgr.y (mgr.24407) 1251 : cluster [DBG] pgmap v1687: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:59:00.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:00 vm00 bash[20701]: cluster 2026-03-10T07:58:59.021445+0000 mgr.y (mgr.24407) 1251 : cluster [DBG] pgmap v1687: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:59:00.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:00 vm03 bash[23382]: cluster 2026-03-10T07:58:59.021445+0000 mgr.y (mgr.24407) 1251 : cluster [DBG] pgmap v1687: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:59:01.092 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:01.100+0000 7fde1ffff640 1 -- 192.168.123.100:0/12024472 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "100"}]=0 set-quota max_objects = 100 for pool 848a4d04-314a-4289-950b-2472b7cc83f9 v768) ==== 225+0+0 (secure 0 0 0) 0x7fde3006b680 con 0x7fde40108150
2026-03-10T07:59:01.110 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:01.120+0000 7fde1ffff640 1 -- 192.168.123.100:0/12024472 <== mon.0 v2:192.168.123.100:3300/0 8 ==== osd_map(768..768 src has 258..768) ==== 628+0+0 (secure 0 0 0) 0x7fde3005e870 con 0x7fde40108150
2026-03-10T07:59:01.110 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:01.120+0000 7fde1ffff640 1 -- 192.168.123.100:0/12024472 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=769}) -- 0x7fde14084360 con 0x7fde40108150
2026-03-10T07:59:01.148 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:01.156+0000 7fde45075640 1 -- 192.168.123.100:0/12024472 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "100"} v 0) -- 0x7fde04004820 con 0x7fde40108150
2026-03-10T07:59:01.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:01 vm00 bash[28005]: audit 2026-03-10T07:59:00.257795+0000 mon.a (mon.0) 3625 : audit [INF] from='client.? 192.168.123.100:0/12024472' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "100"}]: dispatch
2026-03-10T07:59:01.375 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:59:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:59:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:59:01.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:01 vm00 bash[20701]: audit 2026-03-10T07:59:00.257795+0000 mon.a (mon.0) 3625 : audit [INF] from='client.? 192.168.123.100:0/12024472' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "100"}]: dispatch
2026-03-10T07:59:01.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:01 vm03 bash[23382]: audit 2026-03-10T07:59:00.257795+0000 mon.a (mon.0) 3625 : audit [INF] from='client.? 192.168.123.100:0/12024472' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "100"}]: dispatch
2026-03-10T07:59:02.040 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.048+0000 7fde1ffff640 1 -- 192.168.123.100:0/12024472 <== mon.0 v2:192.168.123.100:3300/0 9 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "100"}]=0 set-quota max_objects = 100 for pool 848a4d04-314a-4289-950b-2472b7cc83f9 v769) ==== 225+0+0 (secure 0 0 0) 0x7fde3009a110 con 0x7fde40108150
2026-03-10T07:59:02.040 INFO:tasks.workunit.client.0.vm00.stderr:set-quota max_objects = 100 for pool 848a4d04-314a-4289-950b-2472b7cc83f9
2026-03-10T07:59:02.041 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.048+0000 7fde1ffff640 1 -- 192.168.123.100:0/12024472 <== mon.0 v2:192.168.123.100:3300/0 10 ==== osd_map(769..769 src has 258..769) ==== 628+0+0 (secure 0 0 0) 0x7fde30097cf0 con 0x7fde40108150
2026-03-10T07:59:02.041 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.048+0000 7fde1ffff640 1 -- 192.168.123.100:0/12024472 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=770}) -- 0x7fde140843a0 con 0x7fde40108150
2026-03-10T07:59:02.042 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.052+0000 7fde45075640 1 -- 192.168.123.100:0/12024472 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fde140777e0 msgr2=0x7fde14079ca0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:59:02.042 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.052+0000 7fde45075640 1 --2- 192.168.123.100:0/12024472 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fde140777e0 0x7fde14079ca0 secure :-1 s=READY pgs=4318 cs=0 l=1 rev1=1 crypto rx=0x7fde4019bbf0 tx=0x7fde340074e0 comp rx=0 tx=0).stop
2026-03-10T07:59:02.042 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.052+0000 7fde45075640 1 -- 192.168.123.100:0/12024472 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fde40108150 msgr2=0x7fde4019aab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:59:02.042 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.052+0000 7fde45075640 1 --2- 192.168.123.100:0/12024472 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fde40108150 0x7fde4019aab0 secure :-1 s=READY pgs=3133 cs=0 l=1 rev1=1 crypto rx=0x7fde3000ad90 tx=0x7fde30004420 comp rx=0 tx=0).stop
2026-03-10T07:59:02.043 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.052+0000 7fde45075640 1 -- 192.168.123.100:0/12024472 shutdown_connections
2026-03-10T07:59:02.043 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.052+0000 7fde45075640 1 --2- 192.168.123.100:0/12024472 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fde140777e0 0x7fde14079ca0 unknown :-1 s=CLOSED pgs=4318 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:59:02.043 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.052+0000 7fde45075640 1 --2- 192.168.123.100:0/12024472 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fde40108150 0x7fde4019aab0 unknown :-1 s=CLOSED pgs=3133 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:59:02.043 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.052+0000
7fde45075640 1 --2- 192.168.123.100:0/12024472 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fde401077a0 0x7fde401a0e30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:02.043 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.052+0000 7fde45075640 1 --2- 192.168.123.100:0/12024472 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fde40106b80 0x7fde401a08f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:02.043 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.052+0000 7fde45075640 1 -- 192.168.123.100:0/12024472 >> 192.168.123.100:0/12024472 conn(0x7fde400fc820 msgr2=0x7fde40104700 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:59:02.043 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.052+0000 7fde45075640 1 -- 192.168.123.100:0/12024472 shutdown_connections 2026-03-10T07:59:02.043 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.052+0000 7fde45075640 1 -- 192.168.123.100:0/12024472 wait complete. 2026-03-10T07:59:02.062 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 131391 2026-03-10T07:59:02.067 INFO:tasks.workunit.client.0.vm00.stderr:+ [ 0 -ne 0 ] 2026-03-10T07:59:02.067 INFO:tasks.workunit.client.0.vm00.stderr:+ true 2026-03-10T07:59:02.067 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p 848a4d04-314a-4289-950b-2472b7cc83f9 put twomore /etc/passwd 2026-03-10T07:59:02.090 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool set-quota 848a4d04-314a-4289-950b-2472b7cc83f9 max_bytes 100 2026-03-10T07:59:02.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.164+0000 7efd6276d640 1 -- 192.168.123.100:0/1972778663 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd5c10cf30 msgr2=0x7efd5c10f3d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:02.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.164+0000 7efd6276d640 1 --2- 192.168.123.100:0/1972778663 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd5c10cf30 0x7efd5c10f3d0 secure :-1 s=READY pgs=3136 cs=0 l=1 rev1=1 crypto rx=0x7efd5000b0a0 tx=0x7efd5001cb80 comp rx=0 tx=0).stop 2026-03-10T07:59:02.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.164+0000 7efd6276d640 1 -- 192.168.123.100:0/1972778663 shutdown_connections 2026-03-10T07:59:02.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.164+0000 7efd6276d640 1 --2- 192.168.123.100:0/1972778663 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd5c10cf30 0x7efd5c10f3d0 unknown :-1 s=CLOSED pgs=3136 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:02.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.164+0000 7efd6276d640 1 --2- 192.168.123.100:0/1972778663 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7efd5c104500 0x7efd5c10c9f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:02.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.164+0000 7efd6276d640 1 --2- 192.168.123.100:0/1972778663 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd5c103be0 0x7efd5c103fc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:02.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.164+0000 7efd6276d640 1 -- 192.168.123.100:0/1972778663 >> 192.168.123.100:0/1972778663 conn(0x7efd5c0fd500 
msgr2=0x7efd5c0ff920 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:59:02.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.164+0000 7efd6276d640 1 -- 192.168.123.100:0/1972778663 shutdown_connections 2026-03-10T07:59:02.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.164+0000 7efd6276d640 1 -- 192.168.123.100:0/1972778663 wait complete. 2026-03-10T07:59:02.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.164+0000 7efd6276d640 1 Processor -- start 2026-03-10T07:59:02.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.164+0000 7efd6276d640 1 -- start start 2026-03-10T07:59:02.158 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd6276d640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd5c103be0 0x7efd5c103230 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:59:02.158 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd6276d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd5c104500 0x7efd5c1017c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:59:02.158 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd6276d640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7efd5c10cf30 0x7efd5c101d00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:59:02.158 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd6276d640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7efd5c114750 con 0x7efd5c104500 2026-03-10T07:59:02.158 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd6276d640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7efd5c1145d0 con 0x7efd5c10cf30 2026-03-10T07:59:02.158 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd6276d640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7efd5c1148d0 con 0x7efd5c103be0 2026-03-10T07:59:02.158 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd5b7fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd5c104500 0x7efd5c1017c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:59:02.158 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd5b7fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd5c104500 0x7efd5c1017c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:46534/0 (socket says 192.168.123.100:46534) 2026-03-10T07:59:02.158 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd5b7fe640 1 -- 192.168.123.100:0/4265685541 learned_addr learned my addr 192.168.123.100:0/4265685541 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:59:02.158 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd60ce3640 1 --2- 192.168.123.100:0/4265685541 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7efd5c10cf30 0x7efd5c101d00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-10T07:59:02.158 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd5b7fe640 1 -- 192.168.123.100:0/4265685541 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd5c103be0 msgr2=0x7efd5c103230 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:02.158 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd5bfff640 1 --2- 192.168.123.100:0/4265685541 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd5c103be0 0x7efd5c103230 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:59:02.158 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd5b7fe640 1 --2- 192.168.123.100:0/4265685541 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd5c103be0 0x7efd5c103230 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:02.158 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd5b7fe640 1 -- 192.168.123.100:0/4265685541 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7efd5c10cf30 msgr2=0x7efd5c101d00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:02.158 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd5b7fe640 1 --2- 192.168.123.100:0/4265685541 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7efd5c10cf30 0x7efd5c101d00 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:02.158 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd5b7fe640 1 -- 192.168.123.100:0/4265685541 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7efd5c102570 con 0x7efd5c104500 2026-03-10T07:59:02.158 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd5bfff640 1 --2- 192.168.123.100:0/4265685541 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd5c103be0 0x7efd5c103230 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-10T07:59:02.159 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd5b7fe640 1 --2- 192.168.123.100:0/4265685541 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd5c104500 0x7efd5c1017c0 secure :-1 s=READY pgs=3134 cs=0 l=1 rev1=1 crypto rx=0x7efd4800d950 tx=0x7efd4800de10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:59:02.159 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd597fa640 1 -- 192.168.123.100:0/4265685541 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7efd48014070 con 0x7efd5c104500 2026-03-10T07:59:02.159 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd597fa640 1 -- 192.168.123.100:0/4265685541 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7efd480044e0 con 0x7efd5c104500 2026-03-10T07:59:02.159 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd597fa640 1 -- 192.168.123.100:0/4265685541 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7efd48002d50 con 0x7efd5c104500 2026-03-10T07:59:02.159 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd6276d640 1 -- 192.168.123.100:0/4265685541 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7efd5c06c900 con 0x7efd5c104500 2026-03-10T07:59:02.159 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd6276d640 1 -- 192.168.123.100:0/4265685541 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7efd5c06cdc0 con 0x7efd5c104500 2026-03-10T07:59:02.160 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.168+0000 7efd6276d640 1 -- 192.168.123.100:0/4265685541 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7efd24005190 con 0x7efd5c104500 2026-03-10T07:59:02.164 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.172+0000 7efd597fa640 1 -- 192.168.123.100:0/4265685541 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7efd4800b840 con 0x7efd5c104500 2026-03-10T07:59:02.164 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.172+0000 7efd597fa640 1 --2- 192.168.123.100:0/4265685541 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7efd300777e0 0x7efd30079ca0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:59:02.164 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.172+0000 7efd5bfff640 1 --2- 192.168.123.100:0/4265685541 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7efd300777e0 0x7efd30079ca0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:59:02.164 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.172+0000 7efd597fa640 1 -- 192.168.123.100:0/4265685541 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(769..769 src has 258..769) ==== 9073+0+0 (secure 0 0 0) 0x7efd4809abc0 con 0x7efd5c104500 2026-03-10T07:59:02.164 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.172+0000 7efd5bfff640 1 --2- 192.168.123.100:0/4265685541 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7efd300777e0 0x7efd30079ca0 secure 
:-1 s=READY pgs=4320 cs=0 l=1 rev1=1 crypto rx=0x7efd40005fd0 tx=0x7efd40005ea0 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:59:02.164 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.172+0000 7efd597fa640 1 -- 192.168.123.100:0/4265685541 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7efd480ca9f0 con 0x7efd5c104500
2026-03-10T07:59:02.248 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:02.256+0000 7efd6276d640 1 -- 192.168.123.100:0/4265685541 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"} v 0) -- 0x7efd24005480 con 0x7efd5c104500
2026-03-10T07:59:02.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:02 vm00 bash[20701]: cluster 2026-03-10T07:59:01.021944+0000 mgr.y (mgr.24407) 1252 : cluster [DBG] pgmap v1688: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:59:02.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:02 vm00 bash[20701]: audit 2026-03-10T07:59:01.103460+0000 mon.a (mon.0) 3626 : audit [INF] from='client.? 192.168.123.100:0/12024472' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "100"}]': finished
2026-03-10T07:59:02.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:02 vm00 bash[20701]: cluster 2026-03-10T07:59:01.111103+0000 mon.a (mon.0) 3627 : cluster [DBG] osdmap e768: 8 total, 8 up, 8 in
2026-03-10T07:59:02.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:02 vm00 bash[20701]: audit 2026-03-10T07:59:01.160084+0000 mon.a (mon.0) 3628 : audit [INF] from='client.? 192.168.123.100:0/12024472' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "100"}]: dispatch
2026-03-10T07:59:02.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:02 vm00 bash[20701]: cluster 2026-03-10T07:59:02.046846+0000 mon.a (mon.0) 3629 : cluster [INF] pool '848a4d04-314a-4289-950b-2472b7cc83f9' no longer out of quota; removing NO_QUOTA flag
2026-03-10T07:59:02.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:02 vm00 bash[20701]: cluster 2026-03-10T07:59:02.047080+0000 mon.a (mon.0) 3630 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full)
2026-03-10T07:59:02.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:02 vm00 bash[20701]: audit 2026-03-10T07:59:02.051190+0000 mon.a (mon.0) 3631 : audit [INF] from='client.? 192.168.123.100:0/12024472' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "100"}]': finished
2026-03-10T07:59:02.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:02 vm00 bash[20701]: cluster 2026-03-10T07:59:02.070202+0000 mon.a (mon.0) 3632 : cluster [DBG] osdmap e769: 8 total, 8 up, 8 in
2026-03-10T07:59:02.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:02 vm00 bash[28005]: cluster 2026-03-10T07:59:01.021944+0000 mgr.y (mgr.24407) 1252 : cluster [DBG] pgmap v1688: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:59:02.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:02 vm00 bash[28005]: audit 2026-03-10T07:59:01.103460+0000 mon.a (mon.0) 3626 : audit [INF] from='client.? 192.168.123.100:0/12024472' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "100"}]': finished
2026-03-10T07:59:02.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:02 vm00 bash[28005]: cluster 2026-03-10T07:59:01.111103+0000 mon.a (mon.0) 3627 : cluster [DBG] osdmap e768: 8 total, 8 up, 8 in
2026-03-10T07:59:02.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:02 vm00 bash[28005]: audit 2026-03-10T07:59:01.160084+0000 mon.a (mon.0) 3628 : audit [INF] from='client.? 192.168.123.100:0/12024472' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "100"}]: dispatch
2026-03-10T07:59:02.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:02 vm00 bash[28005]: cluster 2026-03-10T07:59:02.046846+0000 mon.a (mon.0) 3629 : cluster [INF] pool '848a4d04-314a-4289-950b-2472b7cc83f9' no longer out of quota; removing NO_QUOTA flag
2026-03-10T07:59:02.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:02 vm00 bash[28005]: cluster 2026-03-10T07:59:02.047080+0000 mon.a (mon.0) 3630 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full)
2026-03-10T07:59:02.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:02 vm00 bash[28005]: audit 2026-03-10T07:59:02.051190+0000 mon.a (mon.0) 3631 : audit [INF] from='client.? 192.168.123.100:0/12024472' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "100"}]': finished
2026-03-10T07:59:02.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:02 vm00 bash[28005]: cluster 2026-03-10T07:59:02.070202+0000 mon.a (mon.0) 3632 : cluster [DBG] osdmap e769: 8 total, 8 up, 8 in
2026-03-10T07:59:02.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:02 vm03 bash[23382]: cluster 2026-03-10T07:59:01.021944+0000 mgr.y (mgr.24407) 1252 : cluster [DBG] pgmap v1688: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:59:02.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:02 vm03 bash[23382]: audit 2026-03-10T07:59:01.103460+0000 mon.a (mon.0) 3626 : audit [INF] from='client.? 192.168.123.100:0/12024472' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "100"}]': finished
2026-03-10T07:59:02.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:02 vm03 bash[23382]: cluster 2026-03-10T07:59:01.111103+0000 mon.a (mon.0) 3627 : cluster [DBG] osdmap e768: 8 total, 8 up, 8 in
2026-03-10T07:59:02.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:02 vm03 bash[23382]: audit 2026-03-10T07:59:01.160084+0000 mon.a (mon.0) 3628 : audit [INF] from='client.? 192.168.123.100:0/12024472' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "100"}]: dispatch
2026-03-10T07:59:02.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:02 vm03 bash[23382]: cluster 2026-03-10T07:59:02.046846+0000 mon.a (mon.0) 3629 : cluster [INF] pool '848a4d04-314a-4289-950b-2472b7cc83f9' no longer out of quota; removing NO_QUOTA flag
2026-03-10T07:59:02.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:02 vm03 bash[23382]: cluster 2026-03-10T07:59:02.047080+0000 mon.a (mon.0) 3630 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full)
2026-03-10T07:59:02.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:02 vm03 bash[23382]: audit 2026-03-10T07:59:02.051190+0000 mon.a (mon.0) 3631 : audit [INF] from='client.? 192.168.123.100:0/12024472' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "100"}]': finished
2026-03-10T07:59:02.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:02 vm03 bash[23382]: cluster 2026-03-10T07:59:02.070202+0000 mon.a (mon.0) 3632 : cluster [DBG] osdmap e769: 8 total, 8 up, 8 in
2026-03-10T07:59:03.107 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:03.116+0000 7efd597fa640 1 -- 192.168.123.100:0/4265685541 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]=0 set-quota max_bytes = 100 for pool 848a4d04-314a-4289-950b-2472b7cc83f9 v770) ==== 221+0+0 (secure 0 0 0) 0x7efd4806b020 con 0x7efd5c104500
2026-03-10T07:59:03.162 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:03.172+0000 7efd6276d640 1 -- 192.168.123.100:0/4265685541 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"} v 0) -- 0x7efd24004a90 con 0x7efd5c104500
2026-03-10T07:59:03.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:03 vm00 bash[28005]: audit 2026-03-10T07:59:02.259711+0000 mon.a (mon.0) 3633 : audit [INF] from='client.? 192.168.123.100:0/4265685541' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]: dispatch
2026-03-10T07:59:03.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:03 vm00 bash[20701]: audit 2026-03-10T07:59:02.259711+0000 mon.a (mon.0) 3633 : audit [INF] from='client.? 192.168.123.100:0/4265685541' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]: dispatch
2026-03-10T07:59:03.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:03 vm03 bash[23382]: audit 2026-03-10T07:59:02.259711+0000 mon.a (mon.0) 3633 : audit [INF] from='client.? 192.168.123.100:0/4265685541' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]: dispatch
2026-03-10T07:59:04.127 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:04.136+0000 7efd597fa640 1 -- 192.168.123.100:0/4265685541 <== mon.0 v2:192.168.123.100:3300/0 8 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]=0 set-quota max_bytes = 100 for pool 848a4d04-314a-4289-950b-2472b7cc83f9 v771) ==== 221+0+0 (secure 0 0 0) 0x7efd48066ec0 con 0x7efd5c104500
2026-03-10T07:59:04.127 INFO:tasks.workunit.client.0.vm00.stderr:set-quota max_bytes = 100 for pool 848a4d04-314a-4289-950b-2472b7cc83f9
2026-03-10T07:59:04.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:04.136+0000 7efd6276d640 1 -- 192.168.123.100:0/4265685541 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7efd300777e0 msgr2=0x7efd30079ca0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:59:04.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:04.136+0000 7efd6276d640 1 --2- 192.168.123.100:0/4265685541 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7efd300777e0 0x7efd30079ca0 secure :-1 s=READY pgs=4320 cs=0 l=1 rev1=1 crypto rx=0x7efd40005fd0 tx=0x7efd40005ea0 comp rx=0 tx=0).stop
2026-03-10T07:59:04.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:04.136+0000 7efd6276d640 1 -- 192.168.123.100:0/4265685541 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd5c104500 msgr2=0x7efd5c1017c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:59:04.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:04.136+0000 7efd6276d640 1 --2- 192.168.123.100:0/4265685541 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd5c104500 0x7efd5c1017c0 secure :-1 s=READY pgs=3134 cs=0 l=1 rev1=1 crypto rx=0x7efd4800d950 tx=0x7efd4800de10 comp rx=0 tx=0).stop
2026-03-10T07:59:04.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:04.136+0000 7efd6276d640 1 -- 192.168.123.100:0/4265685541 shutdown_connections
2026-03-10T07:59:04.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:04.136+0000 7efd6276d640 1 --2- 192.168.123.100:0/4265685541 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7efd300777e0 0x7efd30079ca0 unknown :-1 s=CLOSED pgs=4320 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:59:04.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:04.136+0000 7efd6276d640 1 --2- 192.168.123.100:0/4265685541 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7efd5c10cf30 0x7efd5c101d00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:59:04.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:04.136+0000 7efd6276d640 1 --2- 192.168.123.100:0/4265685541 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd5c104500 0x7efd5c1017c0 unknown :-1 s=CLOSED pgs=3134 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:59:04.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:04.136+0000 7efd6276d640 1 --2- 192.168.123.100:0/4265685541 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efd5c103be0 0x7efd5c103230 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:59:04.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:04.136+0000 7efd6276d640 1 -- 192.168.123.100:0/4265685541 >> 192.168.123.100:0/4265685541 conn(0x7efd5c0fd500 msgr2=0x7efd5c10e740 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:59:04.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:04.136+0000 7efd6276d640 1 -- 192.168.123.100:0/4265685541 shutdown_connections 2026-03-10T07:59:04.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:04.136+0000 7efd6276d640 1 -- 192.168.123.100:0/4265685541 wait complete. 2026-03-10T07:59:04.148 INFO:tasks.workunit.client.0.vm00.stderr:+ sleep 30 2026-03-10T07:59:04.512 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:59:04 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:59:04.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:04 vm03 bash[23382]: cluster 2026-03-10T07:59:03.022202+0000 mgr.y (mgr.24407) 1253 : cluster [DBG] pgmap v1691: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T07:59:04.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:04 vm03 bash[23382]: cluster 2026-03-10T07:59:03.022202+0000 mgr.y (mgr.24407) 1253 : cluster [DBG] pgmap v1691: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T07:59:04.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:04 vm03 bash[23382]: audit 2026-03-10T07:59:03.118075+0000 mon.a (mon.0) 3634 : audit [INF] from='client.? 192.168.123.100:0/4265685541' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T07:59:04.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:04 vm03 bash[23382]: audit 2026-03-10T07:59:03.118075+0000 mon.a (mon.0) 3634 : audit [INF] from='client.? 192.168.123.100:0/4265685541' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T07:59:04.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:04 vm03 bash[23382]: cluster 2026-03-10T07:59:03.134399+0000 mon.a (mon.0) 3635 : cluster [DBG] osdmap e770: 8 total, 8 up, 8 in 2026-03-10T07:59:04.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:04 vm03 bash[23382]: cluster 2026-03-10T07:59:03.134399+0000 mon.a (mon.0) 3635 : cluster [DBG] osdmap e770: 8 total, 8 up, 8 in 2026-03-10T07:59:04.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:04 vm03 bash[23382]: audit 2026-03-10T07:59:03.174400+0000 mon.a (mon.0) 3636 : audit [INF] from='client.? 192.168.123.100:0/4265685541' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T07:59:04.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:04 vm03 bash[23382]: audit 2026-03-10T07:59:03.174400+0000 mon.a (mon.0) 3636 : audit [INF] from='client.? 
192.168.123.100:0/4265685541' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T07:59:04.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:04 vm00 bash[28005]: cluster 2026-03-10T07:59:03.022202+0000 mgr.y (mgr.24407) 1253 : cluster [DBG] pgmap v1691: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T07:59:04.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:04 vm00 bash[28005]: cluster 2026-03-10T07:59:03.022202+0000 mgr.y (mgr.24407) 1253 : cluster [DBG] pgmap v1691: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T07:59:04.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:04 vm00 bash[28005]: audit 2026-03-10T07:59:03.118075+0000 mon.a (mon.0) 3634 : audit [INF] from='client.? 192.168.123.100:0/4265685541' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T07:59:04.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:04 vm00 bash[28005]: audit 2026-03-10T07:59:03.118075+0000 mon.a (mon.0) 3634 : audit [INF] from='client.? 192.168.123.100:0/4265685541' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T07:59:04.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:04 vm00 bash[28005]: cluster 2026-03-10T07:59:03.134399+0000 mon.a (mon.0) 3635 : cluster [DBG] osdmap e770: 8 total, 8 up, 8 in 2026-03-10T07:59:04.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:04 vm00 bash[28005]: cluster 2026-03-10T07:59:03.134399+0000 mon.a (mon.0) 3635 : cluster [DBG] osdmap e770: 8 total, 8 up, 8 in 2026-03-10T07:59:04.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:04 vm00 bash[28005]: audit 2026-03-10T07:59:03.174400+0000 mon.a (mon.0) 3636 : audit [INF] from='client.? 192.168.123.100:0/4265685541' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T07:59:04.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:04 vm00 bash[28005]: audit 2026-03-10T07:59:03.174400+0000 mon.a (mon.0) 3636 : audit [INF] from='client.? 192.168.123.100:0/4265685541' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T07:59:04.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:04 vm00 bash[20701]: cluster 2026-03-10T07:59:03.022202+0000 mgr.y (mgr.24407) 1253 : cluster [DBG] pgmap v1691: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T07:59:04.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:04 vm00 bash[20701]: cluster 2026-03-10T07:59:03.022202+0000 mgr.y (mgr.24407) 1253 : cluster [DBG] pgmap v1691: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T07:59:04.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:04 vm00 bash[20701]: audit 2026-03-10T07:59:03.118075+0000 mon.a (mon.0) 3634 : audit [INF] from='client.? 
192.168.123.100:0/4265685541' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T07:59:04.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:04 vm00 bash[20701]: audit 2026-03-10T07:59:03.118075+0000 mon.a (mon.0) 3634 : audit [INF] from='client.? 192.168.123.100:0/4265685541' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T07:59:04.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:04 vm00 bash[20701]: cluster 2026-03-10T07:59:03.134399+0000 mon.a (mon.0) 3635 : cluster [DBG] osdmap e770: 8 total, 8 up, 8 in 2026-03-10T07:59:04.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:04 vm00 bash[20701]: cluster 2026-03-10T07:59:03.134399+0000 mon.a (mon.0) 3635 : cluster [DBG] osdmap e770: 8 total, 8 up, 8 in 2026-03-10T07:59:04.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:04 vm00 bash[20701]: audit 2026-03-10T07:59:03.174400+0000 mon.a (mon.0) 3636 : audit [INF] from='client.? 192.168.123.100:0/4265685541' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T07:59:04.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:04 vm00 bash[20701]: audit 2026-03-10T07:59:03.174400+0000 mon.a (mon.0) 3636 : audit [INF] from='client.? 192.168.123.100:0/4265685541' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T07:59:05.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:05 vm03 bash[23382]: audit 2026-03-10T07:59:04.138430+0000 mon.a (mon.0) 3637 : audit [INF] from='client.? 192.168.123.100:0/4265685541' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T07:59:05.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:05 vm03 bash[23382]: audit 2026-03-10T07:59:04.138430+0000 mon.a (mon.0) 3637 : audit [INF] from='client.? 
192.168.123.100:0/4265685541' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T07:59:05.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:05 vm03 bash[23382]: cluster 2026-03-10T07:59:04.147435+0000 mon.a (mon.0) 3638 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in 2026-03-10T07:59:05.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:05 vm03 bash[23382]: cluster 2026-03-10T07:59:04.147435+0000 mon.a (mon.0) 3638 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in 2026-03-10T07:59:05.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:05 vm03 bash[23382]: audit 2026-03-10T07:59:04.398913+0000 mgr.y (mgr.24407) 1254 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:59:05.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:05 vm03 bash[23382]: audit 2026-03-10T07:59:04.398913+0000 mgr.y (mgr.24407) 1254 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:59:05.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:05 vm00 bash[28005]: audit 2026-03-10T07:59:04.138430+0000 mon.a (mon.0) 3637 : audit [INF] from='client.? 192.168.123.100:0/4265685541' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T07:59:05.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:05 vm00 bash[28005]: audit 2026-03-10T07:59:04.138430+0000 mon.a (mon.0) 3637 : audit [INF] from='client.? 192.168.123.100:0/4265685541' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T07:59:05.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:05 vm00 bash[28005]: cluster 2026-03-10T07:59:04.147435+0000 mon.a (mon.0) 3638 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in 2026-03-10T07:59:05.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:05 vm00 bash[28005]: cluster 2026-03-10T07:59:04.147435+0000 mon.a (mon.0) 3638 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in 2026-03-10T07:59:05.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:05 vm00 bash[28005]: audit 2026-03-10T07:59:04.398913+0000 mgr.y (mgr.24407) 1254 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:59:05.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:05 vm00 bash[28005]: audit 2026-03-10T07:59:04.398913+0000 mgr.y (mgr.24407) 1254 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:59:05.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:05 vm00 bash[20701]: audit 2026-03-10T07:59:04.138430+0000 mon.a (mon.0) 3637 : audit [INF] from='client.? 192.168.123.100:0/4265685541' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T07:59:05.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:05 vm00 bash[20701]: audit 2026-03-10T07:59:04.138430+0000 mon.a (mon.0) 3637 : audit [INF] from='client.? 
192.168.123.100:0/4265685541' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T07:59:05.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:05 vm00 bash[20701]: cluster 2026-03-10T07:59:04.147435+0000 mon.a (mon.0) 3638 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in 2026-03-10T07:59:05.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:05 vm00 bash[20701]: cluster 2026-03-10T07:59:04.147435+0000 mon.a (mon.0) 3638 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in 2026-03-10T07:59:05.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:05 vm00 bash[20701]: audit 2026-03-10T07:59:04.398913+0000 mgr.y (mgr.24407) 1254 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:59:05.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:05 vm00 bash[20701]: audit 2026-03-10T07:59:04.398913+0000 mgr.y (mgr.24407) 1254 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:59:06.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:06 vm03 bash[23382]: cluster 2026-03-10T07:59:05.022811+0000 mgr.y (mgr.24407) 1255 : cluster [DBG] pgmap v1694: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s 2026-03-10T07:59:06.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:06 vm03 bash[23382]: cluster 2026-03-10T07:59:05.022811+0000 mgr.y (mgr.24407) 1255 : cluster [DBG] pgmap v1694: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s 2026-03-10T07:59:06.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:06 vm00 bash[28005]: cluster 2026-03-10T07:59:05.022811+0000 mgr.y (mgr.24407) 1255 : cluster [DBG] pgmap v1694: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s 2026-03-10T07:59:06.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:06 vm00 bash[28005]: cluster 2026-03-10T07:59:05.022811+0000 mgr.y (mgr.24407) 1255 : cluster [DBG] pgmap v1694: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s 2026-03-10T07:59:06.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:06 vm00 bash[20701]: cluster 2026-03-10T07:59:05.022811+0000 mgr.y (mgr.24407) 1255 : cluster [DBG] pgmap v1694: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s 2026-03-10T07:59:06.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:06 vm00 bash[20701]: cluster 2026-03-10T07:59:05.022811+0000 mgr.y (mgr.24407) 1255 : cluster [DBG] pgmap v1694: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s 2026-03-10T07:59:07.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:07 vm03 bash[23382]: cluster 2026-03-10T07:59:07.048981+0000 mon.a (mon.0) 3639 : cluster [WRN] pool '848a4d04-314a-4289-950b-2472b7cc83f9' is full (reached quota's max_bytes: 100 B) 2026-03-10T07:59:07.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:07 vm03 bash[23382]: cluster 2026-03-10T07:59:07.048981+0000 mon.a (mon.0) 3639 : cluster [WRN] pool '848a4d04-314a-4289-950b-2472b7cc83f9' is full (reached quota's max_bytes: 100 B) 2026-03-10T07:59:07.512 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:07 vm03 bash[23382]: cluster 2026-03-10T07:59:07.049169+0000 mon.a (mon.0) 3640 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T07:59:07.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:07 vm03 bash[23382]: cluster 2026-03-10T07:59:07.049169+0000 mon.a (mon.0) 3640 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T07:59:07.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:07 vm03 bash[23382]: cluster 2026-03-10T07:59:07.060058+0000 mon.a (mon.0) 3641 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in 2026-03-10T07:59:07.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:07 vm03 bash[23382]: cluster 2026-03-10T07:59:07.060058+0000 mon.a (mon.0) 3641 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in 2026-03-10T07:59:07.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:07 vm00 bash[28005]: cluster 2026-03-10T07:59:07.048981+0000 mon.a (mon.0) 3639 : cluster [WRN] pool '848a4d04-314a-4289-950b-2472b7cc83f9' is full (reached quota's max_bytes: 100 B) 2026-03-10T07:59:07.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:07 vm00 bash[28005]: cluster 2026-03-10T07:59:07.048981+0000 mon.a (mon.0) 3639 : cluster [WRN] pool '848a4d04-314a-4289-950b-2472b7cc83f9' is full (reached quota's max_bytes: 100 B) 2026-03-10T07:59:07.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:07 vm00 bash[28005]: cluster 2026-03-10T07:59:07.049169+0000 mon.a (mon.0) 3640 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T07:59:07.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:07 vm00 bash[28005]: cluster 2026-03-10T07:59:07.049169+0000 mon.a (mon.0) 3640 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T07:59:07.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:07 vm00 bash[28005]: cluster 2026-03-10T07:59:07.060058+0000 mon.a (mon.0) 3641 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in 2026-03-10T07:59:07.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:07 vm00 bash[28005]: cluster 2026-03-10T07:59:07.060058+0000 mon.a (mon.0) 3641 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in 2026-03-10T07:59:07.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:07 vm00 bash[20701]: cluster 2026-03-10T07:59:07.048981+0000 mon.a (mon.0) 3639 : cluster [WRN] pool '848a4d04-314a-4289-950b-2472b7cc83f9' is full (reached quota's max_bytes: 100 B) 2026-03-10T07:59:07.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:07 vm00 bash[20701]: cluster 2026-03-10T07:59:07.048981+0000 mon.a (mon.0) 3639 : cluster [WRN] pool '848a4d04-314a-4289-950b-2472b7cc83f9' is full (reached quota's max_bytes: 100 B) 2026-03-10T07:59:07.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:07 vm00 bash[20701]: cluster 2026-03-10T07:59:07.049169+0000 mon.a (mon.0) 3640 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T07:59:07.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:07 vm00 bash[20701]: cluster 2026-03-10T07:59:07.049169+0000 mon.a (mon.0) 3640 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T07:59:07.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:07 vm00 bash[20701]: cluster 2026-03-10T07:59:07.060058+0000 mon.a (mon.0) 3641 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in 2026-03-10T07:59:07.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:07 vm00 bash[20701]: cluster 2026-03-10T07:59:07.060058+0000 mon.a (mon.0) 3641 : cluster [DBG] osdmap e772: 8 total, 8 
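At this point the workunit has capped the test pool at 100 bytes, the mon has committed the quota (osdmap e771/e772), and a few seconds later mon.a flags the pool full and raises the POOL_FULL health check while the client sits in its 30-second sleep. A minimal sketch of the quota phase visible in this trace, assuming placeholder object names (the pool id is the one from this run; only the set-quota, the sleep, and the later put appear verbatim in the shell trace):

  #!/bin/sh
  # Sketch of the quota phase traced above; not the verbatim workunit script.
  pool=848a4d04-314a-4289-950b-2472b7cc83f9

  # Cap the pool at 100 bytes; the mon acks with "set-quota max_bytes = 100".
  ceph osd pool set-quota "$pool" max_bytes 100

  # Give the mon time to notice the quota is exceeded and set POOL_FULL;
  # the trace shows the workunit sleeping 30s here.
  sleep 30

  # A write past the quota should now be rejected or blocked by the OSDs.
  rados -p "$pool" put one /etc/passwd || echo 'write refused while pool is full'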
2026-03-10T07:59:08.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:08 vm03 bash[23382]: cluster 2026-03-10T07:59:07.023059+0000 mgr.y (mgr.24407) 1256 : cluster [DBG] pgmap v1695: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 866 B/s rd, 1.0 KiB/s wr, 1 op/s
2026-03-10T07:59:08.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:08 vm00 bash[28005]: cluster 2026-03-10T07:59:07.023059+0000 mgr.y (mgr.24407) 1256 : cluster [DBG] pgmap v1695: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 866 B/s rd, 1.0 KiB/s wr, 1 op/s
2026-03-10T07:59:08.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:08 vm00 bash[20701]: cluster 2026-03-10T07:59:07.023059+0000 mgr.y (mgr.24407) 1256 : cluster [DBG] pgmap v1695: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 866 B/s rd, 1.0 KiB/s wr, 1 op/s
2026-03-10T07:59:10.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:10 vm03 bash[23382]: cluster 2026-03-10T07:59:09.023319+0000 mgr.y (mgr.24407) 1257 : cluster [DBG] pgmap v1697: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s
2026-03-10T07:59:10.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:10 vm00 bash[28005]: cluster 2026-03-10T07:59:09.023319+0000 mgr.y (mgr.24407) 1257 : cluster [DBG] pgmap v1697: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s
2026-03-10T07:59:10.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:10 vm00 bash[20701]: cluster 2026-03-10T07:59:09.023319+0000 mgr.y (mgr.24407) 1257 : cluster [DBG] pgmap v1697: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s
2026-03-10T07:59:11.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:11 vm00 bash[28005]: audit 2026-03-10T07:59:10.479019+0000 mon.c (mon.2) 489 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:59:11.375 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:59:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:59:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:59:11.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:11 vm00 bash[20701]: audit 2026-03-10T07:59:10.479019+0000 mon.c (mon.2) 489 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:59:11.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:11 vm03 bash[23382]: audit 2026-03-10T07:59:10.479019+0000 mon.c (mon.2) 489 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:59:12.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:12 vm03 bash[23382]: cluster 2026-03-10T07:59:11.023838+0000 mgr.y (mgr.24407) 1258 : cluster [DBG] pgmap v1698: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 777 B/s wr, 1 op/s
2026-03-10T07:59:12.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:12 vm00 bash[28005]: cluster 2026-03-10T07:59:11.023838+0000 mgr.y (mgr.24407) 1258 : cluster [DBG] pgmap v1698: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 777 B/s wr, 1 op/s
2026-03-10T07:59:12.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:12 vm00 bash[20701]: cluster 2026-03-10T07:59:11.023838+0000 mgr.y (mgr.24407) 1258 : cluster [DBG] pgmap v1698: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 777 B/s wr, 1 op/s
2026-03-10T07:59:14.512 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:59:14 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:59:14.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:14 vm03 bash[23382]: cluster 2026-03-10T07:59:13.024117+0000 mgr.y (mgr.24407) 1259 : cluster [DBG] pgmap v1699: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 691 B/s wr, 1 op/s
2026-03-10T07:59:14.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:14 vm00 bash[28005]: cluster 2026-03-10T07:59:13.024117+0000 mgr.y (mgr.24407) 1259 : cluster [DBG] pgmap v1699: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 691 B/s wr, 1 op/s
2026-03-10T07:59:14.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:14 vm00 bash[20701]: cluster 2026-03-10T07:59:13.024117+0000 mgr.y (mgr.24407) 1259 : cluster [DBG] pgmap v1699: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 691 B/s wr, 1 op/s
2026-03-10T07:59:15.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:15 vm03 bash[23382]: audit 2026-03-10T07:59:14.409475+0000 mgr.y (mgr.24407) 1260 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:59:15.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:15 vm00 bash[28005]: audit 2026-03-10T07:59:14.409475+0000 mgr.y (mgr.24407) 1260 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:59:15.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:15 vm00 bash[20701]: audit 2026-03-10T07:59:14.409475+0000 mgr.y (mgr.24407) 1260 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:59:16.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:16 vm03 bash[23382]: cluster 2026-03-10T07:59:15.024711+0000 mgr.y (mgr.24407) 1261 : cluster [DBG] pgmap v1700: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:59:16.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:16 vm00 bash[28005]: cluster 2026-03-10T07:59:15.024711+0000 mgr.y (mgr.24407) 1261 : cluster [DBG] pgmap v1700: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:59:16.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:16 vm00 bash[20701]: cluster 2026-03-10T07:59:15.024711+0000 mgr.y (mgr.24407) 1261 : cluster [DBG] pgmap v1700: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:59:18.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:18 vm03 bash[23382]: cluster 2026-03-10T07:59:17.025029+0000 mgr.y (mgr.24407) 1262 : cluster [DBG] pgmap v1701: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:59:18.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:18 vm00 bash[28005]: cluster 2026-03-10T07:59:17.025029+0000 mgr.y (mgr.24407) 1262 : cluster [DBG] pgmap v1701: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:59:18.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:18 vm00 bash[20701]: cluster 2026-03-10T07:59:17.025029+0000 mgr.y (mgr.24407) 1262 : cluster [DBG] pgmap v1701: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:59:20.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:20 vm03 bash[23382]: cluster 2026-03-10T07:59:19.025341+0000 mgr.y (mgr.24407) 1263 : cluster [DBG] pgmap v1702: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 855 B/s rd, 0 op/s
2026-03-10T07:59:20.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:20 vm00 bash[28005]: cluster 2026-03-10T07:59:19.025341+0000 mgr.y (mgr.24407) 1263 : cluster [DBG] pgmap v1702: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 855 B/s rd, 0 op/s
2026-03-10T07:59:20.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:20 vm00 bash[20701]: cluster 2026-03-10T07:59:19.025341+0000 mgr.y (mgr.24407) 1263 : cluster [DBG] pgmap v1702: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 855 B/s rd, 0 op/s
2026-03-10T07:59:21.375 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:59:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:59:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:59:22.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:22 vm03 bash[23382]: cluster 2026-03-10T07:59:21.025893+0000 mgr.y (mgr.24407) 1264 : cluster [DBG] pgmap v1703: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:59:22.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:22 vm00 bash[28005]: cluster 2026-03-10T07:59:21.025893+0000 mgr.y (mgr.24407) 1264 : cluster [DBG] pgmap v1703: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:59:22.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:22 vm00 bash[20701]: cluster 2026-03-10T07:59:21.025893+0000 mgr.y (mgr.24407) 1264 : cluster [DBG] pgmap v1703: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:59:24.512 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:59:24 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:59:24.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:24 vm03 bash[23382]: cluster 2026-03-10T07:59:23.026167+0000 mgr.y (mgr.24407) 1265 : cluster [DBG] pgmap v1704: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:59:24.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:24 vm00 bash[28005]: cluster 2026-03-10T07:59:23.026167+0000 mgr.y (mgr.24407) 1265 : cluster [DBG] pgmap v1704: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:59:24.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:24 vm00 bash[20701]: cluster 2026-03-10T07:59:23.026167+0000 mgr.y (mgr.24407) 1265 : cluster [DBG] pgmap v1704: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:59:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:25 vm03 bash[23382]: audit 2026-03-10T07:59:24.419604+0000 mgr.y (mgr.24407) 1266 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:59:25.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:25 vm00 bash[28005]: audit 2026-03-10T07:59:24.419604+0000 mgr.y (mgr.24407) 1266 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:59:25.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:25 vm00 bash[20701]: audit 2026-03-10T07:59:24.419604+0000 mgr.y (mgr.24407) 1266 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:59:26.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:26 vm03 bash[23382]: cluster 2026-03-10T07:59:25.026734+0000 mgr.y (mgr.24407) 1267 : cluster [DBG] pgmap v1705: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:59:26.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:26 vm03 bash[23382]: audit 2026-03-10T07:59:25.484913+0000 mon.c (mon.2) 490 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:59:26.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:26 vm00 bash[28005]: cluster 2026-03-10T07:59:25.026734+0000 mgr.y (mgr.24407) 1267 : cluster [DBG] pgmap v1705: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:59:26.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:26 vm00 bash[28005]: audit 2026-03-10T07:59:25.484913+0000 mon.c (mon.2) 490 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:59:26.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:26 vm00 bash[20701]: cluster 2026-03-10T07:59:25.026734+0000 mgr.y (mgr.24407) 1267 : cluster [DBG] pgmap v1705: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:59:26.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:26 vm00 bash[20701]: audit 2026-03-10T07:59:25.484913+0000 mon.c (mon.2) 490 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:59:28.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:28 vm03 bash[23382]: cluster 2026-03-10T07:59:27.027019+0000 mgr.y (mgr.24407) 1268 : cluster [DBG] pgmap v1706: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:59:28.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:28 vm00 bash[28005]: cluster 2026-03-10T07:59:27.027019+0000 mgr.y (mgr.24407) 1268 : cluster [DBG] pgmap v1706: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:59:28.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:28 vm00 bash[20701]: cluster 2026-03-10T07:59:27.027019+0000 mgr.y (mgr.24407) 1268 : cluster [DBG] pgmap v1706: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:59:30.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:30 vm03 bash[23382]: cluster 2026-03-10T07:59:29.027322+0000 mgr.y (mgr.24407) 1269 : cluster [DBG] pgmap v1707: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:59:30.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:30 vm00 bash[28005]: cluster 2026-03-10T07:59:29.027322+0000 mgr.y (mgr.24407) 1269 : cluster [DBG] pgmap v1707: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:59:30.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:30 vm00 bash[20701]: cluster 2026-03-10T07:59:29.027322+0000 mgr.y (mgr.24407) 1269 : cluster [DBG] pgmap v1707: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
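The next phase, traced just below, lifts the quota and retries a write: `+ ceph osd pool set-quota ... max_bytes 0` followed by `+ rados ... put two /etc/passwd`. Setting max_bytes to 0 removes the quota, so once the mon clears the full flag the put can complete. A sketch of that recovery step, under the same placeholder conventions as above:

  # Sketch of the quota-release phase; not the verbatim workunit script.
  pool=848a4d04-314a-4289-950b-2472b7cc83f9

  # max_bytes=0 means "no quota"; the mon drops POOL_FULL on the next osdmap.
  ceph osd pool set-quota "$pool" max_bytes 0

  # This is the write traced below; it should now complete normally.
  rados -p "$pool" put two /etc/passwd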
2026-03-10T07:59:31.375 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:59:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:59:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:59:32.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:32 vm00 bash[28005]: cluster 2026-03-10T07:59:31.027850+0000 mgr.y (mgr.24407) 1270 : cluster [DBG] pgmap v1708: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:59:32.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:32 vm00 bash[20701]: cluster 2026-03-10T07:59:31.027850+0000 mgr.y (mgr.24407) 1270 : cluster [DBG] pgmap v1708: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:59:32.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:32 vm03 bash[23382]: cluster 2026-03-10T07:59:31.027850+0000 mgr.y (mgr.24407) 1270 : cluster [DBG] pgmap v1708: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T07:59:34.149 INFO:tasks.workunit.client.0.vm00.stderr:+ pid=131481
2026-03-10T07:59:34.150 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool set-quota 848a4d04-314a-4289-950b-2472b7cc83f9 max_bytes 0
2026-03-10T07:59:34.150 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p 848a4d04-314a-4289-950b-2472b7cc83f9 put two /etc/passwd
2026-03-10T07:59:34.207 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0eda802640 1 -- 192.168.123.100:0/82577678 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0ed4106ce0 msgr2=0x7f0ed4107190 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:59:34.207 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0eda802640 1 --2- 192.168.123.100:0/82577678 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0ed4106ce0 0x7f0ed4107190 secure :-1 s=READY pgs=3135 cs=0 l=1 rev1=1 crypto rx=0x7f0ec800b0a0 tx=0x7f0ec801cae0 comp rx=0 tx=0).stop
2026-03-10T07:59:34.207 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0eda802640 1 -- 192.168.123.100:0/82577678 shutdown_connections
2026-03-10T07:59:34.207 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0eda802640 1 --2- 192.168.123.100:0/82577678 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0ed4106ce0 0x7f0ed4107190 unknown :-1 s=CLOSED pgs=3135 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:59:34.207 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0eda802640 1 --2- 192.168.123.100:0/82577678 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f0ed41021f0 0x7f0ed41066c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:59:34.207 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0eda802640 1 --2- 192.168.123.100:0/82577678 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0ed41018d0 0x7f0ed4101cb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:59:34.207 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0eda802640 1 -- 192.168.123.100:0/82577678 >> 192.168.123.100:0/82577678 conn(0x7f0ed40fd540 msgr2=0x7f0ed40ff960 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:59:34.207 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0eda802640 1 -- 192.168.123.100:0/82577678 shutdown_connections
2026-03-10T07:59:34.207 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0eda802640 1 -- 192.168.123.100:0/82577678 wait complete.
2026-03-10T07:59:34.207 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0eda802640 1 Processor -- start
2026-03-10T07:59:34.207 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0eda802640 1 -- start start
2026-03-10T07:59:34.208 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0eda802640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0ed41018d0 0x7f0ed41a4120 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:59:34.208 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0eda802640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f0ed41021f0 0x7f0ed41a4660 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:59:34.208 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0eda802640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0ed4106ce0 0x7f0ed41a89f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:59:34.208 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0eda802640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f0ed411bb60 con 0x7f0ed4106ce0
2026-03-10T07:59:34.208 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0eda802640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f0ed411b9e0 con 0x7f0ed41021f0
2026-03-10T07:59:34.208 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0eda802640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f0ed411bce0 con 0x7f0ed41018d0
2026-03-10T07:59:34.208 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0ed8d78640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0ed4106ce0 0x7f0ed41a89f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:59:34.208 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0ed8d78640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0ed4106ce0 0x7f0ed41a89f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:46816/0 (socket says 192.168.123.100:46816)
2026-03-10T07:59:34.208 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0ed37fe640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f0ed41021f0 0x7f0ed41a4660 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:59:34.208 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0ed8d78640 1 -- 192.168.123.100:0/2557351966 learned_addr learned my addr 192.168.123.100:0/2557351966 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:59:34.208 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0ed3fff640 1 --2- 192.168.123.100:0/2557351966 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0ed41018d0 0x7f0ed41a4120 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:59:34.208 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0ed37fe640 1 -- 192.168.123.100:0/2557351966 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0ed41018d0 msgr2=0x7f0ed41a4120 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:59:34.208 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0ed37fe640 1 --2- 192.168.123.100:0/2557351966 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0ed41018d0 0x7f0ed41a4120 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:59:34.208 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0ed37fe640 1 -- 192.168.123.100:0/2557351966 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0ed4106ce0 msgr2=0x7f0ed41a89f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:59:34.208 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0ed37fe640 1 --2- 192.168.123.100:0/2557351966 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0ed4106ce0 0x7f0ed41a89f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:59:34.208 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0ed37fe640 1 -- 192.168.123.100:0/2557351966 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0ed41a90d0 con 0x7f0ed41021f0
2026-03-10T07:59:34.208 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0ed8d78640 1 --2- 192.168.123.100:0/2557351966 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0ed4106ce0 0x7f0ed41a89f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed!
2026-03-10T07:59:34.208 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0ed3fff640 1 --2- 192.168.123.100:0/2557351966 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0ed41018d0 0x7f0ed41a4120 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-10T07:59:34.208 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0ed37fe640 1 --2- 192.168.123.100:0/2557351966 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f0ed41021f0 0x7f0ed41a4660 secure :-1 s=READY pgs=2924 cs=0 l=1 rev1=1 crypto rx=0x7f0ec000c970 tx=0x7f0ec000ce30 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:59:34.209 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0ed17fa640 1 -- 192.168.123.100:0/2557351966 <== mon.1 v2:192.168.123.103:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0ec0013070 con 0x7f0ed41021f0
2026-03-10T07:59:34.209 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0eda802640 1 -- 192.168.123.100:0/2557351966 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f0ed41a93c0 con 0x7f0ed41021f0
2026-03-10T07:59:34.209 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0eda802640 1 -- 192.168.123.100:0/2557351966 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f0ed41b0ca0 con 0x7f0ed41021f0
2026-03-10T07:59:34.209 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0ed17fa640 1 -- 192.168.123.100:0/2557351966 <== mon.1 v2:192.168.123.103:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f0ec0004480 con 0x7f0ed41021f0
2026-03-10T07:59:34.209 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0ed17fa640 1 -- 192.168.123.100:0/2557351966 <== mon.1 v2:192.168.123.103:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0ec0002be0 con 0x7f0ed41021f0
2026-03-10T07:59:34.209 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.216+0000 7f0eda802640 1 -- 192.168.123.100:0/2557351966 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0ed406b870 con 0x7f0ed41021f0
2026-03-10T07:59:34.211 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.220+0000 7f0ed17fa640 1 -- 192.168.123.100:0/2557351966 <== mon.1 v2:192.168.123.103:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f0ec0020020 con 0x7f0ed41021f0
2026-03-10T07:59:34.211 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.220+0000 7f0ed17fa640 1 --2- 192.168.123.100:0/2557351966 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f0eb4077710 0x7f0eb4079bd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:59:34.211 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.220+0000 7f0ed17fa640 1 -- 192.168.123.100:0/2557351966 <== mon.1 v2:192.168.123.103:3300/0 5 ==== osd_map(772..772 src has 258..772) ==== 9073+0+0 (secure 0 0 0) 0x7f0ec009a5a0 con 0x7f0ed41021f0
2026-03-10T07:59:34.211 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.220+0000 7f0ed17fa640 1 -- 192.168.123.100:0/2557351966 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({osdmap=773}) -- 0x7f0eb40832f0 con 0x7f0ed41021f0
2026-03-10T07:59:34.211 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.220+0000 7f0ed3fff640 1 --2- 192.168.123.100:0/2557351966 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f0eb4077710 0x7f0eb4079bd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:59:34.211 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.220+0000 7f0ed3fff640 1 --2- 192.168.123.100:0/2557351966 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f0eb4077710 0x7f0eb4079bd0 secure :-1 s=READY pgs=4322 cs=0 l=1 rev1=1 crypto rx=0x7f0ec4009a30 tx=0x7f0ec4009290 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:59:34.216 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.224+0000 7f0ed17fa640 1 -- 192.168.123.100:0/2557351966 <== mon.1 v2:192.168.123.103:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f0ec00668a0 con 0x7f0ed41021f0
2026-03-10T07:59:34.304 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:34.312+0000 7f0eda802640 1 -- 192.168.123.100:0/2557351966 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"} v 0) -- 0x7f0ed41066c0 con 0x7f0ed41021f0
2026-03-10T07:59:34.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:34 vm00 bash[28005]: cluster 2026-03-10T07:59:33.028168+0000 mgr.y (mgr.24407) 1271 : cluster [DBG] pgmap v1709: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:59:34.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:34 vm00 bash[20701]: cluster 2026-03-10T07:59:33.028168+0000 mgr.y (mgr.24407) 1271 : cluster [DBG] pgmap v1709: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:59:34.762 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:59:34 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:59:34.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:34 vm03 bash[23382]: cluster 2026-03-10T07:59:33.028168+0000 mgr.y (mgr.24407) 1271 : cluster [DBG] pgmap v1709: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:59:35.299 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:35.308+0000 7f0ed17fa640 1 -- 192.168.123.100:0/2557351966 <== mon.1 v2:192.168.123.103:3300/0 7 ====
osd_map(773..773 src has 258..773) ==== 628+0+0 (secure 0 0 0) 0x7f0ec005e940 con 0x7f0ed41021f0 2026-03-10T07:59:35.299 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:35.308+0000 7f0ed17fa640 1 -- 192.168.123.100:0/2557351966 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({osdmap=774}) -- 0x7f0eb4083e90 con 0x7f0ed41021f0 2026-03-10T07:59:35.311 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:35.320+0000 7f0ed17fa640 1 -- 192.168.123.100:0/2557351966 <== mon.1 v2:192.168.123.103:3300/0 8 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool 848a4d04-314a-4289-950b-2472b7cc83f9 v773) ==== 217+0+0 (secure 0 0 0) 0x7f0ed41066c0 con 0x7f0ed41021f0 2026-03-10T07:59:35.360 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:35.368+0000 7f0eda802640 1 -- 192.168.123.100:0/2557351966 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"} v 0) -- 0x7f0ed41b0f30 con 0x7f0ed41021f0 2026-03-10T07:59:35.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:35 vm00 bash[28005]: audit 2026-03-10T07:59:34.311279+0000 mon.b (mon.1) 683 : audit [INF] from='client.? 192.168.123.100:0/2557351966' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:35.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:35 vm00 bash[28005]: audit 2026-03-10T07:59:34.311279+0000 mon.b (mon.1) 683 : audit [INF] from='client.? 192.168.123.100:0/2557351966' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:35.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:35 vm00 bash[28005]: audit 2026-03-10T07:59:34.315483+0000 mon.a (mon.0) 3642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:35.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:35 vm00 bash[28005]: audit 2026-03-10T07:59:34.315483+0000 mon.a (mon.0) 3642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:35.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:35 vm00 bash[28005]: audit 2026-03-10T07:59:34.429177+0000 mgr.y (mgr.24407) 1272 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:59:35.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:35 vm00 bash[28005]: audit 2026-03-10T07:59:34.429177+0000 mgr.y (mgr.24407) 1272 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:59:35.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:35 vm00 bash[20701]: audit 2026-03-10T07:59:34.311279+0000 mon.b (mon.1) 683 : audit [INF] from='client.? 
192.168.123.100:0/2557351966' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:35.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:35 vm00 bash[20701]: audit 2026-03-10T07:59:34.311279+0000 mon.b (mon.1) 683 : audit [INF] from='client.? 192.168.123.100:0/2557351966' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:35.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:35 vm00 bash[20701]: audit 2026-03-10T07:59:34.315483+0000 mon.a (mon.0) 3642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:35.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:35 vm00 bash[20701]: audit 2026-03-10T07:59:34.315483+0000 mon.a (mon.0) 3642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:35.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:35 vm00 bash[20701]: audit 2026-03-10T07:59:34.429177+0000 mgr.y (mgr.24407) 1272 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:59:35.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:35 vm00 bash[20701]: audit 2026-03-10T07:59:34.429177+0000 mgr.y (mgr.24407) 1272 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:59:35.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:35 vm03 bash[23382]: audit 2026-03-10T07:59:34.311279+0000 mon.b (mon.1) 683 : audit [INF] from='client.? 192.168.123.100:0/2557351966' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:35.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:35 vm03 bash[23382]: audit 2026-03-10T07:59:34.311279+0000 mon.b (mon.1) 683 : audit [INF] from='client.? 192.168.123.100:0/2557351966' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:35.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:35 vm03 bash[23382]: audit 2026-03-10T07:59:34.315483+0000 mon.a (mon.0) 3642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:35.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:35 vm03 bash[23382]: audit 2026-03-10T07:59:34.315483+0000 mon.a (mon.0) 3642 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:35.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:35 vm03 bash[23382]: audit 2026-03-10T07:59:34.429177+0000 mgr.y (mgr.24407) 1272 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:59:35.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:35 vm03 bash[23382]: audit 2026-03-10T07:59:34.429177+0000 mgr.y (mgr.24407) 1272 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:59:36.301 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.308+0000 7f0ed17fa640 1 -- 192.168.123.100:0/2557351966 <== mon.1 v2:192.168.123.103:3300/0 9 ==== osd_map(774..774 src has 258..774) ==== 628+0+0 (secure 0 0 0) 0x7f0ec0002d80 con 0x7f0ed41021f0 2026-03-10T07:59:36.301 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.308+0000 7f0ed17fa640 1 -- 192.168.123.100:0/2557351966 --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_subscribe({osdmap=775}) -- 0x7f0eb4084490 con 0x7f0ed41021f0 2026-03-10T07:59:36.312 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.320+0000 7f0ed17fa640 1 -- 192.168.123.100:0/2557351966 <== mon.1 v2:192.168.123.103:3300/0 10 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool 848a4d04-314a-4289-950b-2472b7cc83f9 v774) ==== 217+0+0 (secure 0 0 0) 0x7f0ec00990d0 con 0x7f0ed41021f0 2026-03-10T07:59:36.312 INFO:tasks.workunit.client.0.vm00.stderr:set-quota max_bytes = 0 for pool 848a4d04-314a-4289-950b-2472b7cc83f9 2026-03-10T07:59:36.314 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.324+0000 7f0eda802640 1 -- 192.168.123.100:0/2557351966 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f0eb4077710 msgr2=0x7f0eb4079bd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:36.314 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.324+0000 7f0eda802640 1 --2- 192.168.123.100:0/2557351966 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f0eb4077710 0x7f0eb4079bd0 secure :-1 s=READY pgs=4322 cs=0 l=1 rev1=1 crypto rx=0x7f0ec4009a30 tx=0x7f0ec4009290 comp rx=0 tx=0).stop 2026-03-10T07:59:36.314 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.324+0000 7f0eda802640 1 -- 192.168.123.100:0/2557351966 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f0ed41021f0 msgr2=0x7f0ed41a4660 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:36.314 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.324+0000 7f0eda802640 1 --2- 192.168.123.100:0/2557351966 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f0ed41021f0 0x7f0ed41a4660 secure :-1 s=READY pgs=2924 cs=0 l=1 rev1=1 crypto rx=0x7f0ec000c970 tx=0x7f0ec000ce30 comp rx=0 tx=0).stop 2026-03-10T07:59:36.314 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.324+0000 7f0eda802640 1 -- 192.168.123.100:0/2557351966 shutdown_connections 2026-03-10T07:59:36.314 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.324+0000 7f0eda802640 1 --2- 192.168.123.100:0/2557351966 >> 
[v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f0eb4077710 0x7f0eb4079bd0 unknown :-1 s=CLOSED pgs=4322 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:36.314 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.324+0000 7f0eda802640 1 --2- 192.168.123.100:0/2557351966 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0ed4106ce0 0x7f0ed41a89f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:36.314 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.324+0000 7f0eda802640 1 --2- 192.168.123.100:0/2557351966 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f0ed41021f0 0x7f0ed41a4660 unknown :-1 s=CLOSED pgs=2924 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:36.314 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.324+0000 7f0eda802640 1 --2- 192.168.123.100:0/2557351966 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f0ed41018d0 0x7f0ed41a4120 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:36.314 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.324+0000 7f0eda802640 1 -- 192.168.123.100:0/2557351966 >> 192.168.123.100:0/2557351966 conn(0x7f0ed40fd540 msgr2=0x7f0ed40fea70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:59:36.314 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.324+0000 7f0eda802640 1 -- 192.168.123.100:0/2557351966 shutdown_connections 2026-03-10T07:59:36.314 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.324+0000 7f0eda802640 1 -- 192.168.123.100:0/2557351966 wait complete. 2026-03-10T07:59:36.325 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool set-quota 848a4d04-314a-4289-950b-2472b7cc83f9 max_objects 0 2026-03-10T07:59:36.380 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efcaa1f7640 1 -- 192.168.123.100:0/3124316414 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efca410b9c0 msgr2=0x7efca410ddb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:36.380 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efcaa1f7640 1 --2- 192.168.123.100:0/3124316414 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efca410b9c0 0x7efca410ddb0 secure :-1 s=READY pgs=3136 cs=0 l=1 rev1=1 crypto rx=0x7efc9800b0a0 tx=0x7efc9801c990 comp rx=0 tx=0).stop 2026-03-10T07:59:36.380 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efcaa1f7640 1 -- 192.168.123.100:0/3124316414 shutdown_connections 2026-03-10T07:59:36.380 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efcaa1f7640 1 --2- 192.168.123.100:0/3124316414 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efca410b9c0 0x7efca410ddb0 unknown :-1 s=CLOSED pgs=3136 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:36.380 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efcaa1f7640 1 --2- 192.168.123.100:0/3124316414 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7efca41074d0 0x7efca4101040 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:36.380 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efcaa1f7640 1 --2- 192.168.123.100:0/3124316414 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efca4069380 0x7efca4100b00 unknown :-1 s=CLOSED pgs=0 
cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:36.380 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efcaa1f7640 1 -- 192.168.123.100:0/3124316414 >> 192.168.123.100:0/3124316414 conn(0x7efca40fc820 msgr2=0x7efca40fec40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:59:36.380 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efcaa1f7640 1 -- 192.168.123.100:0/3124316414 shutdown_connections 2026-03-10T07:59:36.380 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efcaa1f7640 1 -- 192.168.123.100:0/3124316414 wait complete. 2026-03-10T07:59:36.380 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efcaa1f7640 1 Processor -- start 2026-03-10T07:59:36.380 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efcaa1f7640 1 -- start start 2026-03-10T07:59:36.380 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efcaa1f7640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7efca4069380 0x7efca4105150 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:59:36.380 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efcaa1f7640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efca41074d0 0x7efca4105690 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:59:36.380 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efcaa1f7640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efca410b9c0 0x7efca41a3b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:59:36.380 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efcaa1f7640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7efca4114a20 con 0x7efca41074d0 2026-03-10T07:59:36.380 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efcaa1f7640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7efca41148a0 con 0x7efca4069380 2026-03-10T07:59:36.380 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efcaa1f7640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7efca4114ba0 con 0x7efca410b9c0 2026-03-10T07:59:36.380 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efca37fe640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7efca4069380 0x7efca4105150 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:59:36.381 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efca2ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efca41074d0 0x7efca4105690 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:59:36.381 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efca37fe640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7efca4069380 0x7efca4105150 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.103:3300/0 says I am v2:192.168.123.100:43804/0 (socket says 192.168.123.100:43804) 2026-03-10T07:59:36.381 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efca37fe640 1 -- 192.168.123.100:0/1325597710 learned_addr learned my addr 192.168.123.100:0/1325597710 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:59:36.381 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efca3fff640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efca410b9c0 0x7efca41a3b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:59:36.381 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efca2ffd640 1 -- 192.168.123.100:0/1325597710 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efca410b9c0 msgr2=0x7efca41a3b90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:36.381 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efca2ffd640 1 --2- 192.168.123.100:0/1325597710 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efca410b9c0 0x7efca41a3b90 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:36.381 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efca2ffd640 1 -- 192.168.123.100:0/1325597710 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7efca4069380 msgr2=0x7efca4105150 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:36.381 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efca2ffd640 1 --2- 192.168.123.100:0/1325597710 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7efca4069380 0x7efca4105150 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:36.381 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efca2ffd640 1 -- 192.168.123.100:0/1325597710 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7efca41a4310 con 0x7efca41074d0 2026-03-10T07:59:36.381 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efca3fff640 1 --2- 192.168.123.100:0/1325597710 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efca410b9c0 0x7efca41a3b90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T07:59:36.381 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efca37fe640 1 --2- 192.168.123.100:0/1325597710 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7efca4069380 0x7efca4105150 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
2026-03-10T07:59:36.381 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efca2ffd640 1 --2- 192.168.123.100:0/1325597710 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efca41074d0 0x7efca4105690 secure :-1 s=READY pgs=3137 cs=0 l=1 rev1=1 crypto rx=0x7efc9000ed30 tx=0x7efc9000c6a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:59:36.381 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efca0ff9640 1 -- 192.168.123.100:0/1325597710 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7efc9000cdb0 con 0x7efca41074d0 2026-03-10T07:59:36.381 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efcaa1f7640 1 -- 192.168.123.100:0/1325597710 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7efca41a4600 con 0x7efca41074d0 2026-03-10T07:59:36.382 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efcaa1f7640 1 -- 192.168.123.100:0/1325597710 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7efca41abe80 con 0x7efca41074d0 2026-03-10T07:59:36.382 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efca0ff9640 1 -- 192.168.123.100:0/1325597710 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7efc90004510 con 0x7efca41074d0 2026-03-10T07:59:36.382 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.388+0000 7efca0ff9640 1 -- 192.168.123.100:0/1325597710 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7efc90010430 con 0x7efca41074d0 2026-03-10T07:59:36.382 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.392+0000 7efca0ff9640 1 -- 192.168.123.100:0/1325597710 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7efc900040d0 con 0x7efca41074d0 2026-03-10T07:59:36.382 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.392+0000 7efca0ff9640 1 --2- 192.168.123.100:0/1325597710 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7efc6c0777e0 0x7efc6c079ca0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:59:36.383 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.392+0000 7efca37fe640 1 --2- 192.168.123.100:0/1325597710 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7efc6c0777e0 0x7efc6c079ca0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:59:36.383 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.392+0000 7efca0ff9640 1 -- 192.168.123.100:0/1325597710 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(774..774 src has 258..774) ==== 9073+0+0 (secure 0 0 0) 0x7efc9009a9a0 con 0x7efca41074d0 2026-03-10T07:59:36.383 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.392+0000 7efca37fe640 1 --2- 192.168.123.100:0/1325597710 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7efc6c0777e0 0x7efc6c079ca0 secure :-1 s=READY pgs=4323 cs=0 l=1 rev1=1 crypto rx=0x7efc94002800 tx=0x7efc94009290 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:59:36.383 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.392+0000 7efca0ff9640 1 -- 
192.168.123.100:0/1325597710 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=775}) -- 0x7efc6c0833e0 con 0x7efca41074d0 2026-03-10T07:59:36.383 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.392+0000 7efcaa1f7640 1 -- 192.168.123.100:0/1325597710 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7efc70005190 con 0x7efca41074d0 2026-03-10T07:59:36.386 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.392+0000 7efca0ff9640 1 -- 192.168.123.100:0/1325597710 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7efc90066ca0 con 0x7efca41074d0 2026-03-10T07:59:36.468 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:36.476+0000 7efcaa1f7640 1 -- 192.168.123.100:0/1325597710 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"} v 0) -- 0x7efc70005480 con 0x7efca41074d0 2026-03-10T07:59:36.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:36 vm00 bash[20701]: cluster 2026-03-10T07:59:35.028699+0000 mgr.y (mgr.24407) 1273 : cluster [DBG] pgmap v1710: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:59:36.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:36 vm00 bash[20701]: cluster 2026-03-10T07:59:35.028699+0000 mgr.y (mgr.24407) 1273 : cluster [DBG] pgmap v1710: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:59:36.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:36 vm00 bash[20701]: audit 2026-03-10T07:59:35.307650+0000 mon.a (mon.0) 3643 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T07:59:36.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:36 vm00 bash[20701]: audit 2026-03-10T07:59:35.307650+0000 mon.a (mon.0) 3643 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T07:59:36.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:36 vm00 bash[20701]: cluster 2026-03-10T07:59:35.314873+0000 mon.a (mon.0) 3644 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in 2026-03-10T07:59:36.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:36 vm00 bash[20701]: cluster 2026-03-10T07:59:35.314873+0000 mon.a (mon.0) 3644 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in 2026-03-10T07:59:36.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:36 vm00 bash[20701]: audit 2026-03-10T07:59:35.367536+0000 mon.b (mon.1) 684 : audit [INF] from='client.? 192.168.123.100:0/2557351966' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:36.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:36 vm00 bash[20701]: audit 2026-03-10T07:59:35.367536+0000 mon.b (mon.1) 684 : audit [INF] from='client.? 
192.168.123.100:0/2557351966' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:36.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:36 vm00 bash[20701]: audit 2026-03-10T07:59:35.371751+0000 mon.a (mon.0) 3645 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:36.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:36 vm00 bash[20701]: audit 2026-03-10T07:59:35.371751+0000 mon.a (mon.0) 3645 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:36.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:36 vm00 bash[28005]: cluster 2026-03-10T07:59:35.028699+0000 mgr.y (mgr.24407) 1273 : cluster [DBG] pgmap v1710: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:59:36.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:36 vm00 bash[28005]: cluster 2026-03-10T07:59:35.028699+0000 mgr.y (mgr.24407) 1273 : cluster [DBG] pgmap v1710: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:59:36.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:36 vm00 bash[28005]: audit 2026-03-10T07:59:35.307650+0000 mon.a (mon.0) 3643 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T07:59:36.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:36 vm00 bash[28005]: audit 2026-03-10T07:59:35.307650+0000 mon.a (mon.0) 3643 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T07:59:36.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:36 vm00 bash[28005]: cluster 2026-03-10T07:59:35.314873+0000 mon.a (mon.0) 3644 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in 2026-03-10T07:59:36.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:36 vm00 bash[28005]: cluster 2026-03-10T07:59:35.314873+0000 mon.a (mon.0) 3644 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in 2026-03-10T07:59:36.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:36 vm00 bash[28005]: audit 2026-03-10T07:59:35.367536+0000 mon.b (mon.1) 684 : audit [INF] from='client.? 192.168.123.100:0/2557351966' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:36.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:36 vm00 bash[28005]: audit 2026-03-10T07:59:35.367536+0000 mon.b (mon.1) 684 : audit [INF] from='client.? 192.168.123.100:0/2557351966' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:36.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:36 vm00 bash[28005]: audit 2026-03-10T07:59:35.371751+0000 mon.a (mon.0) 3645 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:36.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:36 vm00 bash[28005]: audit 2026-03-10T07:59:35.371751+0000 mon.a (mon.0) 3645 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:36.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:36 vm03 bash[23382]: cluster 2026-03-10T07:59:35.028699+0000 mgr.y (mgr.24407) 1273 : cluster [DBG] pgmap v1710: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:59:36.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:36 vm03 bash[23382]: cluster 2026-03-10T07:59:35.028699+0000 mgr.y (mgr.24407) 1273 : cluster [DBG] pgmap v1710: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:59:36.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:36 vm03 bash[23382]: audit 2026-03-10T07:59:35.307650+0000 mon.a (mon.0) 3643 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T07:59:36.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:36 vm03 bash[23382]: audit 2026-03-10T07:59:35.307650+0000 mon.a (mon.0) 3643 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T07:59:36.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:36 vm03 bash[23382]: cluster 2026-03-10T07:59:35.314873+0000 mon.a (mon.0) 3644 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in 2026-03-10T07:59:36.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:36 vm03 bash[23382]: cluster 2026-03-10T07:59:35.314873+0000 mon.a (mon.0) 3644 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in 2026-03-10T07:59:36.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:36 vm03 bash[23382]: audit 2026-03-10T07:59:35.367536+0000 mon.b (mon.1) 684 : audit [INF] from='client.? 192.168.123.100:0/2557351966' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:36.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:36 vm03 bash[23382]: audit 2026-03-10T07:59:35.367536+0000 mon.b (mon.1) 684 : audit [INF] from='client.? 192.168.123.100:0/2557351966' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:36.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:36 vm03 bash[23382]: audit 2026-03-10T07:59:35.371751+0000 mon.a (mon.0) 3645 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:36.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:36 vm03 bash[23382]: audit 2026-03-10T07:59:35.371751+0000 mon.a (mon.0) 3645 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T07:59:37.061 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:37.068+0000 7efca0ff9640 1 -- 192.168.123.100:0/1325597710 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool 848a4d04-314a-4289-950b-2472b7cc83f9 v775) ==== 221+0+0 (secure 0 0 0) 0x7efc9006bb50 con 0x7efca41074d0 2026-03-10T07:59:37.062 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:37.072+0000 7efca0ff9640 1 -- 192.168.123.100:0/1325597710 <== mon.0 v2:192.168.123.100:3300/0 8 ==== osd_map(775..775 src has 258..775) ==== 628+0+0 (secure 0 0 0) 0x7efc9005ec90 con 0x7efca41074d0 2026-03-10T07:59:37.063 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:37.072+0000 7efca0ff9640 1 -- 192.168.123.100:0/1325597710 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=776}) -- 0x7efc6c084370 con 0x7efca41074d0 2026-03-10T07:59:37.118 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:37.124+0000 7efcaa1f7640 1 -- 192.168.123.100:0/1325597710 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"} v 0) -- 0x7efc700046e0 con 0x7efca41074d0 2026-03-10T07:59:37.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:37 vm00 bash[28005]: audit 2026-03-10T07:59:36.311516+0000 mon.a (mon.0) 3646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T07:59:37.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:37 vm00 bash[28005]: audit 2026-03-10T07:59:36.311516+0000 mon.a (mon.0) 3646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T07:59:37.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:37 vm00 bash[28005]: cluster 2026-03-10T07:59:36.316173+0000 mon.a (mon.0) 3647 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in 2026-03-10T07:59:37.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:37 vm00 bash[28005]: cluster 2026-03-10T07:59:36.316173+0000 mon.a (mon.0) 3647 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in 2026-03-10T07:59:37.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:37 vm00 bash[28005]: audit 2026-03-10T07:59:36.479349+0000 mon.a (mon.0) 3648 : audit [INF] from='client.? 192.168.123.100:0/1325597710' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T07:59:37.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:37 vm00 bash[28005]: audit 2026-03-10T07:59:36.479349+0000 mon.a (mon.0) 3648 : audit [INF] from='client.? 
192.168.123.100:0/1325597710' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T07:59:37.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:37 vm00 bash[28005]: cluster 2026-03-10T07:59:37.068719+0000 mon.a (mon.0) 3649 : cluster [INF] pool '848a4d04-314a-4289-950b-2472b7cc83f9' no longer out of quota; removing NO_QUOTA flag 2026-03-10T07:59:37.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:37 vm00 bash[28005]: cluster 2026-03-10T07:59:37.068719+0000 mon.a (mon.0) 3649 : cluster [INF] pool '848a4d04-314a-4289-950b-2472b7cc83f9' no longer out of quota; removing NO_QUOTA flag 2026-03-10T07:59:37.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:37 vm00 bash[28005]: cluster 2026-03-10T07:59:37.068967+0000 mon.a (mon.0) 3650 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T07:59:37.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:37 vm00 bash[28005]: cluster 2026-03-10T07:59:37.068967+0000 mon.a (mon.0) 3650 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T07:59:37.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:37 vm00 bash[28005]: audit 2026-03-10T07:59:37.072568+0000 mon.a (mon.0) 3651 : audit [INF] from='client.? 192.168.123.100:0/1325597710' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T07:59:37.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:37 vm00 bash[28005]: audit 2026-03-10T07:59:37.072568+0000 mon.a (mon.0) 3651 : audit [INF] from='client.? 192.168.123.100:0/1325597710' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T07:59:37.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:37 vm00 bash[28005]: cluster 2026-03-10T07:59:37.076968+0000 mon.a (mon.0) 3652 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-10T07:59:37.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:37 vm00 bash[28005]: cluster 2026-03-10T07:59:37.076968+0000 mon.a (mon.0) 3652 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-10T07:59:37.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:37 vm00 bash[28005]: audit 2026-03-10T07:59:37.129299+0000 mon.a (mon.0) 3653 : audit [INF] from='client.? 192.168.123.100:0/1325597710' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T07:59:37.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:37 vm00 bash[28005]: audit 2026-03-10T07:59:37.129299+0000 mon.a (mon.0) 3653 : audit [INF] from='client.? 192.168.123.100:0/1325597710' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T07:59:37.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:37 vm00 bash[20701]: audit 2026-03-10T07:59:36.311516+0000 mon.a (mon.0) 3646 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T07:59:37.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:37 vm00 bash[20701]: audit 2026-03-10T07:59:36.311516+0000 mon.a (mon.0) 3646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T07:59:37.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:37 vm00 bash[20701]: cluster 2026-03-10T07:59:36.316173+0000 mon.a (mon.0) 3647 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in 2026-03-10T07:59:37.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:37 vm00 bash[20701]: cluster 2026-03-10T07:59:36.316173+0000 mon.a (mon.0) 3647 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in 2026-03-10T07:59:37.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:37 vm00 bash[20701]: audit 2026-03-10T07:59:36.479349+0000 mon.a (mon.0) 3648 : audit [INF] from='client.? 192.168.123.100:0/1325597710' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T07:59:37.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:37 vm00 bash[20701]: audit 2026-03-10T07:59:36.479349+0000 mon.a (mon.0) 3648 : audit [INF] from='client.? 192.168.123.100:0/1325597710' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T07:59:37.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:37 vm00 bash[20701]: cluster 2026-03-10T07:59:37.068719+0000 mon.a (mon.0) 3649 : cluster [INF] pool '848a4d04-314a-4289-950b-2472b7cc83f9' no longer out of quota; removing NO_QUOTA flag 2026-03-10T07:59:37.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:37 vm00 bash[20701]: cluster 2026-03-10T07:59:37.068719+0000 mon.a (mon.0) 3649 : cluster [INF] pool '848a4d04-314a-4289-950b-2472b7cc83f9' no longer out of quota; removing NO_QUOTA flag 2026-03-10T07:59:37.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:37 vm00 bash[20701]: cluster 2026-03-10T07:59:37.068967+0000 mon.a (mon.0) 3650 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T07:59:37.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:37 vm00 bash[20701]: cluster 2026-03-10T07:59:37.068967+0000 mon.a (mon.0) 3650 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T07:59:37.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:37 vm00 bash[20701]: audit 2026-03-10T07:59:37.072568+0000 mon.a (mon.0) 3651 : audit [INF] from='client.? 192.168.123.100:0/1325597710' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T07:59:37.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:37 vm00 bash[20701]: audit 2026-03-10T07:59:37.072568+0000 mon.a (mon.0) 3651 : audit [INF] from='client.? 
192.168.123.100:0/1325597710' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T07:59:37.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:37 vm00 bash[20701]: cluster 2026-03-10T07:59:37.076968+0000 mon.a (mon.0) 3652 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-10T07:59:37.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:37 vm00 bash[20701]: cluster 2026-03-10T07:59:37.076968+0000 mon.a (mon.0) 3652 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-10T07:59:37.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:37 vm00 bash[20701]: audit 2026-03-10T07:59:37.129299+0000 mon.a (mon.0) 3653 : audit [INF] from='client.? 192.168.123.100:0/1325597710' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T07:59:37.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:37 vm00 bash[20701]: audit 2026-03-10T07:59:37.129299+0000 mon.a (mon.0) 3653 : audit [INF] from='client.? 192.168.123.100:0/1325597710' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T07:59:37.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:37 vm03 bash[23382]: audit 2026-03-10T07:59:36.311516+0000 mon.a (mon.0) 3646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T07:59:37.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:37 vm03 bash[23382]: audit 2026-03-10T07:59:36.311516+0000 mon.a (mon.0) 3646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T07:59:37.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:37 vm03 bash[23382]: cluster 2026-03-10T07:59:36.316173+0000 mon.a (mon.0) 3647 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in 2026-03-10T07:59:37.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:37 vm03 bash[23382]: cluster 2026-03-10T07:59:36.316173+0000 mon.a (mon.0) 3647 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in 2026-03-10T07:59:37.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:37 vm03 bash[23382]: audit 2026-03-10T07:59:36.479349+0000 mon.a (mon.0) 3648 : audit [INF] from='client.? 192.168.123.100:0/1325597710' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T07:59:37.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:37 vm03 bash[23382]: audit 2026-03-10T07:59:36.479349+0000 mon.a (mon.0) 3648 : audit [INF] from='client.? 
192.168.123.100:0/1325597710' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T07:59:37.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:37 vm03 bash[23382]: cluster 2026-03-10T07:59:37.068719+0000 mon.a (mon.0) 3649 : cluster [INF] pool '848a4d04-314a-4289-950b-2472b7cc83f9' no longer out of quota; removing NO_QUOTA flag 2026-03-10T07:59:37.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:37 vm03 bash[23382]: cluster 2026-03-10T07:59:37.068719+0000 mon.a (mon.0) 3649 : cluster [INF] pool '848a4d04-314a-4289-950b-2472b7cc83f9' no longer out of quota; removing NO_QUOTA flag 2026-03-10T07:59:37.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:37 vm03 bash[23382]: cluster 2026-03-10T07:59:37.068967+0000 mon.a (mon.0) 3650 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T07:59:37.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:37 vm03 bash[23382]: cluster 2026-03-10T07:59:37.068967+0000 mon.a (mon.0) 3650 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T07:59:37.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:37 vm03 bash[23382]: audit 2026-03-10T07:59:37.072568+0000 mon.a (mon.0) 3651 : audit [INF] from='client.? 192.168.123.100:0/1325597710' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T07:59:37.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:37 vm03 bash[23382]: audit 2026-03-10T07:59:37.072568+0000 mon.a (mon.0) 3651 : audit [INF] from='client.? 192.168.123.100:0/1325597710' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T07:59:37.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:37 vm03 bash[23382]: cluster 2026-03-10T07:59:37.076968+0000 mon.a (mon.0) 3652 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-10T07:59:37.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:37 vm03 bash[23382]: cluster 2026-03-10T07:59:37.076968+0000 mon.a (mon.0) 3652 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-10T07:59:37.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:37 vm03 bash[23382]: audit 2026-03-10T07:59:37.129299+0000 mon.a (mon.0) 3653 : audit [INF] from='client.? 192.168.123.100:0/1325597710' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T07:59:37.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:37 vm03 bash[23382]: audit 2026-03-10T07:59:37.129299+0000 mon.a (mon.0) 3653 : audit [INF] from='client.? 
192.168.123.100:0/1325597710' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T07:59:38.065 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.072+0000 7efca0ff9640 1 -- 192.168.123.100:0/1325597710 <== mon.0 v2:192.168.123.100:3300/0 9 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool 848a4d04-314a-4289-950b-2472b7cc83f9 v776) ==== 221+0+0 (secure 0 0 0) 0x7efc900027f0 con 0x7efca41074d0 2026-03-10T07:59:38.065 INFO:tasks.workunit.client.0.vm00.stderr:set-quota max_objects = 0 for pool 848a4d04-314a-4289-950b-2472b7cc83f9 2026-03-10T07:59:38.066 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.072+0000 7efca0ff9640 1 -- 192.168.123.100:0/1325597710 <== mon.0 v2:192.168.123.100:3300/0 10 ==== osd_map(776..776 src has 258..776) ==== 628+0+0 (secure 0 0 0) 0x7efc90098280 con 0x7efca41074d0 2026-03-10T07:59:38.067 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.076+0000 7efcaa1f7640 1 -- 192.168.123.100:0/1325597710 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7efc6c0777e0 msgr2=0x7efc6c079ca0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:38.067 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.076+0000 7efcaa1f7640 1 --2- 192.168.123.100:0/1325597710 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7efc6c0777e0 0x7efc6c079ca0 secure :-1 s=READY pgs=4323 cs=0 l=1 rev1=1 crypto rx=0x7efc94002800 tx=0x7efc94009290 comp rx=0 tx=0).stop 2026-03-10T07:59:38.067 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.076+0000 7efcaa1f7640 1 -- 192.168.123.100:0/1325597710 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efca41074d0 msgr2=0x7efca4105690 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:38.067 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.076+0000 7efcaa1f7640 1 --2- 192.168.123.100:0/1325597710 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efca41074d0 0x7efca4105690 secure :-1 s=READY pgs=3137 cs=0 l=1 rev1=1 crypto rx=0x7efc9000ed30 tx=0x7efc9000c6a0 comp rx=0 tx=0).stop 2026-03-10T07:59:38.068 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.076+0000 7efcaa1f7640 1 -- 192.168.123.100:0/1325597710 shutdown_connections 2026-03-10T07:59:38.068 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.076+0000 7efcaa1f7640 1 --2- 192.168.123.100:0/1325597710 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7efc6c0777e0 0x7efc6c079ca0 unknown :-1 s=CLOSED pgs=4323 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:38.068 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.076+0000 7efcaa1f7640 1 --2- 192.168.123.100:0/1325597710 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7efca410b9c0 0x7efca41a3b90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:38.068 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.076+0000 7efcaa1f7640 1 --2- 192.168.123.100:0/1325597710 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efca41074d0 0x7efca4105690 unknown :-1 s=CLOSED pgs=3137 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:38.068 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.076+0000 7efcaa1f7640 1 --2- 192.168.123.100:0/1325597710 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7efca4069380 0x7efca4105150 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:38.068 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.076+0000 7efcaa1f7640 1 -- 192.168.123.100:0/1325597710 >> 192.168.123.100:0/1325597710 conn(0x7efca40fc820 msgr2=0x7efca410d080 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:59:38.068 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.076+0000 7efcaa1f7640 1 -- 192.168.123.100:0/1325597710 shutdown_connections 2026-03-10T07:59:38.068 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.076+0000 7efcaa1f7640 1 -- 192.168.123.100:0/1325597710 wait complete. 2026-03-10T07:59:38.082 INFO:tasks.workunit.client.0.vm00.stderr:+ wait 131481 2026-03-10T07:59:38.082 INFO:tasks.workunit.client.0.vm00.stderr:+ [ 0 -ne 0 ] 2026-03-10T07:59:38.082 INFO:tasks.workunit.client.0.vm00.stderr:+ true 2026-03-10T07:59:38.082 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p 848a4d04-314a-4289-950b-2472b7cc83f9 put three /etc/passwd 2026-03-10T07:59:38.106 INFO:tasks.workunit.client.0.vm00.stderr:+ uuidgen 2026-03-10T07:59:38.106 INFO:tasks.workunit.client.0.vm00.stderr:+ pp=a16f944b-49df-4ed0-bee4-6bfebe190ca8 2026-03-10T07:59:38.106 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool create a16f944b-49df-4ed0-bee4-6bfebe190ca8 12 2026-03-10T07:59:38.161 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.168+0000 7f6e2c285640 1 -- 192.168.123.100:0/3656376576 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6e241074d0 msgr2=0x7f6e24101040 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:38.161 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.168+0000 7f6e2c285640 1 --2- 192.168.123.100:0/3656376576 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6e241074d0 0x7f6e24101040 secure :-1 s=READY pgs=3141 cs=0 l=1 rev1=1 crypto rx=0x7f6e14009a30 tx=0x7f6e1401c920 comp rx=0 tx=0).stop 2026-03-10T07:59:38.161 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.168+0000 7f6e2c285640 1 -- 192.168.123.100:0/3656376576 shutdown_connections 2026-03-10T07:59:38.161 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.168+0000 7f6e2c285640 1 --2- 192.168.123.100:0/3656376576 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6e2410b9c0 0x7f6e2410ddb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:38.161 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.168+0000 7f6e2c285640 1 --2- 192.168.123.100:0/3656376576 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6e241074d0 0x7f6e24101040 unknown :-1 s=CLOSED pgs=3141 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:38.161 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.168+0000 7f6e2c285640 1 --2- 192.168.123.100:0/3656376576 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6e24069380 0x7f6e24100b00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:38.161 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.168+0000 7f6e2c285640 1 -- 192.168.123.100:0/3656376576 >> 192.168.123.100:0/3656376576 conn(0x7f6e240fc820 msgr2=0x7f6e240fec40 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-10T07:59:38.161 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.168+0000 7f6e2c285640 1 -- 192.168.123.100:0/3656376576 shutdown_connections 2026-03-10T07:59:38.161 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.168+0000 7f6e2c285640 1 -- 192.168.123.100:0/3656376576 wait complete. 2026-03-10T07:59:38.162 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.168+0000 7f6e2c285640 1 Processor -- start 2026-03-10T07:59:38.162 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.168+0000 7f6e2c285640 1 -- start start 2026-03-10T07:59:38.162 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.168+0000 7f6e2c285640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6e24069380 0x7f6e2419af70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:59:38.162 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.168+0000 7f6e2c285640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6e241074d0 0x7f6e2419b4b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:59:38.162 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.168+0000 7f6e2c285640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6e2410b9c0 0x7f6e2419f840 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:59:38.162 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.168+0000 7f6e2c285640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f6e24110890 con 0x7f6e24069380 2026-03-10T07:59:38.162 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.168+0000 7f6e2c285640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f6e24110710 con 0x7f6e2410b9c0 2026-03-10T07:59:38.162 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.168+0000 7f6e2c285640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f6e24110a10 con 0x7f6e241074d0 2026-03-10T07:59:38.162 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.168+0000 7f6e297f9640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6e241074d0 0x7f6e2419b4b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:59:38.162 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.172+0000 7f6e2a7fb640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6e2410b9c0 0x7f6e2419f840 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:59:38.162 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.168+0000 7f6e297f9640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6e241074d0 0x7f6e2419b4b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:57992/0 (socket says 192.168.123.100:57992) 2026-03-10T07:59:38.162 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.172+0000 7f6e297f9640 1 -- 192.168.123.100:0/1042850585 learned_addr learned my addr 192.168.123.100:0/1042850585 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:59:38.162 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.172+0000 7f6e297f9640 1 -- 192.168.123.100:0/1042850585 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6e2410b9c0 msgr2=0x7f6e2419f840 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:38.162 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.172+0000 7f6e297f9640 1 --2- 192.168.123.100:0/1042850585 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6e2410b9c0 0x7f6e2419f840 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:38.162 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.172+0000 7f6e297f9640 1 -- 192.168.123.100:0/1042850585 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6e24069380 msgr2=0x7f6e2419af70 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T07:59:38.162 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.172+0000 7f6e297f9640 1 --2- 192.168.123.100:0/1042850585 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6e24069380 0x7f6e2419af70 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:38.162 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.172+0000 7f6e297f9640 1 -- 192.168.123.100:0/1042850585 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6e2419ffc0 con 0x7f6e241074d0 2026-03-10T07:59:38.162 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.172+0000 7f6e297f9640 1 --2- 192.168.123.100:0/1042850585 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6e241074d0 0x7f6e2419b4b0 secure :-1 s=READY pgs=3140 cs=0 l=1 rev1=1 crypto rx=0x7f6e14009a00 tx=0x7f6e140ae650 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:59:38.162 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.172+0000 7f6e12ffd640 1 -- 192.168.123.100:0/1042850585 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f6e14004070 con 0x7f6e241074d0 2026-03-10T07:59:38.163 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.172+0000 7f6e12ffd640 1 -- 192.168.123.100:0/1042850585 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f6e140ae840 con 0x7f6e241074d0 2026-03-10T07:59:38.163 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.172+0000 7f6e12ffd640 1 -- 192.168.123.100:0/1042850585 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f6e14004a10 con 0x7f6e241074d0 2026-03-10T07:59:38.163 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.172+0000 7f6e2c285640 1 -- 192.168.123.100:0/1042850585 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f6e241a0250 con 0x7f6e241074d0 2026-03-10T07:59:38.163 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.172+0000 7f6e2c285640 1 -- 192.168.123.100:0/1042850585 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f6e241a7af0 con 0x7f6e241074d0 2026-03-10T07:59:38.164 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.172+0000 7f6e12ffd640 1 -- 192.168.123.100:0/1042850585 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f6e14004210 con 0x7f6e241074d0 2026-03-10T07:59:38.164 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.172+0000 7f6e2c285640 1 -- 192.168.123.100:0/1042850585 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6dec005190 con 0x7f6e241074d0 2026-03-10T07:59:38.167 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.172+0000 7f6e12ffd640 1 --2- 192.168.123.100:0/1042850585 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f6e00077710 0x7f6e00079bd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:59:38.167 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.172+0000 7f6e12ffd640 1 -- 192.168.123.100:0/1042850585 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(776..776 src has 258..776) ==== 9073+0+0 (secure 0 0 0) 0x7f6e14133770 con 0x7f6e241074d0 2026-03-10T07:59:38.167 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.176+0000 7f6e29ffa640 1 --2- 192.168.123.100:0/1042850585 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f6e00077710 0x7f6e00079bd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:59:38.167 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.176+0000 7f6e12ffd640 1 -- 192.168.123.100:0/1042850585 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f6e14016630 con 0x7f6e241074d0 2026-03-10T07:59:38.168 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.176+0000 7f6e29ffa640 1 --2- 192.168.123.100:0/1042850585 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f6e00077710 0x7f6e00079bd0 secure :-1 s=READY pgs=4325 cs=0 l=1 rev1=1 crypto rx=0x7f6e180097b0 tx=0x7f6e18006d30 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:59:38.252 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:38.260+0000 7f6e2c285640 1 -- 192.168.123.100:0/1042850585 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12} v 0) -- 0x7f6dec005480 con 0x7f6e241074d0 2026-03-10T07:59:38.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:38 vm00 bash[28005]: cluster 2026-03-10T07:59:37.029036+0000 mgr.y (mgr.24407) 1274 : cluster [DBG] pgmap v1713: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:59:38.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:38 vm00 bash[28005]: cluster 2026-03-10T07:59:37.029036+0000 mgr.y (mgr.24407) 1274 : cluster [DBG] pgmap v1713: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:59:38.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:38 vm00 bash[28005]: audit 2026-03-10T07:59:38.076367+0000 mon.a (mon.0) 3654 : audit [INF] from='client.? 192.168.123.100:0/1325597710' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T07:59:38.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:38 vm00 bash[28005]: audit 2026-03-10T07:59:38.076367+0000 mon.a (mon.0) 3654 : audit [INF] from='client.? 
192.168.123.100:0/1325597710' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T07:59:38.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:38 vm00 bash[28005]: cluster 2026-03-10T07:59:38.096621+0000 mon.a (mon.0) 3655 : cluster [DBG] osdmap e776: 8 total, 8 up, 8 in 2026-03-10T07:59:38.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:38 vm00 bash[28005]: cluster 2026-03-10T07:59:38.096621+0000 mon.a (mon.0) 3655 : cluster [DBG] osdmap e776: 8 total, 8 up, 8 in 2026-03-10T07:59:38.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:38 vm00 bash[28005]: audit 2026-03-10T07:59:38.263980+0000 mon.c (mon.2) 491 : audit [INF] from='client.? 192.168.123.100:0/1042850585' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:38.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:38 vm00 bash[28005]: audit 2026-03-10T07:59:38.263980+0000 mon.c (mon.2) 491 : audit [INF] from='client.? 192.168.123.100:0/1042850585' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:38.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:38 vm00 bash[28005]: audit 2026-03-10T07:59:38.264360+0000 mon.a (mon.0) 3656 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:38.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:38 vm00 bash[28005]: audit 2026-03-10T07:59:38.264360+0000 mon.a (mon.0) 3656 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:38.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:38 vm00 bash[20701]: cluster 2026-03-10T07:59:37.029036+0000 mgr.y (mgr.24407) 1274 : cluster [DBG] pgmap v1713: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:59:38.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:38 vm00 bash[20701]: cluster 2026-03-10T07:59:37.029036+0000 mgr.y (mgr.24407) 1274 : cluster [DBG] pgmap v1713: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:59:38.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:38 vm00 bash[20701]: audit 2026-03-10T07:59:38.076367+0000 mon.a (mon.0) 3654 : audit [INF] from='client.? 192.168.123.100:0/1325597710' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T07:59:38.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:38 vm00 bash[20701]: audit 2026-03-10T07:59:38.076367+0000 mon.a (mon.0) 3654 : audit [INF] from='client.? 
192.168.123.100:0/1325597710' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T07:59:38.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:38 vm00 bash[20701]: cluster 2026-03-10T07:59:38.096621+0000 mon.a (mon.0) 3655 : cluster [DBG] osdmap e776: 8 total, 8 up, 8 in 2026-03-10T07:59:38.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:38 vm00 bash[20701]: cluster 2026-03-10T07:59:38.096621+0000 mon.a (mon.0) 3655 : cluster [DBG] osdmap e776: 8 total, 8 up, 8 in 2026-03-10T07:59:38.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:38 vm00 bash[20701]: audit 2026-03-10T07:59:38.263980+0000 mon.c (mon.2) 491 : audit [INF] from='client.? 192.168.123.100:0/1042850585' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:38.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:38 vm00 bash[20701]: audit 2026-03-10T07:59:38.263980+0000 mon.c (mon.2) 491 : audit [INF] from='client.? 192.168.123.100:0/1042850585' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:38.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:38 vm00 bash[20701]: audit 2026-03-10T07:59:38.264360+0000 mon.a (mon.0) 3656 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:38.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:38 vm00 bash[20701]: audit 2026-03-10T07:59:38.264360+0000 mon.a (mon.0) 3656 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:38.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:38 vm03 bash[23382]: cluster 2026-03-10T07:59:37.029036+0000 mgr.y (mgr.24407) 1274 : cluster [DBG] pgmap v1713: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:59:38.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:38 vm03 bash[23382]: cluster 2026-03-10T07:59:37.029036+0000 mgr.y (mgr.24407) 1274 : cluster [DBG] pgmap v1713: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T07:59:38.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:38 vm03 bash[23382]: audit 2026-03-10T07:59:38.076367+0000 mon.a (mon.0) 3654 : audit [INF] from='client.? 192.168.123.100:0/1325597710' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T07:59:38.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:38 vm03 bash[23382]: audit 2026-03-10T07:59:38.076367+0000 mon.a (mon.0) 3654 : audit [INF] from='client.? 
192.168.123.100:0/1325597710' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T07:59:38.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:38 vm03 bash[23382]: cluster 2026-03-10T07:59:38.096621+0000 mon.a (mon.0) 3655 : cluster [DBG] osdmap e776: 8 total, 8 up, 8 in 2026-03-10T07:59:38.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:38 vm03 bash[23382]: cluster 2026-03-10T07:59:38.096621+0000 mon.a (mon.0) 3655 : cluster [DBG] osdmap e776: 8 total, 8 up, 8 in 2026-03-10T07:59:38.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:38 vm03 bash[23382]: audit 2026-03-10T07:59:38.263980+0000 mon.c (mon.2) 491 : audit [INF] from='client.? 192.168.123.100:0/1042850585' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:38.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:38 vm03 bash[23382]: audit 2026-03-10T07:59:38.263980+0000 mon.c (mon.2) 491 : audit [INF] from='client.? 192.168.123.100:0/1042850585' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:38.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:38 vm03 bash[23382]: audit 2026-03-10T07:59:38.264360+0000 mon.a (mon.0) 3656 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:38.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:38 vm03 bash[23382]: audit 2026-03-10T07:59:38.264360+0000 mon.a (mon.0) 3656 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:39.074 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.084+0000 7f6e12ffd640 1 -- 192.168.123.100:0/1042850585 <== mon.2 v2:192.168.123.100:3301/0 7 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]=0 pool 'a16f944b-49df-4ed0-bee4-6bfebe190ca8' created v777) ==== 176+0+0 (secure 0 0 0) 0x7f6e140ff9f0 con 0x7f6e241074d0 2026-03-10T07:59:39.125 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.132+0000 7f6e2c285640 1 -- 192.168.123.100:0/1042850585 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12} v 0) -- 0x7f6dec0049a0 con 0x7f6e241074d0 2026-03-10T07:59:39.126 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.132+0000 7f6e12ffd640 1 -- 192.168.123.100:0/1042850585 <== mon.2 v2:192.168.123.100:3301/0 8 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]=0 pool 'a16f944b-49df-4ed0-bee4-6bfebe190ca8' already exists v777) ==== 183+0+0 (secure 0 0 0) 0x7f6e141048a0 con 0x7f6e241074d0 2026-03-10T07:59:39.126 INFO:tasks.workunit.client.0.vm00.stderr:pool 'a16f944b-49df-4ed0-bee4-6bfebe190ca8' already exists 2026-03-10T07:59:39.127 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.136+0000 7f6e2c285640 1 -- 192.168.123.100:0/1042850585 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f6e00077710 msgr2=0x7f6e00079bd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:39.127 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.136+0000 7f6e2c285640 1 --2- 192.168.123.100:0/1042850585 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f6e00077710 0x7f6e00079bd0 secure :-1 s=READY pgs=4325 cs=0 l=1 rev1=1 crypto rx=0x7f6e180097b0 tx=0x7f6e18006d30 comp rx=0 tx=0).stop 2026-03-10T07:59:39.128 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.136+0000 7f6e2c285640 1 -- 192.168.123.100:0/1042850585 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6e241074d0 msgr2=0x7f6e2419b4b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:39.128 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.136+0000 7f6e2c285640 1 --2- 192.168.123.100:0/1042850585 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6e241074d0 0x7f6e2419b4b0 secure :-1 s=READY pgs=3140 cs=0 l=1 rev1=1 crypto rx=0x7f6e14009a00 tx=0x7f6e140ae650 comp rx=0 tx=0).stop 2026-03-10T07:59:39.128 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.136+0000 7f6e2c285640 1 -- 192.168.123.100:0/1042850585 shutdown_connections 2026-03-10T07:59:39.128 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.136+0000 7f6e2c285640 1 --2- 192.168.123.100:0/1042850585 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f6e00077710 0x7f6e00079bd0 unknown :-1 s=CLOSED pgs=4325 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:39.128 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.136+0000 7f6e2c285640 1 --2- 192.168.123.100:0/1042850585 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6e2410b9c0 0x7f6e2419f840 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 
comp rx=0 tx=0).stop 2026-03-10T07:59:39.128 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.136+0000 7f6e2c285640 1 --2- 192.168.123.100:0/1042850585 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6e241074d0 0x7f6e2419b4b0 unknown :-1 s=CLOSED pgs=3140 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:39.128 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.136+0000 7f6e2c285640 1 --2- 192.168.123.100:0/1042850585 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6e24069380 0x7f6e2419af70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:39.128 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.136+0000 7f6e2c285640 1 -- 192.168.123.100:0/1042850585 >> 192.168.123.100:0/1042850585 conn(0x7f6e240fc820 msgr2=0x7f6e2410da50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:59:39.128 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.136+0000 7f6e2c285640 1 -- 192.168.123.100:0/1042850585 shutdown_connections 2026-03-10T07:59:39.128 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.136+0000 7f6e2c285640 1 -- 192.168.123.100:0/1042850585 wait complete. 2026-03-10T07:59:39.138 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool application enable a16f944b-49df-4ed0-bee4-6bfebe190ca8 rados 2026-03-10T07:59:39.194 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734eeb9640 1 -- 192.168.123.100:0/166802252 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f73481057d0 msgr2=0x7f7348109820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:39.194 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734eeb9640 1 --2- 192.168.123.100:0/166802252 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f73481057d0 0x7f7348109820 secure :-1 s=READY pgs=3142 cs=0 l=1 rev1=1 crypto rx=0x7f7338009a80 tx=0x7f733801c960 comp rx=0 tx=0).stop 2026-03-10T07:59:39.194 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734eeb9640 1 -- 192.168.123.100:0/166802252 shutdown_connections 2026-03-10T07:59:39.194 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734eeb9640 1 --2- 192.168.123.100:0/166802252 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f7348109f50 0x7f7348111ad0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:39.194 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734eeb9640 1 --2- 192.168.123.100:0/166802252 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f73481057d0 0x7f7348109820 unknown :-1 s=CLOSED pgs=3142 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:39.194 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734eeb9640 1 --2- 192.168.123.100:0/166802252 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7348104e20 0x7f7348105200 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:39.194 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734eeb9640 1 -- 192.168.123.100:0/166802252 >> 192.168.123.100:0/166802252 conn(0x7f7348100880 msgr2=0x7f7348102ca0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:59:39.194 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734eeb9640 1 -- 192.168.123.100:0/166802252 shutdown_connections 2026-03-10T07:59:39.195 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734eeb9640 1 -- 192.168.123.100:0/166802252 wait complete. 2026-03-10T07:59:39.195 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734eeb9640 1 Processor -- start 2026-03-10T07:59:39.195 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734eeb9640 1 -- start start 2026-03-10T07:59:39.195 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734eeb9640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7348104e20 0x7f734819f0b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:59:39.195 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734eeb9640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f73481057d0 0x7f734819f5f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:59:39.195 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734eeb9640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7348109f50 0x7f73481a3980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:59:39.195 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734eeb9640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f7348116aa0 con 0x7f7348109f50 2026-03-10T07:59:39.195 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734eeb9640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f7348116920 con 0x7f73481057d0 2026-03-10T07:59:39.195 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734eeb9640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f7348116c20 con 0x7f7348104e20 2026-03-10T07:59:39.195 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734cc2e640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7348104e20 0x7f734819f0b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:59:39.195 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734d42f640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7348109f50 0x7f73481a3980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:59:39.195 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734cc2e640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7348104e20 0x7f734819f0b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:57994/0 (socket says 192.168.123.100:57994) 2026-03-10T07:59:39.196 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734cc2e640 1 -- 192.168.123.100:0/1213047435 learned_addr learned my addr 192.168.123.100:0/1213047435 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-10T07:59:39.196 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734d42f640 1 -- 192.168.123.100:0/1213047435 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7348104e20 msgr2=0x7f734819f0b0 unknown :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:39.196 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f733ffff640 1 --2- 192.168.123.100:0/1213047435 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f73481057d0 0x7f734819f5f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:59:39.196 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734d42f640 1 --2- 192.168.123.100:0/1213047435 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7348104e20 0x7f734819f0b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:39.196 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734d42f640 1 -- 192.168.123.100:0/1213047435 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f73481057d0 msgr2=0x7f734819f5f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:39.196 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734d42f640 1 --2- 192.168.123.100:0/1213047435 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f73481057d0 0x7f734819f5f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:39.196 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734d42f640 1 -- 192.168.123.100:0/1213047435 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f73481a4100 con 0x7f7348109f50 2026-03-10T07:59:39.196 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f733ffff640 1 --2- 192.168.123.100:0/1213047435 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f73481057d0 0x7f734819f5f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T07:59:39.196 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734cc2e640 1 --2- 192.168.123.100:0/1213047435 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7348104e20 0x7f734819f0b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
2026-03-10T07:59:39.196 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734d42f640 1 --2- 192.168.123.100:0/1213047435 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7348109f50 0x7f73481a3980 secure :-1 s=READY pgs=3143 cs=0 l=1 rev1=1 crypto rx=0x7f734400efc0 tx=0x7f734400c490 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:59:39.196 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f733dffb640 1 -- 192.168.123.100:0/1213047435 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f7344019070 con 0x7f7348109f50 2026-03-10T07:59:39.196 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f733dffb640 1 -- 192.168.123.100:0/1213047435 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f734400ce50 con 0x7f7348109f50 2026-03-10T07:59:39.196 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f733dffb640 1 -- 192.168.123.100:0/1213047435 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f7344004790 con 0x7f7348109f50 2026-03-10T07:59:39.196 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734eeb9640 1 -- 192.168.123.100:0/1213047435 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f73481a4390 con 0x7f7348109f50 2026-03-10T07:59:39.196 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734eeb9640 1 -- 192.168.123.100:0/1213047435 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f73481abc30 con 0x7f7348109f50 2026-03-10T07:59:39.197 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.204+0000 7f734eeb9640 1 -- 192.168.123.100:0/1213047435 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f7310005190 con 0x7f7348109f50 2026-03-10T07:59:39.200 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.208+0000 7f733dffb640 1 -- 192.168.123.100:0/1213047435 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f7344005ce0 con 0x7f7348109f50 2026-03-10T07:59:39.200 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.208+0000 7f733dffb640 1 --2- 192.168.123.100:0/1213047435 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f73240777e0 0x7f7324079ca0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T07:59:39.200 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.208+0000 7f733dffb640 1 -- 192.168.123.100:0/1213047435 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(777..777 src has 258..777) ==== 9448+0+0 (secure 0 0 0) 0x7f734409a5b0 con 0x7f7348109f50 2026-03-10T07:59:39.200 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.208+0000 7f734cc2e640 1 --2- 192.168.123.100:0/1213047435 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f73240777e0 0x7f7324079ca0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T07:59:39.200 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.208+0000 7f733dffb640 1 -- 192.168.123.100:0/1213047435 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 
72+0+195034 (secure 0 0 0) 0x7f7344010040 con 0x7f7348109f50 2026-03-10T07:59:39.200 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.208+0000 7f734cc2e640 1 --2- 192.168.123.100:0/1213047435 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f73240777e0 0x7f7324079ca0 secure :-1 s=READY pgs=4326 cs=0 l=1 rev1=1 crypto rx=0x7f73300029e0 tx=0x7f7330004050 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T07:59:39.285 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:39.292+0000 7f734eeb9640 1 -- 192.168.123.100:0/1213047435 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"} v 0) -- 0x7f7310005480 con 0x7f7348109f50 2026-03-10T07:59:40.072 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:40.080+0000 7f733dffb640 1 -- 192.168.123.100:0/1213047435 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]=0 enabled application 'rados' on pool 'a16f944b-49df-4ed0-bee4-6bfebe190ca8' v778) ==== 213+0+0 (secure 0 0 0) 0x7f7344066740 con 0x7f7348109f50 2026-03-10T07:59:40.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:40.136+0000 7f734eeb9640 1 -- 192.168.123.100:0/1213047435 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"} v 0) -- 0x7f7310005c30 con 0x7f7348109f50 2026-03-10T07:59:40.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:40 vm00 bash[28005]: cluster 2026-03-10T07:59:39.029376+0000 mgr.y (mgr.24407) 1275 : cluster [DBG] pgmap v1716: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:59:40.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:40 vm00 bash[28005]: cluster 2026-03-10T07:59:39.029376+0000 mgr.y (mgr.24407) 1275 : cluster [DBG] pgmap v1716: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:59:40.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:40 vm00 bash[28005]: audit 2026-03-10T07:59:39.080262+0000 mon.a (mon.0) 3657 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]': finished 2026-03-10T07:59:40.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:40 vm00 bash[28005]: audit 2026-03-10T07:59:39.080262+0000 mon.a (mon.0) 3657 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]': finished 2026-03-10T07:59:40.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:40 vm00 bash[28005]: cluster 2026-03-10T07:59:39.103805+0000 mon.a (mon.0) 3658 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-10T07:59:40.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:40 vm00 bash[28005]: cluster 2026-03-10T07:59:39.103805+0000 mon.a (mon.0) 3658 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-10T07:59:40.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:40 vm00 bash[28005]: audit 2026-03-10T07:59:39.136699+0000 mon.c (mon.2) 492 : audit [INF] from='client.? 
192.168.123.100:0/1042850585' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:40.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:40 vm00 bash[28005]: audit 2026-03-10T07:59:39.136699+0000 mon.c (mon.2) 492 : audit [INF] from='client.? 192.168.123.100:0/1042850585' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:40.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:40 vm00 bash[28005]: audit 2026-03-10T07:59:39.137048+0000 mon.a (mon.0) 3659 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:40.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:40 vm00 bash[28005]: audit 2026-03-10T07:59:39.137048+0000 mon.a (mon.0) 3659 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:40.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:40 vm00 bash[28005]: audit 2026-03-10T07:59:39.296525+0000 mon.a (mon.0) 3660 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]: dispatch 2026-03-10T07:59:40.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:40 vm00 bash[28005]: audit 2026-03-10T07:59:39.296525+0000 mon.a (mon.0) 3660 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]: dispatch 2026-03-10T07:59:40.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:40 vm00 bash[20701]: cluster 2026-03-10T07:59:39.029376+0000 mgr.y (mgr.24407) 1275 : cluster [DBG] pgmap v1716: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:59:40.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:40 vm00 bash[20701]: cluster 2026-03-10T07:59:39.029376+0000 mgr.y (mgr.24407) 1275 : cluster [DBG] pgmap v1716: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:59:40.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:40 vm00 bash[20701]: audit 2026-03-10T07:59:39.080262+0000 mon.a (mon.0) 3657 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]': finished 2026-03-10T07:59:40.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:40 vm00 bash[20701]: audit 2026-03-10T07:59:39.080262+0000 mon.a (mon.0) 3657 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]': finished 2026-03-10T07:59:40.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:40 vm00 bash[20701]: cluster 2026-03-10T07:59:39.103805+0000 mon.a (mon.0) 3658 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-10T07:59:40.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:40 vm00 bash[20701]: cluster 2026-03-10T07:59:39.103805+0000 mon.a (mon.0) 3658 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-10T07:59:40.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:40 vm00 bash[20701]: audit 2026-03-10T07:59:39.136699+0000 mon.c (mon.2) 492 : audit [INF] from='client.? 192.168.123.100:0/1042850585' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:40.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:40 vm00 bash[20701]: audit 2026-03-10T07:59:39.136699+0000 mon.c (mon.2) 492 : audit [INF] from='client.? 192.168.123.100:0/1042850585' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:40.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:40 vm00 bash[20701]: audit 2026-03-10T07:59:39.137048+0000 mon.a (mon.0) 3659 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:40.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:40 vm00 bash[20701]: audit 2026-03-10T07:59:39.137048+0000 mon.a (mon.0) 3659 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:40.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:40 vm00 bash[20701]: audit 2026-03-10T07:59:39.296525+0000 mon.a (mon.0) 3660 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]: dispatch 2026-03-10T07:59:40.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:40 vm00 bash[20701]: audit 2026-03-10T07:59:39.296525+0000 mon.a (mon.0) 3660 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]: dispatch 2026-03-10T07:59:40.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:40 vm03 bash[23382]: cluster 2026-03-10T07:59:39.029376+0000 mgr.y (mgr.24407) 1275 : cluster [DBG] pgmap v1716: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:59:40.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:40 vm03 bash[23382]: cluster 2026-03-10T07:59:39.029376+0000 mgr.y (mgr.24407) 1275 : cluster [DBG] pgmap v1716: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T07:59:40.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:40 vm03 bash[23382]: audit 2026-03-10T07:59:39.080262+0000 mon.a (mon.0) 3657 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]': finished 2026-03-10T07:59:40.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:40 vm03 bash[23382]: audit 2026-03-10T07:59:39.080262+0000 mon.a (mon.0) 3657 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]': finished 2026-03-10T07:59:40.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:40 vm03 bash[23382]: cluster 2026-03-10T07:59:39.103805+0000 mon.a (mon.0) 3658 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-10T07:59:40.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:40 vm03 bash[23382]: cluster 2026-03-10T07:59:39.103805+0000 mon.a (mon.0) 3658 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-10T07:59:40.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:40 vm03 bash[23382]: audit 2026-03-10T07:59:39.136699+0000 mon.c (mon.2) 492 : audit [INF] from='client.? 192.168.123.100:0/1042850585' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:40.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:40 vm03 bash[23382]: audit 2026-03-10T07:59:39.136699+0000 mon.c (mon.2) 492 : audit [INF] from='client.? 192.168.123.100:0/1042850585' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:40.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:40 vm03 bash[23382]: audit 2026-03-10T07:59:39.137048+0000 mon.a (mon.0) 3659 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:40.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:40 vm03 bash[23382]: audit 2026-03-10T07:59:39.137048+0000 mon.a (mon.0) 3659 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pg_num": 12}]: dispatch 2026-03-10T07:59:40.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:40 vm03 bash[23382]: audit 2026-03-10T07:59:39.296525+0000 mon.a (mon.0) 3660 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]: dispatch 2026-03-10T07:59:40.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:40 vm03 bash[23382]: audit 2026-03-10T07:59:39.296525+0000 mon.a (mon.0) 3660 : audit [INF] from='client.? 
192.168.123.100:0/1213047435' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]: dispatch 2026-03-10T07:59:41.077 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.084+0000 7f733dffb640 1 -- 192.168.123.100:0/1213047435 <== mon.0 v2:192.168.123.100:3300/0 8 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]=0 enabled application 'rados' on pool 'a16f944b-49df-4ed0-bee4-6bfebe190ca8' v779) ==== 213+0+0 (secure 0 0 0) 0x7f734406b5f0 con 0x7f7348109f50 2026-03-10T07:59:41.077 INFO:tasks.workunit.client.0.vm00.stderr:enabled application 'rados' on pool 'a16f944b-49df-4ed0-bee4-6bfebe190ca8' 2026-03-10T07:59:41.079 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.088+0000 7f734eeb9640 1 -- 192.168.123.100:0/1213047435 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f73240777e0 msgr2=0x7f7324079ca0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:41.079 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.088+0000 7f734eeb9640 1 --2- 192.168.123.100:0/1213047435 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f73240777e0 0x7f7324079ca0 secure :-1 s=READY pgs=4326 cs=0 l=1 rev1=1 crypto rx=0x7f73300029e0 tx=0x7f7330004050 comp rx=0 tx=0).stop 2026-03-10T07:59:41.079 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.088+0000 7f734eeb9640 1 -- 192.168.123.100:0/1213047435 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7348109f50 msgr2=0x7f73481a3980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:41.079 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.088+0000 7f734eeb9640 1 --2- 192.168.123.100:0/1213047435 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7348109f50 0x7f73481a3980 secure :-1 s=READY pgs=3143 cs=0 l=1 rev1=1 crypto rx=0x7f734400efc0 tx=0x7f734400c490 comp rx=0 tx=0).stop 2026-03-10T07:59:41.079 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.088+0000 7f734eeb9640 1 -- 192.168.123.100:0/1213047435 shutdown_connections 2026-03-10T07:59:41.079 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.088+0000 7f734eeb9640 1 --2- 192.168.123.100:0/1213047435 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f73240777e0 0x7f7324079ca0 unknown :-1 s=CLOSED pgs=4326 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:41.079 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.088+0000 7f734eeb9640 1 --2- 192.168.123.100:0/1213047435 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7348109f50 0x7f73481a3980 unknown :-1 s=CLOSED pgs=3143 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:41.079 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.088+0000 7f734eeb9640 1 --2- 192.168.123.100:0/1213047435 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f73481057d0 0x7f734819f5f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:41.080 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.088+0000 7f734eeb9640 1 --2- 192.168.123.100:0/1213047435 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f7348104e20 0x7f734819f0b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:41.080 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.088+0000 7f734eeb9640 1 -- 192.168.123.100:0/1213047435 >> 192.168.123.100:0/1213047435 conn(0x7f7348100880 msgr2=0x7f7348100ed0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:59:41.080 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.088+0000 7f734eeb9640 1 -- 192.168.123.100:0/1213047435 shutdown_connections 2026-03-10T07:59:41.080 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.088+0000 7f734eeb9640 1 -- 192.168.123.100:0/1213047435 wait complete. 2026-03-10T07:59:41.096 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool set-quota a16f944b-49df-4ed0-bee4-6bfebe190ca8 max_objects 10 2026-03-10T07:59:41.155 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4263577640 1 -- 192.168.123.100:0/3548283999 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4264100ef0 msgr2=0x7f4264101370 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T07:59:41.155 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4263577640 1 --2- 192.168.123.100:0/3548283999 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4264100ef0 0x7f4264101370 secure :-1 s=READY pgs=3144 cs=0 l=1 rev1=1 crypto rx=0x7f424c009a30 tx=0x7f424c01c880 comp rx=0 tx=0).stop 2026-03-10T07:59:41.155 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4263577640 1 -- 192.168.123.100:0/3548283999 shutdown_connections 2026-03-10T07:59:41.155 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4263577640 1 --2- 192.168.123.100:0/3548283999 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f42641018b0 0x7f42641118d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:41.155 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4263577640 1 --2- 192.168.123.100:0/3548283999 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4264100ef0 0x7f4264101370 unknown :-1 s=CLOSED pgs=3144 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:41.155 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4263577640 1 --2- 192.168.123.100:0/3548283999 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f42641034e0 0x7f42641038c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T07:59:41.155 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4263577640 1 -- 192.168.123.100:0/3548283999 >> 192.168.123.100:0/3548283999 conn(0x7f4264078070 msgr2=0x7f4264078480 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T07:59:41.155 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4263577640 1 -- 192.168.123.100:0/3548283999 shutdown_connections 2026-03-10T07:59:41.155 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4263577640 1 -- 192.168.123.100:0/3548283999 wait complete. 
2026-03-10T07:59:41.156 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4263577640 1 Processor -- start
2026-03-10T07:59:41.156 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4263577640 1 -- start start
2026-03-10T07:59:41.156 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4263577640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f4264100ef0 0x7f426419ede0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:59:41.156 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4263577640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f42641018b0 0x7f426419f320 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:59:41.156 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4263577640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f42641034e0 0x7f42641a36b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:59:41.156 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4263577640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f42641168e0 con 0x7f42641018b0
2026-03-10T07:59:41.156 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4263577640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f4264116760 con 0x7f4264100ef0
2026-03-10T07:59:41.156 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4263577640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f4264116a60 con 0x7f42641034e0
2026-03-10T07:59:41.156 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4261d74640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f42641018b0 0x7f426419f320 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:59:41.156 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4261d74640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f42641018b0 0x7f426419f320 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:46882/0 (socket says 192.168.123.100:46882)
2026-03-10T07:59:41.156 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4261d74640 1 -- 192.168.123.100:0/604419308 learned_addr learned my addr 192.168.123.100:0/604419308 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T07:59:41.156 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4262d76640 1 --2- 192.168.123.100:0/604419308 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f42641034e0 0x7f42641a36b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:59:41.156 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4262575640 1 --2- 192.168.123.100:0/604419308 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f4264100ef0 0x7f426419ede0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:59:41.156 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4262d76640 1 -- 192.168.123.100:0/604419308 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f4264100ef0 msgr2=0x7f426419ede0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:59:41.156 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4262d76640 1 --2- 192.168.123.100:0/604419308 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f4264100ef0 0x7f426419ede0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:59:41.156 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4262d76640 1 -- 192.168.123.100:0/604419308 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f42641018b0 msgr2=0x7f426419f320 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:59:41.156 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4262d76640 1 --2- 192.168.123.100:0/604419308 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f42641018b0 0x7f426419f320 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:59:41.156 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4262d76640 1 -- 192.168.123.100:0/604419308 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f42641a3e30 con 0x7f42641034e0
2026-03-10T07:59:41.156 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4262d76640 1 --2- 192.168.123.100:0/604419308 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f42641034e0 0x7f42641a36b0 secure :-1 s=READY pgs=3141 cs=0 l=1 rev1=1 crypto rx=0x7f425400efc0 tx=0x7f425400c490 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:59:41.156 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4262575640 1 --2- 192.168.123.100:0/604419308 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f4264100ef0 0x7f426419ede0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed!
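The block above is the ordinary bootstrap of a fresh ceph CLI client as seen with messenger debugging enabled: it dials every known monitor endpoint in parallel (two on 192.168.123.100, one on 192.168.123.103), keeps whichever session authenticates first (here mon.2 on port 3301), marks the losing probes down, and subscribes to the monmap and config before doing any real work. A sketch of reproducing the same chatter on demand, assuming a node with an admin keyring; the flag spelling is an assumption, not taken from this run:

  ceph --debug-ms 1 osd pool get-quota "$POOL" 2>&1 | head -n 40

Raising the level, for example to --debug-ms 20, also shows the individual protocol frames, at the cost of far more output.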
2026-03-10T07:59:41.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f424b7fe640 1 -- 192.168.123.100:0/604419308 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f4254019070 con 0x7f42641034e0
2026-03-10T07:59:41.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4263577640 1 -- 192.168.123.100:0/604419308 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f42641a40c0 con 0x7f42641034e0
2026-03-10T07:59:41.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4263577640 1 -- 192.168.123.100:0/604419308 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f42641ab960 con 0x7f42641034e0
2026-03-10T07:59:41.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f424b7fe640 1 -- 192.168.123.100:0/604419308 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f425400ce50 con 0x7f42641034e0
2026-03-10T07:59:41.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f424b7fe640 1 -- 192.168.123.100:0/604419308 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f4254004790 con 0x7f42641034e0
2026-03-10T07:59:41.157 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.164+0000 7f4263577640 1 -- 192.168.123.100:0/604419308 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f4230005190 con 0x7f42641034e0
2026-03-10T07:59:41.161 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.168+0000 7f424b7fe640 1 -- 192.168.123.100:0/604419308 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f4254005ce0 con 0x7f42641034e0
2026-03-10T07:59:41.161 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.168+0000 7f424b7fe640 1 --2- 192.168.123.100:0/604419308 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f422c0777e0 0x7f422c079ca0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T07:59:41.161 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.168+0000 7f424b7fe640 1 -- 192.168.123.100:0/604419308 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(779..779 src has 258..779) ==== 9461+0+0 (secure 0 0 0) 0x7f425409a0b0 con 0x7f42641034e0
2026-03-10T07:59:41.161 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.168+0000 7f424b7fe640 1 -- 192.168.123.100:0/604419308 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f4254010040 con 0x7f42641034e0
2026-03-10T07:59:41.161 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.168+0000 7f4262575640 1 --2- 192.168.123.100:0/604419308 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f422c0777e0 0x7f422c079ca0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T07:59:41.161 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.168+0000 7f4262575640 1 --2- 192.168.123.100:0/604419308 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f422c0777e0 0x7f422c079ca0 secure :-1 s=READY pgs=4327 cs=0 l=1 rev1=1 crypto rx=0x7f4258002740 tx=0x7f4258009290 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T07:59:41.245 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:41.252+0000 7f4263577640 1 -- 192.168.123.100:0/604419308 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"} v 0) -- 0x7f4230005480 con 0x7f42641034e0
2026-03-10T07:59:41.375 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:59:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:59:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:59:41.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:41 vm00 bash[28005]: audit 2026-03-10T07:59:40.083276+0000 mon.a (mon.0) 3661 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]': finished
2026-03-10T07:59:41.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:41 vm00 bash[28005]: audit 2026-03-10T07:59:40.083276+0000 mon.a (mon.0) 3661 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]': finished
2026-03-10T07:59:41.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:41 vm00 bash[28005]: cluster 2026-03-10T07:59:40.091935+0000 mon.a (mon.0) 3662 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in
2026-03-10T07:59:41.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:41 vm00 bash[28005]: cluster 2026-03-10T07:59:40.091935+0000 mon.a (mon.0) 3662 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:41 vm00 bash[28005]: audit 2026-03-10T07:59:40.140646+0000 mon.a (mon.0) 3663 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]: dispatch
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:41 vm00 bash[28005]: audit 2026-03-10T07:59:40.140646+0000 mon.a (mon.0) 3663 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]: dispatch
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:41 vm00 bash[28005]: audit 2026-03-10T07:59:40.494604+0000 mon.a (mon.0) 3664 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:41 vm00 bash[28005]: audit 2026-03-10T07:59:40.494604+0000 mon.a (mon.0) 3664 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:41 vm00 bash[28005]: audit 2026-03-10T07:59:40.496172+0000 mon.c (mon.2) 493 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:41 vm00 bash[28005]: audit 2026-03-10T07:59:40.496172+0000 mon.c (mon.2) 493 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:41 vm00 bash[28005]: audit 2026-03-10T07:59:41.088428+0000 mon.a (mon.0) 3665 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]': finished
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:41 vm00 bash[28005]: audit 2026-03-10T07:59:41.088428+0000 mon.a (mon.0) 3665 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]': finished
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:41 vm00 bash[28005]: cluster 2026-03-10T07:59:41.094463+0000 mon.a (mon.0) 3666 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:41 vm00 bash[28005]: cluster 2026-03-10T07:59:41.094463+0000 mon.a (mon.0) 3666 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:41 vm00 bash[20701]: audit 2026-03-10T07:59:40.083276+0000 mon.a (mon.0) 3661 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]': finished
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:41 vm00 bash[20701]: audit 2026-03-10T07:59:40.083276+0000 mon.a (mon.0) 3661 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]': finished
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:41 vm00 bash[20701]: cluster 2026-03-10T07:59:40.091935+0000 mon.a (mon.0) 3662 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:41 vm00 bash[20701]: cluster 2026-03-10T07:59:40.091935+0000 mon.a (mon.0) 3662 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:41 vm00 bash[20701]: audit 2026-03-10T07:59:40.140646+0000 mon.a (mon.0) 3663 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]: dispatch
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:41 vm00 bash[20701]: audit 2026-03-10T07:59:40.140646+0000 mon.a (mon.0) 3663 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]: dispatch
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:41 vm00 bash[20701]: audit 2026-03-10T07:59:40.494604+0000 mon.a (mon.0) 3664 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:41 vm00 bash[20701]: audit 2026-03-10T07:59:40.494604+0000 mon.a (mon.0) 3664 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:41 vm00 bash[20701]: audit 2026-03-10T07:59:40.496172+0000 mon.c (mon.2) 493 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:41 vm00 bash[20701]: audit 2026-03-10T07:59:40.496172+0000 mon.c (mon.2) 493 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:41 vm00 bash[20701]: audit 2026-03-10T07:59:41.088428+0000 mon.a (mon.0) 3665 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]': finished
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:41 vm00 bash[20701]: audit 2026-03-10T07:59:41.088428+0000 mon.a (mon.0) 3665 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]': finished
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:41 vm00 bash[20701]: cluster 2026-03-10T07:59:41.094463+0000 mon.a (mon.0) 3666 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in
2026-03-10T07:59:41.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:41 vm00 bash[20701]: cluster 2026-03-10T07:59:41.094463+0000 mon.a (mon.0) 3666 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in
2026-03-10T07:59:41.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:41 vm03 bash[23382]: audit 2026-03-10T07:59:40.083276+0000 mon.a (mon.0) 3661 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]': finished
2026-03-10T07:59:41.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:41 vm03 bash[23382]: audit 2026-03-10T07:59:40.083276+0000 mon.a (mon.0) 3661 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]': finished
2026-03-10T07:59:41.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:41 vm03 bash[23382]: cluster 2026-03-10T07:59:40.091935+0000 mon.a (mon.0) 3662 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in
2026-03-10T07:59:41.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:41 vm03 bash[23382]: cluster 2026-03-10T07:59:40.091935+0000 mon.a (mon.0) 3662 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in
2026-03-10T07:59:41.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:41 vm03 bash[23382]: audit 2026-03-10T07:59:40.140646+0000 mon.a (mon.0) 3663 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]: dispatch
2026-03-10T07:59:41.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:41 vm03 bash[23382]: audit 2026-03-10T07:59:40.140646+0000 mon.a (mon.0) 3663 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]: dispatch
2026-03-10T07:59:41.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:41 vm03 bash[23382]: audit 2026-03-10T07:59:40.494604+0000 mon.a (mon.0) 3664 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:59:41.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:41 vm03 bash[23382]: audit 2026-03-10T07:59:40.494604+0000 mon.a (mon.0) 3664 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:59:41.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:41 vm03 bash[23382]: audit 2026-03-10T07:59:40.496172+0000 mon.c (mon.2) 493 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:59:41.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:41 vm03 bash[23382]: audit 2026-03-10T07:59:40.496172+0000 mon.c (mon.2) 493 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:59:41.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:41 vm03 bash[23382]: audit 2026-03-10T07:59:41.088428+0000 mon.a (mon.0) 3665 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]': finished
2026-03-10T07:59:41.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:41 vm03 bash[23382]: audit 2026-03-10T07:59:41.088428+0000 mon.a (mon.0) 3665 : audit [INF] from='client.? 192.168.123.100:0/1213047435' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "app": "rados"}]': finished
2026-03-10T07:59:41.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:41 vm03 bash[23382]: cluster 2026-03-10T07:59:41.094463+0000 mon.a (mon.0) 3666 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in
2026-03-10T07:59:41.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:41 vm03 bash[23382]: cluster 2026-03-10T07:59:41.094463+0000 mon.a (mon.0) 3666 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in
2026-03-10T07:59:42.118 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:42.124+0000 7f424b7fe640 1 -- 192.168.123.100:0/604419308 <== mon.2 v2:192.168.123.100:3301/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool a16f944b-49df-4ed0-bee4-6bfebe190ca8 v780) ==== 223+0+0 (secure 0 0 0) 0x7f4254066230 con 0x7f42641034e0
2026-03-10T07:59:42.165 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:42.172+0000 7f4263577640 1 -- 192.168.123.100:0/604419308 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"} v 0) -- 0x7f4230005c30 con 0x7f42641034e0
2026-03-10T07:59:42.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:42 vm00 bash[28005]: cluster 2026-03-10T07:59:41.029731+0000 mgr.y (mgr.24407) 1276 : cluster [DBG] pgmap v1719: 188 pgs: 12 creating+peering, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s
2026-03-10T07:59:42.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:42 vm00 bash[28005]: cluster 2026-03-10T07:59:41.029731+0000 mgr.y (mgr.24407) 1276 : cluster [DBG] pgmap v1719: 188 pgs: 12 creating+peering, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s
2026-03-10T07:59:42.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:42 vm00 bash[28005]: audit 2026-03-10T07:59:41.256758+0000 mon.c (mon.2) 494 : audit [INF] from='client.? 192.168.123.100:0/604419308' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:42.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:42 vm00 bash[28005]: audit 2026-03-10T07:59:41.256758+0000 mon.c (mon.2) 494 : audit [INF] from='client.? 192.168.123.100:0/604419308' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:42.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:42 vm00 bash[28005]: audit 2026-03-10T07:59:41.257107+0000 mon.a (mon.0) 3667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:42.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:42 vm00 bash[28005]: audit 2026-03-10T07:59:41.257107+0000 mon.a (mon.0) 3667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:42.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:42 vm00 bash[20701]: cluster 2026-03-10T07:59:41.029731+0000 mgr.y (mgr.24407) 1276 : cluster [DBG] pgmap v1719: 188 pgs: 12 creating+peering, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s
2026-03-10T07:59:42.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:42 vm00 bash[20701]: cluster 2026-03-10T07:59:41.029731+0000 mgr.y (mgr.24407) 1276 : cluster [DBG] pgmap v1719: 188 pgs: 12 creating+peering, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s
2026-03-10T07:59:42.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:42 vm00 bash[20701]: audit 2026-03-10T07:59:41.256758+0000 mon.c (mon.2) 494 : audit [INF] from='client.? 192.168.123.100:0/604419308' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:42.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:42 vm00 bash[20701]: audit 2026-03-10T07:59:41.256758+0000 mon.c (mon.2) 494 : audit [INF] from='client.? 192.168.123.100:0/604419308' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:42.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:42 vm00 bash[20701]: audit 2026-03-10T07:59:41.257107+0000 mon.a (mon.0) 3667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:42.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:42 vm00 bash[20701]: audit 2026-03-10T07:59:41.257107+0000 mon.a (mon.0) 3667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:42.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:42 vm03 bash[23382]: cluster 2026-03-10T07:59:41.029731+0000 mgr.y (mgr.24407) 1276 : cluster [DBG] pgmap v1719: 188 pgs: 12 creating+peering, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s
2026-03-10T07:59:42.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:42 vm03 bash[23382]: cluster 2026-03-10T07:59:41.029731+0000 mgr.y (mgr.24407) 1276 : cluster [DBG] pgmap v1719: 188 pgs: 12 creating+peering, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s
2026-03-10T07:59:42.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:42 vm03 bash[23382]: audit 2026-03-10T07:59:41.256758+0000 mon.c (mon.2) 494 : audit [INF] from='client.? 192.168.123.100:0/604419308' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:42.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:42 vm03 bash[23382]: audit 2026-03-10T07:59:41.256758+0000 mon.c (mon.2) 494 : audit [INF] from='client.? 192.168.123.100:0/604419308' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:42.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:42 vm03 bash[23382]: audit 2026-03-10T07:59:41.257107+0000 mon.a (mon.0) 3667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:42.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:42 vm03 bash[23382]: audit 2026-03-10T07:59:41.257107+0000 mon.a (mon.0) 3667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:43.127 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:43.136+0000 7f424b7fe640 1 -- 192.168.123.100:0/604419308 <== mon.2 v2:192.168.123.100:3301/0 8 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool a16f944b-49df-4ed0-bee4-6bfebe190ca8 v781) ==== 223+0+0 (secure 0 0 0) 0x7f425406b0e0 con 0x7f42641034e0
2026-03-10T07:59:43.127 INFO:tasks.workunit.client.0.vm00.stderr:set-quota max_objects = 10 for pool a16f944b-49df-4ed0-bee4-6bfebe190ca8
2026-03-10T07:59:43.128 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:43.136+0000 7f4263577640 1 -- 192.168.123.100:0/604419308 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f422c0777e0 msgr2=0x7f422c079ca0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:59:43.128 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:43.136+0000 7f4263577640 1 --2- 192.168.123.100:0/604419308 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f422c0777e0 0x7f422c079ca0 secure :-1 s=READY pgs=4327 cs=0 l=1 rev1=1 crypto rx=0x7f4258002740 tx=0x7f4258009290 comp rx=0 tx=0).stop
2026-03-10T07:59:43.128 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:43.136+0000 7f4263577640 1 -- 192.168.123.100:0/604419308 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f42641034e0 msgr2=0x7f42641a36b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T07:59:43.128 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:43.136+0000 7f4263577640 1 --2- 192.168.123.100:0/604419308 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f42641034e0 0x7f42641a36b0 secure :-1 s=READY pgs=3141 cs=0 l=1 rev1=1 crypto rx=0x7f425400efc0 tx=0x7f425400c490 comp rx=0 tx=0).stop
2026-03-10T07:59:43.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:43.136+0000 7f4263577640 1 -- 192.168.123.100:0/604419308 shutdown_connections
2026-03-10T07:59:43.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:43.136+0000 7f4263577640 1 --2- 192.168.123.100:0/604419308 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f422c0777e0 0x7f422c079ca0 unknown :-1 s=CLOSED pgs=4327 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:59:43.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:43.136+0000 7f4263577640 1 --2- 192.168.123.100:0/604419308 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f42641034e0 0x7f42641a36b0 unknown :-1 s=CLOSED pgs=3141 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:59:43.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:43.136+0000 7f4263577640 1 --2- 192.168.123.100:0/604419308 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f42641018b0 0x7f426419f320 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:59:43.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:43.136+0000 7f4263577640 1 --2- 192.168.123.100:0/604419308 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f4264100ef0 0x7f426419ede0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T07:59:43.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:43.136+0000 7f4263577640 1 -- 192.168.123.100:0/604419308 >> 192.168.123.100:0/604419308 conn(0x7f4264078070 msgr2=0x7f42640fef40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T07:59:43.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:43.136+0000 7f4263577640 1 -- 192.168.123.100:0/604419308 shutdown_connections
2026-03-10T07:59:43.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T07:59:43.136+0000 7f4263577640 1 -- 192.168.123.100:0/604419308 wait complete.
2026-03-10T07:59:43.140 INFO:tasks.workunit.client.0.vm00.stderr:+ sleep 30
2026-03-10T07:59:43.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:43 vm00 bash[28005]: audit 2026-03-10T07:59:42.122304+0000 mon.a (mon.0) 3668 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]': finished
2026-03-10T07:59:43.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:43 vm00 bash[28005]: audit 2026-03-10T07:59:42.122304+0000 mon.a (mon.0) 3668 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]': finished
2026-03-10T07:59:43.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:43 vm00 bash[28005]: cluster 2026-03-10T07:59:42.127687+0000 mon.a (mon.0) 3669 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in
2026-03-10T07:59:43.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:43 vm00 bash[28005]: cluster 2026-03-10T07:59:42.127687+0000 mon.a (mon.0) 3669 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in
2026-03-10T07:59:43.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:43 vm00 bash[28005]: audit 2026-03-10T07:59:42.177139+0000 mon.c (mon.2) 495 : audit [INF] from='client.? 192.168.123.100:0/604419308' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:43.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:43 vm00 bash[28005]: audit 2026-03-10T07:59:42.177139+0000 mon.c (mon.2) 495 : audit [INF] from='client.? 192.168.123.100:0/604419308' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:43.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:43 vm00 bash[28005]: audit 2026-03-10T07:59:42.177641+0000 mon.a (mon.0) 3670 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:43.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:43 vm00 bash[28005]: audit 2026-03-10T07:59:42.177641+0000 mon.a (mon.0) 3670 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
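The sleep 30 above gives the quota time to commit and propagate before the workunit starts exercising the limit: set-quota is a monitor command, the client visibly sends it twice (acked against osdmap v780 and again v781), and each commit bumps the osdmap epoch in the audit trail (e780, then e781 below). While waiting, the quota can be confirmed from any admin node; a sketch, reusing the $POOL stand-in from above:

  ceph osd pool get-quota "$POOL"   # expected to report max objects: 10
  ceph osd dump | grep "$POOL"      # the pool line carries its quota fields

Enforcement is asynchronous: the monitors only flag the pool full once the object count exceeds the quota, which is why quota tests pad with sleeps instead of asserting immediately.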
2026-03-10T07:59:43.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:43 vm00 bash[20701]: audit 2026-03-10T07:59:42.122304+0000 mon.a (mon.0) 3668 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]': finished
2026-03-10T07:59:43.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:43 vm00 bash[20701]: audit 2026-03-10T07:59:42.122304+0000 mon.a (mon.0) 3668 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]': finished
2026-03-10T07:59:43.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:43 vm00 bash[20701]: cluster 2026-03-10T07:59:42.127687+0000 mon.a (mon.0) 3669 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in
2026-03-10T07:59:43.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:43 vm00 bash[20701]: cluster 2026-03-10T07:59:42.127687+0000 mon.a (mon.0) 3669 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in
2026-03-10T07:59:43.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:43 vm00 bash[20701]: audit 2026-03-10T07:59:42.177139+0000 mon.c (mon.2) 495 : audit [INF] from='client.? 192.168.123.100:0/604419308' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:43.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:43 vm00 bash[20701]: audit 2026-03-10T07:59:42.177139+0000 mon.c (mon.2) 495 : audit [INF] from='client.? 192.168.123.100:0/604419308' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:43.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:43 vm00 bash[20701]: audit 2026-03-10T07:59:42.177641+0000 mon.a (mon.0) 3670 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:43.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:43 vm00 bash[20701]: audit 2026-03-10T07:59:42.177641+0000 mon.a (mon.0) 3670 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:43.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:43 vm03 bash[23382]: audit 2026-03-10T07:59:42.122304+0000 mon.a (mon.0) 3668 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]': finished
2026-03-10T07:59:43.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:43 vm03 bash[23382]: audit 2026-03-10T07:59:42.122304+0000 mon.a (mon.0) 3668 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]': finished
2026-03-10T07:59:43.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:43 vm03 bash[23382]: cluster 2026-03-10T07:59:42.127687+0000 mon.a (mon.0) 3669 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in
2026-03-10T07:59:43.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:43 vm03 bash[23382]: cluster 2026-03-10T07:59:42.127687+0000 mon.a (mon.0) 3669 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in
2026-03-10T07:59:43.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:43 vm03 bash[23382]: audit 2026-03-10T07:59:42.177139+0000 mon.c (mon.2) 495 : audit [INF] from='client.? 192.168.123.100:0/604419308' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:43.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:43 vm03 bash[23382]: audit 2026-03-10T07:59:42.177139+0000 mon.c (mon.2) 495 : audit [INF] from='client.? 192.168.123.100:0/604419308' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:43.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:43 vm03 bash[23382]: audit 2026-03-10T07:59:42.177641+0000 mon.a (mon.0) 3670 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:43.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:43 vm03 bash[23382]: audit 2026-03-10T07:59:42.177641+0000 mon.a (mon.0) 3670 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T07:59:44.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:44 vm00 bash[28005]: cluster 2026-03-10T07:59:43.030037+0000 mgr.y (mgr.24407) 1277 : cluster [DBG] pgmap v1722: 188 pgs: 12 creating+peering, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s
2026-03-10T07:59:44.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:44 vm00 bash[28005]: cluster 2026-03-10T07:59:43.030037+0000 mgr.y (mgr.24407) 1277 : cluster [DBG] pgmap v1722: 188 pgs: 12 creating+peering, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s
2026-03-10T07:59:44.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:44 vm00 bash[28005]: audit 2026-03-10T07:59:43.129606+0000 mon.a (mon.0) 3671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]': finished
2026-03-10T07:59:44.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:44 vm00 bash[28005]: audit 2026-03-10T07:59:43.129606+0000 mon.a (mon.0) 3671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]': finished
2026-03-10T07:59:44.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:44 vm00 bash[28005]: cluster 2026-03-10T07:59:43.146634+0000 mon.a (mon.0) 3672 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in
2026-03-10T07:59:44.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:44 vm00 bash[28005]: cluster 2026-03-10T07:59:43.146634+0000 mon.a (mon.0) 3672 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in
2026-03-10T07:59:44.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:44 vm00 bash[20701]: cluster 2026-03-10T07:59:43.030037+0000 mgr.y (mgr.24407) 1277 : cluster [DBG] pgmap v1722: 188 pgs: 12 creating+peering, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s
2026-03-10T07:59:44.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:44 vm00 bash[20701]: cluster 2026-03-10T07:59:43.030037+0000 mgr.y (mgr.24407) 1277 : cluster [DBG] pgmap v1722: 188 pgs: 12 creating+peering, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s
2026-03-10T07:59:44.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:44 vm00 bash[20701]: audit 2026-03-10T07:59:43.129606+0000 mon.a (mon.0) 3671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]': finished
2026-03-10T07:59:44.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:44 vm00 bash[20701]: audit 2026-03-10T07:59:43.129606+0000 mon.a (mon.0) 3671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]': finished
2026-03-10T07:59:44.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:44 vm00 bash[20701]: cluster 2026-03-10T07:59:43.146634+0000 mon.a (mon.0) 3672 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in
2026-03-10T07:59:44.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:44 vm00 bash[20701]: cluster 2026-03-10T07:59:43.146634+0000 mon.a (mon.0) 3672 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in
2026-03-10T07:59:44.420 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:44 vm03 bash[23382]: cluster 2026-03-10T07:59:43.030037+0000 mgr.y (mgr.24407) 1277 : cluster [DBG] pgmap v1722: 188 pgs: 12 creating+peering, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s
2026-03-10T07:59:44.420 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:44 vm03 bash[23382]: cluster 2026-03-10T07:59:43.030037+0000 mgr.y (mgr.24407) 1277 : cluster [DBG] pgmap v1722: 188 pgs: 12 creating+peering, 176 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 1 op/s
2026-03-10T07:59:44.420 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:44 vm03 bash[23382]: audit 2026-03-10T07:59:43.129606+0000 mon.a (mon.0) 3671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]': finished
2026-03-10T07:59:44.420 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:44 vm03 bash[23382]: audit 2026-03-10T07:59:43.129606+0000 mon.a (mon.0) 3671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "field": "max_objects", "val": "10"}]': finished
2026-03-10T07:59:44.420 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:44 vm03 bash[23382]: cluster 2026-03-10T07:59:43.146634+0000 mon.a (mon.0) 3672 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in
2026-03-10T07:59:44.420 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:44 vm03 bash[23382]: cluster 2026-03-10T07:59:43.146634+0000 mon.a (mon.0) 3672 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in
2026-03-10T07:59:44.762 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:59:44 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T07:59:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:45 vm03 bash[23382]: audit 2026-03-10T07:59:44.430494+0000 mgr.y (mgr.24407) 1278 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:59:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:45 vm03 bash[23382]: audit 2026-03-10T07:59:44.430494+0000 mgr.y (mgr.24407) 1278 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:59:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:45 vm03 bash[23382]: audit 2026-03-10T07:59:44.665790+0000 mon.c (mon.2) 496 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:59:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:45 vm03 bash[23382]: audit 2026-03-10T07:59:44.665790+0000 mon.c (mon.2) 496 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:59:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:45 vm03 bash[23382]: audit 2026-03-10T07:59:44.958650+0000 mon.c (mon.2) 497 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:59:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:45 vm03 bash[23382]: audit 2026-03-10T07:59:44.958650+0000 mon.c (mon.2) 497 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:59:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:45 vm03 bash[23382]: audit 2026-03-10T07:59:44.959312+0000 mon.c (mon.2) 498 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:59:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:45 vm03 bash[23382]: audit 2026-03-10T07:59:44.959312+0000 mon.c (mon.2) 498 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:59:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:45 vm03 bash[23382]: audit 2026-03-10T07:59:44.964582+0000 mon.a (mon.0) 3673 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:59:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:45 vm03 bash[23382]: audit 2026-03-10T07:59:44.964582+0000 mon.a (mon.0) 3673 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:59:45.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:45 vm00 bash[28005]: audit 2026-03-10T07:59:44.430494+0000 mgr.y (mgr.24407) 1278 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:59:45.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:45 vm00 bash[28005]: audit 2026-03-10T07:59:44.430494+0000 mgr.y (mgr.24407) 1278 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:59:45.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:45 vm00 bash[28005]: audit 2026-03-10T07:59:44.665790+0000 mon.c (mon.2) 496 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:59:45.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:45 vm00 bash[28005]: audit 2026-03-10T07:59:44.665790+0000 mon.c (mon.2) 496 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:59:45.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:45 vm00 bash[28005]: audit 2026-03-10T07:59:44.958650+0000 mon.c (mon.2) 497 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:59:45.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:45 vm00 bash[28005]: audit 2026-03-10T07:59:44.958650+0000 mon.c (mon.2) 497 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:59:45.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:45 vm00 bash[28005]: audit 2026-03-10T07:59:44.959312+0000 mon.c (mon.2) 498 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:59:45.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:45 vm00 bash[28005]: audit 2026-03-10T07:59:44.959312+0000 mon.c (mon.2) 498 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:59:45.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:45 vm00 bash[28005]: audit 2026-03-10T07:59:44.964582+0000 mon.a (mon.0) 3673 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:59:45.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:45 vm00 bash[28005]: audit 2026-03-10T07:59:44.964582+0000 mon.a (mon.0) 3673 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:59:45.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:45 vm00 bash[20701]: audit 2026-03-10T07:59:44.430494+0000 mgr.y (mgr.24407) 1278 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:59:45.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:45 vm00 bash[20701]: audit 2026-03-10T07:59:44.430494+0000 mgr.y (mgr.24407) 1278 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T07:59:45.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:45 vm00 bash[20701]: audit 2026-03-10T07:59:44.665790+0000 mon.c (mon.2) 496 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:59:45.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:45 vm00 bash[20701]: audit 2026-03-10T07:59:44.665790+0000 mon.c (mon.2) 496 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:59:45.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:45 vm00 bash[20701]: audit 2026-03-10T07:59:44.958650+0000 mon.c (mon.2) 497 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:59:45.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:45 vm00 bash[20701]: audit 2026-03-10T07:59:44.958650+0000 mon.c (mon.2) 497 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:59:45.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:45 vm00 bash[20701]: audit 2026-03-10T07:59:44.959312+0000 mon.c (mon.2) 498 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:59:45.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:45 vm00 bash[20701]: audit 2026-03-10T07:59:44.959312+0000 mon.c (mon.2) 498 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:59:45.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:45 vm00 bash[20701]: audit 2026-03-10T07:59:44.964582+0000 mon.a (mon.0) 3673 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:59:45.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:45 vm00 bash[20701]: audit 2026-03-10T07:59:44.964582+0000 mon.a (mon.0) 3673 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T07:59:46.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:46 vm03 bash[23382]: cluster 2026-03-10T07:59:45.030708+0000 mgr.y (mgr.24407) 1279 : cluster [DBG] pgmap v1724: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T07:59:46.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:46 vm03 bash[23382]: cluster 2026-03-10T07:59:45.030708+0000 mgr.y (mgr.24407) 1279 : cluster [DBG] pgmap v1724: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T07:59:46.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:46 vm00 bash[28005]: cluster 2026-03-10T07:59:45.030708+0000 mgr.y (mgr.24407) 1279 : cluster [DBG] pgmap v1724: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T07:59:46.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:46 vm00 bash[28005]: cluster 2026-03-10T07:59:45.030708+0000 mgr.y (mgr.24407) 1279 : cluster [DBG] pgmap v1724: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T07:59:46.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:46 vm00 bash[20701]: cluster 2026-03-10T07:59:45.030708+0000 mgr.y (mgr.24407) 1279 : cluster [DBG] pgmap v1724: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T07:59:46.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:46 vm00 bash[20701]: cluster 2026-03-10T07:59:45.030708+0000 mgr.y (mgr.24407) 1279 : cluster [DBG] pgmap v1724: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T07:59:48.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:48 vm03 bash[23382]: cluster 2026-03-10T07:59:47.031017+0000 mgr.y (mgr.24407) 1280 : cluster [DBG] pgmap v1725: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:59:48.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:48 vm03 bash[23382]: cluster 2026-03-10T07:59:47.031017+0000 mgr.y (mgr.24407) 1280 : cluster [DBG] pgmap v1725: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:59:48.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:48 vm00 bash[28005]: cluster 2026-03-10T07:59:47.031017+0000 mgr.y (mgr.24407) 1280 : cluster [DBG] pgmap v1725: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:59:48.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:48 vm00 bash[28005]: cluster 2026-03-10T07:59:47.031017+0000 mgr.y (mgr.24407) 1280 : cluster [DBG] pgmap v1725: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:59:48.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:48 vm00 bash[20701]: cluster 2026-03-10T07:59:47.031017+0000 mgr.y (mgr.24407) 1280 : cluster [DBG] pgmap v1725: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:59:48.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:48 vm00 bash[20701]: cluster 2026-03-10T07:59:47.031017+0000 mgr.y (mgr.24407) 1280 : cluster [DBG] pgmap v1725: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T07:59:50.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:50 vm03 bash[23382]: cluster 2026-03-10T07:59:49.031305+0000 mgr.y (mgr.24407) 1281 : cluster [DBG] pgmap v1726: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 645 B/s rd, 0 op/s
2026-03-10T07:59:50.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:50 vm03 bash[23382]: cluster 2026-03-10T07:59:49.031305+0000 mgr.y (mgr.24407) 1281 : cluster [DBG] pgmap v1726: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 645 B/s rd, 0 op/s
2026-03-10T07:59:50.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:50 vm00 bash[28005]: cluster 2026-03-10T07:59:49.031305+0000 mgr.y (mgr.24407) 1281 : cluster [DBG] pgmap v1726: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 645 B/s rd, 0 op/s
2026-03-10T07:59:50.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:50 vm00 bash[28005]: cluster 2026-03-10T07:59:49.031305+0000 mgr.y (mgr.24407) 1281 : cluster [DBG] pgmap v1726: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 645 B/s rd, 0 op/s
2026-03-10T07:59:50.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:50 vm00 bash[20701]: cluster 2026-03-10T07:59:49.031305+0000 mgr.y (mgr.24407) 1281 : cluster [DBG] pgmap v1726: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 645 B/s rd, 0 op/s
2026-03-10T07:59:50.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:50 vm00 bash[20701]: cluster 2026-03-10T07:59:49.031305+0000 mgr.y (mgr.24407) 1281 : cluster [DBG] pgmap v1726: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 645 B/s rd, 0 op/s
2026-03-10T07:59:51.375 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 07:59:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:07:59:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T07:59:52.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:52 vm03 bash[23382]: cluster 2026-03-10T07:59:51.031880+0000 mgr.y (mgr.24407) 1282 : cluster [DBG] pgmap v1727: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T07:59:52.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:52 vm03 bash[23382]: cluster 2026-03-10T07:59:51.031880+0000 mgr.y (mgr.24407) 1282 : cluster [DBG] pgmap v1727: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T07:59:52.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:52 vm00 bash[20701]: cluster 2026-03-10T07:59:51.031880+0000 mgr.y (mgr.24407) 1282 : cluster [DBG] pgmap v1727: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T07:59:52.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:52 vm00 bash[20701]: cluster 2026-03-10T07:59:51.031880+0000 mgr.y (mgr.24407) 1282 : cluster [DBG] pgmap v1727: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T07:59:52.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:52 vm00 bash[28005]: cluster 2026-03-10T07:59:51.031880+0000 mgr.y (mgr.24407) 1282 : cluster [DBG] pgmap v1727: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T07:59:52.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:52 vm00 bash[28005]: cluster 2026-03-10T07:59:51.031880+0000 mgr.y (mgr.24407) 1282 : cluster [DBG] pgmap v1727: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T07:59:54.427 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:54 vm03 bash[23382]: cluster 2026-03-10T07:59:53.032194+0000 mgr.y (mgr.24407) 1283 : cluster [DBG] pgmap v1728: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:59:54.427 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:54 vm03 bash[23382]: cluster 2026-03-10T07:59:53.032194+0000 mgr.y (mgr.24407) 1283 : cluster [DBG] pgmap v1728: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:59:54.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:54 vm00 bash[28005]: cluster 2026-03-10T07:59:53.032194+0000 mgr.y (mgr.24407) 1283 : cluster [DBG] pgmap v1728: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:59:54.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:54 vm00 bash[28005]: cluster 2026-03-10T07:59:53.032194+0000 mgr.y (mgr.24407) 1283 : cluster [DBG] pgmap v1728: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:59:54.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:54 vm00 bash[20701]: cluster 2026-03-10T07:59:53.032194+0000 mgr.y (mgr.24407) 1283 : cluster [DBG] pgmap v1728: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:59:54.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:54 vm00 bash[20701]: cluster 2026-03-10T07:59:53.032194+0000 mgr.y (mgr.24407) 1283 : cluster [DBG] pgmap v1728: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T07:59:54.762 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 07:59:54 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T07:59:55.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:55 vm03 bash[23382]: audit 2026-03-10T07:59:54.437963+0000 mgr.y (mgr.24407) 1284 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:59:55.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:55 vm03 bash[23382]: audit 2026-03-10T07:59:54.437963+0000 mgr.y (mgr.24407) 1284 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:59:55.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:55 vm00 bash[28005]: audit 2026-03-10T07:59:54.437963+0000 mgr.y (mgr.24407) 1284 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:59:55.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:55 vm00 bash[28005]: audit 2026-03-10T07:59:54.437963+0000 mgr.y (mgr.24407) 1284 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:59:55.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:55 vm00 bash[20701]: audit 2026-03-10T07:59:54.437963+0000 mgr.y (mgr.24407) 1284 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:59:55.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:55 vm00 bash[20701]: audit 2026-03-10T07:59:54.437963+0000 mgr.y (mgr.24407) 1284 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T07:59:56.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:56 vm03 bash[23382]: cluster 2026-03-10T07:59:55.032777+0000 mgr.y (mgr.24407) 1285 : cluster [DBG] pgmap v1729: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T07:59:56.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:56 vm03 bash[23382]: cluster 2026-03-10T07:59:55.032777+0000 mgr.y (mgr.24407) 1285 : cluster [DBG] pgmap v1729: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T07:59:56.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:56 vm03 bash[23382]: audit 2026-03-10T07:59:55.502065+0000 mon.c (mon.2) 499 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:59:56.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:56 vm03 bash[23382]: audit 2026-03-10T07:59:55.502065+0000 mon.c (mon.2) 499 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:59:56.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:56 vm00 bash[28005]: cluster 2026-03-10T07:59:55.032777+0000 mgr.y (mgr.24407) 1285 : cluster [DBG] pgmap v1729: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T07:59:56.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:56 vm00 bash[28005]: cluster 2026-03-10T07:59:55.032777+0000 mgr.y (mgr.24407) 1285 : cluster [DBG] pgmap v1729: 188 pgs: 188 active+clean; 484 KiB 
data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T07:59:56.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:56 vm00 bash[28005]: audit 2026-03-10T07:59:55.502065+0000 mon.c (mon.2) 499 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:59:56.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:56 vm00 bash[28005]: audit 2026-03-10T07:59:55.502065+0000 mon.c (mon.2) 499 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:59:56.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:56 vm00 bash[20701]: cluster 2026-03-10T07:59:55.032777+0000 mgr.y (mgr.24407) 1285 : cluster [DBG] pgmap v1729: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T07:59:56.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:56 vm00 bash[20701]: cluster 2026-03-10T07:59:55.032777+0000 mgr.y (mgr.24407) 1285 : cluster [DBG] pgmap v1729: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T07:59:56.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:56 vm00 bash[20701]: audit 2026-03-10T07:59:55.502065+0000 mon.c (mon.2) 499 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:59:56.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:56 vm00 bash[20701]: audit 2026-03-10T07:59:55.502065+0000 mon.c (mon.2) 499 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:59:58.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:58 vm03 bash[23382]: cluster 2026-03-10T07:59:57.033068+0000 mgr.y (mgr.24407) 1286 : cluster [DBG] pgmap v1730: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:59:58.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 07:59:58 vm03 bash[23382]: cluster 2026-03-10T07:59:57.033068+0000 mgr.y (mgr.24407) 1286 : cluster [DBG] pgmap v1730: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:59:58.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:58 vm00 bash[28005]: cluster 2026-03-10T07:59:57.033068+0000 mgr.y (mgr.24407) 1286 : cluster [DBG] pgmap v1730: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:59:58.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 07:59:58 vm00 bash[28005]: cluster 2026-03-10T07:59:57.033068+0000 mgr.y (mgr.24407) 1286 : cluster [DBG] pgmap v1730: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:59:58.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:58 vm00 bash[20701]: cluster 2026-03-10T07:59:57.033068+0000 mgr.y (mgr.24407) 1286 : cluster [DBG] pgmap v1730: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T07:59:58.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 07:59:58 vm00 bash[20701]: cluster 2026-03-10T07:59:57.033068+0000 mgr.y (mgr.24407) 1286 : cluster [DBG] pgmap v1730: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T08:00:00.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:00 vm03 bash[23382]: cluster 2026-03-10T07:59:59.033457+0000 mgr.y (mgr.24407) 1287 : cluster [DBG] pgmap v1731: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:00.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:00 vm03 bash[23382]: cluster 2026-03-10T07:59:59.033457+0000 mgr.y (mgr.24407) 1287 : cluster [DBG] pgmap v1731: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:00.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:00 vm03 bash[23382]: cluster 2026-03-10T08:00:00.000126+0000 mon.a (mon.0) 3674 : cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled 2026-03-10T08:00:00.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:00 vm03 bash[23382]: cluster 2026-03-10T08:00:00.000126+0000 mon.a (mon.0) 3674 : cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled 2026-03-10T08:00:00.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:00 vm03 bash[23382]: cluster 2026-03-10T08:00:00.000150+0000 mon.a (mon.0) 3675 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 1 pool(s) do not have an application enabled 2026-03-10T08:00:00.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:00 vm03 bash[23382]: cluster 2026-03-10T08:00:00.000150+0000 mon.a (mon.0) 3675 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 1 pool(s) do not have an application enabled 2026-03-10T08:00:00.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:00 vm03 bash[23382]: cluster 2026-03-10T08:00:00.000157+0000 mon.a (mon.0) 3676 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T08:00:00.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:00 vm03 bash[23382]: cluster 2026-03-10T08:00:00.000157+0000 mon.a (mon.0) 3676 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T08:00:00.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:00 vm03 bash[23382]: cluster 2026-03-10T08:00:00.000162+0000 mon.a (mon.0) 3677 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-10T08:00:00.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:00 vm03 bash[23382]: cluster 2026-03-10T08:00:00.000162+0000 mon.a (mon.0) 3677 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 
2026-03-10T08:00:00.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:00 vm00 bash[28005]: cluster 2026-03-10T07:59:59.033457+0000 mgr.y (mgr.24407) 1287 : cluster [DBG] pgmap v1731: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:00.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:00 vm00 bash[28005]: cluster 2026-03-10T07:59:59.033457+0000 mgr.y (mgr.24407) 1287 : cluster [DBG] pgmap v1731: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:00.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:00 vm00 bash[28005]: cluster 2026-03-10T08:00:00.000126+0000 mon.a (mon.0) 3674 : cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled 2026-03-10T08:00:00.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:00 vm00 bash[28005]: cluster 2026-03-10T08:00:00.000126+0000 mon.a (mon.0) 3674 : cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled 2026-03-10T08:00:00.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:00 vm00 bash[28005]: cluster 2026-03-10T08:00:00.000150+0000 mon.a (mon.0) 3675 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 1 pool(s) do not have an application enabled 2026-03-10T08:00:00.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:00 vm00 bash[28005]: cluster 2026-03-10T08:00:00.000150+0000 mon.a (mon.0) 3675 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 1 pool(s) do not have an application enabled 2026-03-10T08:00:00.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:00 vm00 bash[28005]: cluster 2026-03-10T08:00:00.000157+0000 mon.a (mon.0) 3676 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T08:00:00.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:00 vm00 bash[28005]: cluster 2026-03-10T08:00:00.000157+0000 mon.a (mon.0) 3676 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T08:00:00.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:00 vm00 bash[28005]: cluster 2026-03-10T08:00:00.000162+0000 mon.a (mon.0) 3677 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-10T08:00:00.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:00 vm00 bash[28005]: cluster 2026-03-10T08:00:00.000162+0000 mon.a (mon.0) 3677 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 
2026-03-10T08:00:00.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:00 vm00 bash[20701]: cluster 2026-03-10T07:59:59.033457+0000 mgr.y (mgr.24407) 1287 : cluster [DBG] pgmap v1731: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:00.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:00 vm00 bash[20701]: cluster 2026-03-10T07:59:59.033457+0000 mgr.y (mgr.24407) 1287 : cluster [DBG] pgmap v1731: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:00.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:00 vm00 bash[20701]: cluster 2026-03-10T08:00:00.000126+0000 mon.a (mon.0) 3674 : cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled 2026-03-10T08:00:00.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:00 vm00 bash[20701]: cluster 2026-03-10T08:00:00.000126+0000 mon.a (mon.0) 3674 : cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled 2026-03-10T08:00:00.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:00 vm00 bash[20701]: cluster 2026-03-10T08:00:00.000150+0000 mon.a (mon.0) 3675 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 1 pool(s) do not have an application enabled 2026-03-10T08:00:00.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:00 vm00 bash[20701]: cluster 2026-03-10T08:00:00.000150+0000 mon.a (mon.0) 3675 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 1 pool(s) do not have an application enabled 2026-03-10T08:00:00.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:00 vm00 bash[20701]: cluster 2026-03-10T08:00:00.000157+0000 mon.a (mon.0) 3676 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T08:00:00.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:00 vm00 bash[20701]: cluster 2026-03-10T08:00:00.000157+0000 mon.a (mon.0) 3676 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T08:00:00.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:00 vm00 bash[20701]: cluster 2026-03-10T08:00:00.000162+0000 mon.a (mon.0) 3677 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-10T08:00:00.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:00 vm00 bash[20701]: cluster 2026-03-10T08:00:00.000162+0000 mon.a (mon.0) 3677 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 
2026-03-10T08:00:01.375 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 08:00:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:08:00:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:00:02.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:02 vm03 bash[23382]: cluster 2026-03-10T08:00:01.034095+0000 mgr.y (mgr.24407) 1288 : cluster [DBG] pgmap v1732: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:02.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:02 vm03 bash[23382]: cluster 2026-03-10T08:00:01.034095+0000 mgr.y (mgr.24407) 1288 : cluster [DBG] pgmap v1732: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:02.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:02 vm00 bash[28005]: cluster 2026-03-10T08:00:01.034095+0000 mgr.y (mgr.24407) 1288 : cluster [DBG] pgmap v1732: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:02.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:02 vm00 bash[28005]: cluster 2026-03-10T08:00:01.034095+0000 mgr.y (mgr.24407) 1288 : cluster [DBG] pgmap v1732: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:02.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:02 vm00 bash[20701]: cluster 2026-03-10T08:00:01.034095+0000 mgr.y (mgr.24407) 1288 : cluster [DBG] pgmap v1732: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:02.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:02 vm00 bash[20701]: cluster 2026-03-10T08:00:01.034095+0000 mgr.y (mgr.24407) 1288 : cluster [DBG] pgmap v1732: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:04.512 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:00:04 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T08:00:04.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:04 vm03 bash[23382]: cluster 2026-03-10T08:00:03.034405+0000 mgr.y (mgr.24407) 1289 : cluster [DBG] pgmap v1733: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:04.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:04 vm03 bash[23382]: cluster 2026-03-10T08:00:03.034405+0000 mgr.y (mgr.24407) 1289 : cluster [DBG] pgmap v1733: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:04.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:04 vm00 bash[28005]: cluster 2026-03-10T08:00:03.034405+0000 mgr.y (mgr.24407) 1289 : cluster [DBG] pgmap v1733: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:04.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:04 vm00 bash[28005]: cluster 2026-03-10T08:00:03.034405+0000 mgr.y (mgr.24407) 1289 : cluster [DBG] pgmap v1733: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:04.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:04 vm00 bash[20701]: cluster 2026-03-10T08:00:03.034405+0000 mgr.y (mgr.24407) 1289 : cluster [DBG] pgmap v1733: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T08:00:04.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:04 vm00 bash[20701]: cluster 2026-03-10T08:00:03.034405+0000 mgr.y (mgr.24407) 1289 : cluster [DBG] pgmap v1733: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:05.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:05 vm00 bash[28005]: audit 2026-03-10T08:00:04.442614+0000 mgr.y (mgr.24407) 1290 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:00:05.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:05 vm00 bash[28005]: audit 2026-03-10T08:00:04.442614+0000 mgr.y (mgr.24407) 1290 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:00:05.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:05 vm00 bash[20701]: audit 2026-03-10T08:00:04.442614+0000 mgr.y (mgr.24407) 1290 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:00:05.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:05 vm00 bash[20701]: audit 2026-03-10T08:00:04.442614+0000 mgr.y (mgr.24407) 1290 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:00:05.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:05 vm03 bash[23382]: audit 2026-03-10T08:00:04.442614+0000 mgr.y (mgr.24407) 1290 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:00:05.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:05 vm03 bash[23382]: audit 2026-03-10T08:00:04.442614+0000 mgr.y (mgr.24407) 1290 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:00:06.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:06 vm00 bash[28005]: cluster 2026-03-10T08:00:05.034933+0000 mgr.y (mgr.24407) 1291 : cluster [DBG] pgmap v1734: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:06.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:06 vm00 bash[28005]: cluster 2026-03-10T08:00:05.034933+0000 mgr.y (mgr.24407) 1291 : cluster [DBG] pgmap v1734: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:06.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:06 vm00 bash[20701]: cluster 2026-03-10T08:00:05.034933+0000 mgr.y (mgr.24407) 1291 : cluster [DBG] pgmap v1734: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:06.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:06 vm00 bash[20701]: cluster 2026-03-10T08:00:05.034933+0000 mgr.y (mgr.24407) 1291 : cluster [DBG] pgmap v1734: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:06.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:06 vm03 bash[23382]: cluster 2026-03-10T08:00:05.034933+0000 mgr.y (mgr.24407) 1291 : cluster [DBG] pgmap v1734: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:06.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:06 vm03 
bash[23382]: cluster 2026-03-10T08:00:05.034933+0000 mgr.y (mgr.24407) 1291 : cluster [DBG] pgmap v1734: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:08.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:08 vm00 bash[28005]: cluster 2026-03-10T08:00:07.035216+0000 mgr.y (mgr.24407) 1292 : cluster [DBG] pgmap v1735: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:08.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:08 vm00 bash[28005]: cluster 2026-03-10T08:00:07.035216+0000 mgr.y (mgr.24407) 1292 : cluster [DBG] pgmap v1735: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:08.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:08 vm00 bash[20701]: cluster 2026-03-10T08:00:07.035216+0000 mgr.y (mgr.24407) 1292 : cluster [DBG] pgmap v1735: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:08.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:08 vm00 bash[20701]: cluster 2026-03-10T08:00:07.035216+0000 mgr.y (mgr.24407) 1292 : cluster [DBG] pgmap v1735: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:08.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:08 vm03 bash[23382]: cluster 2026-03-10T08:00:07.035216+0000 mgr.y (mgr.24407) 1292 : cluster [DBG] pgmap v1735: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:08.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:08 vm03 bash[23382]: cluster 2026-03-10T08:00:07.035216+0000 mgr.y (mgr.24407) 1292 : cluster [DBG] pgmap v1735: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:10.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:10 vm00 bash[28005]: cluster 2026-03-10T08:00:09.035490+0000 mgr.y (mgr.24407) 1293 : cluster [DBG] pgmap v1736: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:10.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:10 vm00 bash[28005]: cluster 2026-03-10T08:00:09.035490+0000 mgr.y (mgr.24407) 1293 : cluster [DBG] pgmap v1736: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:10.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:10 vm00 bash[20701]: cluster 2026-03-10T08:00:09.035490+0000 mgr.y (mgr.24407) 1293 : cluster [DBG] pgmap v1736: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:10.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:10 vm00 bash[20701]: cluster 2026-03-10T08:00:09.035490+0000 mgr.y (mgr.24407) 1293 : cluster [DBG] pgmap v1736: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:10.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:10 vm03 bash[23382]: cluster 2026-03-10T08:00:09.035490+0000 mgr.y (mgr.24407) 1293 : cluster [DBG] pgmap v1736: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:10.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:10 vm03 bash[23382]: cluster 2026-03-10T08:00:09.035490+0000 mgr.y (mgr.24407) 1293 : cluster [DBG] 
pgmap v1736: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:11.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:11 vm00 bash[28005]: audit 2026-03-10T08:00:10.507755+0000 mon.c (mon.2) 500 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:00:11.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:11 vm00 bash[28005]: audit 2026-03-10T08:00:10.507755+0000 mon.c (mon.2) 500 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:00:11.375 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 08:00:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:08:00:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:00:11.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:11 vm00 bash[20701]: audit 2026-03-10T08:00:10.507755+0000 mon.c (mon.2) 500 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:00:11.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:11 vm00 bash[20701]: audit 2026-03-10T08:00:10.507755+0000 mon.c (mon.2) 500 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:00:11.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:11 vm03 bash[23382]: audit 2026-03-10T08:00:10.507755+0000 mon.c (mon.2) 500 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:00:11.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:11 vm03 bash[23382]: audit 2026-03-10T08:00:10.507755+0000 mon.c (mon.2) 500 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:00:12.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:12 vm00 bash[28005]: cluster 2026-03-10T08:00:11.036083+0000 mgr.y (mgr.24407) 1294 : cluster [DBG] pgmap v1737: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:12.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:12 vm00 bash[28005]: cluster 2026-03-10T08:00:11.036083+0000 mgr.y (mgr.24407) 1294 : cluster [DBG] pgmap v1737: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:12.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:12 vm00 bash[20701]: cluster 2026-03-10T08:00:11.036083+0000 mgr.y (mgr.24407) 1294 : cluster [DBG] pgmap v1737: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:12.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:12 vm00 bash[20701]: cluster 2026-03-10T08:00:11.036083+0000 mgr.y (mgr.24407) 1294 : cluster [DBG] pgmap v1737: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:12.761 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:12 vm03 bash[23382]: cluster 2026-03-10T08:00:11.036083+0000 mgr.y (mgr.24407) 1294 : cluster [DBG] pgmap v1737: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:12.762 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:12 vm03 bash[23382]: cluster 2026-03-10T08:00:11.036083+0000 mgr.y (mgr.24407) 1294 : cluster [DBG] pgmap v1737: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:13.141 INFO:tasks.workunit.client.0.vm00.stderr:+ seq 1 10 2026-03-10T08:00:13.142 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p a16f944b-49df-4ed0-bee4-6bfebe190ca8 put obj1 /etc/passwd 2026-03-10T08:00:13.168 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p a16f944b-49df-4ed0-bee4-6bfebe190ca8 put obj2 /etc/passwd 2026-03-10T08:00:13.192 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p a16f944b-49df-4ed0-bee4-6bfebe190ca8 put obj3 /etc/passwd 2026-03-10T08:00:13.215 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p a16f944b-49df-4ed0-bee4-6bfebe190ca8 put obj4 /etc/passwd 2026-03-10T08:00:13.239 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p a16f944b-49df-4ed0-bee4-6bfebe190ca8 put obj5 /etc/passwd 2026-03-10T08:00:13.263 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p a16f944b-49df-4ed0-bee4-6bfebe190ca8 put obj6 /etc/passwd 2026-03-10T08:00:13.287 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p a16f944b-49df-4ed0-bee4-6bfebe190ca8 put obj7 /etc/passwd 2026-03-10T08:00:13.311 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p a16f944b-49df-4ed0-bee4-6bfebe190ca8 put obj8 /etc/passwd 2026-03-10T08:00:13.337 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p a16f944b-49df-4ed0-bee4-6bfebe190ca8 put obj9 /etc/passwd 2026-03-10T08:00:13.361 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p a16f944b-49df-4ed0-bee4-6bfebe190ca8 put obj10 /etc/passwd 2026-03-10T08:00:13.384 INFO:tasks.workunit.client.0.vm00.stderr:+ sleep 30 2026-03-10T08:00:14.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:14 vm00 bash[28005]: cluster 2026-03-10T08:00:13.036311+0000 mgr.y (mgr.24407) 1295 : cluster [DBG] pgmap v1738: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:14.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:14 vm00 bash[28005]: cluster 2026-03-10T08:00:13.036311+0000 mgr.y (mgr.24407) 1295 : cluster [DBG] pgmap v1738: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:14.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:14 vm00 bash[20701]: cluster 2026-03-10T08:00:13.036311+0000 mgr.y (mgr.24407) 1295 : cluster [DBG] pgmap v1738: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:14.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:14 vm00 bash[20701]: cluster 2026-03-10T08:00:13.036311+0000 mgr.y (mgr.24407) 1295 : cluster [DBG] pgmap v1738: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:14.762 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:00:14 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T08:00:14.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:14 vm03 bash[23382]: cluster 2026-03-10T08:00:13.036311+0000 mgr.y (mgr.24407) 1295 : cluster [DBG] pgmap v1738: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:14.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:14 vm03 bash[23382]: cluster 2026-03-10T08:00:13.036311+0000 mgr.y (mgr.24407) 1295 : cluster [DBG] pgmap v1738: 188 pgs: 188 
active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:15.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:15 vm00 bash[28005]: audit 2026-03-10T08:00:14.449699+0000 mgr.y (mgr.24407) 1296 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:00:15.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:15 vm00 bash[28005]: audit 2026-03-10T08:00:14.449699+0000 mgr.y (mgr.24407) 1296 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:00:15.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:15 vm00 bash[20701]: audit 2026-03-10T08:00:14.449699+0000 mgr.y (mgr.24407) 1296 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:00:15.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:15 vm00 bash[20701]: audit 2026-03-10T08:00:14.449699+0000 mgr.y (mgr.24407) 1296 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:00:15.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:15 vm03 bash[23382]: audit 2026-03-10T08:00:14.449699+0000 mgr.y (mgr.24407) 1296 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:00:15.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:15 vm03 bash[23382]: audit 2026-03-10T08:00:14.449699+0000 mgr.y (mgr.24407) 1296 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:00:16.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:16 vm00 bash[28005]: cluster 2026-03-10T08:00:15.036956+0000 mgr.y (mgr.24407) 1297 : cluster [DBG] pgmap v1739: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-10T08:00:16.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:16 vm00 bash[28005]: cluster 2026-03-10T08:00:15.036956+0000 mgr.y (mgr.24407) 1297 : cluster [DBG] pgmap v1739: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-10T08:00:16.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:16 vm00 bash[20701]: cluster 2026-03-10T08:00:15.036956+0000 mgr.y (mgr.24407) 1297 : cluster [DBG] pgmap v1739: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-10T08:00:16.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:16 vm00 bash[20701]: cluster 2026-03-10T08:00:15.036956+0000 mgr.y (mgr.24407) 1297 : cluster [DBG] pgmap v1739: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-10T08:00:16.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:16 vm03 bash[23382]: cluster 2026-03-10T08:00:15.036956+0000 mgr.y (mgr.24407) 1297 : cluster [DBG] pgmap v1739: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-10T08:00:16.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:16 vm03 bash[23382]: cluster 2026-03-10T08:00:15.036956+0000 mgr.y (mgr.24407) 1297 : cluster [DBG] pgmap v1739: 188 pgs: 188 active+clean; 505 KiB 
data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-10T08:00:17.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:17 vm00 bash[28005]: cluster 2026-03-10T08:00:17.075144+0000 mon.a (mon.0) 3678 : cluster [WRN] pool 'a16f944b-49df-4ed0-bee4-6bfebe190ca8' is full (reached quota's max_objects: 10) 2026-03-10T08:00:17.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:17 vm00 bash[28005]: cluster 2026-03-10T08:00:17.075144+0000 mon.a (mon.0) 3678 : cluster [WRN] pool 'a16f944b-49df-4ed0-bee4-6bfebe190ca8' is full (reached quota's max_objects: 10) 2026-03-10T08:00:17.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:17 vm00 bash[28005]: cluster 2026-03-10T08:00:17.075385+0000 mon.a (mon.0) 3679 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T08:00:17.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:17 vm00 bash[28005]: cluster 2026-03-10T08:00:17.075385+0000 mon.a (mon.0) 3679 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T08:00:17.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:17 vm00 bash[28005]: cluster 2026-03-10T08:00:17.084266+0000 mon.a (mon.0) 3680 : cluster [DBG] osdmap e782: 8 total, 8 up, 8 in 2026-03-10T08:00:17.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:17 vm00 bash[28005]: cluster 2026-03-10T08:00:17.084266+0000 mon.a (mon.0) 3680 : cluster [DBG] osdmap e782: 8 total, 8 up, 8 in 2026-03-10T08:00:17.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:17 vm00 bash[20701]: cluster 2026-03-10T08:00:17.075144+0000 mon.a (mon.0) 3678 : cluster [WRN] pool 'a16f944b-49df-4ed0-bee4-6bfebe190ca8' is full (reached quota's max_objects: 10) 2026-03-10T08:00:17.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:17 vm00 bash[20701]: cluster 2026-03-10T08:00:17.075144+0000 mon.a (mon.0) 3678 : cluster [WRN] pool 'a16f944b-49df-4ed0-bee4-6bfebe190ca8' is full (reached quota's max_objects: 10) 2026-03-10T08:00:17.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:17 vm00 bash[20701]: cluster 2026-03-10T08:00:17.075385+0000 mon.a (mon.0) 3679 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T08:00:17.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:17 vm00 bash[20701]: cluster 2026-03-10T08:00:17.075385+0000 mon.a (mon.0) 3679 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T08:00:17.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:17 vm00 bash[20701]: cluster 2026-03-10T08:00:17.084266+0000 mon.a (mon.0) 3680 : cluster [DBG] osdmap e782: 8 total, 8 up, 8 in 2026-03-10T08:00:17.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:17 vm00 bash[20701]: cluster 2026-03-10T08:00:17.084266+0000 mon.a (mon.0) 3680 : cluster [DBG] osdmap e782: 8 total, 8 up, 8 in 2026-03-10T08:00:17.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:17 vm03 bash[23382]: cluster 2026-03-10T08:00:17.075144+0000 mon.a (mon.0) 3678 : cluster [WRN] pool 'a16f944b-49df-4ed0-bee4-6bfebe190ca8' is full (reached quota's max_objects: 10) 2026-03-10T08:00:17.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:17 vm03 bash[23382]: cluster 2026-03-10T08:00:17.075144+0000 mon.a (mon.0) 3678 : cluster [WRN] pool 'a16f944b-49df-4ed0-bee4-6bfebe190ca8' is full (reached quota's max_objects: 10) 2026-03-10T08:00:17.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:17 vm03 bash[23382]: cluster 2026-03-10T08:00:17.075385+0000 mon.a (mon.0) 3679 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 
2026-03-10T08:00:17.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:17 vm03 bash[23382]: cluster 2026-03-10T08:00:17.075385+0000 mon.a (mon.0) 3679 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T08:00:17.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:17 vm03 bash[23382]: cluster 2026-03-10T08:00:17.084266+0000 mon.a (mon.0) 3680 : cluster [DBG] osdmap e782: 8 total, 8 up, 8 in 2026-03-10T08:00:17.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:17 vm03 bash[23382]: cluster 2026-03-10T08:00:17.084266+0000 mon.a (mon.0) 3680 : cluster [DBG] osdmap e782: 8 total, 8 up, 8 in 2026-03-10T08:00:18.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:18 vm00 bash[28005]: cluster 2026-03-10T08:00:17.037288+0000 mgr.y (mgr.24407) 1298 : cluster [DBG] pgmap v1740: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 2.5 KiB/s wr, 1 op/s 2026-03-10T08:00:18.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:18 vm00 bash[28005]: cluster 2026-03-10T08:00:17.037288+0000 mgr.y (mgr.24407) 1298 : cluster [DBG] pgmap v1740: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 2.5 KiB/s wr, 1 op/s 2026-03-10T08:00:18.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:18 vm00 bash[20701]: cluster 2026-03-10T08:00:17.037288+0000 mgr.y (mgr.24407) 1298 : cluster [DBG] pgmap v1740: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 2.5 KiB/s wr, 1 op/s 2026-03-10T08:00:18.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:18 vm00 bash[20701]: cluster 2026-03-10T08:00:17.037288+0000 mgr.y (mgr.24407) 1298 : cluster [DBG] pgmap v1740: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 2.5 KiB/s wr, 1 op/s 2026-03-10T08:00:18.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:18 vm03 bash[23382]: cluster 2026-03-10T08:00:17.037288+0000 mgr.y (mgr.24407) 1298 : cluster [DBG] pgmap v1740: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 2.5 KiB/s wr, 1 op/s 2026-03-10T08:00:18.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:18 vm03 bash[23382]: cluster 2026-03-10T08:00:17.037288+0000 mgr.y (mgr.24407) 1298 : cluster [DBG] pgmap v1740: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 2.5 KiB/s wr, 1 op/s 2026-03-10T08:00:20.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:20 vm00 bash[28005]: cluster 2026-03-10T08:00:19.037593+0000 mgr.y (mgr.24407) 1299 : cluster [DBG] pgmap v1742: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T08:00:20.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:20 vm00 bash[28005]: cluster 2026-03-10T08:00:19.037593+0000 mgr.y (mgr.24407) 1299 : cluster [DBG] pgmap v1742: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T08:00:20.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:20 vm00 bash[20701]: cluster 2026-03-10T08:00:19.037593+0000 mgr.y (mgr.24407) 1299 : cluster [DBG] pgmap v1742: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T08:00:20.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:20 vm00 bash[20701]: cluster 2026-03-10T08:00:19.037593+0000 mgr.y (mgr.24407) 1299 : cluster [DBG] pgmap v1742: 188 pgs: 188 
active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T08:00:20.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:20 vm03 bash[23382]: cluster 2026-03-10T08:00:19.037593+0000 mgr.y (mgr.24407) 1299 : cluster [DBG] pgmap v1742: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T08:00:20.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:20 vm03 bash[23382]: cluster 2026-03-10T08:00:19.037593+0000 mgr.y (mgr.24407) 1299 : cluster [DBG] pgmap v1742: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T08:00:21.375 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 08:00:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:08:00:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:00:22.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:22 vm00 bash[28005]: cluster 2026-03-10T08:00:21.038193+0000 mgr.y (mgr.24407) 1300 : cluster [DBG] pgmap v1743: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T08:00:22.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:22 vm00 bash[28005]: cluster 2026-03-10T08:00:21.038193+0000 mgr.y (mgr.24407) 1300 : cluster [DBG] pgmap v1743: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T08:00:22.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:22 vm00 bash[20701]: cluster 2026-03-10T08:00:21.038193+0000 mgr.y (mgr.24407) 1300 : cluster [DBG] pgmap v1743: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T08:00:22.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:22 vm00 bash[20701]: cluster 2026-03-10T08:00:21.038193+0000 mgr.y (mgr.24407) 1300 : cluster [DBG] pgmap v1743: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T08:00:22.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:22 vm03 bash[23382]: cluster 2026-03-10T08:00:21.038193+0000 mgr.y (mgr.24407) 1300 : cluster [DBG] pgmap v1743: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T08:00:22.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:22 vm03 bash[23382]: cluster 2026-03-10T08:00:21.038193+0000 mgr.y (mgr.24407) 1300 : cluster [DBG] pgmap v1743: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T08:00:24.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:24 vm00 bash[28005]: cluster 2026-03-10T08:00:23.038494+0000 mgr.y (mgr.24407) 1301 : cluster [DBG] pgmap v1744: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T08:00:24.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:24 vm00 bash[28005]: cluster 2026-03-10T08:00:23.038494+0000 mgr.y (mgr.24407) 1301 : cluster [DBG] pgmap v1744: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T08:00:24.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:24 vm00 bash[20701]: cluster 2026-03-10T08:00:23.038494+0000 mgr.y (mgr.24407) 1301 : cluster [DBG] pgmap v1744: 188 pgs: 188 active+clean; 505 
KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s
2026-03-10T08:00:24.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:24 vm00 bash[20701]: cluster 2026-03-10T08:00:23.038494+0000 mgr.y (mgr.24407) 1301 : cluster [DBG] pgmap v1744: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s
2026-03-10T08:00:24.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:24 vm03 bash[23382]: cluster 2026-03-10T08:00:23.038494+0000 mgr.y (mgr.24407) 1301 : cluster [DBG] pgmap v1744: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s
2026-03-10T08:00:24.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:24 vm03 bash[23382]: cluster 2026-03-10T08:00:23.038494+0000 mgr.y (mgr.24407) 1301 : cluster [DBG] pgmap v1744: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s
2026-03-10T08:00:24.762 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:00:24 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T08:00:25.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:25 vm00 bash[28005]: audit 2026-03-10T08:00:24.458631+0000 mgr.y (mgr.24407) 1302 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:00:25.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:25 vm00 bash[28005]: audit 2026-03-10T08:00:24.458631+0000 mgr.y (mgr.24407) 1302 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:00:25.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:25 vm00 bash[20701]: audit 2026-03-10T08:00:24.458631+0000 mgr.y (mgr.24407) 1302 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:00:25.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:25 vm00 bash[20701]: audit 2026-03-10T08:00:24.458631+0000 mgr.y (mgr.24407) 1302 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:00:25.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:25 vm03 bash[23382]: audit 2026-03-10T08:00:24.458631+0000 mgr.y (mgr.24407) 1302 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:00:25.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:25 vm03 bash[23382]: audit 2026-03-10T08:00:24.458631+0000 mgr.y (mgr.24407) 1302 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:00:26.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:26 vm03 bash[23382]: cluster 2026-03-10T08:00:25.039016+0000 mgr.y (mgr.24407) 1303 : cluster [DBG] pgmap v1745: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T08:00:26.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:26 vm03 bash[23382]: cluster 2026-03-10T08:00:25.039016+0000 mgr.y (mgr.24407) 1303 : cluster [DBG] pgmap v1745: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T08:00:26.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:26 vm03 bash[23382]: audit 2026-03-10T08:00:25.516926+0000 mon.a (mon.0) 3681 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T08:00:26.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:26 vm03 bash[23382]: audit 2026-03-10T08:00:25.516926+0000 mon.a (mon.0) 3681 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T08:00:26.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:26 vm03 bash[23382]: audit 2026-03-10T08:00:25.517970+0000 mon.c (mon.2) 501 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:00:26.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:26 vm03 bash[23382]: audit 2026-03-10T08:00:25.517970+0000 mon.c (mon.2) 501 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:00:26.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:26 vm00 bash[28005]: cluster 2026-03-10T08:00:25.039016+0000 mgr.y (mgr.24407) 1303 : cluster [DBG] pgmap v1745: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T08:00:26.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:26 vm00 bash[28005]: cluster 2026-03-10T08:00:25.039016+0000 mgr.y (mgr.24407) 1303 : cluster [DBG] pgmap v1745: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T08:00:26.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:26 vm00 bash[28005]: audit 2026-03-10T08:00:25.516926+0000 mon.a (mon.0) 3681 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T08:00:26.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:26 vm00 bash[28005]: audit 2026-03-10T08:00:25.516926+0000 mon.a (mon.0) 3681 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T08:00:26.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:26 vm00 bash[28005]: audit 2026-03-10T08:00:25.517970+0000 mon.c (mon.2) 501 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:00:26.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:26 vm00 bash[28005]: audit 2026-03-10T08:00:25.517970+0000 mon.c (mon.2) 501 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:00:26.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:26 vm00 bash[20701]: cluster 2026-03-10T08:00:25.039016+0000 mgr.y (mgr.24407) 1303 : cluster [DBG] pgmap v1745: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T08:00:26.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:26 vm00 bash[20701]: cluster 2026-03-10T08:00:25.039016+0000 mgr.y (mgr.24407) 1303 : cluster [DBG] pgmap v1745: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T08:00:26.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:26 vm00 bash[20701]: audit 2026-03-10T08:00:25.516926+0000 mon.a (mon.0) 3681 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T08:00:26.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:26 vm00 bash[20701]: audit 2026-03-10T08:00:25.516926+0000 mon.a (mon.0) 3681 : audit [INF] from='mgr.24407 ' entity='mgr.y'
2026-03-10T08:00:26.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:26 vm00 bash[20701]: audit 2026-03-10T08:00:25.517970+0000 mon.c (mon.2) 501 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:00:26.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:26 vm00 bash[20701]: audit 2026-03-10T08:00:25.517970+0000 mon.c (mon.2) 501 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:00:28.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:28 vm00 bash[28005]: cluster 2026-03-10T08:00:27.039318+0000 mgr.y (mgr.24407) 1304 : cluster [DBG] pgmap v1746: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T08:00:28.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:28 vm00 bash[28005]: cluster 2026-03-10T08:00:27.039318+0000 mgr.y (mgr.24407) 1304 : cluster [DBG] pgmap v1746: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T08:00:28.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:28 vm00 bash[20701]: cluster 2026-03-10T08:00:27.039318+0000 mgr.y (mgr.24407) 1304 : cluster [DBG] pgmap v1746: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T08:00:28.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:28 vm00 bash[20701]: cluster 2026-03-10T08:00:27.039318+0000 mgr.y (mgr.24407) 1304 : cluster [DBG] pgmap v1746: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T08:00:29.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:28 vm03 bash[23382]: cluster 2026-03-10T08:00:27.039318+0000 mgr.y (mgr.24407) 1304 : cluster [DBG] pgmap v1746: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T08:00:29.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:28 vm03 bash[23382]: cluster 2026-03-10T08:00:27.039318+0000 mgr.y (mgr.24407) 1304 : cluster [DBG] pgmap v1746: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T08:00:30.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:30 vm00 bash[28005]: cluster 2026-03-10T08:00:29.039670+0000 mgr.y (mgr.24407) 1305 : cluster [DBG] pgmap v1747: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 856 B/s rd, 0 op/s
2026-03-10T08:00:30.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:30 vm00 bash[28005]: cluster 2026-03-10T08:00:29.039670+0000 mgr.y (mgr.24407) 1305 : cluster [DBG] pgmap v1747: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 856 B/s rd, 0 op/s
2026-03-10T08:00:30.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:30 vm00 bash[20701]: cluster 2026-03-10T08:00:29.039670+0000 mgr.y (mgr.24407) 1305 : cluster [DBG] pgmap v1747: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 856 B/s rd, 0 op/s
2026-03-10T08:00:30.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:30 vm00 bash[20701]: cluster 2026-03-10T08:00:29.039670+0000 mgr.y (mgr.24407) 1305 : cluster [DBG] pgmap v1747: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 856 B/s rd, 0 op/s
2026-03-10T08:00:31.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:30 vm03 bash[23382]: cluster 2026-03-10T08:00:29.039670+0000 mgr.y (mgr.24407) 1305 : cluster [DBG] pgmap v1747: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 856 B/s rd, 0 op/s
2026-03-10T08:00:31.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:30 vm03 bash[23382]: cluster 2026-03-10T08:00:29.039670+0000 mgr.y (mgr.24407) 1305 : cluster [DBG] pgmap v1747: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 856 B/s rd, 0 op/s
2026-03-10T08:00:31.375 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 08:00:31 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:08:00:31] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T08:00:32.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:32 vm00 bash[28005]: cluster 2026-03-10T08:00:31.040563+0000 mgr.y (mgr.24407) 1306 : cluster [DBG] pgmap v1748: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:00:32.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:32 vm00 bash[28005]: cluster 2026-03-10T08:00:31.040563+0000 mgr.y (mgr.24407) 1306 : cluster [DBG] pgmap v1748: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:00:32.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:32 vm00 bash[20701]: cluster 2026-03-10T08:00:31.040563+0000 mgr.y (mgr.24407) 1306 : cluster [DBG] pgmap v1748: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:00:32.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:32 vm00 bash[20701]: cluster 2026-03-10T08:00:31.040563+0000 mgr.y (mgr.24407) 1306 : cluster [DBG] pgmap v1748: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:00:33.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:32 vm03 bash[23382]: cluster 2026-03-10T08:00:31.040563+0000 mgr.y (mgr.24407) 1306 : cluster [DBG] pgmap v1748: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:00:33.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:32 vm03 bash[23382]: cluster 2026-03-10T08:00:31.040563+0000 mgr.y (mgr.24407) 1306 : cluster [DBG] pgmap v1748: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:00:33.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:33 vm00 bash[28005]: cluster 2026-03-10T08:00:33.041168+0000 mgr.y (mgr.24407) 1307 : cluster [DBG] pgmap v1749: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:33.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:33 vm00 bash[28005]: cluster 2026-03-10T08:00:33.041168+0000 mgr.y (mgr.24407) 1307 : cluster [DBG] pgmap v1749: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:33.875 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:33 vm00 bash[20701]: cluster 2026-03-10T08:00:33.041168+0000 mgr.y (mgr.24407) 1307 : cluster [DBG] pgmap v1749: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:33.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:33 vm00 bash[20701]: cluster 2026-03-10T08:00:33.041168+0000 mgr.y (mgr.24407) 1307 : cluster [DBG] pgmap v1749: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:34.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:33 vm03 bash[23382]: cluster 2026-03-10T08:00:33.041168+0000 mgr.y (mgr.24407) 1307 : cluster [DBG] pgmap v1749: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:34.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:33 vm03 bash[23382]: cluster 2026-03-10T08:00:33.041168+0000 mgr.y (mgr.24407) 1307 : cluster [DBG] pgmap v1749: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:34.761 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:00:34 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T08:00:34.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:34 vm03 bash[23382]: audit 2026-03-10T08:00:34.462141+0000 mgr.y (mgr.24407) 1308 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:00:34.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:34 vm03 bash[23382]: audit 2026-03-10T08:00:34.462141+0000 mgr.y (mgr.24407) 1308 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:00:34.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:34 vm00 bash[28005]: audit 2026-03-10T08:00:34.462141+0000 mgr.y (mgr.24407) 1308 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:00:34.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:34 vm00 bash[28005]: audit 2026-03-10T08:00:34.462141+0000 mgr.y (mgr.24407) 1308 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:00:34.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:34 vm00 bash[20701]: audit 2026-03-10T08:00:34.462141+0000 mgr.y (mgr.24407) 1308 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:00:34.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:34 vm00 bash[20701]: audit 2026-03-10T08:00:34.462141+0000 mgr.y (mgr.24407) 1308 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:00:35.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:35 vm00 bash[28005]: cluster 2026-03-10T08:00:35.041813+0000 mgr.y (mgr.24407) 1309 : cluster [DBG] pgmap v1750: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:00:35.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:35 vm00 bash[28005]: cluster 2026-03-10T08:00:35.041813+0000 mgr.y (mgr.24407) 1309 : cluster [DBG] pgmap v1750: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:00:35.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:35 vm00 bash[20701]: cluster 2026-03-10T08:00:35.041813+0000 mgr.y (mgr.24407) 1309 : cluster [DBG] pgmap v1750: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:00:35.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:35 vm00 bash[20701]: cluster 2026-03-10T08:00:35.041813+0000 mgr.y (mgr.24407) 1309 : cluster [DBG] pgmap v1750: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:00:36.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:35 vm03 bash[23382]: cluster 2026-03-10T08:00:35.041813+0000 mgr.y (mgr.24407) 1309 : cluster [DBG] pgmap v1750: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:00:36.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:35 vm03 bash[23382]: cluster 2026-03-10T08:00:35.041813+0000 mgr.y (mgr.24407) 1309 : cluster [DBG] pgmap v1750: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:00:38.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:38 vm00 bash[20701]: cluster 2026-03-10T08:00:37.042775+0000 mgr.y (mgr.24407) 1310 : cluster [DBG] pgmap v1751: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:38.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:38 vm00 bash[20701]: cluster 2026-03-10T08:00:37.042775+0000 mgr.y (mgr.24407) 1310 : cluster [DBG] pgmap v1751: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:38.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:38 vm00 bash[28005]: cluster 2026-03-10T08:00:37.042775+0000 mgr.y (mgr.24407) 1310 : cluster [DBG] pgmap v1751: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:38.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:38 vm00 bash[28005]: cluster 2026-03-10T08:00:37.042775+0000 mgr.y (mgr.24407) 1310 : cluster [DBG] pgmap v1751: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:38.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:38 vm03 bash[23382]: cluster 2026-03-10T08:00:37.042775+0000 mgr.y (mgr.24407) 1310 : cluster [DBG] pgmap v1751: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:38.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:38 vm03 bash[23382]: cluster 2026-03-10T08:00:37.042775+0000 mgr.y (mgr.24407) 1310 : cluster [DBG] pgmap v1751: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:40.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:40 vm03 bash[23382]: cluster 2026-03-10T08:00:39.043208+0000 mgr.y (mgr.24407) 1311 : cluster [DBG] pgmap v1752: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:40.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:40 vm03 bash[23382]: cluster 2026-03-10T08:00:39.043208+0000 mgr.y (mgr.24407) 1311 : cluster [DBG] pgmap v1752: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:40.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:40 vm00 bash[28005]: cluster 2026-03-10T08:00:39.043208+0000 mgr.y (mgr.24407) 1311 : cluster [DBG] pgmap v1752: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:40.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:40 vm00 bash[28005]: cluster 2026-03-10T08:00:39.043208+0000 mgr.y (mgr.24407) 1311 : cluster [DBG] pgmap v1752: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:40.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:40 vm00 bash[20701]: cluster 2026-03-10T08:00:39.043208+0000 mgr.y (mgr.24407) 1311 : cluster [DBG] pgmap v1752: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:40.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:40 vm00 bash[20701]: cluster 2026-03-10T08:00:39.043208+0000 mgr.y (mgr.24407) 1311 : cluster [DBG] pgmap v1752: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:41.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:41 vm00 bash[28005]: audit 2026-03-10T08:00:40.529601+0000 mon.c (mon.2) 502 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:00:41.375 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:41 vm00 bash[28005]: audit 2026-03-10T08:00:40.529601+0000 mon.c (mon.2) 502 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:00:41.375 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 08:00:41 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:08:00:41] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T08:00:41.375 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:41 vm00 bash[20701]: audit 2026-03-10T08:00:40.529601+0000 mon.c (mon.2) 502 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:00:41.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:41 vm00 bash[20701]: audit 2026-03-10T08:00:40.529601+0000 mon.c (mon.2) 502 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:00:41.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:41 vm03 bash[23382]: audit 2026-03-10T08:00:40.529601+0000 mon.c (mon.2) 502 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:00:41.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:41 vm03 bash[23382]: audit 2026-03-10T08:00:40.529601+0000 mon.c (mon.2) 502 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:00:42.511 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:42 vm03 bash[23382]: cluster 2026-03-10T08:00:41.044326+0000 mgr.y (mgr.24407) 1312 : cluster [DBG] pgmap v1753: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:00:42.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:42 vm03 bash[23382]: cluster 2026-03-10T08:00:41.044326+0000 mgr.y (mgr.24407) 1312 : cluster [DBG] pgmap v1753: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:00:42.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:42 vm00 bash[28005]: cluster 2026-03-10T08:00:41.044326+0000 mgr.y (mgr.24407) 1312 : cluster [DBG] pgmap v1753: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:00:42.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:42 vm00 bash[28005]: cluster 2026-03-10T08:00:41.044326+0000 mgr.y (mgr.24407) 1312 : cluster [DBG] pgmap v1753: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:00:42.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:42 vm00 bash[20701]: cluster 2026-03-10T08:00:41.044326+0000 mgr.y (mgr.24407) 1312 : cluster [DBG] pgmap v1753: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:00:42.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:42 vm00 bash[20701]: cluster 2026-03-10T08:00:41.044326+0000 mgr.y (mgr.24407) 1312 : cluster [DBG] pgmap v1753: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:00:43.385 INFO:tasks.workunit.client.0.vm00.stderr:+ rados -p 848a4d04-314a-4289-950b-2472b7cc83f9 put threemore /etc/passwd
2026-03-10T08:00:43.413 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool set-quota 848a4d04-314a-4289-950b-2472b7cc83f9 max_bytes 0
2026-03-10T08:00:43.477 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.484+0000 7fa700f48640  1 -- 192.168.123.100:0/1224488055 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa6fc106830 msgr2=0x7fa6fc075420 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T08:00:43.477 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.484+0000 7fa700f48640  1 --2- 192.168.123.100:0/1224488055 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa6fc106830 0x7fa6fc075420 secure :-1 s=READY pgs=3159 cs=0 l=1 rev1=1 crypto rx=0x7fa6f0009a30 tx=0x7fa6f001c990 comp rx=0 tx=0).stop
2026-03-10T08:00:43.478 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.484+0000 7fa700f48640  1 -- 192.168.123.100:0/1224488055 shutdown_connections
2026-03-10T08:00:43.478 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.484+0000 7fa700f48640  1 --2- 192.168.123.100:0/1224488055 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa6fc113730 0x7fa6fc115b60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:00:43.478 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.484+0000 7fa700f48640  1 --2- 192.168.123.100:0/1224488055 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fa6fc075960 0x7fa6fc075da0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:00:43.478 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.484+0000 7fa700f48640  1 --2- 192.168.123.100:0/1224488055 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa6fc106830 0x7fa6fc075420 unknown :-1 s=CLOSED pgs=3159 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:00:43.478 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.484+0000 7fa700f48640  1 -- 192.168.123.100:0/1224488055 >> 192.168.123.100:0/1224488055 conn(0x7fa6fc0fe640 msgr2=0x7fa6fc100a60 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T08:00:43.478 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.484+0000 7fa700f48640  1 -- 192.168.123.100:0/1224488055 shutdown_connections
2026-03-10T08:00:43.478 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.484+0000 7fa700f48640  1 -- 192.168.123.100:0/1224488055 wait complete.
2026-03-10T08:00:43.478 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.484+0000 7fa700f48640  1  Processor -- start
2026-03-10T08:00:43.478 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.484+0000 7fa700f48640  1 -- start start
2026-03-10T08:00:43.479 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.484+0000 7fa700f48640  1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa6fc075960 0x7fa6fc1a41b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T08:00:43.479 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.484+0000 7fa700f48640  1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fa6fc106830 0x7fa6fc1a46f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T08:00:43.479 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.484+0000 7fa700f48640  1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa6fc113730 0x7fa6fc1a8a80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T08:00:43.479 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.484+0000 7fa700f48640  1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fa6fc11bac0 con 0x7fa6fc113730
2026-03-10T08:00:43.479 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.484+0000 7fa700f48640  1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7fa6fc11b940 con 0x7fa6fc106830
2026-03-10T08:00:43.479 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.484+0000 7fa700f48640  1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fa6fc11bc40 con 0x7fa6fc075960
2026-03-10T08:00:43.479 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.484+0000 7fa6fa575640  1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa6fc075960 0x7fa6fc1a41b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T08:00:43.479 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.484+0000 7fa6fa575640  1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa6fc075960 0x7fa6fc1a41b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:35910/0 (socket says 192.168.123.100:35910)
2026-03-10T08:00:43.479 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.484+0000 7fa6fa575640  1 -- 192.168.123.100:0/3316182820 learned_addr learned my addr 192.168.123.100:0/3316182820 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T08:00:43.479 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.488+0000 7fa6fa575640  1 -- 192.168.123.100:0/3316182820 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fa6fc106830 msgr2=0x7fa6fc1a46f0 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T08:00:43.479 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.488+0000 7fa6fad76640  1 --2- 192.168.123.100:0/3316182820 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa6fc113730 0x7fa6fc1a8a80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T08:00:43.479 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.488+0000 7fa6fa575640  1 --2- 192.168.123.100:0/3316182820 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fa6fc106830 0x7fa6fc1a46f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:00:43.479 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.488+0000 7fa6fa575640  1 -- 192.168.123.100:0/3316182820 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa6fc113730 msgr2=0x7fa6fc1a8a80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T08:00:43.479 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.488+0000 7fa6fa575640  1 --2- 192.168.123.100:0/3316182820 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa6fc113730 0x7fa6fc1a8a80 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:00:43.479 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.488+0000 7fa6fa575640  1 -- 192.168.123.100:0/3316182820 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa6fc1a9160 con 0x7fa6fc075960
2026-03-10T08:00:43.479 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.488+0000 7fa6fad76640  1 --2- 192.168.123.100:0/3316182820 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa6fc113730 0x7fa6fc1a8a80 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T08:00:43.479 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.488+0000 7fa6fa575640  1 --2- 192.168.123.100:0/3316182820 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa6fc075960 0x7fa6fc1a41b0 secure :-1 s=READY pgs=3160 cs=0 l=1 rev1=1 crypto rx=0x7fa6f0019770 tx=0x7fa6f0005fb0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T08:00:43.479 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.488+0000 7fa6e37fe640  1 -- 192.168.123.100:0/3316182820 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fa6f0004280 con 0x7fa6fc075960
2026-03-10T08:00:43.480 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.488+0000 7fa6e37fe640  1 -- 192.168.123.100:0/3316182820 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fa6f0004420 con 0x7fa6fc075960
2026-03-10T08:00:43.480 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.488+0000 7fa700f48640  1 -- 192.168.123.100:0/3316182820 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fa6fc1a93f0 con 0x7fa6fc075960
2026-03-10T08:00:43.480 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.488+0000 7fa700f48640  1 -- 192.168.123.100:0/3316182820 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7fa6fc1a98a0 con 0x7fa6fc075960
2026-03-10T08:00:43.480 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.488+0000 7fa6e37fe640  1 -- 192.168.123.100:0/3316182820 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fa6f0002990 con 0x7fa6fc075960
2026-03-10T08:00:43.482 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.488+0000 7fa700f48640  1 -- 192.168.123.100:0/3316182820 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa6bc005190 con 0x7fa6fc075960
2026-03-10T08:00:43.482 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.488+0000 7fa6e37fe640  1 -- 192.168.123.100:0/3316182820 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7fa6f0002b30 con 0x7fa6fc075960
2026-03-10T08:00:43.482 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.488+0000 7fa6e37fe640  1 --2- 192.168.123.100:0/3316182820 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fa6d8077710 0x7fa6d8079bd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T08:00:43.482 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.488+0000 7fa6e37fe640  1 -- 192.168.123.100:0/3316182820 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(782..782 src has 258..782) ==== 9461+0+0 (secure 0 0 0) 0x7fa6f0134d50 con 0x7fa6fc075960
2026-03-10T08:00:43.482 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.488+0000 7fa6e37fe640  1 -- 192.168.123.100:0/3316182820 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=783}) -- 0x7fa6d8083ab0 con 0x7fa6fc075960
2026-03-10T08:00:43.482 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.492+0000 7fa6f9d74640  1 --2- 192.168.123.100:0/3316182820 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fa6d8077710 0x7fa6d8079bd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T08:00:43.483 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.492+0000 7fa6f9d74640  1 --2- 192.168.123.100:0/3316182820 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fa6d8077710 0x7fa6d8079bd0 secure :-1 s=READY pgs=4339 cs=0 l=1 rev1=1 crypto rx=0x7fa6e4005fd0 tx=0x7fa6e4005d00 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T08:00:43.484 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.492+0000 7fa6e37fe640  1 -- 192.168.123.100:0/3316182820 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fa6f00bd050 con 0x7fa6fc075960
2026-03-10T08:00:43.578 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:43.584+0000 7fa700f48640  1 -- 192.168.123.100:0/3316182820 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"} v 0) -- 0x7fa6bc005480 con 0x7fa6fc075960
2026-03-10T08:00:44.207 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:44.216+0000 7fa6e37fe640  1 -- 192.168.123.100:0/3316182820 <== mon.2 v2:192.168.123.100:3301/0 7 ==== osd_map(783..783 src has 258..783) ==== 628+0+0 (secure 0 0 0) 0x7fa6f00f8dc0 con 0x7fa6fc075960
2026-03-10T08:00:44.207 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:44.216+0000 7fa6e37fe640  1 -- 192.168.123.100:0/3316182820 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=784}) -- 0x7fa6d8084660 con 0x7fa6fc075960
2026-03-10T08:00:44.214 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:44.220+0000 7fa6e37fe640  1 -- 192.168.123.100:0/3316182820 <== mon.2 v2:192.168.123.100:3301/0 8 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool 848a4d04-314a-4289-950b-2472b7cc83f9 v783) ==== 217+0+0 (secure 0 0 0) 0x7fa6f0016610 con 0x7fa6fc075960
2026-03-10T08:00:44.266 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:44.272+0000 7fa700f48640  1 -- 192.168.123.100:0/3316182820 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"} v 0) -- 0x7fa6bc003560 con 0x7fa6fc075960
2026-03-10T08:00:44.512 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:00:44 vm03 bash[49156]: debug there is no tcmu-runner data available
2026-03-10T08:00:44.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:44 vm03 bash[23382]: cluster 2026-03-10T08:00:43.044823+0000 mgr.y (mgr.24407) 1313 : cluster [DBG] pgmap v1754: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:44.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:44 vm03 bash[23382]: cluster 2026-03-10T08:00:43.044823+0000 mgr.y (mgr.24407) 1313 : cluster [DBG] pgmap v1754: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:44.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:44 vm03 bash[23382]: audit 2026-03-10T08:00:43.589074+0000 mon.c (mon.2) 503 : audit [INF] from='client.? 192.168.123.100:0/3316182820' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:44.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:44 vm03 bash[23382]: audit 2026-03-10T08:00:43.589074+0000 mon.c (mon.2) 503 : audit [INF] from='client.? 192.168.123.100:0/3316182820' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:44.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:44 vm03 bash[23382]: audit 2026-03-10T08:00:43.589393+0000 mon.a (mon.0) 3682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:44.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:44 vm03 bash[23382]: audit 2026-03-10T08:00:43.589393+0000 mon.a (mon.0) 3682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:44.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:44 vm00 bash[28005]: cluster 2026-03-10T08:00:43.044823+0000 mgr.y (mgr.24407) 1313 : cluster [DBG] pgmap v1754: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:44.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:44 vm00 bash[28005]: cluster 2026-03-10T08:00:43.044823+0000 mgr.y (mgr.24407) 1313 : cluster [DBG] pgmap v1754: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:44.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:44 vm00 bash[28005]: audit 2026-03-10T08:00:43.589074+0000 mon.c (mon.2) 503 : audit [INF] from='client.? 192.168.123.100:0/3316182820' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:44.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:44 vm00 bash[28005]: audit 2026-03-10T08:00:43.589074+0000 mon.c (mon.2) 503 : audit [INF] from='client.? 192.168.123.100:0/3316182820' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:44.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:44 vm00 bash[28005]: audit 2026-03-10T08:00:43.589393+0000 mon.a (mon.0) 3682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:44.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:44 vm00 bash[28005]: audit 2026-03-10T08:00:43.589393+0000 mon.a (mon.0) 3682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:44.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:44 vm00 bash[20701]: cluster 2026-03-10T08:00:43.044823+0000 mgr.y (mgr.24407) 1313 : cluster [DBG] pgmap v1754: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:44.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:44 vm00 bash[20701]: cluster 2026-03-10T08:00:43.044823+0000 mgr.y (mgr.24407) 1313 : cluster [DBG] pgmap v1754: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:00:44.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:44 vm00 bash[20701]: audit 2026-03-10T08:00:43.589074+0000 mon.c (mon.2) 503 : audit [INF] from='client.? 192.168.123.100:0/3316182820' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:44.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:44 vm00 bash[20701]: audit 2026-03-10T08:00:43.589074+0000 mon.c (mon.2) 503 : audit [INF] from='client.? 192.168.123.100:0/3316182820' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:44.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:44 vm00 bash[20701]: audit 2026-03-10T08:00:43.589393+0000 mon.a (mon.0) 3682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:44.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:44 vm00 bash[20701]: audit 2026-03-10T08:00:43.589393+0000 mon.a (mon.0) 3682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:45.251 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.260+0000 7fa6e37fe640  1 -- 192.168.123.100:0/3316182820 <== mon.2 v2:192.168.123.100:3301/0 9 ==== osd_map(784..784 src has 258..784) ==== 628+0+0 (secure 0 0 0) 0x7fa6f00b6c20 con 0x7fa6fc075960
2026-03-10T08:00:45.251 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.260+0000 7fa6e37fe640  1 -- 192.168.123.100:0/3316182820 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=785}) -- 0x7fa6d8084ad0 con 0x7fa6fc075960
2026-03-10T08:00:45.275 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.284+0000 7fa6e37fe640  1 -- 192.168.123.100:0/3316182820 <== mon.2 v2:192.168.123.100:3301/0 10 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool 848a4d04-314a-4289-950b-2472b7cc83f9 v784) ==== 217+0+0 (secure 0 0 0) 0x7fa6f0100dd0 con 0x7fa6fc075960
2026-03-10T08:00:45.277 INFO:tasks.workunit.client.0.vm00.stderr:set-quota max_bytes = 0 for pool 848a4d04-314a-4289-950b-2472b7cc83f9
2026-03-10T08:00:45.277 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.284+0000 7fa700f48640  1 -- 192.168.123.100:0/3316182820 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fa6d8077710 msgr2=0x7fa6d8079bd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T08:00:45.277 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.284+0000 7fa700f48640  1 --2- 192.168.123.100:0/3316182820 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fa6d8077710 0x7fa6d8079bd0 secure :-1 s=READY pgs=4339 cs=0 l=1 rev1=1 crypto rx=0x7fa6e4005fd0 tx=0x7fa6e4005d00 comp rx=0 tx=0).stop
2026-03-10T08:00:45.277 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.284+0000 7fa700f48640  1 -- 192.168.123.100:0/3316182820 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa6fc075960 msgr2=0x7fa6fc1a41b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T08:00:45.277 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.284+0000 7fa700f48640  1 --2- 192.168.123.100:0/3316182820 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa6fc075960 0x7fa6fc1a41b0 secure :-1 s=READY pgs=3160 cs=0 l=1 rev1=1 crypto rx=0x7fa6f0019770 tx=0x7fa6f0005fb0 comp rx=0 tx=0).stop
2026-03-10T08:00:45.277 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.284+0000 7fa700f48640  1 -- 192.168.123.100:0/3316182820 shutdown_connections
2026-03-10T08:00:45.278 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.284+0000 7fa700f48640  1 --2- 192.168.123.100:0/3316182820 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fa6d8077710 0x7fa6d8079bd0 unknown :-1 s=CLOSED pgs=4339 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:00:45.278 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.284+0000 7fa700f48640  1 --2- 192.168.123.100:0/3316182820 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa6fc113730 0x7fa6fc1a8a80 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:00:45.278 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.284+0000 7fa700f48640  1 --2- 192.168.123.100:0/3316182820 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fa6fc106830 0x7fa6fc1a46f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:00:45.278 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.284+0000 7fa700f48640  1 --2- 192.168.123.100:0/3316182820 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa6fc075960 0x7fa6fc1a41b0 unknown :-1 s=CLOSED pgs=3160 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:00:45.278 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.284+0000 7fa700f48640  1 -- 192.168.123.100:0/3316182820 >> 192.168.123.100:0/3316182820 conn(0x7fa6fc0fe640 msgr2=0x7fa6fc114aa0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T08:00:45.278 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.284+0000 7fa700f48640  1 -- 192.168.123.100:0/3316182820 shutdown_connections
2026-03-10T08:00:45.278 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.284+0000 7fa700f48640  1 -- 192.168.123.100:0/3316182820 wait complete.
2026-03-10T08:00:45.291 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool set-quota 848a4d04-314a-4289-950b-2472b7cc83f9 max_objects 0
2026-03-10T08:00:45.378 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.384+0000 7fa763577640  1 -- 192.168.123.100:0/1102806424 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa76410f6d0 msgr2=0x7fa764074180 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T08:00:45.378 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.384+0000 7fa763577640  1 --2- 192.168.123.100:0/1102806424 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa76410f6d0 0x7fa764074180 secure :-1 s=READY pgs=3161 cs=0 l=1 rev1=1 crypto rx=0x7fa758009a30 tx=0x7fa75801c920 comp rx=0 tx=0).stop
2026-03-10T08:00:45.378 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.384+0000 7fa763577640  1 -- 192.168.123.100:0/1102806424 shutdown_connections
2026-03-10T08:00:45.378 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.384+0000 7fa763577640  1 --2- 192.168.123.100:0/1102806424 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa76410f6d0 0x7fa764074180 unknown :-1 s=CLOSED pgs=3161 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:00:45.378 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.384+0000 7fa763577640  1 --2- 192.168.123.100:0/1102806424 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa76410ed30 0x7fa76410f190 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:00:45.378 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.384+0000 7fa763577640  1 --2- 192.168.123.100:0/1102806424 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fa764111030 0x7fa764111410 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:00:45.378 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.384+0000 7fa763577640  1 -- 192.168.123.100:0/1102806424 >> 192.168.123.100:0/1102806424 conn(0x7fa76406d7f0 msgr2=0x7fa76406dc00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T08:00:45.378 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.384+0000 7fa763577640  1 -- 192.168.123.100:0/1102806424 shutdown_connections
2026-03-10T08:00:45.378 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.384+0000 7fa763577640  1 -- 192.168.123.100:0/1102806424 wait complete.
2026-03-10T08:00:45.378 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa763577640  1  Processor -- start
2026-03-10T08:00:45.379 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa763577640  1 -- start start
2026-03-10T08:00:45.379 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa763577640  1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa76410ed30 0x7fa7641abf30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T08:00:45.379 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa763577640  1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa76410f6d0 0x7fa7641ac470 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T08:00:45.379 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa763577640  1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fa764111030 0x7fa7641b0800 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T08:00:45.379 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa763577640  1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fa764123b10 con 0x7fa76410ed30
2026-03-10T08:00:45.379 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa763577640  1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7fa764123990 con 0x7fa764111030
2026-03-10T08:00:45.379 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa763577640  1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7fa764123c90 con 0x7fa76410f6d0
2026-03-10T08:00:45.379 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa762575640  1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa76410ed30 0x7fa7641abf30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T08:00:45.379 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa762575640  1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa76410ed30 0x7fa7641abf30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:59672/0 (socket says 192.168.123.100:59672)
2026-03-10T08:00:45.379 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa762575640  1 -- 192.168.123.100:0/125534291 learned_addr learned my addr 192.168.123.100:0/125534291 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T08:00:45.379 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa762575640  1 -- 192.168.123.100:0/125534291 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa76410f6d0 msgr2=0x7fa7641ac470 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T08:00:45.379 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa762575640  1 --2- 192.168.123.100:0/125534291 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa76410f6d0 0x7fa7641ac470 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:00:45.379 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa762575640  1 -- 192.168.123.100:0/125534291 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fa764111030 msgr2=0x7fa7641b0800 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T08:00:45.379 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa762575640  1 --2- 192.168.123.100:0/125534291 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fa764111030 0x7fa7641b0800 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:00:45.379 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa762575640  1 -- 192.168.123.100:0/125534291 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa7641b0f00 con 0x7fa76410ed30
2026-03-10T08:00:45.379 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa762575640  1 --2- 192.168.123.100:0/125534291 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa76410ed30 0x7fa7641abf30 secure :-1 s=READY pgs=3158 cs=0 l=1 rev1=1 crypto rx=0x7fa75c00c370 tx=0x7fa75c00c830 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T08:00:45.380 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa7537fe640  1 -- 192.168.123.100:0/125534291 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fa75c019070 con 0x7fa76410ed30
2026-03-10T08:00:45.381 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa763577640  1 -- 192.168.123.100:0/125534291 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fa7641b1190 con 0x7fa76410ed30
2026-03-10T08:00:45.381 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa763577640  1 -- 192.168.123.100:0/125534291 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fa764111d70 con 0x7fa76410ed30
2026-03-10T08:00:45.381 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa7537fe640  1 -- 192.168.123.100:0/125534291 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7fa75c0092d0 con 0x7fa76410ed30
2026-03-10T08:00:45.382 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa7537fe640  1 -- 192.168.123.100:0/125534291 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fa75c0048b0 con 0x7fa76410ed30
2026-03-10T08:00:45.382 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa7537fe640  1 -- 192.168.123.100:0/125534291 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7fa75c005ce0 con 0x7fa76410ed30
2026-03-10T08:00:45.382 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.388+0000 7fa7537fe640  1 --2- 192.168.123.100:0/125534291 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fa730077750 0x7fa730079c10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T08:00:45.382 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.392+0000 7fa761d74640  1 --2- 192.168.123.100:0/125534291 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fa730077750 0x7fa730079c10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T08:00:45.383 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.392+0000 7fa761d74640  1 --2- 192.168.123.100:0/125534291 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fa730077750 0x7fa730079c10 secure :-1 s=READY pgs=4340 cs=0 l=1 rev1=1 crypto rx=0x7fa76406e020 tx=0x7fa74c005df0 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T08:00:45.383 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.392+0000 7fa7537fe640  1 -- 192.168.123.100:0/125534291 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(784..784 src has 258..784) ==== 9461+0+0 (secure 0 0 0) 0x7fa75c09b150 con 0x7fa76410ed30
2026-03-10T08:00:45.383 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.392+0000 7fa7537fe640  1 -- 192.168.123.100:0/125534291 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=785}) -- 0x7fa730083af0 con 0x7fa76410ed30
2026-03-10T08:00:45.383 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.392+0000 7fa763577640  1 -- 192.168.123.100:0/125534291 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa72c005190 con 0x7fa76410ed30
2026-03-10T08:00:45.387 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.396+0000 7fa7537fe640  1 -- 192.168.123.100:0/125534291 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fa75c010040 con 0x7fa76410ed30
2026-03-10T08:00:45.478 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:45.484+0000 7fa763577640  1 -- 192.168.123.100:0/125534291 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"} v 0) -- 0x7fa72c005480 con 0x7fa76410ed30
2026-03-10T08:00:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:45 vm03 bash[23382]: audit 2026-03-10T08:00:44.204218+0000 mon.a (mon.0) 3683 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T08:00:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:45 vm03 bash[23382]: audit 2026-03-10T08:00:44.204218+0000 mon.a (mon.0) 3683 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T08:00:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:45 vm03 bash[23382]: cluster 2026-03-10T08:00:44.215033+0000 mon.a (mon.0) 3684 : cluster [DBG] osdmap e783: 8 total, 8 up, 8 in
2026-03-10T08:00:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:45 vm03 bash[23382]: cluster 2026-03-10T08:00:44.215033+0000 mon.a (mon.0) 3684 : cluster [DBG] osdmap e783: 8 total, 8 up, 8 in
2026-03-10T08:00:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:45 vm03 bash[23382]: audit 2026-03-10T08:00:44.277544+0000 mon.c (mon.2) 504 : audit [INF] from='client.? 192.168.123.100:0/3316182820' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:45 vm03 bash[23382]: audit 2026-03-10T08:00:44.277544+0000 mon.c (mon.2) 504 : audit [INF] from='client.? 192.168.123.100:0/3316182820' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:45 vm03 bash[23382]: audit 2026-03-10T08:00:44.277791+0000 mon.a (mon.0) 3685 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:45 vm03 bash[23382]: audit 2026-03-10T08:00:44.277791+0000 mon.a (mon.0) 3685 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:45 vm03 bash[23382]: audit 2026-03-10T08:00:44.470104+0000 mgr.y (mgr.24407) 1314 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:00:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:45 vm03 bash[23382]: audit 2026-03-10T08:00:44.470104+0000 mgr.y (mgr.24407) 1314 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:00:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:45 vm03 bash[23382]: audit 2026-03-10T08:00:45.002192+0000 mon.c (mon.2) 505 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:00:45.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:45 vm03 bash[23382]: audit 2026-03-10T08:00:45.002192+0000 mon.c (mon.2) 505 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:45 vm00 bash[28005]: audit 2026-03-10T08:00:44.204218+0000 mon.a (mon.0) 3683 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:45 vm00 bash[28005]: audit 2026-03-10T08:00:44.204218+0000 mon.a (mon.0) 3683 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:45 vm00 bash[28005]: cluster 2026-03-10T08:00:44.215033+0000 mon.a (mon.0) 3684 : cluster [DBG] osdmap e783: 8 total, 8 up, 8 in
2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:45 vm00 bash[28005]: cluster 2026-03-10T08:00:44.215033+0000 mon.a (mon.0) 3684 : cluster [DBG] osdmap e783: 8 total, 8 up, 8 in
2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:45 vm00 bash[28005]: audit 2026-03-10T08:00:44.277544+0000 mon.c (mon.2) 504 : audit [INF] from='client.? 192.168.123.100:0/3316182820' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:45 vm00 bash[28005]: audit 2026-03-10T08:00:44.277544+0000 mon.c (mon.2) 504 : audit [INF] from='client.? 192.168.123.100:0/3316182820' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:45 vm00 bash[28005]: audit 2026-03-10T08:00:44.277791+0000 mon.a (mon.0) 3685 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:45 vm00 bash[28005]: audit 2026-03-10T08:00:44.277791+0000 mon.a (mon.0) 3685 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:45 vm00 bash[28005]: audit 2026-03-10T08:00:44.470104+0000 mgr.y (mgr.24407) 1314 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:45 vm00 bash[28005]: audit 2026-03-10T08:00:44.470104+0000 mgr.y (mgr.24407) 1314 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:45 vm00 bash[28005]: audit 2026-03-10T08:00:45.002192+0000 mon.c (mon.2) 505 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:45 vm00 bash[28005]: audit 2026-03-10T08:00:45.002192+0000 mon.c (mon.2) 505 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:45 vm00 bash[20701]: audit 2026-03-10T08:00:44.204218+0000 mon.a (mon.0) 3683 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:45 vm00 bash[20701]: audit 2026-03-10T08:00:44.204218+0000 mon.a (mon.0) 3683 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:45 vm00 bash[20701]: cluster 2026-03-10T08:00:44.215033+0000 mon.a (mon.0) 3684 : cluster [DBG] osdmap e783: 8 total, 8 up, 8 in
2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:45 vm00 bash[20701]: cluster 2026-03-10T08:00:44.215033+0000 mon.a (mon.0) 3684 : cluster [DBG] osdmap e783: 8 total, 8 up, 8 in
2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:45 vm00 bash[20701]: audit 2026-03-10T08:00:44.277544+0000 mon.c (mon.2) 504 : audit [INF] from='client.? 192.168.123.100:0/3316182820' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:45 vm00 bash[20701]: audit 2026-03-10T08:00:44.277544+0000 mon.c (mon.2) 504 : audit [INF] from='client.? 192.168.123.100:0/3316182820' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:45 vm00 bash[20701]: audit 2026-03-10T08:00:44.277791+0000 mon.a (mon.0) 3685 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:45 vm00 bash[20701]: audit 2026-03-10T08:00:44.277791+0000 mon.a (mon.0) 3685 : audit [INF] from='client.?
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:45 vm00 bash[20701]: audit 2026-03-10T08:00:44.470104+0000 mgr.y (mgr.24407) 1314 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:45 vm00 bash[20701]: audit 2026-03-10T08:00:44.470104+0000 mgr.y (mgr.24407) 1314 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:45 vm00 bash[20701]: audit 2026-03-10T08:00:45.002192+0000 mon.c (mon.2) 505 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:00:45.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:45 vm00 bash[20701]: audit 2026-03-10T08:00:45.002192+0000 mon.c (mon.2) 505 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:00:46.344 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:46.352+0000 7fa7537fe640 1 -- 192.168.123.100:0/125534291 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool 848a4d04-314a-4289-950b-2472b7cc83f9 v785) ==== 221+0+0 (secure 0 0 0) 0x7fa75c0672d0 con 0x7fa76410ed30 2026-03-10T08:00:46.348 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:46.356+0000 7fa7537fe640 1 -- 192.168.123.100:0/125534291 <== mon.0 v2:192.168.123.100:3300/0 8 ==== osd_map(785..785 src has 258..785) ==== 628+0+0 (secure 0 0 0) 0x7fa75c05f2c0 con 0x7fa76410ed30 2026-03-10T08:00:46.348 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:46.356+0000 7fa7537fe640 1 -- 192.168.123.100:0/125534291 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=786}) -- 0x7fa730084b40 con 0x7fa76410ed30 2026-03-10T08:00:46.400 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:46.408+0000 7fa763577640 1 -- 192.168.123.100:0/125534291 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"} v 0) -- 0x7fa72c004b10 con 0x7fa76410ed30 2026-03-10T08:00:46.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:46 vm03 bash[23382]: cluster 2026-03-10T08:00:45.045599+0000 mgr.y (mgr.24407) 1315 : cluster [DBG] pgmap v1756: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-10T08:00:46.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:46 vm03 bash[23382]: cluster 2026-03-10T08:00:45.045599+0000 mgr.y (mgr.24407) 1315 : cluster [DBG] pgmap v1756: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-10T08:00:46.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:46 vm03 bash[23382]: audit 2026-03-10T08:00:45.251241+0000 mon.a (mon.0) 3686 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T08:00:46.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:46 vm03 bash[23382]: audit 2026-03-10T08:00:45.251241+0000 mon.a (mon.0) 3686 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T08:00:46.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:46 vm03 bash[23382]: cluster 2026-03-10T08:00:45.256883+0000 mon.a (mon.0) 3687 : cluster [DBG] osdmap e784: 8 total, 8 up, 8 in 2026-03-10T08:00:46.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:46 vm03 bash[23382]: cluster 2026-03-10T08:00:45.256883+0000 mon.a (mon.0) 3687 : cluster [DBG] osdmap e784: 8 total, 8 up, 8 in 2026-03-10T08:00:46.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:46 vm03 bash[23382]: audit 2026-03-10T08:00:45.345792+0000 mon.a (mon.0) 3688 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T08:00:46.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:46 vm03 bash[23382]: audit 2026-03-10T08:00:45.345792+0000 mon.a (mon.0) 3688 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T08:00:46.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:46 vm03 bash[23382]: audit 2026-03-10T08:00:45.354488+0000 mon.a (mon.0) 3689 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T08:00:46.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:46 vm03 bash[23382]: audit 2026-03-10T08:00:45.354488+0000 mon.a (mon.0) 3689 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T08:00:46.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:46 vm03 bash[23382]: audit 2026-03-10T08:00:45.489173+0000 mon.a (mon.0) 3690 : audit [INF] from='client.? 192.168.123.100:0/125534291' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T08:00:46.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:46 vm03 bash[23382]: audit 2026-03-10T08:00:45.489173+0000 mon.a (mon.0) 3690 : audit [INF] from='client.? 
192.168.123.100:0/125534291' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T08:00:46.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:46 vm03 bash[23382]: audit 2026-03-10T08:00:45.644485+0000 mon.c (mon.2) 506 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:00:46.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:46 vm03 bash[23382]: audit 2026-03-10T08:00:45.644485+0000 mon.c (mon.2) 506 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:00:46.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:46 vm03 bash[23382]: audit 2026-03-10T08:00:45.645578+0000 mon.c (mon.2) 507 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:00:46.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:46 vm03 bash[23382]: audit 2026-03-10T08:00:45.645578+0000 mon.c (mon.2) 507 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:00:46.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:46 vm03 bash[23382]: audit 2026-03-10T08:00:45.654234+0000 mon.a (mon.0) 3691 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T08:00:46.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:46 vm03 bash[23382]: audit 2026-03-10T08:00:45.654234+0000 mon.a (mon.0) 3691 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:46 vm00 bash[28005]: cluster 2026-03-10T08:00:45.045599+0000 mgr.y (mgr.24407) 1315 : cluster [DBG] pgmap v1756: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:46 vm00 bash[28005]: cluster 2026-03-10T08:00:45.045599+0000 mgr.y (mgr.24407) 1315 : cluster [DBG] pgmap v1756: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:46 vm00 bash[28005]: audit 2026-03-10T08:00:45.251241+0000 mon.a (mon.0) 3686 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:46 vm00 bash[28005]: audit 2026-03-10T08:00:45.251241+0000 mon.a (mon.0) 3686 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:46 vm00 bash[28005]: cluster 2026-03-10T08:00:45.256883+0000 mon.a (mon.0) 3687 : cluster [DBG] osdmap e784: 8 total, 8 up, 8 in 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:46 vm00 bash[28005]: cluster 2026-03-10T08:00:45.256883+0000 mon.a (mon.0) 3687 : cluster [DBG] osdmap e784: 8 total, 8 up, 8 in 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:46 vm00 bash[28005]: audit 2026-03-10T08:00:45.345792+0000 mon.a (mon.0) 3688 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:46 vm00 bash[28005]: audit 2026-03-10T08:00:45.345792+0000 mon.a (mon.0) 3688 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:46 vm00 bash[28005]: audit 2026-03-10T08:00:45.354488+0000 mon.a (mon.0) 3689 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:46 vm00 bash[28005]: audit 2026-03-10T08:00:45.354488+0000 mon.a (mon.0) 3689 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:46 vm00 bash[28005]: audit 2026-03-10T08:00:45.489173+0000 mon.a (mon.0) 3690 : audit [INF] from='client.? 192.168.123.100:0/125534291' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:46 vm00 bash[28005]: audit 2026-03-10T08:00:45.489173+0000 mon.a (mon.0) 3690 : audit [INF] from='client.? 
192.168.123.100:0/125534291' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:46 vm00 bash[28005]: audit 2026-03-10T08:00:45.644485+0000 mon.c (mon.2) 506 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:46 vm00 bash[28005]: audit 2026-03-10T08:00:45.644485+0000 mon.c (mon.2) 506 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:46 vm00 bash[28005]: audit 2026-03-10T08:00:45.645578+0000 mon.c (mon.2) 507 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:46 vm00 bash[28005]: audit 2026-03-10T08:00:45.645578+0000 mon.c (mon.2) 507 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:46 vm00 bash[28005]: audit 2026-03-10T08:00:45.654234+0000 mon.a (mon.0) 3691 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:46 vm00 bash[28005]: audit 2026-03-10T08:00:45.654234+0000 mon.a (mon.0) 3691 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:46 vm00 bash[20701]: cluster 2026-03-10T08:00:45.045599+0000 mgr.y (mgr.24407) 1315 : cluster [DBG] pgmap v1756: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:46 vm00 bash[20701]: cluster 2026-03-10T08:00:45.045599+0000 mgr.y (mgr.24407) 1315 : cluster [DBG] pgmap v1756: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:46 vm00 bash[20701]: audit 2026-03-10T08:00:45.251241+0000 mon.a (mon.0) 3686 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:46 vm00 bash[20701]: audit 2026-03-10T08:00:45.251241+0000 mon.a (mon.0) 3686 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:46 vm00 bash[20701]: cluster 2026-03-10T08:00:45.256883+0000 mon.a (mon.0) 3687 : cluster [DBG] osdmap e784: 8 total, 8 up, 8 in 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:46 vm00 bash[20701]: cluster 2026-03-10T08:00:45.256883+0000 mon.a (mon.0) 3687 : cluster [DBG] osdmap e784: 8 total, 8 up, 8 in 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:46 vm00 bash[20701]: audit 2026-03-10T08:00:45.345792+0000 mon.a (mon.0) 3688 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:46 vm00 bash[20701]: audit 2026-03-10T08:00:45.345792+0000 mon.a (mon.0) 3688 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:46 vm00 bash[20701]: audit 2026-03-10T08:00:45.354488+0000 mon.a (mon.0) 3689 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:46 vm00 bash[20701]: audit 2026-03-10T08:00:45.354488+0000 mon.a (mon.0) 3689 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:46 vm00 bash[20701]: audit 2026-03-10T08:00:45.489173+0000 mon.a (mon.0) 3690 : audit [INF] from='client.? 192.168.123.100:0/125534291' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:46 vm00 bash[20701]: audit 2026-03-10T08:00:45.489173+0000 mon.a (mon.0) 3690 : audit [INF] from='client.? 
192.168.123.100:0/125534291' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T08:00:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:46 vm00 bash[20701]: audit 2026-03-10T08:00:45.644485+0000 mon.c (mon.2) 506 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:00:46.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:46 vm00 bash[20701]: audit 2026-03-10T08:00:45.644485+0000 mon.c (mon.2) 506 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:00:46.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:46 vm00 bash[20701]: audit 2026-03-10T08:00:45.645578+0000 mon.c (mon.2) 507 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:00:46.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:46 vm00 bash[20701]: audit 2026-03-10T08:00:45.645578+0000 mon.c (mon.2) 507 : audit [INF] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:00:46.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:46 vm00 bash[20701]: audit 2026-03-10T08:00:45.654234+0000 mon.a (mon.0) 3691 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T08:00:46.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:46 vm00 bash[20701]: audit 2026-03-10T08:00:45.654234+0000 mon.a (mon.0) 3691 : audit [INF] from='mgr.24407 ' entity='mgr.y' 2026-03-10T08:00:47.352 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:47.360+0000 7fa7537fe640 1 -- 192.168.123.100:0/125534291 <== mon.0 v2:192.168.123.100:3300/0 9 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool 848a4d04-314a-4289-950b-2472b7cc83f9 v786) ==== 221+0+0 (secure 0 0 0) 0x7fa75c06c180 con 0x7fa76410ed30 2026-03-10T08:00:47.352 INFO:tasks.workunit.client.0.vm00.stderr:set-quota max_objects = 0 for pool 848a4d04-314a-4289-950b-2472b7cc83f9 2026-03-10T08:00:47.352 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:47.360+0000 7fa7537fe640 1 -- 192.168.123.100:0/125534291 <== mon.0 v2:192.168.123.100:3300/0 10 ==== osd_map(786..786 src has 258..786) ==== 628+0+0 (secure 0 0 0) 0x7fa75c0040a0 con 0x7fa76410ed30 2026-03-10T08:00:47.352 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:47.360+0000 7fa7537fe640 1 -- 192.168.123.100:0/125534291 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=787}) -- 0x7fa730084d90 con 0x7fa76410ed30 2026-03-10T08:00:47.354 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:47.360+0000 7fa763577640 1 -- 192.168.123.100:0/125534291 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fa730077750 msgr2=0x7fa730079c10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T08:00:47.354 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:47.360+0000 7fa763577640 1 --2- 192.168.123.100:0/125534291 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fa730077750 0x7fa730079c10 secure :-1 s=READY pgs=4340 cs=0 l=1 rev1=1 crypto rx=0x7fa76406e020 tx=0x7fa74c005df0 comp rx=0 
tx=0).stop 2026-03-10T08:00:47.354 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:47.360+0000 7fa763577640 1 -- 192.168.123.100:0/125534291 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa76410ed30 msgr2=0x7fa7641abf30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T08:00:47.354 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:47.360+0000 7fa763577640 1 --2- 192.168.123.100:0/125534291 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa76410ed30 0x7fa7641abf30 secure :-1 s=READY pgs=3158 cs=0 l=1 rev1=1 crypto rx=0x7fa75c00c370 tx=0x7fa75c00c830 comp rx=0 tx=0).stop 2026-03-10T08:00:47.354 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:47.360+0000 7fa763577640 1 -- 192.168.123.100:0/125534291 shutdown_connections 2026-03-10T08:00:47.354 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:47.360+0000 7fa763577640 1 --2- 192.168.123.100:0/125534291 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7fa730077750 0x7fa730079c10 unknown :-1 s=CLOSED pgs=4340 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T08:00:47.354 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:47.360+0000 7fa763577640 1 --2- 192.168.123.100:0/125534291 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7fa764111030 0x7fa7641b0800 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T08:00:47.354 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:47.360+0000 7fa763577640 1 --2- 192.168.123.100:0/125534291 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7fa76410f6d0 0x7fa7641ac470 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T08:00:47.354 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:47.360+0000 7fa763577640 1 --2- 192.168.123.100:0/125534291 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fa76410ed30 0x7fa7641abf30 unknown :-1 s=CLOSED pgs=3158 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T08:00:47.354 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:47.360+0000 7fa763577640 1 -- 192.168.123.100:0/125534291 >> 192.168.123.100:0/125534291 conn(0x7fa76406d7f0 msgr2=0x7fa76410b0d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T08:00:47.354 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:47.360+0000 7fa763577640 1 -- 192.168.123.100:0/125534291 shutdown_connections 2026-03-10T08:00:47.354 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:00:47.364+0000 7fa763577640 1 -- 192.168.123.100:0/125534291 wait complete. 2026-03-10T08:00:47.369 INFO:tasks.workunit.client.0.vm00.stderr:+ sleep 30 2026-03-10T08:00:47.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:47 vm00 bash[28005]: audit 2026-03-10T08:00:46.355311+0000 mon.a (mon.0) 3692 : audit [INF] from='client.? 192.168.123.100:0/125534291' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T08:00:47.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:47 vm00 bash[28005]: audit 2026-03-10T08:00:46.355311+0000 mon.a (mon.0) 3692 : audit [INF] from='client.? 
192.168.123.100:0/125534291' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T08:00:47.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:47 vm00 bash[28005]: cluster 2026-03-10T08:00:46.360545+0000 mon.a (mon.0) 3693 : cluster [DBG] osdmap e785: 8 total, 8 up, 8 in 2026-03-10T08:00:47.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:47 vm00 bash[28005]: cluster 2026-03-10T08:00:46.360545+0000 mon.a (mon.0) 3693 : cluster [DBG] osdmap e785: 8 total, 8 up, 8 in 2026-03-10T08:00:47.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:47 vm00 bash[28005]: audit 2026-03-10T08:00:46.410806+0000 mon.a (mon.0) 3694 : audit [INF] from='client.? 192.168.123.100:0/125534291' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T08:00:47.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:47 vm00 bash[28005]: audit 2026-03-10T08:00:46.410806+0000 mon.a (mon.0) 3694 : audit [INF] from='client.? 192.168.123.100:0/125534291' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T08:00:47.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:47 vm00 bash[20701]: audit 2026-03-10T08:00:46.355311+0000 mon.a (mon.0) 3692 : audit [INF] from='client.? 192.168.123.100:0/125534291' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T08:00:47.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:47 vm00 bash[20701]: audit 2026-03-10T08:00:46.355311+0000 mon.a (mon.0) 3692 : audit [INF] from='client.? 192.168.123.100:0/125534291' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T08:00:47.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:47 vm00 bash[20701]: cluster 2026-03-10T08:00:46.360545+0000 mon.a (mon.0) 3693 : cluster [DBG] osdmap e785: 8 total, 8 up, 8 in 2026-03-10T08:00:47.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:47 vm00 bash[20701]: cluster 2026-03-10T08:00:46.360545+0000 mon.a (mon.0) 3693 : cluster [DBG] osdmap e785: 8 total, 8 up, 8 in 2026-03-10T08:00:47.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:47 vm00 bash[20701]: audit 2026-03-10T08:00:46.410806+0000 mon.a (mon.0) 3694 : audit [INF] from='client.? 192.168.123.100:0/125534291' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T08:00:47.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:47 vm00 bash[20701]: audit 2026-03-10T08:00:46.410806+0000 mon.a (mon.0) 3694 : audit [INF] from='client.? 192.168.123.100:0/125534291' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T08:00:47.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:47 vm03 bash[23382]: audit 2026-03-10T08:00:46.355311+0000 mon.a (mon.0) 3692 : audit [INF] from='client.? 
192.168.123.100:0/125534291' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T08:00:47.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:47 vm03 bash[23382]: audit 2026-03-10T08:00:46.355311+0000 mon.a (mon.0) 3692 : audit [INF] from='client.? 192.168.123.100:0/125534291' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T08:00:47.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:47 vm03 bash[23382]: cluster 2026-03-10T08:00:46.360545+0000 mon.a (mon.0) 3693 : cluster [DBG] osdmap e785: 8 total, 8 up, 8 in 2026-03-10T08:00:47.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:47 vm03 bash[23382]: cluster 2026-03-10T08:00:46.360545+0000 mon.a (mon.0) 3693 : cluster [DBG] osdmap e785: 8 total, 8 up, 8 in 2026-03-10T08:00:47.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:47 vm03 bash[23382]: audit 2026-03-10T08:00:46.410806+0000 mon.a (mon.0) 3694 : audit [INF] from='client.? 192.168.123.100:0/125534291' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T08:00:47.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:47 vm03 bash[23382]: audit 2026-03-10T08:00:46.410806+0000 mon.a (mon.0) 3694 : audit [INF] from='client.? 192.168.123.100:0/125534291' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T08:00:48.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:48 vm00 bash[28005]: cluster 2026-03-10T08:00:47.046312+0000 mgr.y (mgr.24407) 1316 : cluster [DBG] pgmap v1759: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 511 B/s wr, 0 op/s 2026-03-10T08:00:48.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:48 vm00 bash[28005]: cluster 2026-03-10T08:00:47.046312+0000 mgr.y (mgr.24407) 1316 : cluster [DBG] pgmap v1759: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 511 B/s wr, 0 op/s 2026-03-10T08:00:48.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:48 vm00 bash[28005]: audit 2026-03-10T08:00:47.362631+0000 mon.a (mon.0) 3695 : audit [INF] from='client.? 192.168.123.100:0/125534291' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T08:00:48.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:48 vm00 bash[28005]: audit 2026-03-10T08:00:47.362631+0000 mon.a (mon.0) 3695 : audit [INF] from='client.? 
192.168.123.100:0/125534291' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T08:00:48.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:48 vm00 bash[28005]: cluster 2026-03-10T08:00:47.374839+0000 mon.a (mon.0) 3696 : cluster [DBG] osdmap e786: 8 total, 8 up, 8 in 2026-03-10T08:00:48.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:48 vm00 bash[28005]: cluster 2026-03-10T08:00:47.374839+0000 mon.a (mon.0) 3696 : cluster [DBG] osdmap e786: 8 total, 8 up, 8 in 2026-03-10T08:00:48.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:48 vm00 bash[20701]: cluster 2026-03-10T08:00:47.046312+0000 mgr.y (mgr.24407) 1316 : cluster [DBG] pgmap v1759: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 511 B/s wr, 0 op/s 2026-03-10T08:00:48.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:48 vm00 bash[20701]: cluster 2026-03-10T08:00:47.046312+0000 mgr.y (mgr.24407) 1316 : cluster [DBG] pgmap v1759: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 511 B/s wr, 0 op/s 2026-03-10T08:00:48.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:48 vm00 bash[20701]: audit 2026-03-10T08:00:47.362631+0000 mon.a (mon.0) 3695 : audit [INF] from='client.? 192.168.123.100:0/125534291' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T08:00:48.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:48 vm00 bash[20701]: audit 2026-03-10T08:00:47.362631+0000 mon.a (mon.0) 3695 : audit [INF] from='client.? 192.168.123.100:0/125534291' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T08:00:48.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:48 vm00 bash[20701]: cluster 2026-03-10T08:00:47.374839+0000 mon.a (mon.0) 3696 : cluster [DBG] osdmap e786: 8 total, 8 up, 8 in 2026-03-10T08:00:48.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:48 vm00 bash[20701]: cluster 2026-03-10T08:00:47.374839+0000 mon.a (mon.0) 3696 : cluster [DBG] osdmap e786: 8 total, 8 up, 8 in 2026-03-10T08:00:48.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:48 vm03 bash[23382]: cluster 2026-03-10T08:00:47.046312+0000 mgr.y (mgr.24407) 1316 : cluster [DBG] pgmap v1759: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 511 B/s wr, 0 op/s 2026-03-10T08:00:48.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:48 vm03 bash[23382]: cluster 2026-03-10T08:00:47.046312+0000 mgr.y (mgr.24407) 1316 : cluster [DBG] pgmap v1759: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 511 B/s wr, 0 op/s 2026-03-10T08:00:48.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:48 vm03 bash[23382]: audit 2026-03-10T08:00:47.362631+0000 mon.a (mon.0) 3695 : audit [INF] from='client.? 192.168.123.100:0/125534291' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T08:00:48.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:48 vm03 bash[23382]: audit 2026-03-10T08:00:47.362631+0000 mon.a (mon.0) 3695 : audit [INF] from='client.? 
192.168.123.100:0/125534291' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "field": "max_objects", "val": "0"}]': finished 2026-03-10T08:00:48.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:48 vm03 bash[23382]: cluster 2026-03-10T08:00:47.374839+0000 mon.a (mon.0) 3696 : cluster [DBG] osdmap e786: 8 total, 8 up, 8 in 2026-03-10T08:00:48.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:48 vm03 bash[23382]: cluster 2026-03-10T08:00:47.374839+0000 mon.a (mon.0) 3696 : cluster [DBG] osdmap e786: 8 total, 8 up, 8 in 2026-03-10T08:00:50.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:50 vm00 bash[28005]: cluster 2026-03-10T08:00:49.046640+0000 mgr.y (mgr.24407) 1317 : cluster [DBG] pgmap v1761: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 636 B/s wr, 1 op/s 2026-03-10T08:00:50.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:50 vm00 bash[28005]: cluster 2026-03-10T08:00:49.046640+0000 mgr.y (mgr.24407) 1317 : cluster [DBG] pgmap v1761: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 636 B/s wr, 1 op/s 2026-03-10T08:00:50.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:50 vm00 bash[20701]: cluster 2026-03-10T08:00:49.046640+0000 mgr.y (mgr.24407) 1317 : cluster [DBG] pgmap v1761: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 636 B/s wr, 1 op/s 2026-03-10T08:00:50.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:50 vm00 bash[20701]: cluster 2026-03-10T08:00:49.046640+0000 mgr.y (mgr.24407) 1317 : cluster [DBG] pgmap v1761: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 636 B/s wr, 1 op/s 2026-03-10T08:00:50.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:50 vm03 bash[23382]: cluster 2026-03-10T08:00:49.046640+0000 mgr.y (mgr.24407) 1317 : cluster [DBG] pgmap v1761: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 636 B/s wr, 1 op/s 2026-03-10T08:00:50.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:50 vm03 bash[23382]: cluster 2026-03-10T08:00:49.046640+0000 mgr.y (mgr.24407) 1317 : cluster [DBG] pgmap v1761: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 636 B/s wr, 1 op/s 2026-03-10T08:00:51.376 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 08:00:51 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:08:00:51] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:00:52.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:52 vm00 bash[28005]: cluster 2026-03-10T08:00:51.047152+0000 mgr.y (mgr.24407) 1318 : cluster [DBG] pgmap v1762: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:52.626 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:52 vm00 bash[28005]: cluster 2026-03-10T08:00:51.047152+0000 mgr.y (mgr.24407) 1318 : cluster [DBG] pgmap v1762: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:52.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:52 vm00 bash[20701]: cluster 2026-03-10T08:00:51.047152+0000 mgr.y (mgr.24407) 1318 : cluster [DBG] pgmap v1762: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:52.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:52 
vm00 bash[20701]: cluster 2026-03-10T08:00:51.047152+0000 mgr.y (mgr.24407) 1318 : cluster [DBG] pgmap v1762: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:52.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:52 vm03 bash[23382]: cluster 2026-03-10T08:00:51.047152+0000 mgr.y (mgr.24407) 1318 : cluster [DBG] pgmap v1762: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:52.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:52 vm03 bash[23382]: cluster 2026-03-10T08:00:51.047152+0000 mgr.y (mgr.24407) 1318 : cluster [DBG] pgmap v1762: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:00:54.762 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:00:54 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T08:00:54.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:54 vm03 bash[23382]: cluster 2026-03-10T08:00:53.047386+0000 mgr.y (mgr.24407) 1319 : cluster [DBG] pgmap v1763: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 657 B/s rd, 0 op/s 2026-03-10T08:00:54.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:54 vm03 bash[23382]: cluster 2026-03-10T08:00:53.047386+0000 mgr.y (mgr.24407) 1319 : cluster [DBG] pgmap v1763: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 657 B/s rd, 0 op/s 2026-03-10T08:00:54.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:54 vm00 bash[28005]: cluster 2026-03-10T08:00:53.047386+0000 mgr.y (mgr.24407) 1319 : cluster [DBG] pgmap v1763: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 657 B/s rd, 0 op/s 2026-03-10T08:00:54.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:54 vm00 bash[28005]: cluster 2026-03-10T08:00:53.047386+0000 mgr.y (mgr.24407) 1319 : cluster [DBG] pgmap v1763: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 657 B/s rd, 0 op/s 2026-03-10T08:00:54.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:54 vm00 bash[20701]: cluster 2026-03-10T08:00:53.047386+0000 mgr.y (mgr.24407) 1319 : cluster [DBG] pgmap v1763: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 657 B/s rd, 0 op/s 2026-03-10T08:00:54.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:54 vm00 bash[20701]: cluster 2026-03-10T08:00:53.047386+0000 mgr.y (mgr.24407) 1319 : cluster [DBG] pgmap v1763: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 657 B/s rd, 0 op/s 2026-03-10T08:00:55.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:55 vm03 bash[23382]: audit 2026-03-10T08:00:54.480760+0000 mgr.y (mgr.24407) 1320 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:00:55.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:55 vm03 bash[23382]: audit 2026-03-10T08:00:54.480760+0000 mgr.y (mgr.24407) 1320 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:00:55.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:55 vm00 bash[28005]: audit 2026-03-10T08:00:54.480760+0000 mgr.y (mgr.24407) 1320 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:00:55.876 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:55 vm00 bash[28005]: audit 2026-03-10T08:00:54.480760+0000 mgr.y (mgr.24407) 1320 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:00:55.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:55 vm00 bash[20701]: audit 2026-03-10T08:00:54.480760+0000 mgr.y (mgr.24407) 1320 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:00:55.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:55 vm00 bash[20701]: audit 2026-03-10T08:00:54.480760+0000 mgr.y (mgr.24407) 1320 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:00:56.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:56 vm03 bash[23382]: cluster 2026-03-10T08:00:55.047938+0000 mgr.y (mgr.24407) 1321 : cluster [DBG] pgmap v1764: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:56.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:56 vm03 bash[23382]: cluster 2026-03-10T08:00:55.047938+0000 mgr.y (mgr.24407) 1321 : cluster [DBG] pgmap v1764: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:56.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:56 vm03 bash[23382]: audit 2026-03-10T08:00:55.535958+0000 mon.c (mon.2) 508 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:00:56.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:56 vm03 bash[23382]: audit 2026-03-10T08:00:55.535958+0000 mon.c (mon.2) 508 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:00:56.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:56 vm00 bash[28005]: cluster 2026-03-10T08:00:55.047938+0000 mgr.y (mgr.24407) 1321 : cluster [DBG] pgmap v1764: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:56.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:56 vm00 bash[28005]: cluster 2026-03-10T08:00:55.047938+0000 mgr.y (mgr.24407) 1321 : cluster [DBG] pgmap v1764: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:56.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:56 vm00 bash[28005]: audit 2026-03-10T08:00:55.535958+0000 mon.c (mon.2) 508 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:00:56.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:56 vm00 bash[28005]: audit 2026-03-10T08:00:55.535958+0000 mon.c (mon.2) 508 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:00:56.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:56 vm00 bash[20701]: cluster 2026-03-10T08:00:55.047938+0000 mgr.y (mgr.24407) 1321 : cluster [DBG] pgmap v1764: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:56.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:56 vm00 bash[20701]: 
cluster 2026-03-10T08:00:55.047938+0000 mgr.y (mgr.24407) 1321 : cluster [DBG] pgmap v1764: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:00:56.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:56 vm00 bash[20701]: audit 2026-03-10T08:00:55.535958+0000 mon.c (mon.2) 508 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:00:56.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:56 vm00 bash[20701]: audit 2026-03-10T08:00:55.535958+0000 mon.c (mon.2) 508 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:00:58.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:58 vm03 bash[23382]: cluster 2026-03-10T08:00:57.048215+0000 mgr.y (mgr.24407) 1322 : cluster [DBG] pgmap v1765: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T08:00:58.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:00:58 vm03 bash[23382]: cluster 2026-03-10T08:00:57.048215+0000 mgr.y (mgr.24407) 1322 : cluster [DBG] pgmap v1765: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T08:00:58.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:58 vm00 bash[28005]: cluster 2026-03-10T08:00:57.048215+0000 mgr.y (mgr.24407) 1322 : cluster [DBG] pgmap v1765: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T08:00:58.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:00:58 vm00 bash[28005]: cluster 2026-03-10T08:00:57.048215+0000 mgr.y (mgr.24407) 1322 : cluster [DBG] pgmap v1765: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T08:00:58.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:58 vm00 bash[20701]: cluster 2026-03-10T08:00:57.048215+0000 mgr.y (mgr.24407) 1322 : cluster [DBG] pgmap v1765: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T08:00:58.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:00:58 vm00 bash[20701]: cluster 2026-03-10T08:00:57.048215+0000 mgr.y (mgr.24407) 1322 : cluster [DBG] pgmap v1765: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T08:01:00.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:00 vm03 bash[23382]: cluster 2026-03-10T08:00:59.048718+0000 mgr.y (mgr.24407) 1323 : cluster [DBG] pgmap v1766: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 877 B/s rd, 0 op/s 2026-03-10T08:01:00.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:00 vm03 bash[23382]: cluster 2026-03-10T08:00:59.048718+0000 mgr.y (mgr.24407) 1323 : cluster [DBG] pgmap v1766: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 877 B/s rd, 0 op/s 2026-03-10T08:01:00.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:00 vm00 bash[28005]: cluster 2026-03-10T08:00:59.048718+0000 mgr.y (mgr.24407) 1323 : cluster [DBG] pgmap v1766: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 877 B/s rd, 0 op/s 2026-03-10T08:01:00.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:00 vm00 bash[28005]: cluster 2026-03-10T08:00:59.048718+0000 mgr.y (mgr.24407) 1323 : cluster [DBG] 
pgmap v1766: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 877 B/s rd, 0 op/s 2026-03-10T08:01:00.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:00 vm00 bash[20701]: cluster 2026-03-10T08:00:59.048718+0000 mgr.y (mgr.24407) 1323 : cluster [DBG] pgmap v1766: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 877 B/s rd, 0 op/s 2026-03-10T08:01:00.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:00 vm00 bash[20701]: cluster 2026-03-10T08:00:59.048718+0000 mgr.y (mgr.24407) 1323 : cluster [DBG] pgmap v1766: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 877 B/s rd, 0 op/s 2026-03-10T08:01:01.376 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 08:01:01 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:08:01:01] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:01:02.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:02 vm03 bash[23382]: cluster 2026-03-10T08:01:01.049232+0000 mgr.y (mgr.24407) 1324 : cluster [DBG] pgmap v1767: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:02.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:02 vm03 bash[23382]: cluster 2026-03-10T08:01:01.049232+0000 mgr.y (mgr.24407) 1324 : cluster [DBG] pgmap v1767: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:02.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:02 vm00 bash[28005]: cluster 2026-03-10T08:01:01.049232+0000 mgr.y (mgr.24407) 1324 : cluster [DBG] pgmap v1767: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:02.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:02 vm00 bash[28005]: cluster 2026-03-10T08:01:01.049232+0000 mgr.y (mgr.24407) 1324 : cluster [DBG] pgmap v1767: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:02.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:02 vm00 bash[20701]: cluster 2026-03-10T08:01:01.049232+0000 mgr.y (mgr.24407) 1324 : cluster [DBG] pgmap v1767: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:02.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:02 vm00 bash[20701]: cluster 2026-03-10T08:01:01.049232+0000 mgr.y (mgr.24407) 1324 : cluster [DBG] pgmap v1767: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:04.762 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:01:04 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T08:01:04.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:04 vm03 bash[23382]: cluster 2026-03-10T08:01:03.049523+0000 mgr.y (mgr.24407) 1325 : cluster [DBG] pgmap v1768: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:04.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:04 vm03 bash[23382]: cluster 2026-03-10T08:01:03.049523+0000 mgr.y (mgr.24407) 1325 : cluster [DBG] pgmap v1768: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:04.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:04 vm00 bash[28005]: cluster 2026-03-10T08:01:03.049523+0000 mgr.y (mgr.24407) 1325 : cluster [DBG] pgmap v1768: 188 pgs: 
188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:04.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:04 vm00 bash[28005]: cluster 2026-03-10T08:01:03.049523+0000 mgr.y (mgr.24407) 1325 : cluster [DBG] pgmap v1768: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:04.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:04 vm00 bash[20701]: cluster 2026-03-10T08:01:03.049523+0000 mgr.y (mgr.24407) 1325 : cluster [DBG] pgmap v1768: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:04.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:04 vm00 bash[20701]: cluster 2026-03-10T08:01:03.049523+0000 mgr.y (mgr.24407) 1325 : cluster [DBG] pgmap v1768: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:05.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:05 vm03 bash[23382]: audit 2026-03-10T08:01:04.489944+0000 mgr.y (mgr.24407) 1326 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:01:05.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:05 vm03 bash[23382]: audit 2026-03-10T08:01:04.489944+0000 mgr.y (mgr.24407) 1326 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:01:05.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:05 vm00 bash[28005]: audit 2026-03-10T08:01:04.489944+0000 mgr.y (mgr.24407) 1326 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:01:05.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:05 vm00 bash[28005]: audit 2026-03-10T08:01:04.489944+0000 mgr.y (mgr.24407) 1326 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:01:05.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:05 vm00 bash[20701]: audit 2026-03-10T08:01:04.489944+0000 mgr.y (mgr.24407) 1326 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:01:05.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:05 vm00 bash[20701]: audit 2026-03-10T08:01:04.489944+0000 mgr.y (mgr.24407) 1326 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:01:06.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:06 vm03 bash[23382]: cluster 2026-03-10T08:01:05.050044+0000 mgr.y (mgr.24407) 1327 : cluster [DBG] pgmap v1769: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:06.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:06 vm03 bash[23382]: cluster 2026-03-10T08:01:05.050044+0000 mgr.y (mgr.24407) 1327 : cluster [DBG] pgmap v1769: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:06.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:06 vm00 bash[28005]: cluster 2026-03-10T08:01:05.050044+0000 mgr.y (mgr.24407) 1327 : cluster [DBG] pgmap v1769: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
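[The mon_command exchange recorded above -- the client sending {"prefix": "osd pool set-quota", "pool": ..., "field": "max_objects", "val": "0"} and mon.0 acking with "set-quota max_objects = 0 for pool ..." while the audit log shows the matching dispatch/finished pair -- is the traffic that `ceph osd pool set-quota <pool> max_objects 0` generates, here driven by the rados/test_pool_quota.sh workunit. A minimal sketch of the same call through the librados Python binding, assuming a reachable cluster, a readable /etc/ceph/ceph.conf with client.admin credentials, and a hypothetical pool name "testpool" (not from this run):

    import json
    import rados

    # Connect as client.admin using the local ceph.conf (assumption: both exist).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Same JSON the workunit's CLI invocation sends to the monitors;
        # val "0" clears the quota, matching the log's "max_objects = 0" ack.
        cmd = json.dumps({
            "prefix": "osd pool set-quota",
            "pool": "testpool",      # hypothetical pool name for illustration
            "field": "max_objects",
            "val": "0",
        })
        # mon_command returns (ret, outbuf, outs); on success ret == 0 and
        # outs carries the "set-quota max_objects = 0 for pool ..." text.
        ret, outbuf, outs = cluster.mon_command(cmd, b'')
        print(ret, outs)
    finally:
        cluster.shutdown()

The dispatch/finished audit pairing above reflects that quota changes go through a monitor paxos round: "dispatch" is logged when the command reaches a mon, "finished" once the resulting osdmap epoch (e783-e786 here) commits, after which the client's mon_subscribe picks up the new map.]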
2026-03-10T08:01:06.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:06 vm00 bash[28005]: cluster 2026-03-10T08:01:05.050044+0000 mgr.y (mgr.24407) 1327 : cluster [DBG] pgmap v1769: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:06.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:06 vm00 bash[20701]: cluster 2026-03-10T08:01:05.050044+0000 mgr.y (mgr.24407) 1327 : cluster [DBG] pgmap v1769: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:06.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:06 vm00 bash[20701]: cluster 2026-03-10T08:01:05.050044+0000 mgr.y (mgr.24407) 1327 : cluster [DBG] pgmap v1769: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:08.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:08 vm03 bash[23382]: cluster 2026-03-10T08:01:07.050308+0000 mgr.y (mgr.24407) 1328 : cluster [DBG] pgmap v1770: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:08.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:08 vm03 bash[23382]: cluster 2026-03-10T08:01:07.050308+0000 mgr.y (mgr.24407) 1328 : cluster [DBG] pgmap v1770: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:08.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:08 vm00 bash[28005]: cluster 2026-03-10T08:01:07.050308+0000 mgr.y (mgr.24407) 1328 : cluster [DBG] pgmap v1770: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:08.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:08 vm00 bash[28005]: cluster 2026-03-10T08:01:07.050308+0000 mgr.y (mgr.24407) 1328 : cluster [DBG] pgmap v1770: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:08.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:08 vm00 bash[20701]: cluster 2026-03-10T08:01:07.050308+0000 mgr.y (mgr.24407) 1328 : cluster [DBG] pgmap v1770: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:08.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:08 vm00 bash[20701]: cluster 2026-03-10T08:01:07.050308+0000 mgr.y (mgr.24407) 1328 : cluster [DBG] pgmap v1770: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:10.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:10 vm03 bash[23382]: cluster 2026-03-10T08:01:09.050525+0000 mgr.y (mgr.24407) 1329 : cluster [DBG] pgmap v1771: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:10.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:10 vm03 bash[23382]: cluster 2026-03-10T08:01:09.050525+0000 mgr.y (mgr.24407) 1329 : cluster [DBG] pgmap v1771: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:10.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:10 vm00 bash[28005]: cluster 2026-03-10T08:01:09.050525+0000 mgr.y (mgr.24407) 1329 : cluster [DBG] pgmap v1771: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:10.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:10 vm00 
bash[28005]: cluster 2026-03-10T08:01:09.050525+0000 mgr.y (mgr.24407) 1329 : cluster [DBG] pgmap v1771: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:10.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:10 vm00 bash[20701]: cluster 2026-03-10T08:01:09.050525+0000 mgr.y (mgr.24407) 1329 : cluster [DBG] pgmap v1771: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:10.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:10 vm00 bash[20701]: cluster 2026-03-10T08:01:09.050525+0000 mgr.y (mgr.24407) 1329 : cluster [DBG] pgmap v1771: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:11.375 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 08:01:11 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:08:01:11] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:01:11.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:11 vm03 bash[23382]: audit 2026-03-10T08:01:10.542130+0000 mon.c (mon.2) 509 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:01:11.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:11 vm03 bash[23382]: audit 2026-03-10T08:01:10.542130+0000 mon.c (mon.2) 509 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:01:11.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:11 vm00 bash[28005]: audit 2026-03-10T08:01:10.542130+0000 mon.c (mon.2) 509 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:01:11.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:11 vm00 bash[28005]: audit 2026-03-10T08:01:10.542130+0000 mon.c (mon.2) 509 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:01:11.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:11 vm00 bash[20701]: audit 2026-03-10T08:01:10.542130+0000 mon.c (mon.2) 509 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:01:11.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:11 vm00 bash[20701]: audit 2026-03-10T08:01:10.542130+0000 mon.c (mon.2) 509 : audit [DBG] from='mgr.24407 192.168.123.100:0/531523711' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:01:12.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:12 vm03 bash[23382]: cluster 2026-03-10T08:01:11.051093+0000 mgr.y (mgr.24407) 1330 : cluster [DBG] pgmap v1772: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:12.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:12 vm03 bash[23382]: cluster 2026-03-10T08:01:11.051093+0000 mgr.y (mgr.24407) 1330 : cluster [DBG] pgmap v1772: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:12.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:12 vm00 bash[28005]: cluster 2026-03-10T08:01:11.051093+0000 mgr.y (mgr.24407) 1330 : cluster [DBG] pgmap v1772: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB 
used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:12.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:12 vm00 bash[28005]: cluster 2026-03-10T08:01:11.051093+0000 mgr.y (mgr.24407) 1330 : cluster [DBG] pgmap v1772: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:12.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:12 vm00 bash[20701]: cluster 2026-03-10T08:01:11.051093+0000 mgr.y (mgr.24407) 1330 : cluster [DBG] pgmap v1772: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:12.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:12 vm00 bash[20701]: cluster 2026-03-10T08:01:11.051093+0000 mgr.y (mgr.24407) 1330 : cluster [DBG] pgmap v1772: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:14.762 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:01:14 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T08:01:14.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:14 vm03 bash[23382]: cluster 2026-03-10T08:01:13.051312+0000 mgr.y (mgr.24407) 1331 : cluster [DBG] pgmap v1773: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:14.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:14 vm03 bash[23382]: cluster 2026-03-10T08:01:13.051312+0000 mgr.y (mgr.24407) 1331 : cluster [DBG] pgmap v1773: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:14.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:14 vm00 bash[28005]: cluster 2026-03-10T08:01:13.051312+0000 mgr.y (mgr.24407) 1331 : cluster [DBG] pgmap v1773: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:14.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:14 vm00 bash[28005]: cluster 2026-03-10T08:01:13.051312+0000 mgr.y (mgr.24407) 1331 : cluster [DBG] pgmap v1773: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:14.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:14 vm00 bash[20701]: cluster 2026-03-10T08:01:13.051312+0000 mgr.y (mgr.24407) 1331 : cluster [DBG] pgmap v1773: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:14.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:14 vm00 bash[20701]: cluster 2026-03-10T08:01:13.051312+0000 mgr.y (mgr.24407) 1331 : cluster [DBG] pgmap v1773: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:01:15.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:15 vm03 bash[23382]: audit 2026-03-10T08:01:14.500077+0000 mgr.y (mgr.24407) 1332 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:01:15.762 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:15 vm03 bash[23382]: audit 2026-03-10T08:01:14.500077+0000 mgr.y (mgr.24407) 1332 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:01:15.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:15 vm00 bash[28005]: audit 2026-03-10T08:01:14.500077+0000 mgr.y (mgr.24407) 1332 : audit [DBG] 
from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:01:15.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:15 vm00 bash[28005]: audit 2026-03-10T08:01:14.500077+0000 mgr.y (mgr.24407) 1332 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:01:15.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:15 vm00 bash[20701]: audit 2026-03-10T08:01:14.500077+0000 mgr.y (mgr.24407) 1332 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:01:15.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:15 vm00 bash[20701]: audit 2026-03-10T08:01:14.500077+0000 mgr.y (mgr.24407) 1332 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:01:16.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:16 vm00 bash[28005]: cluster 2026-03-10T08:01:15.052429+0000 mgr.y (mgr.24407) 1333 : cluster [DBG] pgmap v1774: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:16.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:16 vm00 bash[28005]: cluster 2026-03-10T08:01:15.052429+0000 mgr.y (mgr.24407) 1333 : cluster [DBG] pgmap v1774: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:16.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:16 vm00 bash[20701]: cluster 2026-03-10T08:01:15.052429+0000 mgr.y (mgr.24407) 1333 : cluster [DBG] pgmap v1774: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:16.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:16 vm00 bash[20701]: cluster 2026-03-10T08:01:15.052429+0000 mgr.y (mgr.24407) 1333 : cluster [DBG] pgmap v1774: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:17.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:16 vm03 bash[23382]: cluster 2026-03-10T08:01:15.052429+0000 mgr.y (mgr.24407) 1333 : cluster [DBG] pgmap v1774: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:17.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:16 vm03 bash[23382]: cluster 2026-03-10T08:01:15.052429+0000 mgr.y (mgr.24407) 1333 : cluster [DBG] pgmap v1774: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:17.371 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool delete 848a4d04-314a-4289-950b-2472b7cc83f9 848a4d04-314a-4289-950b-2472b7cc83f9 --yes-i-really-really-mean-it 2026-03-10T08:01:17.427 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6b04fe7640 1 -- 192.168.123.100:0/449271198 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6b00077f50 msgr2=0x7f6b00113520 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T08:01:17.427 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6b04fe7640 1 --2- 192.168.123.100:0/449271198 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6b00077f50 0x7f6b00113520 secure :-1 s=READY pgs=3162 cs=0 l=1 rev1=1 crypto rx=0x7f6af4009a80 tx=0x7f6af401c960 comp rx=0 tx=0).stop 
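The repeated "GET /metrics HTTP/1.1" 503 entries above are Prometheus (scraping from 192.168.123.103, where prometheus.a runs) hitting mgr.y's metrics exporter and getting told it cannot serve metrics at that instant; the endpoint itself is up, since the mgr logs the request. A minimal sketch of reproducing the scrape by hand, assuming the mgr prometheus module listens on its default port 9283 (the port is not visible in this excerpt):

    # Probe the mgr metrics endpoint and print only the HTTP status code.
    # vm00 hosts mgr.y per the journal above; 9283 is the prometheus
    # module's default port (an assumption here, the log does not show it).
    curl -s -o /dev/null -w '%{http_code}\n' http://vm00:9283/metrics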
2026-03-10T08:01:17.427 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6b04fe7640 1 -- 192.168.123.100:0/449271198 shutdown_connections
2026-03-10T08:01:17.427 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6b04fe7640 1 --2- 192.168.123.100:0/449271198 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6b00113a60 0x7f6b00115e50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:01:17.427 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6b04fe7640 1 --2- 192.168.123.100:0/449271198 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6b00077f50 0x7f6b00113520 unknown :-1 s=CLOSED pgs=3162 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:01:17.427 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6b04fe7640 1 --2- 192.168.123.100:0/449271198 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b00077630 0x7f6b00077a10 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:01:17.427 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6b04fe7640 1 -- 192.168.123.100:0/449271198 >> 192.168.123.100:0/449271198 conn(0x7f6b00100880 msgr2=0x7f6b00102ca0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T08:01:17.427 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6b04fe7640 1 -- 192.168.123.100:0/449271198 shutdown_connections
2026-03-10T08:01:17.427 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6b04fe7640 1 -- 192.168.123.100:0/449271198 wait complete.
2026-03-10T08:01:17.427 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6b04fe7640 1 Processor -- start
2026-03-10T08:01:17.428 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6b04fe7640 1 -- start start
2026-03-10T08:01:17.428 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6b04fe7640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b00077630 0x7f6b001a4210 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T08:01:17.428 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6b04fe7640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6b00077f50 0x7f6b001a4750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T08:01:17.428 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6b04fe7640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6b00113a60 0x7f6b001a8ae0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T08:01:17.428 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6b04fe7640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f6b0011bba0 con 0x7f6b00077630
2026-03-10T08:01:17.428 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6b04fe7640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f6b0011ba20 con 0x7f6b00077f50
2026-03-10T08:01:17.428 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6b04fe7640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f6b0011bd20 con 0x7f6b00113a60
2026-03-10T08:01:17.428 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6afed76640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6b00113a60 0x7f6b001a8ae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T08:01:17.428 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6afed76640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6b00113a60 0x7f6b001a8ae0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:57202/0 (socket says 192.168.123.100:57202)
2026-03-10T08:01:17.428 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6afed76640 1 -- 192.168.123.100:0/3354655610 learned_addr learned my addr 192.168.123.100:0/3354655610 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T08:01:17.428 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6afed76640 1 -- 192.168.123.100:0/3354655610 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6b00077f50 msgr2=0x7f6b001a4750 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T08:01:17.428 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6afed76640 1 --2- 192.168.123.100:0/3354655610 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6b00077f50 0x7f6b001a4750 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:01:17.428 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6afed76640 1 -- 192.168.123.100:0/3354655610 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b00077630 msgr2=0x7f6b001a4210 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T08:01:17.428 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6afe575640 1 --2- 192.168.123.100:0/3354655610 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b00077630 0x7f6b001a4210 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T08:01:17.428 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6afed76640 1 --2- 192.168.123.100:0/3354655610 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b00077630 0x7f6b001a4210 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:01:17.428 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6afed76640 1 -- 192.168.123.100:0/3354655610 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6b001a9260 con 0x7f6b00113a60
2026-03-10T08:01:17.428 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6afe575640 1 --2- 192.168.123.100:0/3354655610 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b00077630 0x7f6b001a4210 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T08:01:17.428 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6afed76640 1 --2- 192.168.123.100:0/3354655610 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6b00113a60 0x7f6b001a8ae0 secure :-1 s=READY pgs=3163 cs=0 l=1 rev1=1 crypto rx=0x7f6af000ef90 tx=0x7f6af000c550 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T08:01:17.429 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6ae77fe640 1 -- 192.168.123.100:0/3354655610 <== mon.2 v2:192.168.123.100:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f6af0019070 con 0x7f6b00113a60
2026-03-10T08:01:17.429 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6ae77fe640 1 -- 192.168.123.100:0/3354655610 <== mon.2 v2:192.168.123.100:3301/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f6af00092d0 con 0x7f6b00113a60
2026-03-10T08:01:17.429 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6b04fe7640 1 -- 192.168.123.100:0/3354655610 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f6b001a94f0 con 0x7f6b00113a60
2026-03-10T08:01:17.429 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6ae77fe640 1 -- 192.168.123.100:0/3354655610 <== mon.2 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f6af0004820 con 0x7f6b00113a60
2026-03-10T08:01:17.429 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6b04fe7640 1 -- 192.168.123.100:0/3354655610 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f6b001b0d90 con 0x7f6b00113a60
2026-03-10T08:01:17.431 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6b04fe7640 1 -- 192.168.123.100:0/3354655610 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6ac4005190 con 0x7f6b00113a60
2026-03-10T08:01:17.431 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6ae77fe640 1 -- 192.168.123.100:0/3354655610 <== mon.2 v2:192.168.123.100:3301/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f6af0005ce0 con 0x7f6b00113a60
2026-03-10T08:01:17.431 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6ae77fe640 1 --2- 192.168.123.100:0/3354655610 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f6ad8077710 0x7f6ad8079bd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T08:01:17.431 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6ae77fe640 1 -- 192.168.123.100:0/3354655610 <== mon.2 v2:192.168.123.100:3301/0 5 ==== osd_map(786..786 src has 258..786) ==== 9461+0+0 (secure 0 0 0) 0x7f6af009a270 con 0x7f6b00113a60
2026-03-10T08:01:17.431 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.435+0000 7f6ae77fe640 1 -- 192.168.123.100:0/3354655610 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=787}) -- 0x7f6ad8083b30 con 0x7f6b00113a60
2026-03-10T08:01:17.433 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.439+0000 7f6afe575640 1 --2- 192.168.123.100:0/3354655610 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f6ad8077710 0x7f6ad8079bd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T08:01:17.433 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.439+0000 7f6afe575640 1 --2- 192.168.123.100:0/3354655610 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f6ad8077710 0x7f6ad8079bd0 secure :-1 s=READY pgs=4341 cs=0 l=1 rev1=1 crypto rx=0x7f6ae80059d0 tx=0x7f6ae8005960 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T08:01:17.433 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.439+0000 7f6ae77fe640 1 -- 192.168.123.100:0/3354655610 <== mon.2 v2:192.168.123.100:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f6af0010040 con 0x7f6b00113a60
2026-03-10T08:01:17.518 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:17.523+0000 7f6b04fe7640 1 -- 192.168.123.100:0/3354655610 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd pool delete", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pool2": "848a4d04-314a-4289-950b-2472b7cc83f9", "yes_i_really_really_mean_it": true} v 0) -- 0x7f6ac4005480 con 0x7f6b00113a60
2026-03-10T08:01:18.529 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.535+0000 7f6ae77fe640 1 -- 192.168.123.100:0/3354655610 <== mon.2 v2:192.168.123.100:3301/0 7 ==== osd_map(787..787 src has 258..787) ==== 296+0+0 (secure 0 0 0) 0x7f6af00d09f0 con 0x7f6b00113a60
2026-03-10T08:01:18.529 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.535+0000 7f6ae77fe640 1 -- 192.168.123.100:0/3354655610 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_subscribe({osdmap=788}) -- 0x7f6ad8084400 con 0x7f6b00113a60
2026-03-10T08:01:18.532 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.539+0000 7f6ae77fe640 1 -- 192.168.123.100:0/3354655610 <== mon.2 v2:192.168.123.100:3301/0 8 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pool2": "848a4d04-314a-4289-950b-2472b7cc83f9", "yes_i_really_really_mean_it": true}]=0 pool '848a4d04-314a-4289-950b-2472b7cc83f9' removed v787) ==== 248+0+0 (secure 0 0 0) 0x7f6af0066370 con 0x7f6b00113a60
2026-03-10T08:01:18.586 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.591+0000 7f6b04fe7640 1 -- 192.168.123.100:0/3354655610 --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_command({"prefix": "osd pool delete", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pool2": "848a4d04-314a-4289-950b-2472b7cc83f9", "yes_i_really_really_mean_it": true} v 0) -- 0x7f6ac4004950 con 0x7f6b00113a60
2026-03-10T08:01:18.587 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.595+0000 7f6ae77fe640 1 -- 192.168.123.100:0/3354655610 <== mon.2 v2:192.168.123.100:3301/0 9 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pool2": "848a4d04-314a-4289-950b-2472b7cc83f9", "yes_i_really_really_mean_it": true}]=0 pool '848a4d04-314a-4289-950b-2472b7cc83f9' does not exist v787) ==== 255+0+0 (secure 0 0 0) 0x7f6af005e360 con 0x7f6b00113a60
2026-03-10T08:01:18.587 INFO:tasks.workunit.client.0.vm00.stderr:pool '848a4d04-314a-4289-950b-2472b7cc83f9' does not exist
2026-03-10T08:01:18.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.595+0000 7f6b04fe7640 1 -- 192.168.123.100:0/3354655610 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f6ad8077710 msgr2=0x7f6ad8079bd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T08:01:18.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.595+0000 7f6b04fe7640 1 --2- 192.168.123.100:0/3354655610 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f6ad8077710 0x7f6ad8079bd0 secure :-1 s=READY pgs=4341 cs=0 l=1 rev1=1 crypto rx=0x7f6ae80059d0 tx=0x7f6ae8005960 comp rx=0 tx=0).stop
2026-03-10T08:01:18.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.595+0000 7f6b04fe7640 1 -- 192.168.123.100:0/3354655610 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6b00113a60 msgr2=0x7f6b001a8ae0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T08:01:18.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.595+0000 7f6b04fe7640 1 --2- 192.168.123.100:0/3354655610 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6b00113a60 0x7f6b001a8ae0 secure :-1 s=READY pgs=3163 cs=0 l=1 rev1=1 crypto rx=0x7f6af000ef90 tx=0x7f6af000c550 comp rx=0 tx=0).stop
2026-03-10T08:01:18.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.595+0000 7f6b04fe7640 1 -- 192.168.123.100:0/3354655610 shutdown_connections
2026-03-10T08:01:18.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.595+0000 7f6b04fe7640 1 --2- 192.168.123.100:0/3354655610 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f6ad8077710 0x7f6ad8079bd0 unknown :-1 s=CLOSED pgs=4341 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:01:18.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.595+0000 7f6b04fe7640 1 --2- 192.168.123.100:0/3354655610 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f6b00113a60 0x7f6b001a8ae0 unknown :-1 s=CLOSED pgs=3163 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:01:18.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.595+0000 7f6b04fe7640 1 --2- 192.168.123.100:0/3354655610 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f6b00077f50 0x7f6b001a4750 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:01:18.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.595+0000 7f6b04fe7640 1 --2- 192.168.123.100:0/3354655610 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b00077630 0x7f6b001a4210 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:01:18.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.595+0000 7f6b04fe7640 1 -- 192.168.123.100:0/3354655610 >> 192.168.123.100:0/3354655610 conn(0x7f6b00100880 msgr2=0x7f6b00102c70 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T08:01:18.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.595+0000 7f6b04fe7640 1 -- 192.168.123.100:0/3354655610 shutdown_connections
2026-03-10T08:01:18.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.595+0000 7f6b04fe7640 1 -- 192.168.123.100:0/3354655610 wait complete.
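The trace above shows the cleanup's delete pattern: the first "osd pool delete" is acked with r=0 and "pool ... removed", and the immediate retry is also acked with r=0 but "pool ... does not exist", so repeating the command cannot fail the workunit. A minimal sketch of the same call, assuming a shell with the admin keyring and a cluster that permits pool deletion (mon_allow_pool_delete must be enabled, as it evidently is on this test cluster):

    # The pool name must be passed twice, plus the interactive safety flag.
    # POOL is the name from the trace above; substitute as needed.
    POOL='848a4d04-314a-4289-950b-2472b7cc83f9'
    ceph osd pool delete "$POOL" "$POOL" --yes-i-really-really-mean-it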
2026-03-10T08:01:18.599 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool delete a16f944b-49df-4ed0-bee4-6bfebe190ca8 a16f944b-49df-4ed0-bee4-6bfebe190ca8 --yes-i-really-really-mean-it
2026-03-10T08:01:18.655 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b685e640 1 -- 192.168.123.100:0/1740467580 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f88b010cf10 msgr2=0x7f88b010f3b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T08:01:18.655 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b685e640 1 --2- 192.168.123.100:0/1740467580 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f88b010cf10 0x7f88b010f3b0 secure :-1 s=READY pgs=3164 cs=0 l=1 rev1=1 crypto rx=0x7f88a4009f90 tx=0x7f88a401ca20 comp rx=0 tx=0).stop
2026-03-10T08:01:18.655 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b685e640 1 -- 192.168.123.100:0/1740467580 shutdown_connections
2026-03-10T08:01:18.655 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b685e640 1 --2- 192.168.123.100:0/1740467580 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f88b010cf10 0x7f88b010f3b0 unknown :-1 s=CLOSED pgs=3164 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:01:18.655 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b685e640 1 --2- 192.168.123.100:0/1740467580 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f88b0104450 0x7f88b010c940 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:01:18.655 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b685e640 1 --2- 192.168.123.100:0/1740467580 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f88b0103b30 0x7f88b0103f10 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:01:18.655 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b685e640 1 -- 192.168.123.100:0/1740467580 >> 192.168.123.100:0/1740467580 conn(0x7f88b00fd4f0 msgr2=0x7f88b00ff910 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T08:01:18.655 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b685e640 1 -- 192.168.123.100:0/1740467580 shutdown_connections
2026-03-10T08:01:18.656 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b685e640 1 -- 192.168.123.100:0/1740467580 wait complete.
2026-03-10T08:01:18.656 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b685e640 1 Processor -- start
2026-03-10T08:01:18.656 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b685e640 1 -- start start
2026-03-10T08:01:18.656 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b685e640 1 --2- >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f88b0103b30 0x7f88b01a06f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T08:01:18.656 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b685e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f88b0104450 0x7f88b01a0c30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T08:01:18.656 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b685e640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f88b010cf10 0x7f88b019a8b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T08:01:18.656 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b685e640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f88b0102fc0 con 0x7f88b0104450
2026-03-10T08:01:18.656 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b685e640 1 -- --> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] -- mon_getmap magic: 0 -- 0x7f88b0102e40 con 0x7f88b0103b30
2026-03-10T08:01:18.656 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b685e640 1 -- --> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] -- mon_getmap magic: 0 -- 0x7f88b0103140 con 0x7f88b010cf10
2026-03-10T08:01:18.656 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b4dd4640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f88b010cf10 0x7f88b019a8b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T08:01:18.656 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b4dd4640 1 --2- >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f88b010cf10 0x7f88b019a8b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3301/0 says I am v2:192.168.123.100:57214/0 (socket says 192.168.123.100:57214)
2026-03-10T08:01:18.656 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b4dd4640 1 -- 192.168.123.100:0/2195324949 learned_addr learned my addr 192.168.123.100:0/2195324949 (peer_addr_for_me v2:192.168.123.100:0/0)
2026-03-10T08:01:18.656 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88af7fe640 1 --2- 192.168.123.100:0/2195324949 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f88b0104450 0x7f88b01a0c30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T08:01:18.657 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88affff640 1 --2- 192.168.123.100:0/2195324949 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f88b0103b30 0x7f88b01a06f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T08:01:18.657 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88af7fe640 1 -- 192.168.123.100:0/2195324949 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f88b010cf10 msgr2=0x7f88b019a8b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T08:01:18.657 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88af7fe640 1 --2- 192.168.123.100:0/2195324949 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f88b010cf10 0x7f88b019a8b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:01:18.657 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88af7fe640 1 -- 192.168.123.100:0/2195324949 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f88b0103b30 msgr2=0x7f88b01a06f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T08:01:18.657 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88af7fe640 1 --2- 192.168.123.100:0/2195324949 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f88b0103b30 0x7f88b01a06f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:01:18.657 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88af7fe640 1 -- 192.168.123.100:0/2195324949 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f88b019b080 con 0x7f88b0104450
2026-03-10T08:01:18.657 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88af7fe640 1 --2- 192.168.123.100:0/2195324949 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f88b0104450 0x7f88b01a0c30 secure :-1 s=READY pgs=3159 cs=0 l=1 rev1=1 crypto rx=0x7f88a000ed60 tx=0x7f88a000c6a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T08:01:18.658 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88ad7fa640 1 -- 192.168.123.100:0/2195324949 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f88a000ee50 con 0x7f88b0104450
2026-03-10T08:01:18.658 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88ad7fa640 1 -- 192.168.123.100:0/2195324949 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(23 keys) ==== 978+0+0 (secure 0 0 0) 0x7f88a0004510 con 0x7f88b0104450
2026-03-10T08:01:18.658 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88ad7fa640 1 -- 192.168.123.100:0/2195324949 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f88a0010430 con 0x7f88b0104450
2026-03-10T08:01:18.658 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b685e640 1 -- 192.168.123.100:0/2195324949 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f88b019b370 con 0x7f88b0104450
2026-03-10T08:01:18.658 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b685e640 1 -- 192.168.123.100:0/2195324949 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f88b01affd0 con 0x7f88b0104450
2026-03-10T08:01:18.662 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.663+0000 7f88b685e640 1 -- 192.168.123.100:0/2195324949 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f8874005190 con 0x7f88b0104450
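Every "cluster ..." and "audit ..." record in the journal excerpts below appears once per monitor (mon.a and mon.c on vm00, mon.b on vm03) because the cluster and audit logs are replicated to all mons; the repetition is fan-out, not an error. To inspect a single daemon's copy directly on a cephadm host, a minimal sketch (the unit-name format and fsid lookup are assumptions about this deployment; substitute the real fsid):

    # cephadm runs each daemon as ceph-<fsid>@<daemon>.service, so one
    # mon's journal can be grepped instead of reading all three copies.
    FSID=$(cephadm shell -- ceph fsid)
    sudo journalctl -u "ceph-${FSID}@mon.a.service" --since today | grep ' audit '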
2026-03-10T08:01:18.662 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.667+0000 7f88ad7fa640 1 -- 192.168.123.100:0/2195324949 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 20) ==== 100095+0+0 (secure 0 0 0) 0x7f88a00040d0 con 0x7f88b0104450
2026-03-10T08:01:18.662 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.667+0000 7f88ad7fa640 1 --2- 192.168.123.100:0/2195324949 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f88840777e0 0x7f8884079ca0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T08:01:18.662 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.667+0000 7f88ad7fa640 1 -- 192.168.123.100:0/2195324949 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(787..787 src has 258..787) ==== 9073+0+0 (secure 0 0 0) 0x7f88a009a4d0 con 0x7f88b0104450
2026-03-10T08:01:18.662 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.667+0000 7f88ad7fa640 1 -- 192.168.123.100:0/2195324949 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=788}) -- 0x7f88840833e0 con 0x7f88b0104450
2026-03-10T08:01:18.662 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.667+0000 7f88affff640 1 --2- 192.168.123.100:0/2195324949 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f88840777e0 0x7f8884079ca0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T08:01:18.662 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.667+0000 7f88ad7fa640 1 -- 192.168.123.100:0/2195324949 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f88a00667d0 con 0x7f88b0104450
2026-03-10T08:01:18.662 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.667+0000 7f88affff640 1 --2- 192.168.123.100:0/2195324949 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f88840777e0 0x7f8884079ca0 secure :-1 s=READY pgs=4342 cs=0 l=1 rev1=1 crypto rx=0x7f88b019b9f0 tx=0x7f889c004420 comp rx=0 tx=0).ready entity=mgr.24407 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T08:01:18.749 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:18.755+0000 7f88b685e640 1 -- 192.168.123.100:0/2195324949 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool delete", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pool2": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "yes_i_really_really_mean_it": true} v 0) -- 0x7f8874005480 con 0x7f88b0104450
2026-03-10T08:01:18.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:18 vm00 bash[20701]: cluster 2026-03-10T08:01:17.052674+0000 mgr.y (mgr.24407) 1334 : cluster [DBG] pgmap v1775: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:01:18.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:18 vm00 bash[20701]: audit 2026-03-10T08:01:17.529619+0000 mon.c (mon.2) 510 : audit [INF] from='client.? 192.168.123.100:0/3354655610' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pool2": "848a4d04-314a-4289-950b-2472b7cc83f9", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T08:01:18.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:18 vm00 bash[20701]: audit 2026-03-10T08:01:17.533374+0000 mon.a (mon.0) 3697 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pool2": "848a4d04-314a-4289-950b-2472b7cc83f9", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T08:01:18.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:18 vm00 bash[28005]: cluster 2026-03-10T08:01:17.052674+0000 mgr.y (mgr.24407) 1334 : cluster [DBG] pgmap v1775: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:01:18.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:18 vm00 bash[28005]: audit 2026-03-10T08:01:17.529619+0000 mon.c (mon.2) 510 : audit [INF] from='client.? 192.168.123.100:0/3354655610' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pool2": "848a4d04-314a-4289-950b-2472b7cc83f9", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T08:01:18.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:18 vm00 bash[28005]: audit 2026-03-10T08:01:17.533374+0000 mon.a (mon.0) 3697 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pool2": "848a4d04-314a-4289-950b-2472b7cc83f9", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T08:01:19.011 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:18 vm03 bash[23382]: cluster 2026-03-10T08:01:17.052674+0000 mgr.y (mgr.24407) 1334 : cluster [DBG] pgmap v1775: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T08:01:19.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:18 vm03 bash[23382]: audit 2026-03-10T08:01:17.529619+0000 mon.c (mon.2) 510 : audit [INF] from='client.? 192.168.123.100:0/3354655610' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pool2": "848a4d04-314a-4289-950b-2472b7cc83f9", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T08:01:19.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:18 vm03 bash[23382]: audit 2026-03-10T08:01:17.533374+0000 mon.a (mon.0) 3697 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pool2": "848a4d04-314a-4289-950b-2472b7cc83f9", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T08:01:19.547 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:19.555+0000 7f88ad7fa640 1 -- 192.168.123.100:0/2195324949 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pool2": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "yes_i_really_really_mean_it": true}]=0 pool 'a16f944b-49df-4ed0-bee4-6bfebe190ca8' removed v788) ==== 248+0+0 (secure 0 0 0) 0x7f88a006b680 con 0x7f88b0104450
2026-03-10T08:01:19.559 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:19.567+0000 7f88ad7fa640 1 -- 192.168.123.100:0/2195324949 <== mon.0 v2:192.168.123.100:3300/0 8 ==== osd_map(788..788 src has 258..788) ==== 296+0+0 (secure 0 0 0) 0x7f88a00d09f0 con 0x7f88b0104450
2026-03-10T08:01:19.559 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:19.567+0000 7f88ad7fa640 1 -- 192.168.123.100:0/2195324949 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=789}) -- 0x7f8884084170 con 0x7f88b0104450
2026-03-10T08:01:19.612 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:19.619+0000 7f88b685e640 1 -- 192.168.123.100:0/2195324949 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool delete", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pool2": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "yes_i_really_really_mean_it": true} v 0) -- 0x7f8874004910 con 0x7f88b0104450
2026-03-10T08:01:19.613 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:19.619+0000 7f88ad7fa640 1 -- 192.168.123.100:0/2195324949 <== mon.0 v2:192.168.123.100:3300/0 9 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pool2": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "yes_i_really_really_mean_it": true}]=0 pool 'a16f944b-49df-4ed0-bee4-6bfebe190ca8' does not exist v788) ==== 255+0+0 (secure 0 0 0) 0x7f88a005e7c0 con 0x7f88b0104450
2026-03-10T08:01:19.613 INFO:tasks.workunit.client.0.vm00.stderr:pool 'a16f944b-49df-4ed0-bee4-6bfebe190ca8' does not exist
2026-03-10T08:01:19.615 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:19.623+0000 7f88b685e640 1 -- 192.168.123.100:0/2195324949 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f88840777e0 msgr2=0x7f8884079ca0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T08:01:19.615 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:19.623+0000 7f88b685e640 1 --2- 192.168.123.100:0/2195324949 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f88840777e0 0x7f8884079ca0 secure :-1 s=READY pgs=4342 cs=0 l=1 rev1=1 crypto rx=0x7f88b019b9f0 tx=0x7f889c004420 comp rx=0 tx=0).stop
2026-03-10T08:01:19.615 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:19.623+0000 7f88b685e640 1 -- 192.168.123.100:0/2195324949 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f88b0104450 msgr2=0x7f88b01a0c30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T08:01:19.615 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:19.623+0000 7f88b685e640 1 --2- 192.168.123.100:0/2195324949 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f88b0104450 0x7f88b01a0c30 secure :-1 s=READY pgs=3159 cs=0 l=1 rev1=1 crypto rx=0x7f88a000ed60 tx=0x7f88a000c6a0 comp rx=0 tx=0).stop
2026-03-10T08:01:19.615 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:19.623+0000 7f88b685e640 1 -- 192.168.123.100:0/2195324949 shutdown_connections
2026-03-10T08:01:19.615 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:19.623+0000 7f88b685e640 1 --2- 192.168.123.100:0/2195324949 >> [v2:192.168.123.100:6800/3339031114,v1:192.168.123.100:6801/3339031114] conn(0x7f88840777e0 0x7f8884079ca0 unknown :-1 s=CLOSED pgs=4342 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:01:19.615 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:19.623+0000 7f88b685e640 1 --2- 192.168.123.100:0/2195324949 >> [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] conn(0x7f88b010cf10 0x7f88b019a8b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:01:19.615 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:19.623+0000 7f88b685e640 1 --2- 192.168.123.100:0/2195324949 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f88b0104450 0x7f88b01a0c30 unknown :-1 s=CLOSED pgs=3159 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:01:19.615 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:19.623+0000 7f88b685e640 1 --2- 192.168.123.100:0/2195324949 >> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] conn(0x7f88b0103b30 0x7f88b01a06f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T08:01:19.615 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:19.623+0000 7f88b685e640 1 -- 192.168.123.100:0/2195324949 >> 192.168.123.100:0/2195324949 conn(0x7f88b00fd4f0 msgr2=0x7f88b00fec40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T08:01:19.615 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:19.623+0000 7f88b685e640 1 -- 192.168.123.100:0/2195324949 shutdown_connections
2026-03-10T08:01:19.615 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-10T08:01:19.623+0000 7f88b685e640 1 -- 192.168.123.100:0/2195324949 wait complete.
2026-03-10T08:01:19.625 INFO:tasks.workunit.client.0.vm00.stdout:OK
2026-03-10T08:01:19.626 INFO:tasks.workunit.client.0.vm00.stderr:+ echo OK
2026-03-10T08:01:19.626 INFO:teuthology.orchestra.run:Running command with timeout 3600
2026-03-10T08:01:19.626 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp
2026-03-10T08:01:19.634 INFO:tasks.workunit:Stopping ['rados/test.sh', 'rados/test_pool_quota.sh'] on client.0...
2026-03-10T08:01:19.634 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0
2026-03-10T08:01:19.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:19 vm00 bash[20701]: audit 2026-03-10T08:01:18.532187+0000 mon.a (mon.0) 3698 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pool2": "848a4d04-314a-4289-950b-2472b7cc83f9", "yes_i_really_really_mean_it": true}]': finished
2026-03-10T08:01:19.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:19 vm00 bash[20701]: cluster 2026-03-10T08:01:18.538578+0000 mon.a (mon.0) 3699 : cluster [DBG] osdmap e787: 8 total, 8 up, 8 in
2026-03-10T08:01:19.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:19 vm00 bash[20701]: audit 2026-03-10T08:01:18.596880+0000 mon.c (mon.2) 511 : audit [INF] from='client.? 192.168.123.100:0/3354655610' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pool2": "848a4d04-314a-4289-950b-2472b7cc83f9", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T08:01:19.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:19 vm00 bash[20701]: audit 2026-03-10T08:01:18.597345+0000 mon.a (mon.0) 3700 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pool2": "848a4d04-314a-4289-950b-2472b7cc83f9", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T08:01:19.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:19 vm00 bash[20701]: audit 2026-03-10T08:01:18.759934+0000 mon.a (mon.0) 3701 : audit [INF] from='client.? 192.168.123.100:0/2195324949' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pool2": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T08:01:19.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:19 vm00 bash[28005]: audit 2026-03-10T08:01:18.532187+0000 mon.a (mon.0) 3698 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pool2": "848a4d04-314a-4289-950b-2472b7cc83f9", "yes_i_really_really_mean_it": true}]': finished
2026-03-10T08:01:19.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:19 vm00 bash[28005]: cluster 2026-03-10T08:01:18.538578+0000 mon.a (mon.0) 3699 : cluster [DBG] osdmap e787: 8 total, 8 up, 8 in
2026-03-10T08:01:19.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:19 vm00 bash[28005]: audit 2026-03-10T08:01:18.596880+0000 mon.c (mon.2) 511 : audit [INF] from='client.? 192.168.123.100:0/3354655610' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pool2": "848a4d04-314a-4289-950b-2472b7cc83f9", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T08:01:19.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:19 vm00 bash[28005]: audit 2026-03-10T08:01:18.597345+0000 mon.a (mon.0) 3700 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pool2": "848a4d04-314a-4289-950b-2472b7cc83f9", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T08:01:19.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:19 vm00 bash[28005]: audit 2026-03-10T08:01:18.759934+0000 mon.a (mon.0) 3701 : audit [INF] from='client.? 192.168.123.100:0/2195324949' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pool2": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T08:01:20.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:19 vm03 bash[23382]: audit 2026-03-10T08:01:18.532187+0000 mon.a (mon.0) 3698 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pool2": "848a4d04-314a-4289-950b-2472b7cc83f9", "yes_i_really_really_mean_it": true}]': finished
2026-03-10T08:01:20.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:19 vm03 bash[23382]: cluster 2026-03-10T08:01:18.538578+0000 mon.a (mon.0) 3699 : cluster [DBG] osdmap e787: 8 total, 8 up, 8 in
2026-03-10T08:01:20.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:19 vm03 bash[23382]: audit 2026-03-10T08:01:18.596880+0000 mon.c (mon.2) 511 : audit [INF] from='client.? 192.168.123.100:0/3354655610' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pool2": "848a4d04-314a-4289-950b-2472b7cc83f9", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T08:01:20.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:19 vm03 bash[23382]: audit 2026-03-10T08:01:18.597345+0000 mon.a (mon.0) 3700 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "848a4d04-314a-4289-950b-2472b7cc83f9", "pool2": "848a4d04-314a-4289-950b-2472b7cc83f9", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T08:01:20.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:19 vm03 bash[23382]: audit 2026-03-10T08:01:18.759934+0000 mon.a (mon.0) 3701 : audit [INF] from='client.?
192.168.123.100:0/2195324949' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pool2": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T08:01:20.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:19 vm03 bash[23382]: audit 2026-03-10T08:01:18.759934+0000 mon.a (mon.0) 3701 : audit [INF] from='client.? 192.168.123.100:0/2195324949' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pool2": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T08:01:20.060 DEBUG:teuthology.parallel:result is None 2026-03-10T08:01:20.060 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0 2026-03-10T08:01:20.068 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.0/client.0 2026-03-10T08:01:20.068 DEBUG:teuthology.orchestra.run.vm00:> rmdir -- /home/ubuntu/cephtest/mnt.0 2026-03-10T08:01:20.113 INFO:tasks.workunit:Deleted artificial mount point /home/ubuntu/cephtest/mnt.0/client.0 2026-03-10T08:01:20.113 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-10T08:01:20.115 INFO:tasks.cephadm:Teardown begin 2026-03-10T08:01:20.115 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T08:01:20.161 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T08:01:20.173 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-10T08:01:20.173 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph mgr module disable cephadm 2026-03-10T08:01:20.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:20 vm00 bash[28005]: cluster 2026-03-10T08:01:19.052899+0000 mgr.y (mgr.24407) 1335 : cluster [DBG] pgmap v1777: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T08:01:20.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:20 vm00 bash[28005]: cluster 2026-03-10T08:01:19.052899+0000 mgr.y (mgr.24407) 1335 : cluster [DBG] pgmap v1777: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T08:01:20.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:20 vm00 bash[28005]: cluster 2026-03-10T08:01:19.549239+0000 mon.a (mon.0) 3702 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T08:01:20.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:20 vm00 bash[28005]: cluster 2026-03-10T08:01:19.549239+0000 mon.a (mon.0) 3702 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T08:01:20.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:20 vm00 bash[28005]: audit 2026-03-10T08:01:19.557419+0000 mon.a (mon.0) 3703 : audit [INF] from='client.? 192.168.123.100:0/2195324949' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pool2": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "yes_i_really_really_mean_it": true}]': finished 2026-03-10T08:01:20.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:20 vm00 bash[28005]: audit 2026-03-10T08:01:19.557419+0000 mon.a (mon.0) 3703 : audit [INF] from='client.? 
192.168.123.100:0/2195324949' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pool2": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "yes_i_really_really_mean_it": true}]': finished 2026-03-10T08:01:20.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:20 vm00 bash[28005]: cluster 2026-03-10T08:01:19.575182+0000 mon.a (mon.0) 3704 : cluster [DBG] osdmap e788: 8 total, 8 up, 8 in 2026-03-10T08:01:20.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:20 vm00 bash[28005]: cluster 2026-03-10T08:01:19.575182+0000 mon.a (mon.0) 3704 : cluster [DBG] osdmap e788: 8 total, 8 up, 8 in 2026-03-10T08:01:20.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:20 vm00 bash[28005]: audit 2026-03-10T08:01:19.623543+0000 mon.a (mon.0) 3705 : audit [INF] from='client.? 192.168.123.100:0/2195324949' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pool2": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T08:01:20.876 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:20 vm00 bash[28005]: audit 2026-03-10T08:01:19.623543+0000 mon.a (mon.0) 3705 : audit [INF] from='client.? 192.168.123.100:0/2195324949' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pool2": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T08:01:20.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:20 vm00 bash[20701]: cluster 2026-03-10T08:01:19.052899+0000 mgr.y (mgr.24407) 1335 : cluster [DBG] pgmap v1777: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T08:01:20.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:20 vm00 bash[20701]: cluster 2026-03-10T08:01:19.052899+0000 mgr.y (mgr.24407) 1335 : cluster [DBG] pgmap v1777: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T08:01:20.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:20 vm00 bash[20701]: cluster 2026-03-10T08:01:19.549239+0000 mon.a (mon.0) 3702 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T08:01:20.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:20 vm00 bash[20701]: cluster 2026-03-10T08:01:19.549239+0000 mon.a (mon.0) 3702 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T08:01:20.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:20 vm00 bash[20701]: audit 2026-03-10T08:01:19.557419+0000 mon.a (mon.0) 3703 : audit [INF] from='client.? 192.168.123.100:0/2195324949' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pool2": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "yes_i_really_really_mean_it": true}]': finished 2026-03-10T08:01:20.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:20 vm00 bash[20701]: audit 2026-03-10T08:01:19.557419+0000 mon.a (mon.0) 3703 : audit [INF] from='client.? 
192.168.123.100:0/2195324949' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pool2": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "yes_i_really_really_mean_it": true}]': finished 2026-03-10T08:01:20.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:20 vm00 bash[20701]: cluster 2026-03-10T08:01:19.575182+0000 mon.a (mon.0) 3704 : cluster [DBG] osdmap e788: 8 total, 8 up, 8 in 2026-03-10T08:01:20.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:20 vm00 bash[20701]: cluster 2026-03-10T08:01:19.575182+0000 mon.a (mon.0) 3704 : cluster [DBG] osdmap e788: 8 total, 8 up, 8 in 2026-03-10T08:01:20.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:20 vm00 bash[20701]: audit 2026-03-10T08:01:19.623543+0000 mon.a (mon.0) 3705 : audit [INF] from='client.? 192.168.123.100:0/2195324949' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pool2": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T08:01:20.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:20 vm00 bash[20701]: audit 2026-03-10T08:01:19.623543+0000 mon.a (mon.0) 3705 : audit [INF] from='client.? 192.168.123.100:0/2195324949' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pool2": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T08:01:21.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:20 vm03 bash[23382]: cluster 2026-03-10T08:01:19.052899+0000 mgr.y (mgr.24407) 1335 : cluster [DBG] pgmap v1777: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T08:01:21.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:20 vm03 bash[23382]: cluster 2026-03-10T08:01:19.052899+0000 mgr.y (mgr.24407) 1335 : cluster [DBG] pgmap v1777: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T08:01:21.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:20 vm03 bash[23382]: cluster 2026-03-10T08:01:19.549239+0000 mon.a (mon.0) 3702 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T08:01:21.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:20 vm03 bash[23382]: cluster 2026-03-10T08:01:19.549239+0000 mon.a (mon.0) 3702 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T08:01:21.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:20 vm03 bash[23382]: audit 2026-03-10T08:01:19.557419+0000 mon.a (mon.0) 3703 : audit [INF] from='client.? 192.168.123.100:0/2195324949' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pool2": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "yes_i_really_really_mean_it": true}]': finished 2026-03-10T08:01:21.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:20 vm03 bash[23382]: audit 2026-03-10T08:01:19.557419+0000 mon.a (mon.0) 3703 : audit [INF] from='client.? 
192.168.123.100:0/2195324949' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pool2": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "yes_i_really_really_mean_it": true}]': finished 2026-03-10T08:01:21.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:20 vm03 bash[23382]: cluster 2026-03-10T08:01:19.575182+0000 mon.a (mon.0) 3704 : cluster [DBG] osdmap e788: 8 total, 8 up, 8 in 2026-03-10T08:01:21.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:20 vm03 bash[23382]: cluster 2026-03-10T08:01:19.575182+0000 mon.a (mon.0) 3704 : cluster [DBG] osdmap e788: 8 total, 8 up, 8 in 2026-03-10T08:01:21.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:20 vm03 bash[23382]: audit 2026-03-10T08:01:19.623543+0000 mon.a (mon.0) 3705 : audit [INF] from='client.? 192.168.123.100:0/2195324949' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pool2": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T08:01:21.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:20 vm03 bash[23382]: audit 2026-03-10T08:01:19.623543+0000 mon.a (mon.0) 3705 : audit [INF] from='client.? 192.168.123.100:0/2195324949' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "pool2": "a16f944b-49df-4ed0-bee4-6bfebe190ca8", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T08:01:21.376 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 08:01:21 vm00 bash[20971]: ::ffff:192.168.123.103 - - [10/Mar/2026:08:01:21] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:01:21.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:21 vm00 bash[28005]: cluster 2026-03-10T08:01:21.053361+0000 mgr.y (mgr.24407) 1336 : cluster [DBG] pgmap v1779: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:21.875 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:21 vm00 bash[28005]: cluster 2026-03-10T08:01:21.053361+0000 mgr.y (mgr.24407) 1336 : cluster [DBG] pgmap v1779: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:21.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:21 vm00 bash[20701]: cluster 2026-03-10T08:01:21.053361+0000 mgr.y (mgr.24407) 1336 : cluster [DBG] pgmap v1779: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:21.876 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:21 vm00 bash[20701]: cluster 2026-03-10T08:01:21.053361+0000 mgr.y (mgr.24407) 1336 : cluster [DBG] pgmap v1779: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:22.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:21 vm03 bash[23382]: cluster 2026-03-10T08:01:21.053361+0000 mgr.y (mgr.24407) 1336 : cluster [DBG] pgmap v1779: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:22.012 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:21 vm03 bash[23382]: cluster 2026-03-10T08:01:21.053361+0000 mgr.y (mgr.24407) 1336 : cluster [DBG] pgmap v1779: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:01:24.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:24 vm00 bash[28005]: cluster 2026-03-10T08:01:23.053592+0000 
mgr.y (mgr.24407) 1337 : cluster [DBG] pgmap v1780: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T08:01:24.376 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:24 vm00 bash[28005]: cluster 2026-03-10T08:01:23.053592+0000 mgr.y (mgr.24407) 1337 : cluster [DBG] pgmap v1780: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T08:01:24.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:24 vm00 bash[20701]: cluster 2026-03-10T08:01:23.053592+0000 mgr.y (mgr.24407) 1337 : cluster [DBG] pgmap v1780: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T08:01:24.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:24 vm00 bash[20701]: cluster 2026-03-10T08:01:23.053592+0000 mgr.y (mgr.24407) 1337 : cluster [DBG] pgmap v1780: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T08:01:24.493 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:24 vm03 bash[23382]: cluster 2026-03-10T08:01:23.053592+0000 mgr.y (mgr.24407) 1337 : cluster [DBG] pgmap v1780: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T08:01:24.493 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:24 vm03 bash[23382]: cluster 2026-03-10T08:01:23.053592+0000 mgr.y (mgr.24407) 1337 : cluster [DBG] pgmap v1780: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T08:01:24.761 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:01:24 vm03 bash[49156]: debug there is no tcmu-runner data available 2026-03-10T08:01:24.825 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/mon.c/config 2026-03-10T08:01:24.989 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T08:01:24.995+0000 7f5622a63640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-10T08:01:24.989 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T08:01:24.995+0000 7f5622a63640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-10T08:01:24.989 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T08:01:24.995+0000 7f5622a63640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-10T08:01:24.989 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T08:01:24.995+0000 7f5622a63640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-10T08:01:24.989 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T08:01:24.995+0000 7f5622a63640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-10T08:01:24.989 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T08:01:24.995+0000 7f5622a63640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-10T08:01:24.989 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T08:01:24.995+0000 7f5622a63640 -1 monclient: keyring not found 2026-03-10T08:01:24.989 INFO:teuthology.orchestra.run.vm00.stderr:[errno 21] error connecting to the cluster 2026-03-10T08:01:25.030 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T08:01:25.030 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 
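The workunit itself finished cleanly (the "OK" from rados/test.sh at 08:01:19), and the cephadm task is now tearing the cluster down. The "ceph mgr module disable cephadm" step above fails with exit status 1 because teardown had already removed /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring at 08:01:20; "cephadm shell" then falls back to the default /etc/ceph/ceph.keyring path, which on this host is a directory, hence the repeated EISDIR (errno 21), "keyring not found", and the failed cluster connection. The failure is tolerated and teardown continues. A minimal sketch of the same step as it would succeed, i.e. run before the /etc/ceph files are removed; the image tag and fsid are the ones from this job, everything else is unchanged from the logged command:

    # Disable the cephadm mgr module so the orchestrator stops managing
    # daemons while they are being shut down (sketch; must run while the
    # admin keyring still exists, otherwise it fails exactly as logged above).
    sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 -- ceph mgr module disable cephadm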
2026-03-10T08:01:25.030 DEBUG:teuthology.orchestra.run.vm00:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T08:01:25.033 DEBUG:teuthology.orchestra.run.vm03:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T08:01:25.036 INFO:tasks.cephadm:Stopping all daemons... 2026-03-10T08:01:25.036 INFO:tasks.cephadm.mon.a:Stopping mon.a... 2026-03-10T08:01:25.036 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mon.a 2026-03-10T08:01:25.106 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:25 vm00 systemd[1]: Stopping Ceph mon.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953... 2026-03-10T08:01:25.302 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:25 vm00 bash[28005]: audit 2026-03-10T08:01:24.502902+0000 mgr.y (mgr.24407) 1338 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:01:25.303 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:25 vm00 bash[28005]: audit 2026-03-10T08:01:24.502902+0000 mgr.y (mgr.24407) 1338 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:01:25.303 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:25 vm00 bash[20701]: audit 2026-03-10T08:01:24.502902+0000 mgr.y (mgr.24407) 1338 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:01:25.303 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:25 vm00 bash[20701]: audit 2026-03-10T08:01:24.502902+0000 mgr.y (mgr.24407) 1338 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:01:25.303 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:25 vm00 bash[20701]: debug 2026-03-10T08:01:25.135+0000 7fad00463640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-10T08:01:25.303 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:25 vm00 bash[20701]: debug 2026-03-10T08:01:25.135+0000 7fad00463640 -1 mon.a@0(leader) e3 *** Got Signal Terminated *** 2026-03-10T08:01:25.356 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mon.a.service' 2026-03-10T08:01:25.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:25 vm00 bash[132129]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953-mon-a 2026-03-10T08:01:25.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:25 vm00 systemd[1]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mon.a.service: Deactivated successfully. 2026-03-10T08:01:25.376 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 08:01:25 vm00 systemd[1]: Stopped Ceph mon.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953. 2026-03-10T08:01:25.386 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:01:25.386 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-10T08:01:25.386 INFO:tasks.cephadm.mon.b:Stopping mon.c... 
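Every daemon in this teardown is stopped with the same two commands, first visible here for mon.a: stop the per-fsid systemd unit (which delivers SIGTERM into the container, producing the "Got Signal Terminated" lines), then kill the "journalctl -f" follower that teuthology had attached to stream that unit's output into this log. A generalized sketch of the pattern; the fsid is this job's, the daemon name is a placeholder:

    # Sketch: stop one cephadm-managed daemon and its teuthology log follower.
    fsid=534d9c8a-1c51-11f1-ac87-d1fb9a119953
    daemon=mon.a    # placeholder: any <type>.<id> from this cluster
    sudo systemctl stop "ceph-${fsid}@${daemon}"
    sudo pkill -f "journalctl -f -n 0 -u ceph-${fsid}@${daemon}.service"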
2026-03-10T08:01:25.387 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mon.c 2026-03-10T08:01:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:25 vm03 bash[23382]: audit 2026-03-10T08:01:24.502902+0000 mgr.y (mgr.24407) 1338 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:01:25.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:25 vm03 bash[23382]: audit 2026-03-10T08:01:24.502902+0000 mgr.y (mgr.24407) 1338 : audit [DBG] from='client.24373 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:01:25.612 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:25 vm00 systemd[1]: Stopping Ceph mon.c for 534d9c8a-1c51-11f1-ac87-d1fb9a119953... 2026-03-10T08:01:25.612 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:25 vm00 bash[28005]: debug 2026-03-10T08:01:25.475+0000 7f3115167640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-10T08:01:25.612 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 08:01:25 vm00 bash[28005]: debug 2026-03-10T08:01:25.475+0000 7f3115167640 -1 mon.c@2(peon) e3 *** Got Signal Terminated *** 2026-03-10T08:01:25.612 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 08:01:25 vm00 bash[20971]: [10/Mar/2026:08:01:25] ENGINE Bus STOPPING 2026-03-10T08:01:25.612 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 08:01:25 vm00 bash[20971]: [10/Mar/2026:08:01:25] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T08:01:25.612 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 08:01:25 vm00 bash[20971]: [10/Mar/2026:08:01:25] ENGINE Bus STOPPED 2026-03-10T08:01:25.612 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 08:01:25 vm00 bash[20971]: [10/Mar/2026:08:01:25] ENGINE Bus STARTING 2026-03-10T08:01:25.647 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mon.c.service' 2026-03-10T08:01:25.658 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:01:25.658 INFO:tasks.cephadm.mon.b:Stopped mon.c 2026-03-10T08:01:25.658 INFO:tasks.cephadm.mon.b:Stopping mon.b... 2026-03-10T08:01:25.658 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mon.b 2026-03-10T08:01:25.838 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 08:01:25 vm00 bash[20971]: [10/Mar/2026:08:01:25] ENGINE Serving on http://:::9283 2026-03-10T08:01:25.838 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 08:01:25 vm00 bash[20971]: [10/Mar/2026:08:01:25] ENGINE Bus STARTED 2026-03-10T08:01:25.918 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:25 vm03 systemd[1]: Stopping Ceph mon.b for 534d9c8a-1c51-11f1-ac87-d1fb9a119953... 
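With mon.a and mon.c down and the stop of mon.b issued, the three-monitor cluster has lost quorum (two of three are required), which is why everything from here on is driven through systemd rather than the ceph CLI; the daemons are simply stopped in role order: monitors, then managers, then OSDs. The mgr.y "ENGINE Bus" stop/start above is its embedded CherryPy HTTP server cycling shortly before mgr.y itself is stopped; the trigger is not visible in this excerpt. On a healthy cluster, quorum during such a shutdown could be watched with, for example:

    # Sketch: report current monitor quorum; this blocks once quorum is lost.
    sudo cephadm shell -- ceph quorum_status --format json-pretty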
2026-03-10T08:01:25.919 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:25 vm03 bash[23382]: debug 2026-03-10T08:01:25.713+0000 7f729db54640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-10T08:01:25.919 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 08:01:25 vm03 bash[23382]: debug 2026-03-10T08:01:25.713+0000 7f729db54640 -1 mon.b@1(peon) e3 *** Got Signal Terminated *** 2026-03-10T08:01:25.993 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mon.b.service' 2026-03-10T08:01:26.006 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:01:26.006 INFO:tasks.cephadm.mon.b:Stopped mon.b 2026-03-10T08:01:26.006 INFO:tasks.cephadm.mgr.y:Stopping mgr.y... 2026-03-10T08:01:26.006 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mgr.y 2026-03-10T08:01:26.092 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 08:01:26 vm00 systemd[1]: Stopping Ceph mgr.y for 534d9c8a-1c51-11f1-ac87-d1fb9a119953... 2026-03-10T08:01:26.092 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 08:01:26 vm00 bash[20971]: debug 2026-03-10T08:01:26.059+0000 7feceef5d640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mgr -n mgr.y -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T08:01:26.160 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mgr.y.service' 2026-03-10T08:01:26.170 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:01:26.170 INFO:tasks.cephadm.mgr.y:Stopped mgr.y 2026-03-10T08:01:26.170 INFO:tasks.cephadm.mgr.x:Stopping mgr.x... 2026-03-10T08:01:26.171 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mgr.x 2026-03-10T08:01:26.253 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 08:01:26 vm03 systemd[1]: Stopping Ceph mgr.x for 534d9c8a-1c51-11f1-ac87-d1fb9a119953... 2026-03-10T08:01:26.293 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@mgr.x.service' 2026-03-10T08:01:26.303 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:01:26.304 INFO:tasks.cephadm.mgr.x:Stopped mgr.x 2026-03-10T08:01:26.304 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-10T08:01:26.304 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.0 2026-03-10T08:01:26.351 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 08:01:26 vm00 systemd[1]: Stopping Ceph osd.0 for 534d9c8a-1c51-11f1-ac87-d1fb9a119953... 
2026-03-10T08:01:26.627 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 08:01:26 vm00 bash[30874]: debug 2026-03-10T08:01:26.355+0000 7f3e6cc77640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T08:01:26.627 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 08:01:26 vm00 bash[30874]: debug 2026-03-10T08:01:26.355+0000 7f3e6cc77640 -1 osd.0 788 *** Got signal Terminated *** 2026-03-10T08:01:26.627 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 08:01:26 vm00 bash[30874]: debug 2026-03-10T08:01:26.355+0000 7f3e6cc77640 -1 osd.0 788 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T08:01:31.695 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 08:01:31 vm00 bash[132394]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953-osd-0 2026-03-10T08:01:31.731 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.0.service' 2026-03-10T08:01:31.741 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:01:31.741 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-10T08:01:31.741 INFO:tasks.cephadm.osd.1:Stopping osd.1... 2026-03-10T08:01:31.741 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.1 2026-03-10T08:01:32.126 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 08:01:31 vm00 systemd[1]: Stopping Ceph osd.1 for 534d9c8a-1c51-11f1-ac87-d1fb9a119953... 2026-03-10T08:01:32.126 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 08:01:31 vm00 bash[36922]: debug 2026-03-10T08:01:31.831+0000 7fde4f045640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T08:01:32.126 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 08:01:31 vm00 bash[36922]: debug 2026-03-10T08:01:31.831+0000 7fde4f045640 -1 osd.1 788 *** Got signal Terminated *** 2026-03-10T08:01:32.126 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 08:01:31 vm00 bash[36922]: debug 2026-03-10T08:01:31.831+0000 7fde4f045640 -1 osd.1 788 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T08:01:37.126 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 08:01:36 vm00 bash[132575]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953-osd-1 2026-03-10T08:01:37.173 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.1.service' 2026-03-10T08:01:37.183 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:01:37.183 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-10T08:01:37.183 INFO:tasks.cephadm.osd.2:Stopping osd.2... 2026-03-10T08:01:37.183 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.2 2026-03-10T08:01:37.626 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 08:01:37 vm00 systemd[1]: Stopping Ceph osd.2 for 534d9c8a-1c51-11f1-ac87-d1fb9a119953... 
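The OSDs all exit through the fast-shutdown path ("Immediate shutdown (osd_fast_shutdown=true)"): on SIGTERM the OSD aborts immediately instead of draining in-flight ops, relying on BlueStore's transactional consistency to recover cleanly at the next start. The roughly five seconds between each "Immediate shutdown" line and the container-name line that follows appears to be the container runtime completing the stop rather than the OSD flushing anything. The behaviour is an ordinary config option; a sketch of inspecting or changing it on a healthy cluster:

    # Sketch: inspect the fast-shutdown setting, or switch to a graceful
    # (slower, draining) shutdown when debugging OSD stop behaviour.
    sudo cephadm shell -- ceph config get osd osd_fast_shutdown
    sudo cephadm shell -- ceph config set osd osd_fast_shutdown false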
2026-03-10T08:01:37.626 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 08:01:37 vm00 bash[42909]: debug 2026-03-10T08:01:37.275+0000 7f5002a36640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T08:01:37.626 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 08:01:37 vm00 bash[42909]: debug 2026-03-10T08:01:37.275+0000 7f5002a36640 -1 osd.2 788 *** Got signal Terminated *** 2026-03-10T08:01:37.626 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 08:01:37 vm00 bash[42909]: debug 2026-03-10T08:01:37.275+0000 7f5002a36640 -1 osd.2 788 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T08:01:39.262 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 08:01:38 vm03 bash[52014]: ts=2026-03-10T08:01:38.795Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nvmeof msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=nvmeof\": dial tcp 192.168.123.100:8765: connect: connection refused" 2026-03-10T08:01:39.262 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 08:01:38 vm03 bash[52014]: ts=2026-03-10T08:01:38.795Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=mgr-prometheus\": dial tcp 192.168.123.100:8765: connect: connection refused" 2026-03-10T08:01:39.262 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 08:01:38 vm03 bash[52014]: ts=2026-03-10T08:01:38.795Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=node msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=node-exporter\": dial tcp 192.168.123.100:8765: connect: connection refused" 2026-03-10T08:01:39.262 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 08:01:38 vm03 bash[52014]: ts=2026-03-10T08:01:38.795Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph-exporter msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=ceph-exporter\": dial tcp 192.168.123.100:8765: connect: connection refused" 2026-03-10T08:01:39.262 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 08:01:38 vm03 bash[52014]: ts=2026-03-10T08:01:38.795Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nfs msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=nfs\": dial tcp 192.168.123.100:8765: connect: connection refused" 2026-03-10T08:01:39.262 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 08:01:38 vm03 bash[52014]: ts=2026-03-10T08:01:38.795Z caller=refresh.go:90 level=error component="discovery manager notify" discovery=http config=config-0 msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=alertmanager\": dial tcp 192.168.123.100:8765: connect: connection refused" 2026-03-10T08:01:42.595 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 08:01:42 vm00 bash[132761]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953-osd-2 2026-03-10T08:01:42.640 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u 
ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.2.service' 2026-03-10T08:01:42.650 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:01:42.650 INFO:tasks.cephadm.osd.2:Stopped osd.2 2026-03-10T08:01:42.650 INFO:tasks.cephadm.osd.3:Stopping osd.3... 2026-03-10T08:01:42.650 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.3 2026-03-10T08:01:42.876 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 08:01:42 vm00 systemd[1]: Stopping Ceph osd.3 for 534d9c8a-1c51-11f1-ac87-d1fb9a119953... 2026-03-10T08:01:42.876 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 08:01:42 vm00 bash[49191]: debug 2026-03-10T08:01:42.743+0000 7ff9fa889640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T08:01:42.876 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 08:01:42 vm00 bash[49191]: debug 2026-03-10T08:01:42.743+0000 7ff9fa889640 -1 osd.3 788 *** Got signal Terminated *** 2026-03-10T08:01:42.876 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 08:01:42 vm00 bash[49191]: debug 2026-03-10T08:01:42.743+0000 7ff9fa889640 -1 osd.3 788 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T08:01:48.044 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 08:01:47 vm00 bash[132950]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953-osd-3 2026-03-10T08:01:48.084 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.3.service' 2026-03-10T08:01:48.094 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:01:48.094 INFO:tasks.cephadm.osd.3:Stopped osd.3 2026-03-10T08:01:48.094 INFO:tasks.cephadm.osd.4:Stopping osd.4... 2026-03-10T08:01:48.094 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.4 2026-03-10T08:01:48.512 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 08:01:48 vm03 systemd[1]: Stopping Ceph osd.4 for 534d9c8a-1c51-11f1-ac87-d1fb9a119953... 
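The Prometheus errors logged at 08:01:38 above are expected fallout of the teardown order: the targets it scrapes are discovered through cephadm's HTTP service-discovery endpoint, served by the active mgr on port 8765, and with both mgrs now stopped every refresh (nvmeof, ceph, node, ceph-exporter, nfs, and the alertmanager notify config) fails with "connection refused" until the prometheus.a container is itself torn down. While a mgr is up, the same endpoint can be probed directly; the URL is taken verbatim from the log:

    # Sketch: fetch the service-discovery payload Prometheus is asking for.
    curl -s 'http://192.168.123.100:8765/sd/prometheus/sd-config?service=mgr-prometheus'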
2026-03-10T08:01:48.512 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 08:01:48 vm03 bash[26632]: debug 2026-03-10T08:01:48.145+0000 7fad5e2d4640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T08:01:48.512 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 08:01:48 vm03 bash[26632]: debug 2026-03-10T08:01:48.145+0000 7fad5e2d4640 -1 osd.4 788 *** Got signal Terminated *** 2026-03-10T08:01:48.512 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 08:01:48 vm03 bash[26632]: debug 2026-03-10T08:01:48.145+0000 7fad5e2d4640 -1 osd.4 788 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T08:01:52.262 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:01:51 vm03 bash[38760]: debug 2026-03-10T08:01:51.801+0000 7f1e39075640 -1 osd.6 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:25.570583+0000 front 2026-03-10T08:01:25.570531+0000 (oldest deadline 2026-03-10T08:01:51.470167+0000) 2026-03-10T08:01:53.188 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:01:52 vm03 bash[38760]: debug 2026-03-10T08:01:52.801+0000 7f1e39075640 -1 osd.6 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:25.570583+0000 front 2026-03-10T08:01:25.570531+0000 (oldest deadline 2026-03-10T08:01:51.470167+0000) 2026-03-10T08:01:53.498 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 08:01:53 vm03 bash[57072]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953-osd-4 2026-03-10T08:01:53.536 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.4.service' 2026-03-10T08:01:53.547 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:01:53.547 INFO:tasks.cephadm.osd.4:Stopped osd.4 2026-03-10T08:01:53.547 INFO:tasks.cephadm.osd.5:Stopping osd.5... 2026-03-10T08:01:53.547 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.5 2026-03-10T08:01:53.762 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 08:01:53 vm03 systemd[1]: Stopping Ceph osd.5 for 534d9c8a-1c51-11f1-ac87-d1fb9a119953... 
2026-03-10T08:01:53.762 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 08:01:53 vm03 bash[32803]: debug 2026-03-10T08:01:53.641+0000 7f713ea0a640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T08:01:53.762 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 08:01:53 vm03 bash[32803]: debug 2026-03-10T08:01:53.641+0000 7f713ea0a640 -1 osd.5 788 *** Got signal Terminated *** 2026-03-10T08:01:53.762 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 08:01:53 vm03 bash[32803]: debug 2026-03-10T08:01:53.641+0000 7f713ea0a640 -1 osd.5 788 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T08:01:54.199 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:01:53 vm03 bash[38760]: debug 2026-03-10T08:01:53.801+0000 7f1e39075640 -1 osd.6 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:25.570583+0000 front 2026-03-10T08:01:25.570531+0000 (oldest deadline 2026-03-10T08:01:51.470167+0000) 2026-03-10T08:01:54.512 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 08:01:54 vm03 bash[32803]: debug 2026-03-10T08:01:54.205+0000 7f713a822640 -1 osd.5 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:27.388735+0000 front 2026-03-10T08:01:27.388983+0000 (oldest deadline 2026-03-10T08:01:53.288528+0000) 2026-03-10T08:01:55.173 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:01:54 vm03 bash[38760]: debug 2026-03-10T08:01:54.817+0000 7f1e39075640 -1 osd.6 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:25.570583+0000 front 2026-03-10T08:01:25.570531+0000 (oldest deadline 2026-03-10T08:01:51.470167+0000) 2026-03-10T08:01:55.511 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 08:01:55 vm03 bash[32803]: debug 2026-03-10T08:01:55.177+0000 7f713a822640 -1 osd.5 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:27.388735+0000 front 2026-03-10T08:01:27.388983+0000 (oldest deadline 2026-03-10T08:01:53.288528+0000) 2026-03-10T08:01:55.511 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:01:55 vm03 bash[44711]: debug 2026-03-10T08:01:55.421+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:29.195524+0000 front 2026-03-10T08:01:29.195636+0000 (oldest deadline 2026-03-10T08:01:54.495353+0000) 2026-03-10T08:01:56.213 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:01:55 vm03 bash[38760]: debug 2026-03-10T08:01:55.777+0000 7f1e39075640 -1 osd.6 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:25.570583+0000 front 2026-03-10T08:01:25.570531+0000 (oldest deadline 2026-03-10T08:01:51.470167+0000) 2026-03-10T08:01:56.511 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 08:01:56 vm03 bash[32803]: debug 2026-03-10T08:01:56.217+0000 7f713a822640 -1 osd.5 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:27.388735+0000 front 2026-03-10T08:01:27.388983+0000 (oldest deadline 2026-03-10T08:01:53.288528+0000) 2026-03-10T08:01:56.511 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:01:56 vm03 bash[44711]: debug 2026-03-10T08:01:56.377+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:29.195524+0000 front 2026-03-10T08:01:29.195636+0000 (oldest deadline 2026-03-10T08:01:54.495353+0000) 
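The heartbeat_check complaints from osd.5, osd.6 and osd.7 on vm03 are likewise teardown noise: their peers osd.0, osd.1 and osd.2 on vm00 were stopped first, and with no monitor quorum left there is nothing to mark the dead OSDs down, so each survivor re-reports the missed back/front heartbeats about once a second until it is stopped itself. When triaging a log like this, a quick tally helps separate this shutdown noise from a genuine network fault; a sketch, assuming the usual teuthology.log file name:

    # Sketch: count heartbeat complaints per (reporting osd, silent peer) pair.
    grep -oE 'osd\.[0-9]+ [0-9]+ heartbeat_check: no reply from [0-9.]+:[0-9]+ osd\.[0-9]+' teuthology.log \
        | awk '{print $1, $NF}' | sort | uniq -c | sort -rn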
2026-03-10T08:01:57.257 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:01:56 vm03 bash[38760]: debug 2026-03-10T08:01:56.813+0000 7f1e39075640 -1 osd.6 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:25.570583+0000 front 2026-03-10T08:01:25.570531+0000 (oldest deadline 2026-03-10T08:01:51.470167+0000) 2026-03-10T08:01:57.511 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:01:57 vm03 bash[44711]: debug 2026-03-10T08:01:57.413+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:29.195524+0000 front 2026-03-10T08:01:29.195636+0000 (oldest deadline 2026-03-10T08:01:54.495353+0000) 2026-03-10T08:01:57.512 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 08:01:57 vm03 bash[32803]: debug 2026-03-10T08:01:57.261+0000 7f713a822640 -1 osd.5 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:27.388735+0000 front 2026-03-10T08:01:27.388983+0000 (oldest deadline 2026-03-10T08:01:53.288528+0000) 2026-03-10T08:01:58.262 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:01:57 vm03 bash[38760]: debug 2026-03-10T08:01:57.785+0000 7f1e39075640 -1 osd.6 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:25.570583+0000 front 2026-03-10T08:01:25.570531+0000 (oldest deadline 2026-03-10T08:01:51.470167+0000) 2026-03-10T08:01:58.262 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:01:57 vm03 bash[38760]: debug 2026-03-10T08:01:57.785+0000 7f1e39075640 -1 osd.6 788 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-10T08:01:35.570883+0000 front 2026-03-10T08:01:35.570778+0000 (oldest deadline 2026-03-10T08:01:57.270706+0000) 2026-03-10T08:01:58.686 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:01:58 vm03 bash[44711]: debug 2026-03-10T08:01:58.437+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:29.195524+0000 front 2026-03-10T08:01:29.195636+0000 (oldest deadline 2026-03-10T08:01:54.495353+0000) 2026-03-10T08:01:58.686 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 08:01:58 vm03 bash[32803]: debug 2026-03-10T08:01:58.269+0000 7f713a822640 -1 osd.5 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:27.388735+0000 front 2026-03-10T08:01:27.388983+0000 (oldest deadline 2026-03-10T08:01:53.288528+0000) 2026-03-10T08:01:58.987 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 08:01:58 vm03 bash[57255]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953-osd-5 2026-03-10T08:01:58.987 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:01:58 vm03 bash[38760]: debug 2026-03-10T08:01:58.809+0000 7f1e39075640 -1 osd.6 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:25.570583+0000 front 2026-03-10T08:01:25.570531+0000 (oldest deadline 2026-03-10T08:01:51.470167+0000) 2026-03-10T08:01:58.987 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:01:58 vm03 bash[38760]: debug 2026-03-10T08:01:58.809+0000 7f1e39075640 -1 osd.6 788 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-10T08:01:35.570883+0000 front 2026-03-10T08:01:35.570778+0000 (oldest deadline 2026-03-10T08:01:57.270706+0000) 2026-03-10T08:01:59.013 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.5.service' 2026-03-10T08:01:59.023 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:01:59.024 
INFO:tasks.cephadm.osd.5:Stopped osd.5
2026-03-10T08:01:59.024 INFO:tasks.cephadm.osd.6:Stopping osd.6...
2026-03-10T08:01:59.024 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.6
2026-03-10T08:01:59.261 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:01:59 vm03 systemd[1]: Stopping Ceph osd.6 for 534d9c8a-1c51-11f1-ac87-d1fb9a119953...
2026-03-10T08:01:59.262 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:01:59 vm03 bash[38760]: debug 2026-03-10T08:01:59.113+0000 7f1e3ca5c640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T08:01:59.262 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:01:59 vm03 bash[38760]: debug 2026-03-10T08:01:59.113+0000 7f1e3ca5c640 -1 osd.6 788 *** Got signal Terminated ***
2026-03-10T08:01:59.262 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:01:59 vm03 bash[38760]: debug 2026-03-10T08:01:59.113+0000 7f1e3ca5c640 -1 osd.6 788 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T08:01:59.761 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:01:59 vm03 bash[44711]: debug 2026-03-10T08:01:59.417+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:29.195524+0000 front 2026-03-10T08:01:29.195636+0000 (oldest deadline 2026-03-10T08:01:54.495353+0000)
2026-03-10T08:02:00.262 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:01:59 vm03 bash[38760]: debug 2026-03-10T08:01:59.821+0000 7f1e39075640 -1 osd.6 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:25.570583+0000 front 2026-03-10T08:01:25.570531+0000 (oldest deadline 2026-03-10T08:01:51.470167+0000)
2026-03-10T08:02:00.262 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:01:59 vm03 bash[38760]: debug 2026-03-10T08:01:59.821+0000 7f1e39075640 -1 osd.6 788 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-10T08:01:35.570883+0000 front 2026-03-10T08:01:35.570778+0000 (oldest deadline 2026-03-10T08:01:57.270706+0000)
2026-03-10T08:02:00.762 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:00 vm03 bash[44711]: debug 2026-03-10T08:02:00.373+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:29.195524+0000 front 2026-03-10T08:01:29.195636+0000 (oldest deadline 2026-03-10T08:01:54.495353+0000)
2026-03-10T08:02:00.762 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:00 vm03 bash[44711]: debug 2026-03-10T08:02:00.373+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-10T08:01:34.495711+0000 front 2026-03-10T08:01:34.495926+0000 (oldest deadline 2026-03-10T08:01:59.795606+0000)
2026-03-10T08:02:01.261 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:02:00 vm03 bash[38760]: debug 2026-03-10T08:02:00.813+0000 7f1e39075640 -1 osd.6 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:25.570583+0000 front 2026-03-10T08:01:25.570531+0000 (oldest deadline 2026-03-10T08:01:51.470167+0000)
2026-03-10T08:02:01.262 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:02:00 vm03 bash[38760]: debug 2026-03-10T08:02:00.813+0000 7f1e39075640 -1 osd.6 788 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-10T08:01:35.570883+0000 front 2026-03-10T08:01:35.570778+0000 (oldest deadline 2026-03-10T08:01:57.270706+0000)
2026-03-10T08:02:01.762 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:01 vm03 bash[44711]: debug 2026-03-10T08:02:01.365+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:29.195524+0000 front 2026-03-10T08:01:29.195636+0000 (oldest deadline 2026-03-10T08:01:54.495353+0000)
2026-03-10T08:02:01.762 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:01 vm03 bash[44711]: debug 2026-03-10T08:02:01.365+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-10T08:01:34.495711+0000 front 2026-03-10T08:01:34.495926+0000 (oldest deadline 2026-03-10T08:01:59.795606+0000)
2026-03-10T08:02:02.261 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:02:01 vm03 bash[38760]: debug 2026-03-10T08:02:01.861+0000 7f1e39075640 -1 osd.6 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:25.570583+0000 front 2026-03-10T08:01:25.570531+0000 (oldest deadline 2026-03-10T08:01:51.470167+0000)
2026-03-10T08:02:02.262 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:02:01 vm03 bash[38760]: debug 2026-03-10T08:02:01.861+0000 7f1e39075640 -1 osd.6 788 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-10T08:01:35.570883+0000 front 2026-03-10T08:01:35.570778+0000 (oldest deadline 2026-03-10T08:01:57.270706+0000)
2026-03-10T08:02:02.761 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:02 vm03 bash[44711]: debug 2026-03-10T08:02:02.333+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:29.195524+0000 front 2026-03-10T08:01:29.195636+0000 (oldest deadline 2026-03-10T08:01:54.495353+0000)
2026-03-10T08:02:02.761 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:02 vm03 bash[44711]: debug 2026-03-10T08:02:02.333+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-10T08:01:34.495711+0000 front 2026-03-10T08:01:34.495926+0000 (oldest deadline 2026-03-10T08:01:59.795606+0000)
2026-03-10T08:02:03.261 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:02:02 vm03 bash[38760]: debug 2026-03-10T08:02:02.909+0000 7f1e39075640 -1 osd.6 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:25.570583+0000 front 2026-03-10T08:01:25.570531+0000 (oldest deadline 2026-03-10T08:01:51.470167+0000)
2026-03-10T08:02:03.262 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:02:02 vm03 bash[38760]: debug 2026-03-10T08:02:02.909+0000 7f1e39075640 -1 osd.6 788 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-10T08:01:35.570883+0000 front 2026-03-10T08:01:35.570778+0000 (oldest deadline 2026-03-10T08:01:57.270706+0000)
2026-03-10T08:02:03.761 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:03 vm03 bash[44711]: debug 2026-03-10T08:02:03.377+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:29.195524+0000 front 2026-03-10T08:01:29.195636+0000 (oldest deadline 2026-03-10T08:01:54.495353+0000)
2026-03-10T08:02:03.762 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:03 vm03 bash[44711]: debug 2026-03-10T08:02:03.377+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-10T08:01:34.495711+0000 front 2026-03-10T08:01:34.495926+0000 (oldest deadline 2026-03-10T08:01:59.795606+0000)
2026-03-10T08:02:04.243 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:02:03 vm03 bash[38760]: debug 2026-03-10T08:02:03.953+0000 7f1e39075640 -1 osd.6 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:25.570583+0000 front 2026-03-10T08:01:25.570531+0000 (oldest deadline 2026-03-10T08:01:51.470167+0000)
2026-03-10T08:02:04.243 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:02:03 vm03 bash[38760]: debug 2026-03-10T08:02:03.953+0000 7f1e39075640 -1 osd.6 788 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-10T08:01:35.570883+0000 front 2026-03-10T08:01:35.570778+0000 (oldest deadline 2026-03-10T08:01:57.270706+0000)
2026-03-10T08:02:04.243 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:02:03 vm03 bash[38760]: debug 2026-03-10T08:02:03.953+0000 7f1e39075640 -1 osd.6 788 heartbeat_check: no reply from 192.168.123.100:6822 osd.2 since back 2026-03-10T08:01:37.271157+0000 front 2026-03-10T08:01:37.271002+0000 (oldest deadline 2026-03-10T08:02:03.170952+0000)
2026-03-10T08:02:04.243 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 08:02:04 vm03 bash[57434]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953-osd-6
2026-03-10T08:02:04.472 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.6.service'
2026-03-10T08:02:04.482 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T08:02:04.482 INFO:tasks.cephadm.osd.6:Stopped osd.6
2026-03-10T08:02:04.482 INFO:tasks.cephadm.osd.7:Stopping osd.7...
2026-03-10T08:02:04.482 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.7
2026-03-10T08:02:04.511 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:04 vm03 bash[44711]: debug 2026-03-10T08:02:04.365+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:29.195524+0000 front 2026-03-10T08:01:29.195636+0000 (oldest deadline 2026-03-10T08:01:54.495353+0000)
2026-03-10T08:02:04.511 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:04 vm03 bash[44711]: debug 2026-03-10T08:02:04.365+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-10T08:01:34.495711+0000 front 2026-03-10T08:01:34.495926+0000 (oldest deadline 2026-03-10T08:01:59.795606+0000)
2026-03-10T08:02:04.761 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:04 vm03 systemd[1]: Stopping Ceph osd.7 for 534d9c8a-1c51-11f1-ac87-d1fb9a119953...
2026-03-10T08:02:04.762 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:04 vm03 bash[44711]: debug 2026-03-10T08:02:04.561+0000 7ff417429640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T08:02:04.762 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:04 vm03 bash[44711]: debug 2026-03-10T08:02:04.561+0000 7ff417429640 -1 osd.7 788 *** Got signal Terminated ***
2026-03-10T08:02:04.762 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:04 vm03 bash[44711]: debug 2026-03-10T08:02:04.561+0000 7ff417429640 -1 osd.7 788 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T08:02:05.762 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:05 vm03 bash[44711]: debug 2026-03-10T08:02:05.329+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:29.195524+0000 front 2026-03-10T08:01:29.195636+0000 (oldest deadline 2026-03-10T08:01:54.495353+0000)
2026-03-10T08:02:05.762 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:05 vm03 bash[44711]: debug 2026-03-10T08:02:05.329+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-10T08:01:34.495711+0000 front 2026-03-10T08:01:34.495926+0000 (oldest deadline 2026-03-10T08:01:59.795606+0000)
2026-03-10T08:02:05.762 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:05 vm03 bash[44711]: debug 2026-03-10T08:02:05.329+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6822 osd.2 since back 2026-03-10T08:01:39.796185+0000 front 2026-03-10T08:01:39.796242+0000 (oldest deadline 2026-03-10T08:02:04.495887+0000)
2026-03-10T08:02:06.761 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:06 vm03 bash[44711]: debug 2026-03-10T08:02:06.333+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:29.195524+0000 front 2026-03-10T08:01:29.195636+0000 (oldest deadline 2026-03-10T08:01:54.495353+0000)
2026-03-10T08:02:06.761 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:06 vm03 bash[44711]: debug 2026-03-10T08:02:06.333+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-10T08:01:34.495711+0000 front 2026-03-10T08:01:34.495926+0000 (oldest deadline 2026-03-10T08:01:59.795606+0000)
2026-03-10T08:02:06.761 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:06 vm03 bash[44711]: debug 2026-03-10T08:02:06.333+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6822 osd.2 since back 2026-03-10T08:01:39.796185+0000 front 2026-03-10T08:01:39.796242+0000 (oldest deadline 2026-03-10T08:02:04.495887+0000)
2026-03-10T08:02:07.761 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:07 vm03 bash[44711]: debug 2026-03-10T08:02:07.377+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:29.195524+0000 front 2026-03-10T08:01:29.195636+0000 (oldest deadline 2026-03-10T08:01:54.495353+0000)
2026-03-10T08:02:07.761 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:07 vm03 bash[44711]: debug 2026-03-10T08:02:07.377+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-10T08:01:34.495711+0000 front 2026-03-10T08:01:34.495926+0000 (oldest deadline 2026-03-10T08:01:59.795606+0000)
2026-03-10T08:02:07.762 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:07 vm03 bash[44711]: debug 2026-03-10T08:02:07.377+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6822 osd.2 since back 2026-03-10T08:01:39.796185+0000 front 2026-03-10T08:01:39.796242+0000 (oldest deadline 2026-03-10T08:02:04.495887+0000)
2026-03-10T08:02:08.761 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:08 vm03 bash[44711]: debug 2026-03-10T08:02:08.421+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:29.195524+0000 front 2026-03-10T08:01:29.195636+0000 (oldest deadline 2026-03-10T08:01:54.495353+0000)
2026-03-10T08:02:08.762 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:08 vm03 bash[44711]: debug 2026-03-10T08:02:08.421+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-10T08:01:34.495711+0000 front 2026-03-10T08:01:34.495926+0000 (oldest deadline 2026-03-10T08:01:59.795606+0000)
2026-03-10T08:02:08.762 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:08 vm03 bash[44711]: debug 2026-03-10T08:02:08.421+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6822 osd.2 since back 2026-03-10T08:01:39.796185+0000 front 2026-03-10T08:01:39.796242+0000 (oldest deadline 2026-03-10T08:02:04.495887+0000)
2026-03-10T08:02:09.686 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:09 vm03 bash[44711]: debug 2026-03-10T08:02:09.401+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-10T08:01:29.195524+0000 front 2026-03-10T08:01:29.195636+0000 (oldest deadline 2026-03-10T08:01:54.495353+0000)
2026-03-10T08:02:09.686 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:09 vm03 bash[44711]: debug 2026-03-10T08:02:09.401+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-10T08:01:34.495711+0000 front 2026-03-10T08:01:34.495926+0000 (oldest deadline 2026-03-10T08:01:59.795606+0000)
2026-03-10T08:02:09.686 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:09 vm03 bash[44711]: debug 2026-03-10T08:02:09.401+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6822 osd.2 since back 2026-03-10T08:01:39.796185+0000 front 2026-03-10T08:01:39.796242+0000 (oldest deadline 2026-03-10T08:02:04.495887+0000)
2026-03-10T08:02:09.686 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:09 vm03 bash[44711]: debug 2026-03-10T08:02:09.401+0000 7ff413241640 -1 osd.7 788 heartbeat_check: no reply from 192.168.123.100:6830 osd.3 since back 2026-03-10T08:01:44.498772+0000 front 2026-03-10T08:01:44.496485+0000 (oldest deadline 2026-03-10T08:02:09.196105+0000)
2026-03-10T08:02:09.686 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 08:02:09 vm03 bash[57622]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953-osd-7
2026-03-10T08:02:09.908 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@osd.7.service'
2026-03-10T08:02:09.918 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T08:02:09.918 INFO:tasks.cephadm.osd.7:Stopped osd.7
2026-03-10T08:02:09.918 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopping rgw.foo.a...
2026-03-10T08:02:09.918 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@rgw.foo.a
2026-03-10T08:02:10.376 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 10 08:02:09 vm00 systemd[1]: Stopping Ceph rgw.foo.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953...
2026-03-10T08:02:10.376 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 10 08:02:09 vm00 bash[53569]: debug 2026-03-10T08:02:09.967+0000 7fb643b28640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/radosgw -n client.rgw.foo.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T08:02:10.376 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 10 08:02:09 vm00 bash[53569]: debug 2026-03-10T08:02:09.967+0000 7fb647397980 -1 shutting down
2026-03-10T08:02:20.050 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@rgw.foo.a.service'
2026-03-10T08:02:20.060 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T08:02:20.060 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopped rgw.foo.a
2026-03-10T08:02:20.060 INFO:tasks.cephadm.prometheus.a:Stopping prometheus.a...
2026-03-10T08:02:20.060 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@prometheus.a
2026-03-10T08:02:20.155 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@prometheus.a.service'
2026-03-10T08:02:20.165 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T08:02:20.165 INFO:tasks.cephadm.prometheus.a:Stopped prometheus.a
2026-03-10T08:02:20.165 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm rm-cluster --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 --force --keep-logs
2026-03-10T08:02:20.248 INFO:teuthology.orchestra.run.vm00.stdout:Deleting cluster with fsid: 534d9c8a-1c51-11f1-ac87-d1fb9a119953
2026-03-10T08:02:25.062 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 08:02:24 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:25.063 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 08:02:24 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:25.334 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 08:02:25 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:25.334 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 08:02:25 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:25.334 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 08:02:25 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:25.334 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 08:02:25 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:25.590 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 08:02:25 vm00 systemd[1]: Stopping Ceph alertmanager.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953...
2026-03-10T08:02:25.590 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 08:02:25 vm00 bash[56723]: ts=2026-03-10T08:02:25.423Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
2026-03-10T08:02:25.591 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 08:02:25 vm00 bash[133372]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953-alertmanager-a
2026-03-10T08:02:25.591 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 08:02:25 vm00 systemd[1]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@alertmanager.a.service: Deactivated successfully.
2026-03-10T08:02:25.591 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 08:02:25 vm00 systemd[1]: Stopped Ceph alertmanager.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953.
2026-03-10T08:02:25.876 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 08:02:25 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:25.876 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 08:02:25 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:25.876 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 08:02:25 vm00 systemd[1]: Stopping Ceph node-exporter.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953...
2026-03-10T08:02:25.876 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 08:02:25 vm00 bash[133495]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953-node-exporter-a
2026-03-10T08:02:25.876 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 08:02:25 vm00 systemd[1]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@node-exporter.a.service: Main process exited, code=exited, status=143/n/a
2026-03-10T08:02:25.876 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 08:02:25 vm00 systemd[1]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@node-exporter.a.service: Failed with result 'exit-code'.
2026-03-10T08:02:25.876 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 08:02:25 vm00 systemd[1]: Stopped Ceph node-exporter.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953.
2026-03-10T08:02:26.152 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 08:02:25 vm00 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:27.487 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm rm-cluster --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 --force --keep-logs
2026-03-10T08:02:27.581 INFO:teuthology.orchestra.run.vm03.stdout:Deleting cluster with fsid: 534d9c8a-1c51-11f1-ac87-d1fb9a119953
2026-03-10T08:02:32.377 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 08:02:32 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:32.377 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:02:32 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:32.378 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 08:02:32 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:32.633 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:02:32 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:32.633 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 08:02:32 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:32.633 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 08:02:32 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:32.887 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:02:32 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:32.887 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 08:02:32 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:32.887 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 08:02:32 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:33.262 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:02:33 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:33.262 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 08:02:33 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:33.262 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 08:02:33 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:33.762 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:02:33 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:33.762 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:02:33 vm03 systemd[1]: Stopping Ceph iscsi.iscsi.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953...
2026-03-10T08:02:33.762 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:02:33 vm03 bash[49156]: debug Shutdown received
2026-03-10T08:02:33.762 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 08:02:33 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:33.762 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 08:02:33 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:43.649 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:02:43 vm03 bash[58113]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953-iscsi-iscsi-a
2026-03-10T08:02:43.649 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:02:43 vm03 systemd[1]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@iscsi.iscsi.a.service: Main process exited, code=exited, status=137/n/a
2026-03-10T08:02:43.649 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:02:43 vm03 systemd[1]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@iscsi.iscsi.a.service: Failed with result 'exit-code'.
2026-03-10T08:02:43.649 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:02:43 vm03 systemd[1]: Stopped Ceph iscsi.iscsi.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953.
2026-03-10T08:02:43.920 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:02:43 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:43.920 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:02:43 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:43.920 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:02:43 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:43.920 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:02:43 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:43.920 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 08:02:43 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:43.920 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 08:02:43 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:43.920 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 08:02:43 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:44.171 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 08:02:43 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:44.171 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 08:02:43 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:44.171 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 08:02:44 vm03 systemd[1]: Stopping Ceph grafana.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953...
2026-03-10T08:02:44.171 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 08:02:44 vm03 bash[51371]: logger=server t=2026-03-10T08:02:44.078254832Z level=info msg="Shutdown started" reason="System signal: terminated"
2026-03-10T08:02:44.171 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 08:02:44 vm03 bash[51371]: logger=tracing t=2026-03-10T08:02:44.078492998Z level=info msg="Closing tracing"
2026-03-10T08:02:44.171 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 08:02:44 vm03 bash[51371]: logger=ticker t=2026-03-10T08:02:44.078898027Z level=info msg=stopped last_tick=2026-03-10T08:02:40Z
2026-03-10T08:02:44.171 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 08:02:44 vm03 bash[51371]: logger=grafana-apiserver t=2026-03-10T08:02:44.079202739Z level=info msg="StorageObjectCountTracker pruner is exiting"
2026-03-10T08:02:44.171 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 08:02:44 vm03 bash[58281]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953-grafana-a
2026-03-10T08:02:44.437 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 08:02:44 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:44.437 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 08:02:44 vm03 systemd[1]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@grafana.a.service: Deactivated successfully.
2026-03-10T08:02:44.437 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 08:02:44 vm03 systemd[1]: Stopped Ceph grafana.a for 534d9c8a-1c51-11f1-ac87-d1fb9a119953.
2026-03-10T08:02:44.437 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 08:02:44 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:44.708 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 08:02:44 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:44.708 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 08:02:44 vm03 systemd[1]: Stopping Ceph node-exporter.b for 534d9c8a-1c51-11f1-ac87-d1fb9a119953...
2026-03-10T08:02:45.012 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 08:02:44 vm03 bash[58471]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953-node-exporter-b
2026-03-10T08:02:45.012 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 08:02:44 vm03 systemd[1]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@node-exporter.b.service: Main process exited, code=exited, status=143/n/a
2026-03-10T08:02:45.012 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 08:02:44 vm03 systemd[1]: ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@node-exporter.b.service: Failed with result 'exit-code'.
2026-03-10T08:02:45.012 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 08:02:44 vm03 systemd[1]: Stopped Ceph node-exporter.b for 534d9c8a-1c51-11f1-ac87-d1fb9a119953.
2026-03-10T08:02:45.326 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 08:02:45 vm03 systemd[1]: /etc/systemd/system/ceph-534d9c8a-1c51-11f1-ac87-d1fb9a119953@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T08:02:45.864 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T08:02:45.873 INFO:teuthology.orchestra.run.vm00.stderr:rm: cannot remove '/etc/ceph/ceph.client.admin.keyring': Is a directory
2026-03-10T08:02:45.874 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T08:02:45.874 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T08:02:45.881 INFO:tasks.cephadm:Archiving crash dumps...
2026-03-10T08:02:45.881 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/945/remote/vm00/crash
2026-03-10T08:02:45.881 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/crash -- .
2026-03-10T08:02:45.924 INFO:teuthology.orchestra.run.vm00.stderr:tar: /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/crash: Cannot open: No such file or directory
2026-03-10T08:02:45.924 INFO:teuthology.orchestra.run.vm00.stderr:tar: Error is not recoverable: exiting now
2026-03-10T08:02:45.925 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/945/remote/vm03/crash
2026-03-10T08:02:45.925 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/crash -- .
2026-03-10T08:02:45.932 INFO:teuthology.orchestra.run.vm03.stderr:tar: /var/lib/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/crash: Cannot open: No such file or directory
2026-03-10T08:02:45.933 INFO:teuthology.orchestra.run.vm03.stderr:tar: Error is not recoverable: exiting now
2026-03-10T08:02:45.933 INFO:tasks.cephadm:Checking cluster log for badness...
2026-03-10T08:02:45.933 DEBUG:teuthology.orchestra.run.vm00:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v 'reached quota' | egrep -v 'but it is still running' | egrep -v 'overall HEALTH_' | egrep -v '\(POOL_FULL\)' | egrep -v '\(SMALLER_PGP_NUM\)' | egrep -v '\(CACHE_POOL_NO_HIT_SET\)' | egrep -v '\(CACHE_POOL_NEAR_FULL\)' | egrep -v '\(POOL_APP_NOT_ENABLED\)' | egrep -v '\(PG_AVAILABILITY\)' | egrep -v '\(PG_DEGRADED\)' | egrep -v CEPHADM_STRAY_DAEMON | head -n 1
2026-03-10T08:02:45.986 INFO:tasks.cephadm:Compressing logs...
2026-03-10T08:02:45.986 DEBUG:teuthology.orchestra.run.vm00:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T08:02:46.028 DEBUG:teuthology.orchestra.run.vm03:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T08:02:46.036 INFO:teuthology.orchestra.run.vm03.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory
2026-03-10T08:02:46.036 INFO:teuthology.orchestra.run.vm00.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory
2026-03-10T08:02:46.037 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-10T08:02:46.038 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-10T08:02:46.038 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-mgr.x.log
2026-03-10T08:02:46.039 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph.log
2026-03-10T08:02:46.039 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.3.log/var/log/ceph/cephadm.log:
2026-03-10T08:02:46.043 INFO:teuthology.orchestra.run.vm00.stderr: 92.6% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-10T08:02:46.044 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph.log
2026-03-10T08:02:46.045 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-mgr.x.log: 93.1% -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-mgr.x.log.gz
2026-03-10T08:02:46.045 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-mon.b.log
2026-03-10T08:02:46.045 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph.log: 90.7% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-10T08:02:46.047 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.5.log
2026-03-10T08:02:46.048 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.3.log: /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph.log: gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-mon.c.log
2026-03-10T08:02:46.049 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-mon.b.log: 88.2% -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph.log.gz
2026-03-10T08:02:46.049 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.7.log
2026-03-10T08:02:46.053 INFO:teuthology.orchestra.run.vm00.stderr: 93.5%gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.1.log
2026-03-10T08:02:46.053 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-mon.c.log: -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph.log.gz
2026-03-10T08:02:46.055 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.5.log: gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.6.log
2026-03-10T08:02:46.063 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.7.log: gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph.audit.log
2026-03-10T08:02:46.064 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-mgr.y.log
2026-03-10T08:02:46.067 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.6.log: gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-volume.log
2026-03-10T08:02:46.076 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-mon.a.log
2026-03-10T08:02:46.076 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph.audit.log: 92.4%gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph.cephadm.log
2026-03-10T08:02:46.083 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.4.log
2026-03-10T08:02:46.084 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-mgr.y.log: gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.2.log
2026-03-10T08:02:46.087 INFO:teuthology.orchestra.run.vm03.stderr: -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph.audit.log.gz
2026-03-10T08:02:46.087 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph.cephadm.log: gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/tcmu-runner.log
2026-03-10T08:02:46.087 INFO:teuthology.orchestra.run.vm03.stderr: 80.0% -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph.cephadm.log.gz
2026-03-10T08:02:46.092 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-mon.a.log: gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph.audit.log
2026-03-10T08:02:46.095 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-volume.log
2026-03-10T08:02:46.099 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph.audit.log: gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-client.rgw.foo.a.log
2026-03-10T08:02:46.103 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.4.log: /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/tcmu-runner.log: 73.5% -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/tcmu-runner.log.gz
2026-03-10T08:02:46.108 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph.cephadm.log
2026-03-10T08:02:46.111 INFO:teuthology.orchestra.run.vm03.stderr: 95.8% -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-volume.log.gz
2026-03-10T08:02:46.116 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-client.rgw.foo.a.log: gzip -5 --verbose -- /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.0.log
2026-03-10T08:02:46.119 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph.cephadm.log: 88.6% -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph.cephadm.log.gz
2026-03-10T08:02:46.143 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.0.log: 95.3% -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph.audit.log.gz
2026-03-10T08:02:46.163 INFO:teuthology.orchestra.run.vm00.stderr: 95.8% -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-volume.log.gz
2026-03-10T08:02:46.243 INFO:teuthology.orchestra.run.vm00.stderr: 94.6% -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-client.rgw.foo.a.log.gz
2026-03-10T08:02:47.059 INFO:teuthology.orchestra.run.vm00.stderr: 91.4% -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-mgr.y.log.gz
2026-03-10T08:02:48.008 INFO:teuthology.orchestra.run.vm03.stderr: 92.8% -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-mon.b.log.gz
2026-03-10T08:02:48.655 INFO:teuthology.orchestra.run.vm00.stderr: 92.6% -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-mon.c.log.gz
2026-03-10T08:02:50.699 INFO:teuthology.orchestra.run.vm00.stderr: 92.0% -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-mon.a.log.gz
2026-03-10T08:02:56.651 INFO:teuthology.orchestra.run.vm03.stderr: 94.6% -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.6.log.gz
2026-03-10T08:02:57.116 INFO:teuthology.orchestra.run.vm00.stderr: 94.6% -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.2.log.gz
2026-03-10T08:02:57.227 INFO:teuthology.orchestra.run.vm03.stderr: 94.7% -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.4.log.gz
2026-03-10T08:02:57.440 INFO:teuthology.orchestra.run.vm03.stderr: 94.7% -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.5.log.gz
2026-03-10T08:02:57.481 INFO:teuthology.orchestra.run.vm03.stderr: 94.7% -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.7.log.gz
2026-03-10T08:02:57.483 INFO:teuthology.orchestra.run.vm03.stderr:
2026-03-10T08:02:57.483 INFO:teuthology.orchestra.run.vm03.stderr:real 0m11.452s
2026-03-10T08:02:57.483 INFO:teuthology.orchestra.run.vm03.stderr:user 0m21.376s
2026-03-10T08:02:57.483 INFO:teuthology.orchestra.run.vm03.stderr:sys 0m1.419s
2026-03-10T08:02:57.943 INFO:teuthology.orchestra.run.vm00.stderr: 94.7% -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.0.log.gz
2026-03-10T08:02:58.028 INFO:teuthology.orchestra.run.vm00.stderr: 94.6% -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.1.log.gz
2026-03-10T08:02:58.494 INFO:teuthology.orchestra.run.vm00.stderr: 94.6% -- replaced with /var/log/ceph/534d9c8a-1c51-11f1-ac87-d1fb9a119953/ceph-osd.3.log.gz
2026-03-10T08:02:58.495 INFO:teuthology.orchestra.run.vm00.stderr:
2026-03-10T08:02:58.495 INFO:teuthology.orchestra.run.vm00.stderr:real 0m12.465s
2026-03-10T08:02:58.495 INFO:teuthology.orchestra.run.vm00.stderr:user 0m22.728s
2026-03-10T08:02:58.495 INFO:teuthology.orchestra.run.vm00.stderr:sys 0m1.689s
2026-03-10T08:02:58.495 INFO:tasks.cephadm:Archiving logs...
2026-03-10T08:02:58.495 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/945/remote/vm00/log
2026-03-10T08:02:58.496 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T08:02:59.510 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/945/remote/vm03/log
2026-03-10T08:02:59.510 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T08:03:00.346 INFO:tasks.cephadm:Removing cluster...
2026-03-10T08:03:00.346 DEBUG:teuthology.orchestra.run.vm00:> sudo cephadm rm-cluster --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 --force
2026-03-10T08:03:00.438 INFO:teuthology.orchestra.run.vm00.stdout:Deleting cluster with fsid: 534d9c8a-1c51-11f1-ac87-d1fb9a119953
2026-03-10T08:03:01.764 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm rm-cluster --fsid 534d9c8a-1c51-11f1-ac87-d1fb9a119953 --force
2026-03-10T08:03:01.876 INFO:teuthology.orchestra.run.vm03.stdout:Deleting cluster with fsid: 534d9c8a-1c51-11f1-ac87-d1fb9a119953
2026-03-10T08:03:03.185 INFO:tasks.cephadm:Teardown complete
2026-03-10T08:03:03.185 DEBUG:teuthology.run_tasks:Unwinding manager install
2026-03-10T08:03:03.187 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer...
2026-03-10T08:03:03.188 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-10T08:03:03.189 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-10T08:03:03.204 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system.
2026-03-10T08:03:03.205 DEBUG:teuthology.orchestra.run.vm00:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2026-03-10T08:03:03.209 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system.
2026-03-10T08:03:03.209 DEBUG:teuthology.orchestra.run.vm03:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2026-03-10T08:03:03.278 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T08:03:03.284 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T08:03:03.438 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T08:03:03.439 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T08:03:03.508 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T08:03:03.508 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T08:03:03.552 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T08:03:03.552 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T08:03:03.553 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-10T08:03:03.553 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T08:03:03.562 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-10T08:03:03.562 INFO:teuthology.orchestra.run.vm03.stdout: ceph* 2026-03-10T08:03:03.764 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T08:03:03.764 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T08:03:03.764 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-10T08:03:03.764 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T08:03:03.815 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-10T08:03:03.817 INFO:teuthology.orchestra.run.vm00.stdout: ceph* 2026-03-10T08:03:03.995 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-10T08:03:03.995 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 47.1 kB disk space will be freed. 2026-03-10T08:03:03.998 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-10T08:03:03.998 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 47.1 kB disk space will be freed. 2026-03-10T08:03:04.038 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118605 files and directories currently installed.) 2026-03-10T08:03:04.038 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 
85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118605 files and directories currently installed.) 2026-03-10T08:03:04.039 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T08:03:04.041 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T08:03:05.283 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T08:03:05.329 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T08:03:05.398 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T08:03:05.436 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T08:03:05.615 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T08:03:05.615 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T08:03:05.690 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T08:03:05.691 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T08:03:05.929 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T08:03:05.929 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T08:03:05.930 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-10T08:03:05.930 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T08:03:05.947 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-10T08:03:05.948 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-cephadm* cephadm* 2026-03-10T08:03:06.011 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T08:03:06.012 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T08:03:06.013 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-10T08:03:06.013 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T08:03:06.030 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-10T08:03:06.031 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm* cephadm* 2026-03-10T08:03:06.165 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-10T08:03:06.165 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 1775 kB disk space will be freed. 2026-03-10T08:03:06.205 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 
2026-03-10T08:03:06.208 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:06.235 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-10T08:03:06.235 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 1775 kB disk space will be freed.
2026-03-10T08:03:06.242 INFO:teuthology.orchestra.run.vm00.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:06.276 INFO:teuthology.orchestra.run.vm00.stdout:Looking for files to backup/remove ...
2026-03-10T08:03:06.277 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118603 files and directories currently installed.)
2026-03-10T08:03:06.277 INFO:teuthology.orchestra.run.vm00.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*.
2026-03-10T08:03:06.280 INFO:teuthology.orchestra.run.vm00.stdout:Removing user `cephadm' ...
2026-03-10T08:03:06.280 INFO:teuthology.orchestra.run.vm00.stdout:Warning: group `nogroup' has no more members.
2026-03-10T08:03:06.280 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:06.294 INFO:teuthology.orchestra.run.vm00.stdout:Done.
2026-03-10T08:03:06.303 INFO:teuthology.orchestra.run.vm03.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:06.320 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T08:03:06.335 INFO:teuthology.orchestra.run.vm03.stdout:Looking for files to backup/remove ...
2026-03-10T08:03:06.337 INFO:teuthology.orchestra.run.vm03.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*.
2026-03-10T08:03:06.340 INFO:teuthology.orchestra.run.vm03.stdout:Removing user `cephadm' ...
2026-03-10T08:03:06.340 INFO:teuthology.orchestra.run.vm03.stdout:Warning: group `nogroup' has no more members.
2026-03-10T08:03:06.355 INFO:teuthology.orchestra.run.vm03.stdout:Done.
2026-03-10T08:03:06.382 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T08:03:06.431 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-10T08:03:06.434 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:06.502 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-10T08:03:06.505 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:07.652 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:07.702 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T08:03:07.777 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:07.820 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T08:03:07.923 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T08:03:07.923 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T08:03:08.084 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T08:03:08.084 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T08:03:08.159 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:08.159 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T08:03:08.159 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-10T08:03:08.159 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:08.167 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-10T08:03:08.168 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mds*
2026-03-10T08:03:08.347 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:08.347 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T08:03:08.348 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-10T08:03:08.348 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:08.367 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T08:03:08.368 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mds*
2026-03-10T08:03:08.378 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-10T08:03:08.378 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 7437 kB disk space will be freed.
2026-03-10T08:03:08.422 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-10T08:03:08.424 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:08.600 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-10T08:03:08.600 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 7437 kB disk space will be freed.
2026-03-10T08:03:08.659 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-10T08:03:08.664 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:08.911 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T08:03:09.032 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-10T08:03:09.034 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:09.136 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T08:03:09.255 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-10T08:03:09.257 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:10.859 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:10.891 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:10.898 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T08:03:10.930 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T08:03:11.112 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T08:03:11.113 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T08:03:11.132 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T08:03:11.133 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T08:03:11.407 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:11.407 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0
2026-03-10T08:03:11.409 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc
2026-03-10T08:03:11.409 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot
2026-03-10T08:03:11.409 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T08:03:11.409 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T08:03:11.409 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T08:03:11.409 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T08:03:11.409 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-psutil python3-pyinotify
2026-03-10T08:03:11.409 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-10T08:03:11.409 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-10T08:03:11.409 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-10T08:03:11.409 INFO:teuthology.orchestra.run.vm03.stdout: python3-threadpoolctl python3-waitress python3-webob python3-websocket
2026-03-10T08:03:11.409 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T08:03:11.409 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev
2026-03-10T08:03:11.409 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:11.428 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-10T08:03:11.428 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-10T08:03:11.429 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-k8sevents*
2026-03-10T08:03:11.455 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:11.456 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0
2026-03-10T08:03:11.457 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc
2026-03-10T08:03:11.457 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot
2026-03-10T08:03:11.457 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T08:03:11.457 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T08:03:11.457 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T08:03:11.457 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T08:03:11.457 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-psutil python3-pyinotify
2026-03-10T08:03:11.457 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-10T08:03:11.457 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-10T08:03:11.457 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-10T08:03:11.457 INFO:teuthology.orchestra.run.vm00.stdout: python3-threadpoolctl python3-waitress python3-webob python3-websocket
2026-03-10T08:03:11.457 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T08:03:11.457 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev
2026-03-10T08:03:11.457 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:11.470 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T08:03:11.470 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-10T08:03:11.471 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-k8sevents*
2026-03-10T08:03:11.638 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded.
2026-03-10T08:03:11.638 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 165 MB disk space will be freed.
2026-03-10T08:03:11.682 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-10T08:03:11.685 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded.
2026-03-10T08:03:11.685 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 165 MB disk space will be freed.
2026-03-10T08:03:11.685 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:11.698 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:11.719 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-10T08:03:11.720 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:11.726 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:11.730 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:11.752 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:11.759 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:11.787 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:12.261 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-10T08:03:12.263 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:12.302 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-10T08:03:12.305 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:13.713 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:13.751 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T08:03:13.907 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:13.945 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T08:03:13.945 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T08:03:13.945 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T08:03:14.054 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:14.054 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:14.054 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T08:03:14.055 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T08:03:14.055 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T08:03:14.055 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T08:03:14.055 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T08:03:14.055 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T08:03:14.055 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T08:03:14.055 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T08:03:14.055 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T08:03:14.055 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T08:03:14.055 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T08:03:14.055 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T08:03:14.055 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T08:03:14.055 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T08:03:14.055 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T08:03:14.055 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:14.063 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-10T08:03:14.063 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-10T08:03:14.181 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T08:03:14.181 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T08:03:14.238 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-10T08:03:14.238 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 472 MB disk space will be freed.
2026-03-10T08:03:14.276 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-10T08:03:14.277 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:14.336 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:14.385 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:14.385 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:14.385 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T08:03:14.386 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T08:03:14.386 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T08:03:14.386 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T08:03:14.386 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T08:03:14.386 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T08:03:14.386 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T08:03:14.386 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T08:03:14.386 INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T08:03:14.386 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T08:03:14.386 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T08:03:14.386 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T08:03:14.386 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T08:03:14.386 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T08:03:14.386 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T08:03:14.386 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:14.399 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T08:03:14.401 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-10T08:03:14.605 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-10T08:03:14.605 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 472 MB disk space will be freed.
2026-03-10T08:03:14.646 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-10T08:03:14.649 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:14.712 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:14.783 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:15.133 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:15.224 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:15.620 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:15.689 INFO:teuthology.orchestra.run.vm03.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:16.119 INFO:teuthology.orchestra.run.vm00.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:16.165 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:16.210 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:16.579 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:16.600 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:16.651 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T08:03:16.687 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T08:03:16.756 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117455 files and directories currently installed.)
2026-03-10T08:03:16.758 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:17.104 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T08:03:17.140 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T08:03:17.224 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117455 files and directories currently installed.)
2026-03-10T08:03:17.227 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:17.411 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:17.849 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:17.901 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:18.301 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:18.337 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:18.749 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:18.784 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:19.219 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:20.319 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:20.357 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T08:03:20.574 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T08:03:20.574 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T08:03:20.736 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:20.736 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:20.736 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T08:03:20.736 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T08:03:20.736 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T08:03:20.736 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T08:03:20.736 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T08:03:20.736 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T08:03:20.736 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T08:03:20.736 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T08:03:20.736 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T08:03:20.736 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T08:03:20.736 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T08:03:20.736 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T08:03:20.736 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T08:03:20.736 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T08:03:20.736 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T08:03:20.736 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:20.744 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-10T08:03:20.744 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse*
2026-03-10T08:03:20.909 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-10T08:03:20.909 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 3673 kB disk space will be freed.
2026-03-10T08:03:20.962 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117443 files and directories currently installed.)
2026-03-10T08:03:20.965 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:21.055 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:21.093 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T08:03:21.350 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T08:03:21.351 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T08:03:21.440 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T08:03:21.559 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117434 files and directories currently installed.)
2026-03-10T08:03:21.563 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:21.630 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:21.630 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:21.630 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T08:03:21.630 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T08:03:21.630 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T08:03:21.630 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T08:03:21.630 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T08:03:21.631 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T08:03:21.631 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T08:03:21.631 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T08:03:21.631 INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T08:03:21.631 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T08:03:21.631 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T08:03:21.631 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T08:03:21.631 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T08:03:21.631 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T08:03:21.631 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T08:03:21.631 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:21.649 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T08:03:21.650 INFO:teuthology.orchestra.run.vm00.stdout: ceph-fuse*
2026-03-10T08:03:21.863 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-10T08:03:21.863 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 3673 kB disk space will be freed.
2026-03-10T08:03:21.913 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117443 files and directories currently installed.)
2026-03-10T08:03:21.915 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:22.352 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T08:03:22.452 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117434 files and directories currently installed.)
2026-03-10T08:03:22.454 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:23.071 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:23.110 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T08:03:23.328 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T08:03:23.329 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T08:03:23.445 INFO:teuthology.orchestra.run.vm03.stdout:Package 'ceph-test' is not installed, so not removed
2026-03-10T08:03:23.445 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:23.445 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:23.445 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T08:03:23.446 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T08:03:23.446 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T08:03:23.446 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T08:03:23.446 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T08:03:23.446 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T08:03:23.446 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T08:03:23.446 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T08:03:23.446 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T08:03:23.446 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T08:03:23.446 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T08:03:23.446 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T08:03:23.446 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T08:03:23.446 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T08:03:23.446 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T08:03:23.446 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:23.461 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T08:03:23.462 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:23.497 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T08:03:23.696 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T08:03:23.696 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T08:03:23.963 INFO:teuthology.orchestra.run.vm03.stdout:Package 'ceph-volume' is not installed, so not removed
2026-03-10T08:03:23.963 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:23.963 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:23.963 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T08:03:23.964 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T08:03:23.964 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T08:03:23.964 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T08:03:23.964 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T08:03:23.964 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T08:03:23.965 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T08:03:23.965 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T08:03:23.965 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T08:03:23.965 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T08:03:23.965 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T08:03:23.965 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T08:03:23.965 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T08:03:23.965 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T08:03:23.965 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T08:03:23.965 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:23.997 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:23.999 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T08:03:23.999 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:24.033 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T08:03:24.035 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T08:03:24.244 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T08:03:24.245 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T08:03:24.280 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T08:03:24.280 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T08:03:24.355 INFO:teuthology.orchestra.run.vm03.stdout:Package 'radosgw' is not installed, so not removed
2026-03-10T08:03:24.355 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:24.355 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:24.355 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T08:03:24.355 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T08:03:24.355 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T08:03:24.355 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T08:03:24.355 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T08:03:24.355 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T08:03:24.355 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T08:03:24.355 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T08:03:24.355 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T08:03:24.355 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T08:03:24.355 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T08:03:24.356 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T08:03:24.356 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T08:03:24.356 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T08:03:24.356 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T08:03:24.356 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:24.371 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T08:03:24.371 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:24.406 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T08:03:24.550 INFO:teuthology.orchestra.run.vm00.stdout:Package 'ceph-test' is not installed, so not removed
2026-03-10T08:03:24.550 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:24.550 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:24.551 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T08:03:24.552 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T08:03:24.552 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T08:03:24.552 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T08:03:24.552 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T08:03:24.552 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T08:03:24.552 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T08:03:24.552 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T08:03:24.552 INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T08:03:24.552 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T08:03:24.552 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T08:03:24.552 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T08:03:24.552 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T08:03:24.552 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T08:03:24.552 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T08:03:24.552 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:24.586 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T08:03:24.586 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:24.620 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T08:03:24.660 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T08:03:24.661 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T08:03:24.856 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:24.857 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:24.857 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T08:03:24.857 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T08:03:24.857 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T08:03:24.857 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T08:03:24.857 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T08:03:24.857 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T08:03:24.857 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T08:03:24.857 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T08:03:24.857 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T08:03:24.857 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T08:03:24.857 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T08:03:24.857 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T08:03:24.857 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T08:03:24.857 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T08:03:24.857 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T08:03:24.857 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T08:03:24.857 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip
2026-03-10T08:03:24.857 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:24.869 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T08:03:24.870 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T08:03:24.872 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-10T08:03:24.873 INFO:teuthology.orchestra.run.vm03.stdout: python3-cephfs* python3-rados* python3-rgw*
2026-03-10T08:03:25.059 INFO:teuthology.orchestra.run.vm00.stdout:Package 'ceph-volume' is not installed, so not removed
2026-03-10T08:03:25.059 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:25.059 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:25.059 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T08:03:25.059 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T08:03:25.059 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T08:03:25.059 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T08:03:25.059 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T08:03:25.059 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T08:03:25.059 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T08:03:25.059 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T08:03:25.059 INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T08:03:25.059 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T08:03:25.059 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T08:03:25.059 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T08:03:25.059 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T08:03:25.059 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T08:03:25.059 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T08:03:25.059 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:25.080 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded.
2026-03-10T08:03:25.080 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 2062 kB disk space will be freed.
2026-03-10T08:03:25.094 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T08:03:25.094 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:25.118 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117434 files and directories currently installed.)
2026-03-10T08:03:25.120 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:25.131 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:25.132 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T08:03:25.245 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:25.355 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T08:03:25.356 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T08:03:25.524 INFO:teuthology.orchestra.run.vm00.stdout:Package 'radosgw' is not installed, so not removed
2026-03-10T08:03:25.524 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:25.524 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:25.524 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T08:03:25.525 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T08:03:25.525 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T08:03:25.525 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T08:03:25.525 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T08:03:25.525 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T08:03:25.525 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T08:03:25.525 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T08:03:25.525 INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T08:03:25.525 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T08:03:25.525 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T08:03:25.525 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T08:03:25.525 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T08:03:25.525 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T08:03:25.525 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T08:03:25.525 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:25.541 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T08:03:25.541 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:25.582 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T08:03:25.814 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T08:03:25.815 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T08:03:26.080 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:26.081 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:26.081 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T08:03:26.081 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T08:03:26.081 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T08:03:26.081 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T08:03:26.081 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T08:03:26.081 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T08:03:26.081 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T08:03:26.081 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T08:03:26.081 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T08:03:26.081 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T08:03:26.081 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T08:03:26.081 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T08:03:26.081 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T08:03:26.082 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T08:03:26.082 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T08:03:26.082 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T08:03:26.082 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip
2026-03-10T08:03:26.082 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:26.095 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T08:03:26.095 INFO:teuthology.orchestra.run.vm00.stdout: python3-cephfs* python3-rados* python3-rgw*
2026-03-10T08:03:26.303 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded.
2026-03-10T08:03:26.303 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 2062 kB disk space will be freed.
2026-03-10T08:03:26.346 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117434 files and directories currently installed.)
2026-03-10T08:03:26.348 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:26.360 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:26.370 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:26.623 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:26.663 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T08:03:26.881 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T08:03:26.882 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T08:03:27.092 INFO:teuthology.orchestra.run.vm03.stdout:Package 'python3-rgw' is not installed, so not removed
2026-03-10T08:03:27.092 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:27.092 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:27.092 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T08:03:27.092 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T08:03:27.093 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T08:03:27.093 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T08:03:27.093 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T08:03:27.093 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T08:03:27.093 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T08:03:27.093 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T08:03:27.093 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T08:03:27.093 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T08:03:27.093 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T08:03:27.093 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T08:03:27.093 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T08:03:27.093 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T08:03:27.093 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T08:03:27.093 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T08:03:27.093 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip
2026-03-10T08:03:27.093 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:27.111 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T08:03:27.111 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:27.157 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T08:03:27.382 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T08:03:27.382 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T08:03:27.491 INFO:teuthology.orchestra.run.vm03.stdout:Package 'python3-cephfs' is not installed, so not removed
2026-03-10T08:03:27.491 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:27.491 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:27.491 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T08:03:27.491 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T08:03:27.491 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T08:03:27.491 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T08:03:27.491 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T08:03:27.491 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T08:03:27.491 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T08:03:27.491 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T08:03:27.491 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T08:03:27.491 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T08:03:27.491 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T08:03:27.492 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T08:03:27.492 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T08:03:27.492 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T08:03:27.492 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T08:03:27.492 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T08:03:27.492 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip
2026-03-10T08:03:27.492 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:27.496 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:27.514 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T08:03:27.514 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:27.534 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T08:03:27.551 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T08:03:27.790 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T08:03:27.790 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T08:03:27.795 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T08:03:27.796 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T08:03:28.058 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:28.058 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:28.058 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T08:03:28.058 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T08:03:28.059 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T08:03:28.059 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T08:03:28.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T08:03:28.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T08:03:28.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T08:03:28.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T08:03:28.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T08:03:28.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T08:03:28.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T08:03:28.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T08:03:28.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T08:03:28.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T08:03:28.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T08:03:28.060 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T08:03:28.060 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip
2026-03-10T08:03:28.060 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:28.081 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-10T08:03:28.081 INFO:teuthology.orchestra.run.vm03.stdout: python3-rbd*
2026-03-10T08:03:28.087 INFO:teuthology.orchestra.run.vm00.stdout:Package 'python3-rgw' is not installed, so not removed
2026-03-10T08:03:28.088 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:28.088 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:28.088 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T08:03:28.088 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T08:03:28.089 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T08:03:28.089 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T08:03:28.089 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T08:03:28.089 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T08:03:28.089 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T08:03:28.089 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T08:03:28.089 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T08:03:28.089 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T08:03:28.089 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T08:03:28.089 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T08:03:28.089 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T08:03:28.089 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T08:03:28.089 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T08:03:28.090 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T08:03:28.090 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip
2026-03-10T08:03:28.090 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:28.129 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T08:03:28.130 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:28.164 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T08:03:28.262 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-10T08:03:28.262 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 1186 kB disk space will be freed.
2026-03-10T08:03:28.297 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117410 files and directories currently installed.)
2026-03-10T08:03:28.299 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:28.402 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T08:03:28.403 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T08:03:28.624 INFO:teuthology.orchestra.run.vm00.stdout:Package 'python3-cephfs' is not installed, so not removed
2026-03-10T08:03:28.624 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:28.624 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:28.624 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T08:03:28.624 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T08:03:28.625 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T08:03:28.625 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T08:03:28.625 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T08:03:28.625 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T08:03:28.625 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T08:03:28.625 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T08:03:28.625 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T08:03:28.625 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T08:03:28.625 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T08:03:28.625 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T08:03:28.625 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T08:03:28.625 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T08:03:28.625 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T08:03:28.625 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T08:03:28.625 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip
2026-03-10T08:03:28.626 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:28.642 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T08:03:28.642 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:28.677 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T08:03:28.863 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T08:03:28.863 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T08:03:28.978 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:28.978 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:28.978 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T08:03:28.978 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T08:03:28.978 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T08:03:28.979 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T08:03:28.979 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T08:03:28.979 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T08:03:28.979 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T08:03:28.979 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T08:03:28.979 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T08:03:28.979 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T08:03:28.979 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T08:03:28.979 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T08:03:28.979 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T08:03:28.979 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T08:03:28.979 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T08:03:28.979 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T08:03:28.979 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip
2026-03-10T08:03:28.979 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:28.987 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T08:03:28.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-rbd*
2026-03-10T08:03:29.147 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-10T08:03:29.147 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 1186 kB disk space will be freed.
2026-03-10T08:03:29.182 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117410 files and directories currently installed.)
2026-03-10T08:03:29.185 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:29.663 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:29.700 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T08:03:29.957 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T08:03:29.957 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T08:03:30.205 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:30.205 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:30.206 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T08:03:30.206 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T08:03:30.207 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T08:03:30.207 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T08:03:30.207 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T08:03:30.207 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T08:03:30.207 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T08:03:30.207 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T08:03:30.207 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T08:03:30.207 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T08:03:30.207 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T08:03:30.207 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T08:03:30.208 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T08:03:30.208 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T08:03:30.208 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T08:03:30.208 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T08:03:30.208 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip
2026-03-10T08:03:30.208 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:30.232 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-10T08:03:30.233 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-dev* libcephfs2*
2026-03-10T08:03:30.238 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:30.277 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T08:03:30.464 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-10T08:03:30.465 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 3202 kB disk space will be freed.
2026-03-10T08:03:30.480 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T08:03:30.481 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T08:03:30.520 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117402 files and directories currently installed.)
2026-03-10T08:03:30.523 INFO:teuthology.orchestra.run.vm03.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:30.536 INFO:teuthology.orchestra.run.vm03.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:30.563 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T08:03:30.600 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:30.600 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:30.600 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T08:03:30.600 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T08:03:30.601 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T08:03:30.601 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T08:03:30.601 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T08:03:30.601 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T08:03:30.601 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T08:03:30.601 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T08:03:30.601 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T08:03:30.601 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T08:03:30.601 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T08:03:30.601 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T08:03:30.601 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T08:03:30.601 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T08:03:30.601 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T08:03:30.601 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T08:03:30.601 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip
2026-03-10T08:03:30.601 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:30.609 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T08:03:30.609 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-dev* libcephfs2*
2026-03-10T08:03:30.780 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-10T08:03:30.780 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 3202 kB disk space will be freed.
2026-03-10T08:03:30.815 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117402 files and directories currently installed.)
2026-03-10T08:03:30.817 INFO:teuthology.orchestra.run.vm00.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:30.830 INFO:teuthology.orchestra.run.vm00.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T08:03:30.857 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T08:03:31.630 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:31.665 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T08:03:31.840 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T08:03:31.840 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T08:03:32.056 INFO:teuthology.orchestra.run.vm03.stdout:Package 'libcephfs-dev' is not installed, so not removed
2026-03-10T08:03:32.056 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:32.057 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:32.057 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T08:03:32.057 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T08:03:32.058 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T08:03:32.058 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T08:03:32.058 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T08:03:32.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T08:03:32.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T08:03:32.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T08:03:32.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T08:03:32.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T08:03:32.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T08:03:32.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T08:03:32.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T08:03:32.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T08:03:32.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T08:03:32.059 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T08:03:32.059 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip
2026-03-10T08:03:32.059 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:32.094 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T08:03:32.094 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:32.133 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T08:03:32.240 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T08:03:32.275 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T08:03:32.352 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T08:03:32.353 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-10T08:03:32.469 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T08:03:32.477 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-10T08:03:32.477 INFO:teuthology.orchestra.run.vm03.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph*
2026-03-10T08:03:32.477 INFO:teuthology.orchestra.run.vm03.stdout: qemu-block-extra* rbd-fuse*
2026-03-10T08:03:32.534 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T08:03:32.535 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T08:03:32.637 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-10T08:03:32.637 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 51.6 MB disk space will be freed. 2026-03-10T08:03:32.671 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117387 files and directories currently installed.) 2026-03-10T08:03:32.673 INFO:teuthology.orchestra.run.vm03.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T08:03:32.685 INFO:teuthology.orchestra.run.vm03.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T08:03:32.696 INFO:teuthology.orchestra.run.vm03.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T08:03:32.706 INFO:teuthology.orchestra.run.vm03.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-10T08:03:32.836 INFO:teuthology.orchestra.run.vm00.stdout:Package 'libcephfs-dev' is not installed, so not removed 2026-03-10T08:03:32.836 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T08:03:32.836 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T08:03:32.836 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T08:03:32.836 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T08:03:32.837 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T08:03:32.837 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T08:03:32.837 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T08:03:32.837 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T08:03:32.837 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T08:03:32.837 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T08:03:32.837 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T08:03:32.837 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T08:03:32.837 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T08:03:32.837 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T08:03:32.837 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn 
python3-sklearn-lib python3-tempita 2026-03-10T08:03:32.837 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T08:03:32.837 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T08:03:32.837 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T08:03:32.837 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip 2026-03-10T08:03:32.837 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T08:03:32.853 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T08:03:32.853 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T08:03:32.890 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T08:03:33.075 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T08:03:33.076 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T08:03:33.146 INFO:teuthology.orchestra.run.vm03.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T08:03:33.158 INFO:teuthology.orchestra.run.vm03.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T08:03:33.169 INFO:teuthology.orchestra.run.vm03.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T08:03:33.191 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T08:03:33.191 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T08:03:33.191 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T08:03:33.191 INFO:teuthology.orchestra.run.vm00.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T08:03:33.192 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T08:03:33.192 INFO:teuthology.orchestra.run.vm00.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T08:03:33.192 INFO:teuthology.orchestra.run.vm00.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T08:03:33.192 INFO:teuthology.orchestra.run.vm00.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T08:03:33.192 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T08:03:33.192 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T08:03:33.192 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T08:03:33.192 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T08:03:33.192 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T08:03:33.192 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T08:03:33.192 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru 
python3-requests-oauthlib 2026-03-10T08:03:33.192 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T08:03:33.192 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T08:03:33.192 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T08:03:33.192 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T08:03:33.192 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T08:03:33.192 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T08:03:33.192 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T08:03:33.198 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T08:03:33.200 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-10T08:03:33.200 INFO:teuthology.orchestra.run.vm00.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph* 2026-03-10T08:03:33.200 INFO:teuthology.orchestra.run.vm00.stdout: qemu-block-extra* rbd-fuse* 2026-03-10T08:03:33.242 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T08:03:33.326 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117336 files and directories currently installed.) 2026-03-10T08:03:33.329 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-10T08:03:33.373 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-10T08:03:33.373 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 51.6 MB disk space will be freed. 2026-03-10T08:03:33.408 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117387 files and directories currently installed.) 2026-03-10T08:03:33.410 INFO:teuthology.orchestra.run.vm00.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T08:03:33.422 INFO:teuthology.orchestra.run.vm00.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
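[Note: in the apt output above, packages suffixed with '*' in the "following packages will be REMOVED" list (librados2*, rbd-fuse*, ...) are scheduled for purge rather than plain removal, which is why "Purging configuration files for qemu-block-extra" entries follow the removals. A minimal sketch of the distinction, reusing a package name from this run:]

    sudo apt-get remove qemu-block-extra   # plain removal keeps conffiles; dpkg -l then shows the package as 'rc'
    sudo apt-get purge qemu-block-extra    # purge deletes the conffiles too; apt lists purge targets with a trailing '*'
    sudo dpkg -P qemu-block-extra          # low-level equivalent of a purge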
2026-03-10T08:03:33.435 INFO:teuthology.orchestra.run.vm00.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T08:03:33.447 INFO:teuthology.orchestra.run.vm00.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-10T08:03:33.895 INFO:teuthology.orchestra.run.vm00.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T08:03:33.912 INFO:teuthology.orchestra.run.vm00.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T08:03:33.928 INFO:teuthology.orchestra.run.vm00.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T08:03:33.957 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T08:03:33.994 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T08:03:34.060 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117336 files and directories currently installed.) 2026-03-10T08:03:34.062 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-10T08:03:35.222 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T08:03:35.258 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T08:03:35.490 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T08:03:35.491 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T08:03:35.737 INFO:teuthology.orchestra.run.vm03.stdout:Package 'librbd1' is not installed, so not removed 2026-03-10T08:03:35.737 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T08:03:35.737 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T08:03:35.737 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T08:03:35.737 INFO:teuthology.orchestra.run.vm03.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T08:03:35.737 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T08:03:35.737 INFO:teuthology.orchestra.run.vm03.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T08:03:35.738 INFO:teuthology.orchestra.run.vm03.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T08:03:35.738 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T08:03:35.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T08:03:35.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T08:03:35.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T08:03:35.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T08:03:35.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T08:03:35.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T08:03:35.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T08:03:35.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T08:03:35.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T08:03:35.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T08:03:35.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T08:03:35.738 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T08:03:35.738 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T08:03:35.738 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T08:03:35.775 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T08:03:35.775 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T08:03:35.816 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 
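[Note: the repeated "automatically installed and are no longer required" lists above come from apt's auto-installed markers: dependencies pulled in automatically are flagged, and once no manually installed package needs them, autoremove offers them for deletion. Illustrative commands for inspecting those markers (not part of this run):]

    apt-mark showauto                    # list packages flagged as automatically installed
    sudo apt-mark manual smartmontools   # re-flag one as manual so autoremove keeps it
    sudo apt-get -s autoremove           # dry run: show what autoremove would delete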
2026-03-10T08:03:35.868 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T08:03:35.905 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T08:03:36.054 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T08:03:36.054 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T08:03:36.140 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T08:03:36.140 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T08:03:36.320 INFO:teuthology.orchestra.run.vm03.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-10T08:03:36.320 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T08:03:36.320 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T08:03:36.320 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T08:03:36.321 INFO:teuthology.orchestra.run.vm03.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T08:03:36.321 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T08:03:36.321 INFO:teuthology.orchestra.run.vm03.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T08:03:36.321 INFO:teuthology.orchestra.run.vm03.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T08:03:36.321 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T08:03:36.321 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T08:03:36.321 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T08:03:36.322 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T08:03:36.322 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T08:03:36.322 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T08:03:36.322 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T08:03:36.322 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T08:03:36.322 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T08:03:36.322 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T08:03:36.322 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T08:03:36.322 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T08:03:36.322 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T08:03:36.322 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip 
2026-03-10T08:03:36.322 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T08:03:36.353 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T08:03:36.353 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T08:03:36.355 DEBUG:teuthology.orchestra.run.vm03:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-10T08:03:36.414 DEBUG:teuthology.orchestra.run.vm03:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-10T08:03:36.418 INFO:teuthology.orchestra.run.vm00.stdout:Package 'librbd1' is not installed, so not removed 2026-03-10T08:03:36.418 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T08:03:36.419 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T08:03:36.419 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T08:03:36.419 INFO:teuthology.orchestra.run.vm00.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T08:03:36.419 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T08:03:36.420 INFO:teuthology.orchestra.run.vm00.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T08:03:36.420 INFO:teuthology.orchestra.run.vm00.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T08:03:36.420 INFO:teuthology.orchestra.run.vm00.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T08:03:36.420 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T08:03:36.420 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T08:03:36.420 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T08:03:36.420 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T08:03:36.420 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T08:03:36.420 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T08:03:36.420 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T08:03:36.420 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T08:03:36.420 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T08:03:36.420 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T08:03:36.420 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T08:03:36.421 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile 
qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T08:03:36.421 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T08:03:36.421 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T08:03:36.458 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T08:03:36.458 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T08:03:36.492 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T08:03:36.503 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T08:03:36.676 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T08:03:36.677 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T08:03:36.734 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T08:03:36.734 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T08:03:36.789 
INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T08:03:36.789 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T08:03:36.804 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T08:03:36.804 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T08:03:36.806 DEBUG:teuthology.orchestra.run.vm00:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-10T08:03:36.860 DEBUG:teuthology.orchestra.run.vm00:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-10T08:03:36.938 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T08:03:36.954 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-10T08:03:36.954 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T08:03:36.954 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T08:03:36.954 INFO:teuthology.orchestra.run.vm03.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T08:03:36.954 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T08:03:36.955 INFO:teuthology.orchestra.run.vm03.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T08:03:36.955 INFO:teuthology.orchestra.run.vm03.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T08:03:36.955 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T08:03:36.955 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T08:03:36.955 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T08:03:36.955 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T08:03:36.955 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T08:03:36.955 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T08:03:36.955 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T08:03:36.955 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T08:03:36.955 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T08:03:36.955 INFO:teuthology.orchestra.run.vm03.stdout: 
python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T08:03:36.955 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T08:03:36.956 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T08:03:36.956 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T08:03:36.956 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T08:03:37.090 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T08:03:37.090 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T08:03:37.164 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 87 to remove and 10 not upgraded. 2026-03-10T08:03:37.164 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 107 MB disk space will be freed. 2026-03-10T08:03:37.204 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-10T08:03:37.204 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T08:03:37.204 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T08:03:37.204 INFO:teuthology.orchestra.run.vm00.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T08:03:37.204 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T08:03:37.204 INFO:teuthology.orchestra.run.vm00.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T08:03:37.204 INFO:teuthology.orchestra.run.vm00.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T08:03:37.205 INFO:teuthology.orchestra.run.vm00.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T08:03:37.205 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T08:03:37.205 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T08:03:37.205 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T08:03:37.205 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T08:03:37.205 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T08:03:37.205 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T08:03:37.205 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T08:03:37.205 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T08:03:37.205 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T08:03:37.205 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T08:03:37.205 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest 
python3-werkzeug 2026-03-10T08:03:37.205 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T08:03:37.205 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T08:03:37.207 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117336 files and directories currently installed.) 2026-03-10T08:03:37.210 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T08:03:37.230 INFO:teuthology.orchestra.run.vm03.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-10T08:03:37.243 INFO:teuthology.orchestra.run.vm03.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ... 2026-03-10T08:03:37.257 INFO:teuthology.orchestra.run.vm03.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-10T08:03:37.271 INFO:teuthology.orchestra.run.vm03.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-10T08:03:37.284 INFO:teuthology.orchestra.run.vm03.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-10T08:03:37.297 INFO:teuthology.orchestra.run.vm03.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T08:03:37.311 INFO:teuthology.orchestra.run.vm03.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T08:03:37.326 INFO:teuthology.orchestra.run.vm03.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T08:03:37.350 INFO:teuthology.orchestra.run.vm03.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-10T08:03:37.363 INFO:teuthology.orchestra.run.vm03.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-10T08:03:37.377 INFO:teuthology.orchestra.run.vm03.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T08:03:37.392 INFO:teuthology.orchestra.run.vm03.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T08:03:37.395 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 87 to remove and 10 not upgraded. 2026-03-10T08:03:37.395 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 107 MB disk space will be freed. 2026-03-10T08:03:37.406 INFO:teuthology.orchestra.run.vm03.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T08:03:37.419 INFO:teuthology.orchestra.run.vm03.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T08:03:37.430 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117336 files and directories currently installed.) 2026-03-10T08:03:37.432 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T08:03:37.433 INFO:teuthology.orchestra.run.vm03.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-10T08:03:37.445 INFO:teuthology.orchestra.run.vm03.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-10T08:03:37.450 INFO:teuthology.orchestra.run.vm00.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-10T08:03:37.457 INFO:teuthology.orchestra.run.vm03.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-10T08:03:37.461 INFO:teuthology.orchestra.run.vm00.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ... 2026-03-10T08:03:37.469 INFO:teuthology.orchestra.run.vm03.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-10T08:03:37.474 INFO:teuthology.orchestra.run.vm00.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-10T08:03:37.486 INFO:teuthology.orchestra.run.vm00.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-10T08:03:37.495 INFO:teuthology.orchestra.run.vm03.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-10T08:03:37.499 INFO:teuthology.orchestra.run.vm00.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-10T08:03:37.506 INFO:teuthology.orchestra.run.vm03.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-10T08:03:37.514 INFO:teuthology.orchestra.run.vm00.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T08:03:37.518 INFO:teuthology.orchestra.run.vm03.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-10T08:03:37.528 INFO:teuthology.orchestra.run.vm00.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T08:03:37.530 INFO:teuthology.orchestra.run.vm03.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-10T08:03:37.539 INFO:teuthology.orchestra.run.vm00.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T08:03:37.542 INFO:teuthology.orchestra.run.vm03.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-10T08:03:37.556 INFO:teuthology.orchestra.run.vm03.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-10T08:03:37.558 INFO:teuthology.orchestra.run.vm00.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-10T08:03:37.568 INFO:teuthology.orchestra.run.vm00.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-10T08:03:37.569 INFO:teuthology.orchestra.run.vm03.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-10T08:03:37.580 INFO:teuthology.orchestra.run.vm00.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T08:03:37.582 INFO:teuthology.orchestra.run.vm03.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-10T08:03:37.593 INFO:teuthology.orchestra.run.vm00.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T08:03:37.594 INFO:teuthology.orchestra.run.vm03.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ... 2026-03-10T08:03:37.604 INFO:teuthology.orchestra.run.vm00.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T08:03:37.604 INFO:teuthology.orchestra.run.vm03.stdout:update-initramfs: deferring update (trigger activated) 2026-03-10T08:03:37.615 INFO:teuthology.orchestra.run.vm00.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ...
2026-03-10T08:03:37.616 INFO:teuthology.orchestra.run.vm03.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ... 2026-03-10T08:03:37.625 INFO:teuthology.orchestra.run.vm00.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-10T08:03:37.636 INFO:teuthology.orchestra.run.vm03.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ... 2026-03-10T08:03:37.636 INFO:teuthology.orchestra.run.vm00.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-10T08:03:37.646 INFO:teuthology.orchestra.run.vm00.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-10T08:03:37.650 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua-any (27ubuntu1) ... 2026-03-10T08:03:37.658 INFO:teuthology.orchestra.run.vm00.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-10T08:03:37.662 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-10T08:03:37.675 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-10T08:03:37.686 INFO:teuthology.orchestra.run.vm00.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-10T08:03:37.690 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-10T08:03:37.698 INFO:teuthology.orchestra.run.vm00.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-10T08:03:37.710 INFO:teuthology.orchestra.run.vm00.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-10T08:03:37.711 INFO:teuthology.orchestra.run.vm03.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-10T08:03:37.724 INFO:teuthology.orchestra.run.vm00.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-10T08:03:37.736 INFO:teuthology.orchestra.run.vm00.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-10T08:03:37.748 INFO:teuthology.orchestra.run.vm00.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-10T08:03:37.762 INFO:teuthology.orchestra.run.vm00.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-10T08:03:37.775 INFO:teuthology.orchestra.run.vm00.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-10T08:03:37.787 INFO:teuthology.orchestra.run.vm00.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ... 2026-03-10T08:03:37.796 INFO:teuthology.orchestra.run.vm00.stdout:update-initramfs: deferring update (trigger activated) 2026-03-10T08:03:37.807 INFO:teuthology.orchestra.run.vm00.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ... 2026-03-10T08:03:37.829 INFO:teuthology.orchestra.run.vm00.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ... 2026-03-10T08:03:37.845 INFO:teuthology.orchestra.run.vm00.stdout:Removing lua-any (27ubuntu1) ... 2026-03-10T08:03:37.859 INFO:teuthology.orchestra.run.vm00.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-10T08:03:37.873 INFO:teuthology.orchestra.run.vm00.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-10T08:03:37.888 INFO:teuthology.orchestra.run.vm00.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-10T08:03:37.907 INFO:teuthology.orchestra.run.vm00.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-10T08:03:38.165 INFO:teuthology.orchestra.run.vm03.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 2026-03-10T08:03:38.200 INFO:teuthology.orchestra.run.vm03.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-10T08:03:38.230 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 2026-03-10T08:03:38.290 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-webtest (2.0.35-1) ... 
2026-03-10T08:03:38.324 INFO:teuthology.orchestra.run.vm00.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 2026-03-10T08:03:38.336 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pastescript (2.0.2-4) ... 2026-03-10T08:03:38.360 INFO:teuthology.orchestra.run.vm00.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-10T08:03:38.388 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 2026-03-10T08:03:38.390 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pastedeploy (2.1.1-1) ... 2026-03-10T08:03:38.442 INFO:teuthology.orchestra.run.vm03.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T08:03:38.452 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-10T08:03:38.456 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-webtest (2.0.35-1) ... 2026-03-10T08:03:38.506 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-10T08:03:38.510 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-pastescript (2.0.2-4) ... 2026-03-10T08:03:38.585 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-pastedeploy (2.1.1-1) ... 2026-03-10T08:03:38.638 INFO:teuthology.orchestra.run.vm00.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T08:03:38.657 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-10T08:03:38.718 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-10T08:03:38.783 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-google-auth (1.5.1-3) ... 2026-03-10T08:03:38.841 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cachetools (5.0.0-1) ... 2026-03-10T08:03:38.892 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T08:03:39.027 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-google-auth (1.5.1-3) ... 2026-03-10T08:03:39.036 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T08:03:39.085 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-cachetools (5.0.0-1) ... 2026-03-10T08:03:39.091 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cherrypy3 (18.6.1-4) ... 2026-03-10T08:03:39.139 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T08:03:39.154 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-10T08:03:39.192 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T08:03:39.205 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.collections (3.4.0-2) ... 2026-03-10T08:03:39.245 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-cherrypy3 (18.6.1-4) ... 2026-03-10T08:03:39.254 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.classes (3.2.1-3) ... 2026-03-10T08:03:39.305 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-10T08:03:39.314 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-portend (3.0.0-1) ... 2026-03-10T08:03:39.367 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-jaraco.collections (3.4.0-2) ... 2026-03-10T08:03:39.371 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-tempora (4.1.2-1) ... 
2026-03-10T08:03:39.421 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-jaraco.classes (3.2.1-3) ... 2026-03-10T08:03:39.421 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.text (3.6.0-2) ... 2026-03-10T08:03:39.473 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-portend (3.0.0-1) ... 2026-03-10T08:03:39.475 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.functools (3.4.0-2) ... 2026-03-10T08:03:39.527 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-10T08:03:39.529 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-tempora (4.1.2-1) ... 2026-03-10T08:03:39.578 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-jaraco.text (3.6.0-2) ... 2026-03-10T08:03:39.626 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-jaraco.functools (3.4.0-2) ... 2026-03-10T08:03:39.654 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ... 2026-03-10T08:03:39.677 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-10T08:03:39.723 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-logutils (0.3.3-8) ... 2026-03-10T08:03:39.865 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-10T08:03:39.889 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ... 2026-03-10T08:03:39.921 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-natsort (8.0.2-1) ... 2026-03-10T08:03:39.961 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-logutils (0.3.3-8) ... 2026-03-10T08:03:39.975 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-paste (3.5.0+dfsg1-1) ... 2026-03-10T08:03:40.011 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-10T08:03:40.050 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-prettytable (2.5.0-2) ... 2026-03-10T08:03:40.068 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-natsort (8.0.2-1) ... 2026-03-10T08:03:40.108 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-psutil (5.9.0-1build1) ... 2026-03-10T08:03:40.131 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-paste (3.5.0+dfsg1-1) ... 2026-03-10T08:03:40.170 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pyinotify (0.9.6-1.3) ... 2026-03-10T08:03:40.191 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-prettytable (2.5.0-2) ... 2026-03-10T08:03:40.224 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-routes (2.5.1-1ubuntu1) ... 2026-03-10T08:03:40.241 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-psutil (5.9.0-1build1) ... 2026-03-10T08:03:40.281 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-repoze.lru (0.7-2) ... 2026-03-10T08:03:40.295 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-pyinotify (0.9.6-1.3) ... 2026-03-10T08:03:40.335 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-10T08:03:40.348 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-routes (2.5.1-1ubuntu1) ... 2026-03-10T08:03:40.389 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rsa (4.8-1) ... 2026-03-10T08:03:40.409 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-repoze.lru (0.7-2) ... 2026-03-10T08:03:40.448 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-simplegeneric (0.8.1-3) ... 
2026-03-10T08:03:40.467 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-10T08:03:40.500 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-simplejson (3.17.6-1build1) ... 2026-03-10T08:03:40.525 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-rsa (4.8-1) ... 2026-03-10T08:03:40.562 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-singledispatch (3.4.0.3-3) ... 2026-03-10T08:03:40.582 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-simplegeneric (0.8.1-3) ... 2026-03-10T08:03:40.640 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-10T08:03:40.642 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-simplejson (3.17.6-1build1) ... 2026-03-10T08:03:40.676 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T08:03:40.708 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-singledispatch (3.4.0.3-3) ... 2026-03-10T08:03:40.731 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-threadpoolctl (3.1.0-1) ... 2026-03-10T08:03:40.767 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-10T08:03:40.793 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T08:03:40.794 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T08:03:40.849 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-threadpoolctl (3.1.0-1) ... 2026-03-10T08:03:40.849 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-10T08:03:40.898 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T08:03:40.905 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-10T08:03:40.952 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-10T08:03:40.959 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-websocket (1.2.3-1) ... 2026-03-10T08:03:41.008 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-10T08:03:41.013 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-10T08:03:41.069 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-websocket (1.2.3-1) ... 2026-03-10T08:03:41.071 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-zc.lockfile (2.0-1) ... 2026-03-10T08:03:41.123 INFO:teuthology.orchestra.run.vm03.stdout:Removing qttranslations5-l10n (5.15.3-1) ... 2026-03-10T08:03:41.126 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-10T08:03:41.148 INFO:teuthology.orchestra.run.vm03.stdout:Removing smartmontools (7.2-1ubuntu0.1) ... 2026-03-10T08:03:41.184 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-zc.lockfile (2.0-1) ... 2026-03-10T08:03:41.238 INFO:teuthology.orchestra.run.vm00.stdout:Removing qttranslations5-l10n (5.15.3-1) ... 2026-03-10T08:03:41.259 INFO:teuthology.orchestra.run.vm00.stdout:Removing smartmontools (7.2-1ubuntu0.1) ... 2026-03-10T08:03:41.613 INFO:teuthology.orchestra.run.vm03.stdout:Removing socat (1.7.4.1-3ubuntu4) ... 2026-03-10T08:03:41.626 INFO:teuthology.orchestra.run.vm03.stdout:Removing unzip (6.0-26ubuntu3.2) ... 
2026-03-10T08:03:41.648 INFO:teuthology.orchestra.run.vm03.stdout:Removing xmlstarlet (1.6.1-2.1) ... 2026-03-10T08:03:41.666 INFO:teuthology.orchestra.run.vm03.stdout:Removing zip (3.0-12build2) ... 2026-03-10T08:03:41.697 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T08:03:41.708 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T08:03:41.709 INFO:teuthology.orchestra.run.vm00.stdout:Removing socat (1.7.4.1-3ubuntu4) ... 2026-03-10T08:03:41.721 INFO:teuthology.orchestra.run.vm00.stdout:Removing unzip (6.0-26ubuntu3.2) ... 2026-03-10T08:03:41.744 INFO:teuthology.orchestra.run.vm00.stdout:Removing xmlstarlet (1.6.1-2.1) ... 2026-03-10T08:03:41.758 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-10T08:03:41.764 INFO:teuthology.orchestra.run.vm00.stdout:Removing zip (3.0-12build2) ... 2026-03-10T08:03:41.766 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ... 2026-03-10T08:03:41.789 INFO:teuthology.orchestra.run.vm03.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm 2026-03-10T08:03:41.791 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T08:03:41.803 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T08:03:41.850 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-10T08:03:41.858 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ... 2026-03-10T08:03:41.880 INFO:teuthology.orchestra.run.vm00.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm 2026-03-10T08:03:43.450 INFO:teuthology.orchestra.run.vm03.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays. 2026-03-10T08:03:43.451 INFO:teuthology.orchestra.run.vm03.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file. 2026-03-10T08:03:43.559 INFO:teuthology.orchestra.run.vm00.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays. 2026-03-10T08:03:43.560 INFO:teuthology.orchestra.run.vm00.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file. 2026-03-10T08:03:46.014 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T08:03:46.017 DEBUG:teuthology.parallel:result is None 2026-03-10T08:03:46.073 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
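[Note on the two teardown commands issued at 08:03:36. The dpkg pipeline targets packages stuck in a broken state: in dpkg -l output the second status character is uppercase when the state is bad ('U' unpacked, 'H' half-installed) and a third-column 'R' means reinstallation is required, so grep '^.\(U\|H\)R' selects exactly those, and dpkg -P --force-remove-reinstreq purges them regardless. The recurring "W: --force-yes is deprecated" warning is apt pointing at that flag's finer-grained replacements; a sketch of a warning-free equivalent, keeping the same non-interactive dpkg options (pick only the --allow-* options a run actually needs):]

    sudo DEBIAN_FRONTEND=noninteractive apt-get -y \
      --allow-downgrades --allow-remove-essential --allow-change-held-packages \
      -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" \
      autoremove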
2026-03-10T08:03:46.077 DEBUG:teuthology.parallel:result is None 2026-03-10T08:03:46.078 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm00.local 2026-03-10T08:03:46.079 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm03.local 2026-03-10T08:03:46.079 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/apt/sources.list.d/ceph.list 2026-03-10T08:03:46.079 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/apt/sources.list.d/ceph.list 2026-03-10T08:03:46.089 DEBUG:teuthology.orchestra.run.vm03:> sudo apt-get update 2026-03-10T08:03:46.128 DEBUG:teuthology.orchestra.run.vm00:> sudo apt-get update 2026-03-10T08:03:46.398 INFO:teuthology.orchestra.run.vm03.stdout:Get:1 https://security.ubuntu.com/ubuntu jammy-security InRelease [129 kB] 2026-03-10T08:03:46.467 INFO:teuthology.orchestra.run.vm00.stdout:Get:1 https://security.ubuntu.com/ubuntu jammy-security InRelease [129 kB] 2026-03-10T08:03:46.620 INFO:teuthology.orchestra.run.vm03.stdout:Get:2 https://security.ubuntu.com/ubuntu jammy-security/restricted amd64 c-n-f Metadata [680 B] 2026-03-10T08:03:46.691 INFO:teuthology.orchestra.run.vm00.stdout:Get:2 https://security.ubuntu.com/ubuntu jammy-security/restricted amd64 c-n-f Metadata [680 B] 2026-03-10T08:03:46.706 INFO:teuthology.orchestra.run.vm00.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-10T08:03:46.714 INFO:teuthology.orchestra.run.vm03.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-10T08:03:46.816 INFO:teuthology.orchestra.run.vm00.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates InRelease [128 kB] 2026-03-10T08:03:46.844 INFO:teuthology.orchestra.run.vm03.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates InRelease [128 kB] 2026-03-10T08:03:47.212 INFO:teuthology.orchestra.run.vm00.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-backports InRelease [127 kB] 2026-03-10T08:03:47.302 INFO:teuthology.orchestra.run.vm03.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-backports InRelease [127 kB] 2026-03-10T08:03:47.386 INFO:teuthology.orchestra.run.vm00.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/restricted amd64 c-n-f Metadata [676 B] 2026-03-10T08:03:47.517 INFO:teuthology.orchestra.run.vm03.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/restricted amd64 c-n-f Metadata [676 B] 2026-03-10T08:03:49.691 INFO:teuthology.orchestra.run.vm03.stdout:Fetched 386 kB in 1s (288 kB/s) 2026-03-10T08:03:49.733 INFO:teuthology.orchestra.run.vm00.stdout:Fetched 386 kB in 1s (335 kB/s) 2026-03-10T08:03:50.593 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T08:03:50.607 DEBUG:teuthology.parallel:result is None 2026-03-10T08:03:50.616 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T08:03:50.629 DEBUG:teuthology.parallel:result is None 2026-03-10T08:03:50.629 DEBUG:teuthology.run_tasks:Unwinding manager clock 2026-03-10T08:03:50.632 INFO:teuthology.task.clock:Checking final clock skew... 
2026-03-10T08:03:50.629 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-10T08:03:50.632 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-10T08:03:50.632 DEBUG:teuthology.orchestra.run.vm00:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T08:03:50.633 DEBUG:teuthology.orchestra.run.vm03:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T08:03:51.269 INFO:teuthology.orchestra.run.vm00.stdout: remote refid st t when poll reach delay offset jitter
2026-03-10T08:03:51.269 INFO:teuthology.orchestra.run.vm00.stdout:==============================================================================
2026-03-10T08:03:51.269 INFO:teuthology.orchestra.run.vm00.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T08:03:51.269 INFO:teuthology.orchestra.run.vm00.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T08:03:51.269 INFO:teuthology.orchestra.run.vm00.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T08:03:51.269 INFO:teuthology.orchestra.run.vm00.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T08:03:51.269 INFO:teuthology.orchestra.run.vm00.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T08:03:51.269 INFO:teuthology.orchestra.run.vm00.stdout:-185.252.140.125 216.239.35.4 2 u 101 128 377 25.133 -10.423 3.207
2026-03-10T08:03:51.269 INFO:teuthology.orchestra.run.vm00.stdout:-time.cloudflare 10.17.8.4 3 u 44 128 377 20.453 -7.258 1.972
2026-03-10T08:03:51.269 INFO:teuthology.orchestra.run.vm00.stdout:-172-104-138-148 129.70.132.32 3 u 166 128 376 22.595 -8.999 2.442
2026-03-10T08:03:51.269 INFO:teuthology.orchestra.run.vm00.stdout:+ernie.gerger-ne 213.172.96.14 3 u 38 128 377 31.883 -10.030 2.536
2026-03-10T08:03:51.269 INFO:teuthology.orchestra.run.vm00.stdout:#ethel.0b.yt 217.144.138.234 3 u 48 128 377 20.297 -5.386 3.782
2026-03-10T08:03:51.269 INFO:teuthology.orchestra.run.vm00.stdout:+178.215.228.24 189.97.54.122 2 u 37 128 377 21.906 -11.883 3.499
2026-03-10T08:03:51.269 INFO:teuthology.orchestra.run.vm00.stdout:-static.215.156. 35.73.197.144 2 u 68 128 377 23.596 -9.445 2.311
2026-03-10T08:03:51.269 INFO:teuthology.orchestra.run.vm00.stdout:-130.61.89.107 237.17.204.95 2 u 30 128 377 20.999 -8.524 1.933
2026-03-10T08:03:51.269 INFO:teuthology.orchestra.run.vm00.stdout:*node-4.infogral 168.239.11.197 2 u 41 128 377 23.561 -9.540 2.265
2026-03-10T08:03:51.269 INFO:teuthology.orchestra.run.vm00.stdout:+ns1.blazing.de 213.172.96.14 3 u 62 128 377 31.928 -10.848 3.235
2026-03-10T08:03:51.269 INFO:teuthology.orchestra.run.vm00.stdout:#185.125.190.58 145.238.80.80 2 u 26 128 377 35.461 -8.902 6.376
2026-03-10T08:03:51.269 INFO:teuthology.orchestra.run.vm00.stdout:+185.125.190.56 79.243.60.50 2 u 22 128 377 35.326 -8.637 2.073
2026-03-10T08:03:51.332 INFO:teuthology.orchestra.run.vm03.stdout: remote refid st t when poll reach delay offset jitter
2026-03-10T08:03:51.332 INFO:teuthology.orchestra.run.vm03.stdout:==============================================================================
2026-03-10T08:03:51.332 INFO:teuthology.orchestra.run.vm03.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T08:03:51.332 INFO:teuthology.orchestra.run.vm03.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T08:03:51.332 INFO:teuthology.orchestra.run.vm03.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T08:03:51.332 INFO:teuthology.orchestra.run.vm03.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T08:03:51.332 INFO:teuthology.orchestra.run.vm03.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T08:03:51.332 INFO:teuthology.orchestra.run.vm03.stdout:*178.215.228.24 189.97.54.122 2 u 102 256 377 21.965 -7.692 0.375
2026-03-10T08:03:51.332 INFO:teuthology.orchestra.run.vm03.stdout:+172-104-138-148 129.70.132.32 3 u 110 256 377 22.644 -7.608 0.386
2026-03-10T08:03:51.332 INFO:teuthology.orchestra.run.vm03.stdout:-node-4.infogral 168.239.11.197 2 u 66 256 377 23.540 -7.077 1.084
2026-03-10T08:03:51.332 INFO:teuthology.orchestra.run.vm03.stdout:-185.252.140.125 216.239.35.4 2 u 124 256 377 25.148 -6.910 0.454
2026-03-10T08:03:51.332 INFO:teuthology.orchestra.run.vm03.stdout:-ns1.blazing.de 213.172.96.14 3 u 61 256 377 31.939 -7.269 1.071
2026-03-10T08:03:51.332 INFO:teuthology.orchestra.run.vm03.stdout:-time.cloudflare 10.163.8.4 3 u 112 256 377 20.437 -6.046 0.318
2026-03-10T08:03:51.332 INFO:teuthology.orchestra.run.vm03.stdout:+ernie.gerger-ne 213.172.96.14 3 u 126 256 377 31.905 -7.343 0.315
2026-03-10T08:03:51.332 INFO:teuthology.orchestra.run.vm03.stdout:#185.125.190.56 79.243.60.50 2 u 82 256 377 35.440 -8.275 0.429
2026-03-10T08:03:51.332 INFO:teuthology.orchestra.run.vm03.stdout:-130.61.89.107 237.17.204.95 2 u 98 256 377 20.940 -6.874 0.466
2026-03-10T08:03:51.332 INFO:teuthology.orchestra.run.vm03.stdout:-static.215.156. 35.73.197.144 2 u 167 256 377 23.599 -6.247 1.490
2026-03-10T08:03:51.332 INFO:teuthology.orchestra.run.vm03.stdout:#185.125.190.57 194.121.207.249 2 u 100 256 377 32.040 -6.588 1.615
2026-03-10T08:03:51.332 INFO:teuthology.orchestra.run.vm03.stdout:-ethel.0b.yt 217.144.138.234 3 u 109 256 377 20.297 -6.897 0.444
2026-03-10T08:03:51.332 INFO:teuthology.orchestra.run.vm03.stdout:#185.125.190.58 145.238.80.80 2 u 94 256 377 35.229 -8.814 0.286
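The final clock check only prints each host's peer table; nothing in this run parses it. As an illustration of what a stricter check could do with output shaped like the tables above, here is a hypothetical helper that flags peers whose offset (ninth ntpq column, in milliseconds) exceeds a threshold; the 50 ms default is an illustrative choice, not a teuthology setting:

    # Hypothetical skew check over `ntpq -p` output like the tables above.
    # Columns: remote refid st t when poll reach delay offset jitter.
    def peers_over_threshold(ntpq_output: str, max_offset_ms: float = 50.0):
        flagged = []
        for row in ntpq_output.splitlines()[2:]:   # skip header and ruler
            fields = row.split()
            if len(fields) != 10:
                continue                           # wrapped or partial rows
            try:
                offset_ms = float(fields[8])
            except ValueError:
                continue                           # unparseable placeholder
            if abs(offset_ms) > max_offset_ms:
                flagged.append((fields[0], offset_ms))
        return flagged

For both hosts above, every peer offset is within roughly -12 ms, so a check like this would return an empty list.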
2026-03-10T08:03:51.333 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-10T08:03:51.336 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-10T08:03:51.336 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-10T08:03:51.339 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-10T08:03:51.341 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-10T08:03:51.343 INFO:teuthology.task.internal:Duration was 2962.522217 seconds
2026-03-10T08:03:51.343 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-10T08:03:51.345 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-10T08:03:51.345 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T08:03:51.347 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T08:03:51.377 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-10T08:03:51.377 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm00.local
2026-03-10T08:03:51.377 DEBUG:teuthology.orchestra.run.vm00:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T08:03:51.433 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm03.local
2026-03-10T08:03:51.434 DEBUG:teuthology.orchestra.run.vm03:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T08:03:51.444 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-10T08:03:51.445 DEBUG:teuthology.orchestra.run.vm00:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T08:03:51.476 DEBUG:teuthology.orchestra.run.vm03:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
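The kern.log check above is a single remote grep chain: match any line containing BUG, INFO, or DEADLOCK, strip a long list of known-benign patterns, and keep only the first survivor (head -n 1), so an empty result means a clean log. A Python sketch of the same idea, with the token and ignore lists abridged from the pipeline above:

    import re

    TOKENS = re.compile(r"\b(BUG|INFO|DEADLOCK)\b")
    IGNORE = [  # abridged from the grep -v chain above
        re.compile(r"task .* blocked for more than .* seconds"),
        re.compile(r"lockdep is turned off"),
        re.compile(r"CRON"),
        re.compile(r"INFO:ceph-create-keys"),
        re.compile(r"ceph-crash"),
    ]

    def first_kernel_error(path: str):
        # errors="replace" tolerates binary junk, like --binary-files=text
        with open(path, errors="replace") as log:
            for line in log:
                if TOKENS.search(line) and not any(p.search(line) for p in IGNORE):
                    return line.rstrip()   # first offending line, like head -n 1
        return None                        # clean log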
2026-03-10T08:03:51.739 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-10T08:03:51.739 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T08:03:51.740 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T08:03:51.749 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T08:03:51.750 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T08:03:51.750 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T08:03:51.750 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T08:03:51.750 INFO:teuthology.orchestra.run.vm00.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T08:03:51.750 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T08:03:51.750 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T08:03:51.750 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T08:03:51.750 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0%/home/ubuntu/cephtest/archive/syslog/journalctl.log: -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T08:03:51.751 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T08:03:51.775 INFO:teuthology.orchestra.run.vm03.stderr: 93.1% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T08:03:51.791 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 95.1% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
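The compression step fans gzip out with xargs --max-procs=0 (no limit on concurrent processes), which is why the gzip --verbose messages above interleave mid-line; the compressions still happen exactly once per file. A sketch of the equivalent fan-out in Python, using the syslog path from this run:

    import gzip
    import shutil
    from multiprocessing import Pool
    from pathlib import Path

    def compress(path: Path) -> str:
        # gzip -5 equivalent: write the .gz, then drop the original
        with open(path, "rb") as src, \
             gzip.open(f"{path}.gz", "wb", compresslevel=5) as dst:
            shutil.copyfileobj(src, dst)
        path.unlink()
        return f"{path} -> {path}.gz"

    if __name__ == "__main__":
        logs = sorted(Path("/home/ubuntu/cephtest/archive/syslog").glob("*.log"))
        with Pool() as pool:              # one worker per core, like --max-procs=0
            for message in pool.imap_unordered(compress, logs):
                print(message)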
2026-03-10T08:03:51.792 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-10T08:03:51.795 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-10T08:03:51.795 DEBUG:teuthology.orchestra.run.vm00:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T08:03:51.844 DEBUG:teuthology.orchestra.run.vm03:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T08:03:51.853 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-10T08:03:51.856 DEBUG:teuthology.orchestra.run.vm00:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T08:03:51.888 DEBUG:teuthology.orchestra.run.vm03:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T08:03:51.897 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = core
2026-03-10T08:03:51.901 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = core
2026-03-10T08:03:51.909 DEBUG:teuthology.orchestra.run.vm00:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T08:03:51.954 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T08:03:51.954 DEBUG:teuthology.orchestra.run.vm03:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T08:03:51.957 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T08:03:51.957 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-10T08:03:51.961 INFO:teuthology.task.internal:Transferring archived files...
2026-03-10T08:03:51.961 DEBUG:teuthology.misc:Transferring archived files from vm00:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/945/remote/vm00
2026-03-10T08:03:51.961 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T08:03:52.007 DEBUG:teuthology.misc:Transferring archived files from vm03:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/945/remote/vm03
2026-03-10T08:03:52.007 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T08:03:52.018 INFO:teuthology.task.internal:Removing archive directory...
2026-03-10T08:03:52.018 DEBUG:teuthology.orchestra.run.vm00:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T08:03:52.048 DEBUG:teuthology.orchestra.run.vm03:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T08:03:52.062 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-10T08:03:52.065 INFO:teuthology.task.internal:Not uploading archives.
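The archive transfer a few entries above streams `sudo tar c -f -` over the SSH connection and unpacks it on the teuthology host, so no temporary tarball is written on the remote. A minimal sketch of that pull, assuming SSH access as ubuntu and using the vm00 paths from this run (teuthology.misc wraps the same idea in its own remote-run plumbing):

    import subprocess
    import tarfile

    def pull_directory(host: str, remote_dir: str, local_dir: str) -> None:
        # Stream a tar of the remote dir over SSH; extract as it arrives.
        proc = subprocess.Popen(
            ["ssh", f"ubuntu@{host}", f"sudo tar c -f - -C {remote_dir} -- ."],
            stdout=subprocess.PIPE,
        )
        with tarfile.open(fileobj=proc.stdout, mode="r|") as archive:
            archive.extractall(local_dir)
        if proc.wait() != 0:
            raise RuntimeError(f"tar stream from {host} failed")

    pull_directory(
        "vm00.local",
        "/home/ubuntu/cephtest/archive",
        "/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/945/remote/vm00",
    )

The "r|" mode tells tarfile to read a non-seekable stream, which is what makes extraction work directly off the SSH pipe.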
2026-03-10T08:03:52.066 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-10T08:03:52.068 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-10T08:03:52.069 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T08:03:52.096 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T08:03:52.100 INFO:teuthology.orchestra.run.vm00.stdout: 258079 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 10 08:03 /home/ubuntu/cephtest
2026-03-10T08:03:52.105 INFO:teuthology.orchestra.run.vm03.stdout: 258076 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 10 08:03 /home/ubuntu/cephtest
2026-03-10T08:03:52.106 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-10T08:03:52.112 INFO:teuthology.run:Summary data:
  description: orch/cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests}
  duration: 2962.5222165584564
  flavor: default
  owner: kyr
  success: true
2026-03-10T08:03:52.112 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T08:03:52.136 INFO:teuthology.run:pass
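The closing "Pushing job info" line reports the final summary to the results server at http://localhost:8080. The exact endpoint layout and payload of teuthology.report are not visible in this log, so the following is only a hypothetical sketch of such a push, using the summary fields printed above; the run-name/job-id URL shape is an assumption:

    import json
    import urllib.request

    summary = {  # fields from the Summary data block above
        "description": "orch/cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 "
                       "mode/packaged mon_election/classic msgr/async start "
                       "tasks/rados_api_tests}",
        "duration": 2962.5222165584564,
        "flavor": "default",
        "owner": "kyr",
        "success": True,
    }
    # Assumed endpoint shape (run name + job id), not taken from this log.
    url = ("http://localhost:8080/runs/"
           "kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/jobs/945/")
    req = urllib.request.Request(
        url,
        data=json.dumps(summary).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    # urllib.request.urlopen(req)  # left commented out: illustrative only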